Kaisa Miettinen
Nonlinear Multiobjective Optimization
INTERNATIONAL SERIES IN
OPERATIONS RESEARCH & MANAGEMENT SCIENCE
Saigal, Romesh
LINEAR PROGRAMMING: A Modern Integrated Analysis
Vanderbei, Robert J.
LINEAR PROGRAMMING: Foundations and Extensions
Jaiswal, N.K.
MILITARY OPERATIONS RESEARCH: Quantitative Decision Making
Prabhu, N. U.
FOUNDATIONS OF QUEUEING THEORY
Yu, Gang
OPERATIONS RESEARCH IN THE AIRLINE INDUSTRY
by
Kaisa Miettinen, PhD
University of Jyväskylä
Springer Science+Business Media, LLC
Library of Congress Cataloging-in-Publication Data
Miettinen, Kaisa.
Nonlinear multiobjective optimization / by Kaisa Miettinen.
p. cm. -- (International series in operations research &
management science ; 12)
Includes bibliographical references and index.
ISBN 978-1-4613-7544-9 ISBN 978-1-4615-5563-6 (eBook)
DOI 10.1007/978-1-4615-5563-6
1. Multiple criteria decision making. 2. Nonlinear programming.
I. Title. II. Series.
T57.95.M52 1999
658.4'03--dc21 98-37888
CIP
Copyright © 1998 by Springer Science+Business Media New York. Fourth Printing 2004.
Originally published by Kluwer Academic Publishers in 1998
Softcover reprint of the hardcover 1st edition 1998
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system or transmitted in any form or by any means, mechanical, photo-
copying, recording, or otherwise, without the prior written permission of the
publisher, Springer Science+Business Media, LLC.
PREFACE xiii
ACKNOWLEDGEMENTS xix
NOTATION AND SYMBOLS xxi
1. INTRODUCTION 3
2. CONCEPTS 5
2.1. Problem Setting and General Notation 5
2.1.1. Multiobjective Optimization Problem 5
2.1.2. Background Concepts 6
2.2. Pareto Optimality 10
2.3. Decision Maker 14
2.4. Ranges of the Pareto Optimal Set 15
2.4.1. Ideal Objective Vector 15
2.4.2. Nadir Objective Vector 16
2.4.3. Related Topics 18
2.5. Weak Pareto Optimality 19
2.6. Value Function 21
2.7. Efficiency 23
2.8. From One Solution to Another 25
2.8.1. Trade-Offs 26
2.8.2. Marginal Rate of Substitution 27
2.9. Proper Pareto Optimality 29
2.10. Pareto Optimality Tests with Existence Results 33
3. THEORETICAL BACKGROUND 37
3.1. Differentiable Optimality Conditions 37
3.1.1. First-Order Conditions 37
3.1.2. Second-Order Conditions 42
3.1.3. Conditions for Proper Pareto Optimality 43
3.2. Nondifferentiable Optimality Conditions 45
3.2.1. First-Order Conditions 47
3.2.2. Second-Order Conditions 52
3.3. More Optimality Conditions 54
3.4. Sensitivity Analysis and Duality 56
Part II METHODS
1. INTRODUCTION 61
2. NO-PREFERENCE METHODS 67
2.1. Method of the Global Criterion 67
2.1.1. Different Metrics 67
2.1.2. Theoretical Results 69
2.1.3. Concluding Remarks 71
2.2. Multiobjective Proximal Bundle Method 71
2.2.1. Introduction 71
2.2.2. MPB Algorithm 73
2.2.3. Theoretical Results 75
2.2.4. Concluding Remarks 75
3. A POSTERIORI METHODS 77
3.1. Weighting Method 78
3.1.1. Theoretical Results 78
3.1.2. Applications and Extensions 82
3.1.3. Weighting Method as an A Priori Method 83
3.1.4. Concluding Remarks 84
3.2. ε-Constraint Method 85
3.2.1. Theoretical Results on Weak and Pareto Optimality 85
3.2.2. Connections with the Weighting Method 88
3.2.3. Theoretical Results on Proper Pareto Optimality 89
3.2.4. Connections with Trade-Off Rates 92
3.2.5. Applications and Extensions 94
3.2.6. Concluding Remarks 95
3.3. Hybrid Method 96
2. SOFTWARE 233
2.1. Introduction 233
2.2. Review 235
Life inevitably involves decision making, choices and searching for compro-
mises. It is only natural to want all of these to be as good as possible, in other
words, optimal. The difficulty here lies in the (at least partial) conflict between
our various objectives and goals. Most everyday decisions and compromises are
made on the basis of intuition, common sense, chance or all of these. However,
there are areas where mathematical modelling and programming are needed,
such as engineering and economics. Here, the problems to be solved vary from
designing spacecraft, bridges, robots or camera lenses to blending sausages,
planning and pricing production systems or managing pollution problems in
environmental control. Many phenomena are of a nonlinear nature, which is
why we need tools for nonlinear programming capable of handling several con-
flicting or incommensurable objectives. In this case, methods of traditional
single objective optimization are not enough; we need new ways of thinking,
new concepts, and new methods - nonlinear multiobjective optimization.
Problems with multiple objectives and criteria are generally known as mul-
tiple criteria optimization or multiple criteria decision-making (MCDM) prob-
lems. The area of multiple criteria decision making has developed rapidly, as
the statistics collected in Steuer et al. (1996) demonstrate. For example, by the
year 1994, some 144 conferences had been held and over 200 books and
proceedings volumes had appeared on the topic. Moreover, some 1216 refereed
journal articles were published between 1987 and 1992.
The MCDM field is so extensive that there is good reason to classify prob-
lems on the basis of their characteristics. They can be divided into two distinct
types (in accordance with MacCrimmon (1973)). Depending on the properties
of the feasible solutions, we distinguish multiattribute decision analysis and
multiobjective optimization. In multiattribute decision analysis, the set of fea-
sible alternatives is discrete, predetermined and finite. Specific examples are
the selection of the locations of power plants and dumping sites or the pur-
chase of cars and houses. In multiobjective optimization problems, the feasible
alternatives are not explicitly known in advance. An infinite number of them
exists and they are represented by decision variables restricted by constraint
functions. These problems can be called continuous. In these cases, one has to
generate the alternatives before they can be evaluated.
As far as multiattribute decision analysis is concerned, we refer to the mono-
graphs by Hwang and Yoon (1981) and Keeney and Raiffa (1976). More ref-
between alternative solutions and who continues from the point where math-
ematical tools end. Here we assume that a single decision maker is involved.
With several decision makers, the whole question of problem setting is very dif-
ferent. In addition to the mathematical side of the solution process, there is also
the aspect of negotiation and consensus striving between the decision makers.
The number of decision makers affects the means of approaching and solving
the problem significantly. A summary of group decision making is given in the
monograph of Hwang and Lin (1987). Here we settle for one decision maker.
A number of specific problem types require special handling (not included
here). Among these are problems in which the feasible solutions must have inte-
ger values or 0-1 values, multiobjective trajectory optimization problems, where
the multiple objectives have multiple observation points, multiobjective net-
works or transportation networks and multiobjective dynamic programming.
Here we shall not go into these areas but adhere to standard methods.
Thus far we have outlined our interest here as being in deterministic contin-
uous multiobjective optimization with a single decision maker. This definition
still contains two broad areas, namely linear and nonlinear cases. Because lin-
ear programming utilizes the special characteristics of the problem, its methods
are not usually applicable to nonlinear problems. Further, linear multiobjective
optimization theory and methodology have been extensively treated in the lit-
erature, so there is no reason to repeat them here. One of the best presentations
focusing mainly on linear problems is Steuer (1986). However, the methodol-
ogy of nonlinear multiobjective optimization has not been drawn together since
Hwang and Masud (1979) (currently out of print). One more fact to notice is
that improved computational capacity enables problems to be handled with-
out linearizations and simplifications. Finally, linear problems are a subset of
nonlinear problems and that is why nonlinear methods can be used in both
cases. For these reasons, this book concentrates on nonlinear multiobjective
optimization.
The aim here is to provide an up-to-date, self-contained and consistent
survey and review of the literature and the state of the art on nonlinear (de-
terministic) multiobjective optimization starting with basic results.
The amount of literature on multiobjective optimization is immense. The
treatment in this book is based on about 1500 publications in English printed
mainly after the year 1980. Almost 700 of them are cited and listed in the bib-
liography. This extensive list of references supplements the contents regarding
areas not covered.
Problems related to real-life applications often contain irregularities and
nonsmoothnesses. The treatment of nondifferentiable multiobjective optimiza-
tion in the literature is rather rare. For this reason we also include in this book
material about the possibilities, background, theory and methods of nondiffer-
entiable multiobjective optimization.
Theory and methods for multiobjective optimization have been developed
chiefly during the last four decades. Here we do not go into the history as
the origin and the achievements in this field of research from 1776 to 1960
are widely treated in Stadler (1979). A brief summary of the history is also
given in Gal and Hanne (1997). There it is demonstrated that multiobjective
optimization has its foundations in utility theory and economics, game theory,
mathematical research on order relations and vector norms, linear production
theory, and nonlinear programming.
Let us mention some further readings. The monographs of Chankong and
Haimes (1983b), Cohon (1978), Hwang and Masud (1979), Osyczka (1984),
Sawaragi et al. (1985), Steuer (1986) and Yu (1985) provide an extensive
overview of the area of multiobjective optimization. Further noteworthy mono-
graphs on the topic are those of Rietveld (1980), Vincke (1992) and Zeleny
(1974, 1982). A significant part of Vincke (1992) deals, however, with multiat-
tribute decision analysis. The behavioural aspects of multiobjective optimiza-
tion are mostly treated in Ringuest (1992), whereas the theoretical aspects are
extensively handled in the monographs by Jahn (1986a) and Luc (1989).
As far as this book is concerned, the contents are divided into three parts.
Part I provides the theoretical background. Chapter 1 leads into the topic and
Chapter 2 presents important notation, concepts and definitions in multiob-
jective optimization with some illustrative figures. Various theoretical aspects
appear in Chapter 3. For example, analogous optimality conditions for dif-
ferentiable and nondifferentiable problems are considered. A solid, conceptual
basis and foundation for the remainder of the book is laid. Throughout the
book we keep to problems involving only finite-dimensional Euclidean spaces.
(Dauer and Stadler (1986) provide a survey on multiobjective optimization in
infinite-dimensional spaces.)
The methodology is handled in Part II. Methods are divided into four classes
in Chapter 1 according to the role of a (single) decision maker in the solution
process. The state of the art in method development is portrayed by describing
a number of different methods accompanied by their theoretical background in
Chapters 2 to 5. For ease of comparison, all the methods are presented using a
uniform notation. The good and the weak properties of the methods are also in-
troduced with references to extensions and applications. The class of interactive
methods in Chapter 5 contains most of the methods, and it is the most exten-
sively handled. Linear problems and methods are only occasionally touched
on. In addition to describing solution methods, we introduce some implemen-
tations. In connection with every method described, some author's comments
appear in the concluding remarks. Some of the methods are depicted in more
detail and some only mentioned. Appropriate references to the literature are
always included.
Part III is Related Issues. After the presentation of a set of different so-
lution methods, some comparison is appropriate in Chapter 1. Naturally, no
absolute order of superiority can be given, but some points can be raised. A
table comparing some of the features of the interactive methods described is
included. In addition, we present brief summaries of some of the comparisons
Kaisa Miettinen
Part I
For clarity and simplicity of the treatment we assume that all the objective
functions are to be minimized. If an objective function f_i is to be maximized,
it is equivalent to minimize the function −f_i.
In what follows, whenever we refer to a multiobjective optimization prob-
lem, it is problem (2.1.1) unless stated otherwise. Finding a solution to (2.1.1)
in one way or another is called a solution process in the continuation.
First, we present some general concepts and notations. We use boldface
and superscripts for vectors, for example, x^1, and subscripts for components
of vectors, for example, x_1. All the vectors here are assumed to be column
vectors. For two vectors x and x* ∈ R^n, the notation x^T x* denotes their
scalar product and the vector inequality x ≤ x* means that x_i ≤ x_i^* for all
i = 1, ..., n. Correspondingly, x < x* stands for x_i < x_i^* for all i = 1, ..., n.
The nonnegative orthant of R^n is denoted by R^n_+; in other words, R^n_+ =
{x ∈ R^n | x_i ≥ 0 for i = 1, ..., n}. The Euclidean norm of a vector x ∈ R^n
is denoted by ||x|| = (∑_{i=1}^n x_i^2)^{1/2}. The Euclidean distance function between a
point x* and a set S is denoted by dist(x*, S) = inf_{x ∈ S} ||x* − x||. The symbol
B(x*, δ) denotes an open ball with a centre x* and a radius δ > 0, that is,
B(x*, δ) = {x ∈ R^n | ||x* − x|| < δ}. The notation int S stands for the interior of a set S.
The vectors x^i, i = 1, ..., m, are linearly independent if the only weighting
coefficients β_i for which ∑_{i=1}^m β_i x^i = 0 are β_i = 0, i = 1, ..., m. The sum
∑_{i=1}^m β_i x^i is called a convex combination of the vectors x^1, x^2, ..., x^m ∈ S if
β_i ≥ 0 for all i and ∑_{i=1}^m β_i = 1. The convex hull of a set S ⊂ R^n, denoted by
conv S, is the set of all convex combinations of vectors in S.
A set S ⊂ R^n is a cone if βx = (βx_1, ..., βx_n)^T ∈ S whenever x ∈ S and
β ≥ 0. The negative of a cone is −S = {−x ∈ R^n | x ∈ S}. A cone S is said to
be pointed if it satisfies S ∩ −S = {0}. A cone −S transformed to x* ∈ R^n is
denoted by x* − S = {x ∈ R^n | x = x* + d, where d ∈ −S}.
It is said that d ∈ R^n is a feasible direction emanating from x ∈ S if there
exists α* > 0 such that x + αd ∈ S for 0 ≤ α ≤ α*.
In some connections we assume that the feasible region is formed of inequal-
ity constraints, that is, S = {x ∈ R^n | g(x) = (g_1(x), g_2(x), ..., g_m(x))^T ≤ 0}.
An inequality constraint g_j is said to be active at a point x* if g_j(x*) = 0,
and the set of active constraints at x* is denoted by J(x*) = {j ∈ {1, ..., m} |
g_j(x*) = 0}.
Different types of multiobjective optimization problems can be defined.
Definition 2.1.1. When all the objective functions and the constraint func-
tions forming the feasible region are linear, then the multiobjective optimiza-
tion problem is called linear. In brief, it is an MOLP (multiobjective linear
programming) problem.
If at least one of the objective or the constraint functions is nonlinear, the
problem is called a nonlinear multiobjective optimization problem.
'≥' by '≤' to get the definition for pseudoconcave functions. Notice that if a
function f_i is quasiconvex, all of its level sets {x ∈ R^n | f_i(x) ≤ α} are convex,
and if it is quasiconcave, all of its level sets {x ∈ R^n | f_i(x) ≥ α} are convex
(see, for example, Mangasarian (1969, pp. 133-134)).
Sometimes we also need strict definitions.
where ∇f_i(x*) is the gradient, the symmetric n×n matrix ∇²f_i(x*) is the Hessian
matrix of f_i at x*, and ε(x*, d) → 0 as ||d|| → 0. The Hessian matrix of a
twice-differentiable function consists of the second-order partial derivatives
∂²f_i(x*)/(∂x_j ∂x_l) (j, l = 1, ..., n).
∂f_i(x*) = conv{ξ ∈ R^n | ξ = lim_{l→∞} ∇f_i(x^l), where x^l → x* and x^l ∈ R^n \ Ω}.
Definition 2.2.1. A decision vector x* ∈ S is Pareto optimal if there does not
exist another decision vector x ∈ S such that f_i(x) ≤ f_i(x*) for all i = 1, ..., k
and f_j(x) < f_j(x*) for at least one index j.
An objective vector z* ∈ Z is Pareto optimal if there does not exist another
objective vector z ∈ Z such that z_i ≤ z_i^* for all i = 1, ..., k and z_j < z_j^* for
at least one index j; or equivalently, z* is Pareto optimal if the decision vector
corresponding to it is Pareto optimal.
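As an illustrative sketch of the dominance comparison in Definition 2.2.1 (the names and data below are hypothetical, not taken from the text), the Pareto optimal vectors of a finite sample of objective vectors can be picked out by a direct pairwise test; all objectives are assumed to be minimized.

```python
import numpy as np

def pareto_filter(Z):
    """Return a boolean mask of the Pareto optimal rows of Z.

    Z is an (N, k) array of objective vectors (all objectives minimized).
    A row z* is kept if no other row z satisfies z <= z* componentwise
    with strict inequality in at least one component (Definition 2.2.1)."""
    Z = np.asarray(Z, dtype=float)
    keep = np.ones(len(Z), dtype=bool)
    for i, z_star in enumerate(Z):
        dominated_by = np.all(Z <= z_star, axis=1) & np.any(Z < z_star, axis=1)
        if dominated_by.any():
            keep[i] = False
    return keep

# Example: three objective vectors; the third is dominated by the first.
Z = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.5]])
print(pareto_filter(Z))   # [ True  True False]
```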
Figure 2.2.1. The sets S and Z and the Pareto optimal objective vectors.
Proof. Let x* ∈ S be locally Pareto optimal. Thus there exist some δ > 0 and
a neighbourhood B(x*, δ) of x* such that there is no x ∈ S ∩ B(x*, δ) for which
f_i(x) ≤ f_i(x*) for all i = 1, ..., k and f_j(x) < f_j(x*) for at least one index j.
Let us assume that x* is not globally Pareto optimal. In this case, there
exists some other point x^0 ∈ S such that
(2.2.1) f_i(x^0) ≤ f_i(x*) for all i = 1, ..., k and f_j(x^0) < f_j(x*) for some j.
Let us define x̄ = βx^0 + (1 − β)x*, where 0 < β < 1 is selected such that
x̄ ∈ B(x*, δ). The convexity of S implies that x̄ ∈ S.
By the convexity of the objective functions and employing (2.2.1), we obtain
f_i(x̄) ≤ βf_i(x^0) + (1 − β)f_i(x*) ≤ βf_i(x*) + (1 − β)f_i(x*) = f_i(x*) for every
i = 1, ..., k. Because x* is locally Pareto optimal and x̄ ∈ B(x*, δ), we must
have f_i(x̄) = f_i(x*) for all i.
Further, f_i(x*) ≤ βf_i(x^0) + (1 − β)f_i(x*) for every i = 1, ..., k. Because
β > 0, we can divide by it and obtain f_i(x*) ≤ f_i(x^0) for all i. According
to assumption (2.2.1), we have f_j(x*) > f_j(x^0) for some j. Here we have a
contradiction. Thus, x* is globally Pareto optimal. □
Proof. Let x* ∈ S be locally Pareto optimal. Thus there exist some δ > 0 and
a neighbourhood B(x*, δ) of x* such that there is no x ∈ S ∩ B(x*, δ) for which
f_i(x) ≤ f_i(x*) for all i = 1, ..., k and f_j(x) < f_j(x*) for at least one index j.
Let us assume that x* is not globally Pareto optimal. In this case, there
exists some other point x^0 ∈ S such that
(2.2.2) f_i(x^0) ≤ f_i(x*) for all i = 1, ..., k and f_j(x^0) < f_j(x*) for some j.
Let us define x̄ = βx^0 + (1 − β)x*, where 0 < β < 1 is selected such that
x̄ ∈ B(x*, δ). The convexity of S implies that x̄ ∈ S.
Employing (2.2.2) and the quasiconvexity of the objective functions,
respectively, for each index i such that f_i(x^0) = f_i(x*), we obtain
and for each index j such that f_j(x^0) < f_j(x*), we have
For the sake of brevity, we shall usually speak only about Pareto optimality
in the sequel. In practice, however, we only have locally Pareto optimal solu-
tions computationally available, unless some additional requirement, such as
convexity, is fulfilled.
Usually, we are interested in Pareto optimal solutions and can forget the
other feasible solutions. Exceptions to this practice are problems where one of
the objective functions is an approximation of an unknown function or there
are underlying unexpressed objective functions involved. Then, the real Pareto
optimal set is unknown.
According to the definition of Pareto optimality, moving from one Pareto
optimal solution to another necessitates trading off. This is one of the basic
concepts in multiobjective optimization. Let us, however, mention that the
idea of trading off can be called into question, as suggested, for example, in
Zeleny (1997). It is perhaps not always necessary to trade off in order to attain
improved results. One can argue that it has been possible to produce things
both at lower cost and with higher quality. Changing the way of approaching the
problem and its formulation may produce better results than simply trading off
in the old formulation. (This can also be regarded as an example of expanding
habitual domains, to be introduced in Section 2.3.) Zeleny goes so far as to
claim that trade-offs are properties of inadequately designed systems. For
that reason one can claim that we should aim at designing systems better.
Let us for a while investigate the ranges of the set of Pareto optimal so-
lutions. We assume that the objective functions are bounded over the feasible
region S.
minimize f_i(x)
subject to x ∈ S
for i = 1, ..., k.
It is obvious that if the ideal objective vector were feasible (that is, z* ∈ Z),
it would be the solution of the multiobjective optimization problem (and the
Pareto optimal set would be reduced to it). This is not possible in general since
there is some conflict among the objectives. Even though the ideal objective
vector is not attainable, it can be considered a reference point, something to go
for. From the ideal objective vector we obtain the lower bounds of the Pareto
optimal set for each objective function.
Note that in practice some caution is in order with nonconvex problems.
The definition of the ideal objective vector assumes that we know the global
minima of the individual objective functions. Guaranteeing global optimality
in numerical calculations is not that simple. This must be kept in mind with
practical problems. Properties of ideal objective vectors, for example, their
uniqueness, are treated in Skulimowski (1992).
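As a minimal sketch of how the components of the ideal objective vector can be computed, each objective is minimized separately over the feasible region with a standard nonlinear programming routine. The bi-objective problem below is hypothetical, and a local solver is used, so the caveat about global optimality raised above applies.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem:
#   minimize f1(x) = x1^2 + x2^2  and  f2(x) = (x1 - 2)^2 + x2^2
#   subject to the box constraints 0 <= x1, x2 <= 3.
objectives = [
    lambda x: x[0]**2 + x[1]**2,
    lambda x: (x[0] - 2.0)**2 + x[1]**2,
]
bounds = [(0.0, 3.0), (0.0, 3.0)]
x0 = np.array([1.0, 1.0])

# Component i of the ideal objective vector z* is the minimum of f_i over S;
# a local solver only guarantees a local minimum.
z_star = np.array([minimize(f, x0, bounds=bounds).fun for f in objectives])
print("ideal objective vector:", z_star)   # approximately [0.0, 0.0]
```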
Sometimes we also need a vector that is strictly better than, in other words,
strictly dominates, every Pareto optimal solution. Such a vector, often called a
utopian objective vector z**, can be obtained by setting z_i** = z_i* − ε_i
for all i = 1, ..., k, where z_i* is a component of the ideal objective vector and
ε_i > 0 is a relatively small but computationally significant scalar.
The upper bounds of the Pareto optimal set, that is, the components of
a nadir objective vector z^nad, are much more
difficult to obtain. However, they can be estimated from a payoff table.
A payoff table is formed by using the decision vectors obtained when calcu-
lating the ideal objective vector. Row i of the payoff table displays the values of
all the objective functions calculated at the point where f_i obtained its minimal
value. Hence, z_i* is on the main diagonal of the table. The maximal value of
column i in the payoff table can be selected as an estimate of the upper bound
of the objective f_i for i = 1, ..., k over the Pareto optimal set.
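A hedged sketch of the payoff table construction just described, reusing the hypothetical bi-objective problem from the previous example: row i contains the values of all objectives at a minimizer of f_i, the diagonal estimates the ideal objective vector, and the column-wise maxima give the (possibly inaccurate) nadir estimate.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem data (same as in the earlier sketch).
objectives = [
    lambda x: x[0]**2 + x[1]**2,
    lambda x: (x[0] - 2.0)**2 + x[1]**2,
]
bounds = [(0.0, 3.0), (0.0, 3.0)]
x0 = np.array([1.0, 1.0])

k = len(objectives)
payoff = np.empty((k, k))
for i, fi in enumerate(objectives):
    xi = minimize(fi, x0, bounds=bounds).x      # minimizer of objective i
    payoff[i] = [f(xi) for f in objectives]     # row i of the payoff table

z_ideal = np.diag(payoff)               # lower bounds of the Pareto optimal set
z_nadir_estimate = payoff.max(axis=0)   # estimated upper bounds (may be off)
print(payoff, z_ideal, z_nadir_estimate, sep="\n")
```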
The black points in Figure 2.4.1 represent ideal objective vectors, and the
grey ones are nadir objective vectors. The nadir objective vector may be feasible
[Figure 2.4.1: ideal (black) and nadir (grey) objective vectors in the (z_1, z_2) plane.]
Note that the objective vectors in the rows of the payoff table are Pareto
optimal if they are unique. In other words, if the individual objective functions
have alternative optima, the obtained objective vector may not be Pareto op-
timal. This fact can weaken the approach and it can happen in linear as well
as in nonlinear problems.
It is important to note that the estimates based on the payoff table are
not necessarily equal to the real components of the nadir objective vector as
demonstrated, for example, in Korhonen et al. (1997) and Weistroffer (1985).
Instead of being correct, the estimated nadir objective value may be either
far too low or too high.
The difference between the complete Pareto optimal set and the subset of
the Pareto optimal set bounded by the ideal objective vector and the upper
bounds obtained from the payoff table in linear cases is explored in Reeves and
Reid (1988). It is proposed that relaxing (i.e., increasing) the approximated up-
per bounds by a relatively small tolerance should improve the approximation,
although it is ad hoc in nature. However, small tolerances may not necessar-
ily help because the error between the correct and the approximated nadir
objective value may be significant.
For nonlinear problems, there is no constructive method for calculating the
nadir objective vector. That is why we here mention some treatments for MOLP
problems. Isermann and Steuer (1988) include an examination of how many of
the Pareto optimal extreme solutions of some MOLP problems are above the
upper bounds obtained from the payoff table. Three methods for determining
the exact nadir objective vector in a linear case are also suggested. None of
them is especially economical computationally. In Dessouky et al. (1986), three
heuristics are presented for calculating the nadir objective vector when the
are defined for every i = 1, ..., k, and finally each objective function is multi-
plied by K_i.
A simple alternative for normalizing the objective function values is to di-
vide each objective function by its (nonzero) ideal objective value. This has
been suggested, for example, in Osyczka (1984, 1992). This is not as exact as
the previous methods but does not necessitate information about the nadir
objective vector.
It is usually advisable to use normalized objective values only in calcu-
lations and to display restored objective values in the original scales to the
decision maker. In this way the different scales do not confuse computation
and significant objective values are offered to the decision maker.
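A minimal sketch of such a normalization, assuming estimates of the ideal and nadir objective vectors are available; the function names are illustrative only.

```python
import numpy as np

def normalize(z, z_ideal, z_nadir):
    """Map objective values roughly into [0, 1] using the ideal and nadir vectors.

    Assumes every nadir component is strictly larger than the ideal one."""
    return (np.asarray(z) - z_ideal) / (np.asarray(z_nadir) - z_ideal)

def restore(z_norm, z_ideal, z_nadir):
    """Return normalized values to the original scales for the decision maker."""
    return z_ideal + np.asarray(z_norm) * (np.asarray(z_nadir) - z_ideal)

z_ideal = np.array([0.0, 0.0])
z_nadir = np.array([4.0, 4.0])
print(normalize([1.0, 3.0], z_ideal, z_nadir))   # [0.25 0.75]
print(restore([0.25, 0.75], z_ideal, z_nadir))   # [1. 3.]
```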
It is possible that (some) objective functions are unbounded, for instance,
from below. In this case some caution is in order. In multiobjective optimiza-
tion problems this does not necessarily mean that the problem is formulated
incorrectly. There may still exist Pareto optimal solutions. However, if, for in-
stance, some component of the ideal objective vector is unbounded and it is
replaced by a small but finite number, methods utilizing the ideal objective
vector may not be able to overcome the replacement.
Finally, let us look at some examples of the problem of optimizing a function
over the Pareto optimal set of a multiobjective optimization problem. This is a
more general problem than just looking for the ranges of the Pareto optimal set.
In Benson and Sayin (1994), the authors deal with the maximization of a linear
function over the Pareto optimal set of an MOLP problem. A general function
is minimized over the Pareto optimal set of an MOLP problem in Dauer and
Fosnaugh (1995), and a convex function is optimized over the Pareto optimal set
of linear objective functions and a convex feasible region by duality techniques
in Thach et al. (1996). Maximization of a function over the Pareto optimal set
is also considered in Horst and Thoai (1997).
The bold line in Figure 2.5.1 represents the set of weakly Pareto optimal
objective vectors. The fact that the Pareto optimal set is a subset of the weakly
Pareto optimal set can also be seen in the figure. The Pareto optimal objective
vectors are situated along the line between the dots.
Similarly to Pareto optimality, local weak Pareto optimality can be defined
in addition to the global weak Pareto optimality of Definition 2.5.1. It must
still be kept in mind that usually only locally weakly Pareto optimal solutions
are computationally available. Nevertheless, for the sake of brevity, we shall
usually refer only to weak Pareto optimality.
Let us state as a curiosity that if the feasible region is convex and the objec-
tive functions are quasiconvex with at least one strictly quasiconvex function,
the set of locally Pareto optimal solutions is a subset of the set of weakly Pareto
[Figure 2.5.1: the feasible objective region in the (z_1, z_2) plane with the weakly Pareto optimal set (bold line) and the Pareto optimal set marked.]
It is often assumed that the decision maker makes decisions on the basis of
an underlying function of some kind. This function is called a value function.
Let z^1 and z^2 ∈ Z be two different objective vectors. If U(z^1) > U(z^2), then
the decision maker prefers z^1 to z^2. If U(z^1) = U(z^2), then the decision maker
finds the objective vectors equally desirable, that is, they are indifferent.
It must be pointed out that the value function is an entirely decision maker-
dependent concept. Different decision makers may have different value functions
for the same problem.
Sometimes the term utility function is used instead of the value function.
Here we follow the common way of referring to value functions in deterministic
problems. The term utility function is reserved for stochastic problems (not to
be handled here). See Keeney and Raiffa (1976) for a more extended discussion
of both terms.
If we had at our disposal the mathematical expression of the decision
maker's value function, it would be easy to solve the multiobjective optimiza-
tion problem. The value function would simply be maximized by some method
of single objective optimization. The value function would offer a total (com-
plete) ordering of the objective vectors. However, there are several reasons why
this seemingly easy way is not generally used in practice. The most important
reason is that it is extremely difficult, if not impossible, for a decision maker
to specify mathematically the function behind her or his preferences. Secondly,
even if the function were known, it could be difficult to optimize because of its
possible complicated nature. An example of such situations is the nonconcavity
of the value function. In this case, only a local maximum may be found instead
of the global one. In addition, as pointed out in Steuer and Gardiner (1991), it
is not necessarily all to the good that optimizing the value function results in
a single solution. After specifying the value function, the decision maker may
have doubts about its validity. This is why (s)he may want to explore different
alternatives before selecting the final solution.
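For illustration only, the following sketch assumes a hypothetical explicit (linear) value function U and maximizes U(f(x)) over the feasible region with a single objective solver; as discussed above, such an explicit U is rarely available in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem and an assumed linear value function.
f = lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2 + x[1]**2])
U = lambda z: -(0.7 * z[0] + 0.3 * z[1])     # assumed value function (to be maximized)
bounds = [(0.0, 3.0), (0.0, 3.0)]

# Maximize U by minimizing -U(f(x)) with a standard single objective routine.
res = minimize(lambda x: -U(f(x)), x0=np.array([1.0, 1.0]), bounds=bounds)
print(res.x, f(res.x))   # the single solution implied by this particular U
```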
One more thing to keep in mind about value functions is that their existence
is not necessarily guaranteed. At least it may be restricting to assume that a
fixed and stable function can explain the behaviour and the preferences of the
decision maker.
Even though value functions are seldom explicitly used in solving multi-
objective optimization problems, they are very important in the development
of solution methods and as a theoretical background. In many multiobjective
optimization methods, the value function is assumed to be known implicitly
and the decision maker is assumed to make selections on this basis. In several
Different properties and forms of value functions are widely treated in Hem-
ming (1978). Some references handling the existence of value functions are listed
in Stadler (1979) where different value functions are also presented.
The way a final solution was earlier defined means that a solution is final if
it maximizes the decision maker's value function. Sometimes another concept,
that of the satisficing solution, is distinguished.
Satisficing solutions are connected with so-called satisficing decision making.
Satisficing decision making means that the decision maker does not intend to
maximize any general value function but tries to achieve certain aspirations.
A solution which satisfies all the aspirations of the decision maker is called
a satisficing solution. In the most extreme case, one can define a solution to
be satisficing independent of whether it is Pareto optimal or not. Here we,
however, always assume that a satisficing solution is Pareto optimal or at least
weakly Pareto optimal.
2.7. Efficiency
of ordering cones, see Sawaragi et al. (1985, pp. 25-31) and Yu (1974, 1985,
pp. 163-209).
Theorem 2.6.2 gives a relationship between Pareto optimal solutions and
value functions. Relations can also be established between efficient solutions and
value functions. To give an idea of them, let us consider a pseudoconcave value
function U. According to pseudoconcavity, whenever ∇U(z^1)^T (z^2 − z^1) ≤ 0, we
have U(z^2) ≤ U(z^1). We can now define an ordering cone as a map D(z) =
{d ∈ R^k | ∇U(z)^T d ≤ 0}. This ordering cone can be used to determine efficient
solutions. Note that if we have a value function, we can derive its domination
structure, but not generally vice versa. See Yu (1974) for an example.
Weakly efficient decision and objective vectors can be defined in a corre-
sponding fashion to efficient ones. If the set Z of objective vectors is ordered
by an ordering cone D, weakly efficient vectors may be characterized in the
following way (see Jahn (1987) and Wierzbicki (1986b)):
2.8.1. Trade-Offs
Note that in the case of two objective functions there is no difference be-
tween partial and total trade-offs. If partial trade-offs are presented to the de-
cision maker, (s)he can compare changes in two objective functions at a time.
This is usually a more comfortable procedure than comparing several objec-
tives. If the points x^1 and x^2 are Pareto optimal, then there always exist some
objective functions f_i and f_j for which the trade-off is negative. A concept
related to the trade-off is the trade-off rate.
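As an illustrative sketch of the ratio-of-changes idea behind trade-offs (an illustration, not a reproduction of the formal definition), the following fragment computes all pairwise ratios of objective changes between two given objective vectors.

```python
import numpy as np

def pairwise_tradeoffs(z1, z2):
    """Ratios of objective changes when moving from objective vector z1 to z2.

    Entry (i, j) is (z1_i - z2_i) / (z1_j - z2_j), i.e. the change in objective i
    per unit change in objective j; entries where objective j does not change
    are left as NaN."""
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    di = (z1 - z2)[:, None]      # change in objective i (rows)
    dj = (z1 - z2)[None, :]      # change in objective j (columns)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(dj != 0.0, di / dj, np.nan)

# Two Pareto optimal objective vectors of a bi-objective problem: the
# off-diagonal entries are negative, reflecting that improving one objective
# worsens the other.
print(pairwise_tradeoffs([1.0, 3.0], [2.0, 1.0]))
```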
Differing from the idea of the definitions above, a so-called global trade-
off is defined in Kaliszewski and Michalowski (1995, 1997). A global trade-off
involves two objective functions and one decision vector which does not have to
be Pareto optimal. It is the largest pairwise trade-off of two objective functions
for one decision vector. Let us consider x* E S and modify the definitions for
minimization problems. We define a subset of the feasible decision vectors in
the form
It is said that two feasible solutions are situated on the same indifference
curve (or isopreference curve) if the decision maker finds them equally desir-
able, that is, neither of them is preferred to the other one. This means that
indifference curves are contours of the underlying value function. There may
also be a 'wider' indifference band. In this case we do not have any well-defined
boundary between preferences, but a band where indifference occurs. This con-
cept is studied in Passy and Levanon (1984).
For any two solutions on the same indifference curve there is a trade-off
involving a certain increment in the value of one objective function (f_j) that
the decision maker is willing to tolerate in exchange for a certain amount of
decrement in some other objective function (f_i) while the preferences of the
two solutions remain the same. This is called the marginal rate of substitution.
This kind of trading between different solutions is characteristic of multiobjec-
tive optimization problems when moving from one Pareto optimal solution to
another. The marginal rate of substitution (sometimes also called indifference
trade-off) is the negative of the slope of the tangent to the indifference curve
at a certain point.
Note that in the definition the starting and the resulting objective vectors lie
on the same indifference curve and i, j = 1, ..., k, i ≠ j.
It can be stated that the final solution of a multiobjective optimization
problem is a Pareto optimal solution where the indifference curve is tangent to
the Pareto optimal set. This tangency condition means finding an indifference
curve intersecting the feasible objective region that is farthest to the southwest.
This property is illustrated in Figure 2.8.1.
[Figure 2.8.1: an indifference curve tangent to the Pareto optimal set in the objective space.]
m_ij(x*) = (∂U(f(x*))/∂f_j) / (∂U(f(x*))/∂f_i).
If the Pareto optimal set is smooth (that is, at every Pareto optimal point
there exists a unique tangent), we have the following result. When one examines
the definition of a trade-off rate at some point, one sees that it is the slope of
the tangent of the Pareto optimal set at that point. We can also define that
when a Pareto optimal solution is a final solution, then the tangents of the
indifference curve and the Pareto optimal set coincide at it, that is,
Kuhn and Tucker were the first to note that some of the Pareto optimal solu-
tions had undesirable properties (see Kuhn and Tucker (1951)). To avoid such
properties, they introduced properly Pareto optimal solutions and suggested
that Pareto optimal solutions be divided into properly and improperly Pareto
optimal ones. The idea of properly Pareto optimal solutions is that unbounded
trade-offs between objectives are not allowed. Practically, a properly Pareto op-
timal solution with very high or very low trade-offs does not essentially differ
from a weakly Pareto optimal solution for a human decision maker.
There exist several definitions for proper Pareto optimality. The idea is
easiest to understand from the following definition.
Note that this definition differs from that of Pareto optimality in that a
larger set R^k_ε is used instead of the set R^k_+. The set of ε-properly Pareto optimal
solutions is depicted in Figure 2.9.1 and denoted by a bold line. The solutions
are obtained by intersecting the feasible objective region with a blunt cone.
The end points of the Pareto optimal set, z^1 and z^2, have also been marked to
ease the comparison.
An alternative formulation of Definition 2.9.2 is that a decision vector x* ∈
S and the corresponding z* ∈ Z are ε-properly Pareto optimal if (z* − R^k_ε) ∩ Z =
{z*}. (The definition can be generalized into proper efficiency by using a convex
cone D such that R^k_+ ⊂ int D ∪ {0}.)
[Figure 2.9.1: the sets z* − R^k_+ and z* − R^k_ε in the (z_1, z_2) plane; the ε-properly Pareto optimal set is obtained by intersecting Z with the blunt cone.]
and
∇g_l(x*)^T d ≤ 0
for all l satisfying g_l(x*) = 0, that is, for all active constraints at x*.
An objective vector z* E Z is properly Pareto optimal if the decision vector
corresponding to it is properly Pareto optimal.
Kuhn and Tucker also derived necessary and sufficient conditions for proper
Pareto optimality in Kuhn and Tucker (1951). Those conditions will be pre-
sented in the next section.
A comparison of the definitions of Kuhn and Tucker and Geoffrion is pre-
sented in Geoffrion (1968). For example, in convex cases the definition of Kuhn
and Tucker implies the definition of Geoffrion. The reverse result is valid if
the so-called Kuhn-Tucker constraint qualification (see Definition 3.1.3) is sat-
isfied. The relationships of these two definitions are also treated, for example,
in Sawaragi et al. (1985, pp. 42-46). Several practical examples are given in
Tamura and Arai (1982) to illustrate the fact that properly Pareto optimal so-
lutions according to the definitions of Kuhn and Tucker and Geoffrion (and one
more definition by Klinger; see Klinger (1967)) are not necessarily consistent.
Conditions under which (local) proper Pareto optimality in the sense of Kuhn
and Tucker implies (local) proper Pareto optimality in the sense of Geoffrion
are proved as well. More mathematical results concerning the properties and
the relationships of the definitions of Kuhn and Tucker, Geoffrion and Klinger
are given in White (1983a).
Borwein (1977) and Benson (1979a) have both defined proper efficiency
when a closed, convex cone D is used as an ordering cone. Borwein's definition
is based on tangent cones and Benson's on so-called projecting cones. Let us
mention that proper efficiency according to Benson's definition implies proper
efficiency in the sense of Borwein. (The reverse is valid in convex cases.) These
two definitions are generalized in Henig (1982b) using convex ordering cones.
The ordering cone D used in defining efficiency is utilized in the following,
Let us have a look at how the Pareto optimality of feasible decision vectors
can be tested. The procedures presented can also be used to find an initial
Pareto optimal solution for (interactive) solution methods or to examine the
existence of Pareto optimal and properly Pareto optimal solutions.
Specific results for MOLP problems are presented in Ecker and Kouada
(1975). They are generalized for nonlinear problems with the help of duality
theory in Wendell and Lee (1977). The treatment is based on an auxiliary
problem
(2.10.1)    minimize ∑_{i=1}^k f_i(x)
            subject to f_i(x) ≤ f_i(x̂) for all i = 1, ..., k,
                       x ∈ S,
where x̂ is any vector in S. Let us denote the optimal objective function value
by φ(x̂).
Theorem 2.10.1 means that if problem (2.10.1) has an optimal solution for
some x̂ ∈ S, then either x̂ is Pareto optimal or the optimal solution of (2.10.1)
is.
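A hedged sketch of carrying out the test based on problem (2.10.1) numerically for a given x̂, again on a hypothetical bi-objective problem with a box-shaped feasible region; a local nonlinear programming solver is used, so the usual caveat about global optimality applies.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical problem data.
objectives = [
    lambda x: x[0]**2 + x[1]**2,
    lambda x: (x[0] - 2.0)**2 + x[1]**2,
]
bounds = [(0.0, 3.0), (0.0, 3.0)]

def pareto_test(x_hat):
    """Solve (2.10.1): minimize the sum of objectives subject to f_i(x) <= f_i(x_hat)."""
    f_hat = np.array([f(x_hat) for f in objectives])
    cons = NonlinearConstraint(lambda x: np.array([f(x) for f in objectives]),
                               lb=-np.inf, ub=f_hat)
    res = minimize(lambda x: sum(f(x) for f in objectives),
                   x0=np.asarray(x_hat, dtype=float),
                   bounds=bounds, constraints=[cons])
    return res.x, res.fun

# x_hat = (1, 1) is not Pareto optimal here: the solver improves both objectives.
x_opt, phi = pareto_test([1.0, 1.0])
print(x_opt, phi)   # a candidate Pareto optimal point and the optimal value of (2.10.1)
```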
When studying the (primal) problem (2.10.1) and its dual, a duality gap is
said to occur if the optimal value of the primal problem is not equal to
the optimal value of the dual problem.
where both x ∈ R^n and ε ∈ R^k_+ are variables. Then the following results are
valid.
(1) The vector x* is Pareto optimal if and only if problem (2.10.2) has an
optimal objective function value of zero.
(2) If problem (2.10.2) has a finite nonzero optimal objective function value
obtained at a point x, then x is Pareto optimal.
(3) If the multiobjective optimization problem is convex and if problem
(2.10.2) does not have a finite optimal objective function value, then
the set of properly Pareto optimal solutions is empty.
Proof. See Benson (1978) or Chankong and Haimes (1983b, pp. 151-152).
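The following sketch illustrates a Benson-type test in the spirit of problem (2.10.2); the precise formulation in the text is not reproduced here, so the form used below (maximize the sum of nonnegative slacks ε_i subject to f_i(x) + ε_i = f_i(x*) and x ∈ S) should be read as one common variant. A zero optimal value is then consistent with result (1) and a positive value with result (2); the problem data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical bi-objective problem with a box-shaped feasible region.
objectives = [
    lambda x: x[0]**2 + x[1]**2,
    lambda x: (x[0] - 2.0)**2 + x[1]**2,
]
bounds = [(0.0, 3.0), (0.0, 3.0)]
k, n = len(objectives), 2

def benson_test(x_star):
    """Maximize the total slack eps subject to f_i(x) + eps_i = f_i(x_star), x in S."""
    f_star = np.array([f(x_star) for f in objectives])
    cons = NonlinearConstraint(
        lambda y: np.array([objectives[i](y[:n]) + y[n + i] for i in range(k)]),
        lb=f_star, ub=f_star)                      # equality constraints
    y0 = np.concatenate([x_star, np.zeros(k)])     # variables y = (x, eps)
    res = minimize(lambda y: -np.sum(y[n:]), y0,
                   bounds=bounds + [(0.0, None)] * k, constraints=[cons])
    return -res.fun                                # total slack at the optimum

print(benson_test(np.array([1.0, 1.0])))   # positive: (1, 1) is not Pareto optimal
print(benson_test(np.array([1.0, 0.0])))   # approximately zero: (1, 0) is Pareto optimal
```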
(3.1.1)    minimize {f_1(x), f_2(x), ..., f_k(x)}
           subject to x ∈ S = {x ∈ R^n | g(x) = (g_1(x), g_2(x), ..., g_m(x))^T ≤ 0}.
We denote the set of active constraints at a point x* by J(x*) = {j ∈ {1, ..., m} | g_j(x*) = 0}.
We assume in this section that the objective and the constraint functions are
continuously differentiable. In Section 3.2 we treat nondifferentiable functions.
Similar optimality results are also handled, for example, in Da Cunha and
Polak (1967), Kuhn and Tucker (1951), Marusciac (1982), Simon (1986) and
Yu (1985, pp. 35-38, 49-50). In order to highlight the ideas, the theorems are
here presented in a simplified form as compared to the general practice. For
this reason, the proofs have been modified.
Theorem 3.1.1. (Fritz John necessary condition for Pareto optimality) Let
the objective and the constraint functions of problem (3.1.1) be continuously
differentiable at a decision vector x* ∈ S. A necessary condition for x* to be
Pareto optimal is that there exist vectors 0 ≤ λ ∈ R^k and 0 ≤ μ ∈ R^m for
which (λ, μ) ≠ (0, 0) such that
(1) ∑_{i=1}^k λ_i ∇f_i(x*) + ∑_{j=1}^m μ_j ∇g_j(x*) = 0
(2) μ_j g_j(x*) = 0 for all j = 1, ..., m.
We do not present the proof here because it is quite extensive. The theorem
can be considered a special case of the corresponding theorem for nondifferen-
tiable problems, which is proved in Subsection 3.2.1. For convex problems, nec-
essary optimality conditions can be derived by using separating hyperplanes.
This is realized, for example, in Zadeh (1963). A separation theorem is also
employed in the proof of the general case in Da Cunha and Polak (1967).
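A small numerical sketch of checking condition (1) of Theorem 3.1.1 at a given point: once the gradients of the objectives and of the active constraints are collected into arrays, the existence of nonnegative multipliers (normalized here to sum to one) that balance the gradients is a linear feasibility problem. The example data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fritz_john_holds(grad_f, grad_g_active):
    """Check condition (1) of Theorem 3.1.1 numerically.

    grad_f: (k, n) array of objective gradients at x*.
    grad_g_active: (p, n) array of gradients of the active constraints at x*.
    We look for multipliers lambda >= 0, mu >= 0, normalized to sum to one,
    such that the weighted gradients sum to zero."""
    G = np.vstack([grad_f, grad_g_active]) if len(grad_g_active) else np.asarray(grad_f)
    m, n = G.shape
    A_eq = np.vstack([G.T, np.ones((1, m))])       # gradient balance + normalization
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success

# Hypothetical check: for f1 = x1^2 + x2^2 and f2 = (x1 - 2)^2 + x2^2 with S = R^2,
# the gradients at x* = (1, 0) are (2, 0) and (-2, 0); they balance with
# lambda = (1/2, 1/2), so the condition holds.
print(fritz_john_holds(np.array([[2.0, 0.0], [-2.0, 0.0]]), np.empty((0, 2))))  # True
```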
Corollary 3.1.2. (Fritz John necessary condition for weak Pareto optimality)
The condition of Theorem 3.1.1 is also necessary for a decision vector x* E S
to be weakly Pareto optimal.
The difference between Fritz John type and Karush-Kuhn-Tucker type op-
timality conditions in single objective optimization is that the multiplier (λ) of
the objective function is assumed to be positive in the latter case. This elimi-
nates degeneracy since it implies that the objective function plays its important
role in the optimality conditions. To guarantee the positivity of λ, some regu-
larity has to be assumed in the problem. Different regularity conditions exist
and they are called constraint qualifications.
In the multiobjective case it is equally important that all the multipliers
of the objective functions are not equal to zero. Sometimes the multipliers
connected to Karush-Kuhn-Tucker optimality conditions are called Karush-
Kuhn-Tucker multipliers. This concept will be used later.
In order to present the Karush-Kuhn-Tucker optimality conditions we must
formulate some constraint qualification. From among several different alterna-
tives we here present the so-called Kuhn-Tucker constraint qualification.
α(0) = x*, g(α(t)) ≤ 0 for all 0 ≤ t ≤ 1 and α'(0) = ad.
Ax < 0,  Cx ≤ 0
Proof. Let x* ∈ S be Pareto optimal. The idea of this proof is to apply
Theorem 3.1.4. For this reason we prove that there does not exist any d ∈ R^n
such that
(3.1.2) ∇f_i(x*)^T d < 0 for all i = 1, ..., k, and
        ∇g_j(x*)^T d ≤ 0 for all j ∈ J(x*).
After utilizing the assumption ∇f_i(x*)^T d* < 0 for all i = 1, ..., k (and
t ≥ 0), we have f_i(α(t)) < f_i(x*) for all i = 1, ..., k for a sufficiently small t.
This contradicts the Pareto optimality of x*.
Thus we have proved statement (3.1.2). Now we conclude from Theorem
3.1.4 that there exist multipliers λ_i ≥ 0 for i = 1, ..., k, λ ≠ 0, and μ_j ≥ 0
for j ∈ J(x*) such that ∑_{i=1}^k λ_i ∇f_i(x*) + ∑_{j∈J(x*)} μ_j ∇g_j(x*) = 0. We obtain
statement (1) of Theorem 3.1.1 by setting μ_j = 0 for all j ∈ {1, ..., m} \ J(x*).
If g_j(x*) < 0 for some j = 1, ..., m, then according to the above setting
μ_j = 0 and equalities (2) of Theorem 3.1.1 follow. □
Proof. Let the vectors λ and μ be such that the conditions stated are satisfied.
We define a function F: R^n → R as F(x) = ∑_{i=1}^k λ_i f_i(x), where x ∈ S.
Trivially F is convex because all the functions f_i are and we have λ > 0.
Now from statements (1) and (2), we obtain ∇F(x*) + ∑_{j=1}^m μ_j ∇g_j(x*) = 0
and μ_j g_j(x*) = 0 for all j = 1, ..., m. Thus, according to Theorem 3.1.7, the
sufficient condition for F to attain its minimum at x* is satisfied. So F(x*) ≤
F(x) for all x ∈ S. In other words,
(3.1.3) ∑_{i=1}^k λ_i f_i(x*) ≤ ∑_{i=1}^k λ_i f_i(x)
they are not only nonnegative real scalars but belong to a dual cone D*, where
D* = {l ∈ R^k | l^T y ≥ 0 for all y ∈ D}. Because of the close resemblance, we
do not here handle optimality conditions separately for efficiency. For details
see, for example, Chen (1984) and Luc (1989, pp. 74-79).
Ax ≤ 0,  Ax ≠ 0,  Cx ≤ 0
We can now present the necessary condition for proper Pareto optimality.
Proof. Let x* be properly Pareto optimal (in the sense of Kuhn and Tucker).
From the definition we know that no vector d ∈ R^n exists such that ∇f_i(x*)^T d
≤ 0 for all i = 1, ..., k, ∇f_j(x*)^T d < 0 for some index j, and ∇g_l(x*)^T d ≤ 0 for
all l ∈ J(x*). Then, from Theorem 3.1.13 we know that there exist multipliers
λ_i > 0 for i = 1, ..., k and μ_j ≥ 0 for j ∈ J(x*) such that ∑_{i=1}^k λ_i ∇f_i(x*) +
∑_{j∈J(x*)} μ_j ∇g_j(x*) = 0. We obtain statement (1) by setting μ_j = 0 for all
j ∈ {1, ..., m} \ J(x*).
If g_j(x*) < 0 for some j, then according to the above setting μ_j = 0 and
equalities (2) follow. □
It is proved in Geoffrion (1968) and Sawaragi et al. (1985, p. 90) that if the
Kuhn-Tucker constraint qualification (Definition 3.1.3) is satisfied at a decision
vector x* ∈ S, then the condition in Theorem 3.1.14 is also necessary for x* to
be properly Pareto optimal in the sense of Geoffrion. Finally, we write down
the sufficient condition for proper Pareto optimality.
Proof. See Sawaragi et al. (1985, p. 90) or Shimizu et al. (1997, p. 112).
Let us finally mention that necessary and sufficient conditions for proper
Pareto optimality in the sense of Geoffrion are presented in Gulati and Islam
(1990) for pseudolinear objective (i.e., differentiable functions that are both
pseudoconvex and pseudoconcave) and quasiconvex constraint functions.
(3.2.1)    minimize {f_1(x), f_2(x), ..., f_k(x)}
           subject to x ∈ S = {x ∈ R^n | g(x) = (g_1(x), g_2(x), ..., g_m(x))^T ≤ 0}.
The two sets are equal if at least k − 1 of the functions f_i are continuously
differentiable, or if the functions are convex and the weights are positive.
Proof. See, for example, Makela and Neittaanmaki (1992, p. 39) and Clarke
(1983, pp. 38-39).
Proof. See, for example, Makela and Neittaanmaki (1992, pp. 47-49).
0 ∈ ∂f_i(x*).
If the function f_i is convex, then the condition is also sufficient and the mini-
mum is global.
Proof. See, for example, Makela and Neittaanmaki (1992, pp. 70-71).
Before moving on to the optimality conditions of the Fritz John and Karush-
Kuhn-Tucker type we should point out the following. If a single objective func-
tion is defined on a set, the counterpart of the condition in Theorem 3.2.3
says that zero belongs to the algebraic sum of two sets formed at the point
considered. The sets are the subdifferential of the objective function and the
normal cone of the feasible region. This result is adapted for convex multiobjec-
tive optimization problems involving continuous objective functions and closed
feasible regions in Plastria and Carrizosa (1996). The necessary and sufficient
condition for weak Pareto optimality is that zero belongs to the sum of the
union of the subdifferentials of the objective functions and the normal cone
of the feasible region. Note that the functions do not have to be even locally
Lipschitzian. According to Clarke (1983, pp. 230-231), the same condition is
necessary for weak Pareto optimality in general problems as well. We do not
treat these results more thoroughly here. Instead, we present one more result
for single objective nondifferentiable optimization.
Proof. See, for example, Clarke (1983, pp. 228-230) or Kiwiel (1985c, p. 16).
Theorem 3.2.5. (Fritz John necessary condition for Pareto optimality) Let
the objective and the constraint functions of problem (3.2.1) be locally Lip-
schitzian at a point x* ∈ S. A necessary condition for the point x* to be
Pareto optimal is that there exist multipliers 0 ≤ λ ∈ R^k and 0 ≤ μ ∈ R^m for
which (λ, μ) ≠ (0, 0) such that
(1) 0 ∈ ∑_{i=1}^k λ_i ∂f_i(x*) + ∑_{j=1}^m μ_j ∂g_j(x*)
(2) μ_j g_j(x*) = 0 for all j = 1, ..., m.
Proof. Because it is assumed that (λ, μ) ≠ (0, 0), we can normalize the mul-
tipliers to sum up to one. We shall here prove a stronger condition, where
∑_{i=1}^k λ_i + ∑_{j=1}^m μ_j = 1.
Let x* ∈ S be Pareto optimal. At first we define an additional function
F: R^n → R by F(x) = max{f_i(x) − f_i(x*), g_j(x) | i = 1, ..., k, j = 1, ..., m}.
We show first that
(3.2.2) F(x) ≥ 0
for all x ∈ R^n.
Let us on the contrary assume that F(x^0) < 0 for some x^0 ∈ R^n. Then
g_j(x^0) < 0 for all j = 1, ..., m and the point x^0 is thus feasible in problem
(3.2.1). In addition, f_i(x^0) < f_i(x*) for all i = 1, ..., k, which contradicts the
Pareto optimality of x*. Thus (3.2.2) must be true.
Noting that the point x* is feasible in problem (3.2.1), we obtain g(x*) ≤ 0.
This implies F(x*) = 0. Combining this fact with property (3.2.2), we know
that F attains its (global) minimum at x*. As all the functions f_i and g_j are
locally Lipschitzian at x*, so is F (according to Theorem 3.2.2). We deduce
from Theorem 3.2.3 that 0 ∈ ∂F(x*).
Note that
(3.2.3)
Employing the definition of a convex hull, we know that there exist vectors
λ and μ of real multipliers for which λ_i ≥ 0 for all i = 1, ..., k, μ_j ≥ 0 for all
j ∈ J(x*) and ∑_{i=1}^k λ_i + ∑_{j∈J(x*)} μ_j = 1, such that
Now we can set μ_j = 0 for all j ∈ {1, ..., m} \ J(x*). Statement (1) follows
from this setting.
Part (2) is trivial. If g_j(x*) < 0 for some j, then j ∈ {1, ..., m} \ J(x*) and
we have μ_j = 0. This completes the proof. □
Corollary 3.2.7. (Fritz John necessary condition for weak Pareto optimality)
The condition of Theorem 3.2.5 is also necessary for a decision vector x* E S
to be weakly Pareto optimal.
Definition 3.2.8. Let the objective and the constraint functions of problem
(3.2.1) be locally Lipschitzian at a point x* ∈ S. Problem (3.2.1) satisfies the
Cottle constraint qualification at x* if either g_j(x*) < 0 for all j = 1, ..., m or
0 ∉ conv{∂g_j(x*) | g_j(x*) = 0}.
Assuming the Cottle constraint qualification, we obtain the Karush-Kuhn-
Tucker necessary condition for Pareto optimality.
F(x*) = 0.
We continue by first assuming that g_j(x*) < 0 for all j = 1, ..., m. In this case,
F(x*) > g_j(x*) for all j. Now we can apply Theorem 3.2.2 and equation (3.2.3)
and obtain
0 ∈ conv{∂f_i(x*) | i = 1, ..., k}.
From the definition of a convex hull we know that there exists a vector
0 ≤ λ ∈ R^k
of multipliers for which ∑_{i=1}^k λ_i = 1 (thus λ ≠ 0) such that
0 ∈ ∑_{i=1}^k λ_i ∂f_i(x*).
In this case, we deduce from Theorem 3.2.2 and result (3.2.3) that
Applying the definition of a convex hull, we know that there exist multipliers
λ_i ≥ 0, i = 1, ..., k, and μ_j ≥ 0, j ∈ J(x*), for which ∑_{i=1}^k λ_i + ∑_{j∈J(x*)} μ_j =
1, and by assumption (3.2.4), especially λ ≠ 0, such that
0 ∈ ∑_{i=1}^k λ_i ∂f_i(x*) + ∑_{j∈J(x*)} μ_j ∂g_j(x*).
Corollary 3.2.10. (Karush-Kuhn-Tucker necessary condition for weak Pareto
optimality) The condition of Theorem 3.2.9 is also necessary for a decision
vector x* ∈ S to be weakly Pareto optimal.
= ∑_{i=1}^k λ_i f_i(x^0) + ∑_{j=1}^m μ_j g_j(x^0) − ∑_{i=1}^k λ_i f_i(x*) − ∑_{j=1}^m μ_j g_j(x*).
Employing assumption (2), the fact that g(x^0) ≤ 0 and μ ≥ 0, we obtain
(3.2.5) ∑_{i=1}^k λ_i f_i(x*) ≤ ∑_{i=1}^k λ_i f_i(x^0)
for any x^0 ∈ S.
Let us assume that x* is not Pareto optimal. Then there exists some feasible
x̄ such that f_i(x̄) ≤ f_i(x*) for all i = 1, ..., k and f_j(x̄) < f_j(x*) for at least one
index j. Because every λ_i was assumed to be positive, we have
∑_{i=1}^k λ_i f_i(x̄) < ∑_{i=1}^k λ_i f_i(x*). This contradicts inequality (3.2.5) and x* is
thus Pareto optimal. □
Theorem 3.2.9 and Corollary 3.2.10 can now be reformulated for convex
problems assuming the Slater constraint qualification. Remember that convex-
ity implies that functions are locally Lipschitzian at any point in the feasible
region.
Proof. The proof is a trivial modification of the proof of Theorem 3.2.9 when
we note the following. In case the set J(x*) is nonempty we denote g(x) =
max[g_j(x) | j = 1, ..., m]. Now g(x*) = g_j(x*) for j ∈ J(x*). By the Slater
constraint qualification there exists some x^0 such that g_j(x^0) < 0 for all j.
Thus, x* cannot be the global minimum of the convex function g. According
to Theorem 3.2.3 we derive
concave functions are formulated in Bhatia and Datta (1985). In addition, nec-
essary Fritz John and Karush-Kuhn-Tucker type optimality conditions for weak
Pareto optimality involving so-called semidifferentiable pre-invex functions are
treated in Preda (1996).
If an ordering cone D is used in defining efficiency, then the optimality
conditions are similar to those presented above, except for the multipliers λ_i
(simply as in the differentiable case). The multipliers belong to the dual cone
D* = {λ ∈ R^k | λ^T y ≥ 0 for all y ∈ D}. Because of the similarity, we do
not present here separate optimality conditions for efficiency. Necessary and
sufficient conditions for efficiency and weak efficiency are handled, for example,
in Wang (1984). Furthermore, in Craven (1989) and El Abdouni and Thibault
(1992), necessary conditions for weak efficiency in normed spaces and Banach
spaces, respectively, are presented. The objective and the constraint functions
are still assumed to be locally Lipschitzian.
Direct counterparts of optimality conditions for proper Pareto optimality
in the sense of Kuhn and Tucker, presented in Section 3.1, cannot be stated
in the nondifferentiable case. The reason is that the definition of Kuhn and
Tucker assumes continuous differentiability. However, a sufficient condition for
proper Pareto optimality in the sense of Geoffrion, when the objective and the
constraint functions are compositions of convex, locally Lipschitzian functions,
is formulated in Jeyakumar and Yang (1993). This treatment naturally includes
ordinary convex, locally Lipschitzian functions. The authors also present nec-
essary conditions for weak Pareto optimality and sufficient conditions of their
own for Pareto optimality in problems with convex composite functions. A nec-
essary and sufficient condition for proper efficiency (in the sense of Henig) is
derived in Henig and Buchanan (1994, 1997) for convex problems.
At the end of this section we shall say a few words about the case where the
functions involved are continuously differentiable and their gradients are locally
Lipschitzian. Such functions are called C^{1,1}-functions. Second-order optimality
conditions for multiobjective problems with C^{1,1}-functions are handled in Liu
(1991). Here we briefly state the main results. First we must introduce one
concept according to Liu.
(1) ∑_{i=1}^k λ_i ∇f_i(x*) + ∑_{j=1}^m μ_j ∇g_j(x*) = 0
(4) ∑_{i=1}^k λ_i φ_i + d^T (∑_{j=1}^m μ_j ∇²g_j(x*)) d > 0
for all d ∈ {0 ≠ d ∈ R^n | ∇f_i(x*)^T d ≤ 0 for all i = 1, ..., k, ∇g_j(x*)^T d ≤ 0
for all j ∈ J(x*)} and φ_i ∈ ∂²f_i(x*)(d, d).
Actually, the results in Liu (1991) are given in a more general form for
efficient solutions and for problems where the constraint functions belong to a
polyhedral convex cone.
Many necessary and sufficient conditions for weak, proper or Pareto opti-
mality (or efficiency) have been suggested in the literature. They are based on
different kinds of assumptions as to the properties and form of the problem.
Many of them are based on a scalarization of the original problem and con-
ditions are set on both the original functions and the scalarization parameters
(some such conditions are presented in Part II in connection with the scalariza-
tion methods). In this book, we settle for a closer handling of the Fritz John and
the Karush-Kuhn-Tucker type conditions, presented in the two earlier sections.
For the interested reader we list some other references.
Necessary conditions for proper and improper Pareto optimality in the sense
of Kuhn and Tucker are derived with the help of cones in Tamura and Arai
(1982). Geoffrion (1968) was the first to give the basic characterization of prop-
erly Pareto optimal solutions in terms of a scalar problem, called a weighting
problem (see Section 3.1 of Part II). He extended the results by a compre-
hensive theorem into necessary and sufficient conditions for local and global
proper Pareto optimality. Geoffrion's treatment is closely followed in Chou et
al. (1985), where properly Pareto optimal solutions are characterized for multi-
objective optimization problems with set-valued functions. In addition, neces-
sary and sufficient Karush-Kuhn-Tucker type optimality conditions for ε-Pareto
optimality in convex problems using the weighting method for the objectives
and exact penalty functions for the constraints are handled in Liu (1996).
METHODS
1. INTRODUCTION
In most methods we are interested in the objective space instead of the de-
cision variable space. One reason for this is that the dimension of the objective
space is usually considerably smaller than the dimension of the decision vari-
able space. Another reason is that decision makers are often more interested
in the objective values. However, calculation still takes place in the decision
variable space because we do not usually know the explicit form of the feasi-
ble objective region. In brief, decision makers usually handle objective values
whereas mathematical programming takes place in the decision variable space.
In general, multiobjective optimization problems are solved by scalarization.
The most important exceptions to this are MOLP problems, which are not
to be dealt with here, where some simplex-based solution methods can find
Pareto optimal extreme points or, in some cases, the whole Pareto optimal
set. Another exception, which is presented here, is the multiobjective proximal
bundle method for nondifferentiable problems. It is not based on scalarization
in the traditional sense.
As mentioned in Part I, scalarization means converting the problem into a
single or a family of single objective optimization problems with a real-valued
objective function, termed the scalarizing function, depending possibly on some
parameters. This enables the use of the theory and the methods of scalar opti-
mization, that is, nonlinear programming. Of fundamental importance is that
the optimal solutions of multiobjective optimization problems can be charac-
terized as solutions of certain single objective optimization problems. Because
scalarizing functions usually depend on certain auxiliary parameters, some nu-
merical difficulties may appear if the single objective optimization problem has
feasible solutions only with very few parameter values or it is not solvable with
all the parameter values. Thus the seemingly promising idea of simplifying the
problem into single objective optimizations also has its weaknesses. In what
follows, we assume that solutions to scalarizing functions exist.
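To make the idea concrete, the small sketch below (in Python with SciPy) turns one assumed toy bi-objective problem into a family of single objective problems that an ordinary nonlinear programming routine can solve; the objective functions, the box-shaped feasible region and the particular scalarizing functions are assumptions made only for this illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem (illustrative assumption): f(x) = (f1(x), f2(x)).
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2,
                     (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2])

S_BOUNDS = [(-5.0, 5.0), (-5.0, 5.0)]   # feasible region S, here a simple box

def solve_scalarized(scalarizing, x0=np.zeros(2)):
    """Minimize a real-valued scalarizing function of the objective vector over S."""
    res = minimize(lambda x: scalarizing(f(x)), x0, bounds=S_BOUNDS)
    return res.x, f(res.x)

# Two different scalarizing functions, each depending on auxiliary parameters.
weighted_sum = lambda z, w=(0.3, 0.7): w[0] * z[0] + w[1] * z[1]
max_deviation = lambda z, ref=(0.5, 0.5): max(z[0] - ref[0], z[1] - ref[1])

for s in (weighted_sum, max_deviation):
    x_opt, z_opt = solve_scalarized(s)
    print(x_opt.round(3), z_opt.round(3))
```

Different parameter values (weights or reference points) give different single objective problems and, in general, different solutions.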
In Sawaragi et al. (1985), three requirements are set for a scalarizing func-
tion:
(1) It can cover any Pareto optimal solution.
(2) Every solution it produces is Pareto optimal.
If the scalarizing function is based on aspiration levels, then, in addition
(3) Its solution is satisficing if the aspiration levels used are feasible.
Unfortunately, there is no scalarizing function that can satisfy all three require-
ments.
An important fact to keep in mind is that standard routines for single objec-
tive optimization problems can only find local optima. This is why only locally
Pareto optimal solutions are usually obtained and handled when dealing with
scalarizing functions. Global Pareto optimality can be guaranteed, for exam-
ple, if the objective functions and the feasible region are convex (as stated in
Theorem 2.2.3 of Part I) or quasiconvex and convex, respectively (see Theorem
(1973). A wide collection of methods available (up to the year 1983) is assembled
also in Despontin et al. (1983). Almost 100 methods for both multiobjective
and multiattribute cases are included.
As far as different nationalities are concerned, overviews of multiobjective
optimization methods in the former Soviet Union are presented in Lieberman
(1991a, b) and of theory and applications in China in Hu (1990). Nine multi-
objective optimization methods developed in Germany are briefly introduced
in Ester and Holzmüller (1986).
A great number of interactive multiobjective optimization methods are col-
lected in Shin and Ravindran (1991) and Vanderpooten and Vincke (1989).
Interactive methods are also presented in Narula and Weistroffer (1989a) and
White (1983b). Information about applications of the methods is also reported.
Some literature on interactive multiobjective optimization between the years
1965 and 1988 is gathered in Aksoy (1990). A set of scalarizing functions is out-
lined in Wierzbicki (1986b) with special attention to whether weakly, properly
or Pareto optimal solutions are produced.
As to different problem types, an overview of methods for MOLP problems
can be found in Zionts (1980, 1989). Methods for hierarchical multiobjective
optimization problems are reviewed in Haimes and Li (1988). Such methods are
needed in large-scale problems. A wide survey on the literature of hierarchical
multiobjective analysis is also provided.
Methods with applications to large-scale systems and industry are presented
in the monographs Haimes et al. (1990) and Tabucanon (1988), respectively.
Several groups of methods applicable to computer-aided design systems are
presented briefly in Eiduks (1983). Methods for applications in structural op-
timization are reported in Eschenauer (1987), Jendo (1986), Koski and Silven-
noinen (1987) and Osyczka and Koski (1989). The collections of papers edited
by Eschenauer et al. (1990a) and Stadler (1988a) contain mainly applications
in engineering.
In the following, we present several methods (in four classes) for multiob-
jective optimization. Some of them will be described in more detail and some
only briefly mentioned. It must be kept in mind that the existing methodology
is very wide. We do not intend to cover every existing method but to introduce
several philosophies and ways of approaching multiobjective optimization prob-
lem solving. Where possible we try to link references to some of the applications
and extensions available in the literature with the methods presented here. The
description of each method ends with concluding remarks by the author taking
up important aspects of the method. Unless stated otherwise, we assume that
we solve problem (2.1.1) defined in Part I.
In connection with methods, a mention is made only of such implementa-
tions as have been made available to the author for testing purposes. By a
user we mean either a decision maker or an analyst who uses the solution pro-
gram. If the user is a decision maker, it is usually assumed that the problem
has been formulated earlier (and perhaps loaded in the system) so that the
decision maker can concentrate on the actual solution process.
2. NO-PREFERENCE METHODS
Here we examine the method where the ideal objective vector is used as a
reference point and Lp-metrics are used for measuring. In this case, the Lp-
problem to be solved is
(2.1.1)      minimize    ( ∑_{i=1}^{k} |f_i(x) − z_i*|^p )^{1/p}
             subject to  x ∈ S.
From the definition of the ideal objective vector z* we know that f_i(x) ≥ z_i*
for all i = 1, ..., k and all x ∈ S. This is why no absolute values are needed if
we know the global ideal objective vector. If the global ideal objective vector
is not known, the method does not necessarily work as it should. In order to
emphasize this fact, we keep the absolute value signs in the notations when
introducing the method.
If the ideal objective vector is replaced by some other vector, it must be se-
lected carefully. Pessimistic reference points must be avoided since the method
cannot find solutions better than the reference point.
The exponent 1/p may be dropped. Problems with and without the exponent
1/p are equivalent for 1 ≤ p < ∞, since Lp-problem (2.1.1) is an increasing
function of the corresponding problem without the exponent.
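As an illustration only, the following sketch solves Lp-problem (2.1.1) numerically for an assumed toy bi-objective problem; the objective functions, the box feasible region and the use of SciPy's general-purpose minimizer are assumptions for the example, and the ideal objective vector is estimated by minimizing each objective separately.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
bounds = [(-5.0, 5.0)] * 2                     # feasible region S (a box)

# Ideal objective vector: minimize each objective individually over S.
z_star = np.array([minimize(fi, np.zeros(2), bounds=bounds).fun
                   for fi in objectives])

def lp_problem(x, p):
    """Objective of problem (2.1.1) for a given exponent p >= 1."""
    devs = np.abs([fi(x) - zi for fi, zi in zip(objectives, z_star)])
    return np.sum(devs ** p) ** (1.0 / p)

for p in (1, 2):
    res = minimize(lp_problem, np.zeros(2), args=(p,), bounds=bounds)
    print(p, res.x.round(3))
```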
If p = ∞, the metric is also called a Tchebycheff metric, and the L∞- or
the Tchebycheff problem is of the form

(2.1.2)      minimize    max_{i=1,...,k} |f_i(x) − z_i*|
             subject to  x ∈ S.
Problem (2.1.2) is nondifferentiable, but if the objective and the constraint
functions are differentiable and z* is known globally, it can equivalently be
solved in the differentiable form

(2.1.3)      minimize    α
             subject to  α ≥ f_i(x) − z_i* for all i = 1, ..., k,
                         x ∈ S,

where both x ∈ R^n and α ∈ R are variables.
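A hedged sketch of this reformulation with the auxiliary variable α follows; the toy problem data, the assumed global ideal values and the choice of the SLSQP solver are illustrative assumptions, not part of the text.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
z_star = np.array([0.0, 0.0])                  # assumed global ideal values

def objective(y):                              # y = (x1, x2, alpha)
    return y[2]

constraints = [{'type': 'ineq',                # alpha - (f_i(x) - z_i*) >= 0
                'fun': (lambda y, fi=fi, zi=zi: y[2] - (fi(y[:2]) - zi))}
               for fi, zi in zip(objectives, z_star)]

res = minimize(objective, np.zeros(3), method='SLSQP',
               bounds=[(-5, 5), (-5, 5), (None, None)],
               constraints=constraints)
print(res.x[:2].round(3), res.x[2].round(3))   # solution x and attained alpha
```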
(Figure: contours of the L1- and L∞-metrics around the ideal criterion vector.)
(2.1.4)      minimize    max_{i=1,...,k} [ max( |f_i(x) − z_i*| / z_i*, |f_i(x) − z_i*| / f_i(x) ) ]
             subject to  x ∈ S.
Theorem 2.1.1. The solution of Lp-problem (2.1.1) (where 1 ≤ p < ∞) is
Pareto optimal.
Proof. Let x* ∈ S be a solution of problem (2.1.1) with 1 ≤ p < ∞. Let us
suppose that x* is not Pareto optimal. Then there exists a point x ∈ S such
that f_i(x) ≤ f_i(x*) for all i = 1, ..., k and f_j(x) < f_j(x*) for at least one j.
Now (f_i(x) − z_i*)^p ≤ (f_i(x*) − z_i*)^p for all i and (f_j(x) − z_j*)^p < (f_j(x*) − z_j*)^p.
From this we obtain

∑_{i=1}^{k} (f_i(x) − z_i*)^p < ∑_{i=1}^{k} (f_i(x*) − z_i*)^p.

When both sides of the inequality are raised to the power 1/p, we have a
contradiction to the assumption that x* is a solution of problem (2.1.1). This
completes the proof. □
Yu has pointed out in Yu (1973) that if Z is a convex set, then for 1 < p < ∞
the solution of problem (2.1.1) is unique.
Theorem 2.1.3. Tchebycheff problem (2.1.2) has at least one Pareto optimal
solution.
Proof. Let us suppose that none of the optimal solutions of problem (2.1.2) is
Pareto optimal. Let x* ∈ S be an optimal solution of problem (2.1.2). Since we
assume that it is not Pareto optimal, there must exist a solution x ∈ S which is
not optimal for problem (2.1.2) but for which f_i(x) ≤ f_i(x*) for all i = 1, ..., k
and f_j(x) < f_j(x*) for at least one j.
We now have f_i(x) − z_i* ≤ f_i(x*) − z_i* for all i, with the strict inequality
holding for at least one index j, and further max_i [f_i(x) − z_i*] ≤ max_i [f_i(x*) − z_i*].
Because x* is an optimal solution of problem (2.1.2), x has to be an optimal
solution as well. This contradiction completes the proof. □
The method of the global criterion is a simple method to use if the aim
is simply to obtain a solution where no special hopes are set. The properties
of the metrics imply that if the objective functions are not normalized in any
way, then an objective function whose ideal objective value is situated nearer
the feasible objective region receives more importance.
The solution obtained with the Lp-metric (1 ≤ p < ∞) is guaranteed to be
Pareto optimal. If the Tchebycheff metric is used, the solution may be weakly
Pareto optimal. In the latter case, for instance, problem (2.10.2) of Part I can
be used to produce Pareto optimal solutions. It is up to the analyst to select
an appropriate metric.
2.2.1. Introduction
different from that of the other methods described here, and why implemen-
tational aspects that can be forgotten with other methods have to be touched
on.
Let us now suppose that the feasible region is of the form S = {x ∈ R^n |
g(x) = (g_1(x), ..., g_m(x))^T ≤ 0}.
Let us first prove a result about the optimal solutions of improvement func-
tions. The sufficient condition necessitates the Slater constraint qualification
(Definition 3.2.13 in Part I).
Proof. The necessity follows immediately from the proof of Theorem 3.2.5
(and Corollary 3.2.7) in Section 3.2 of Part I.
As to the sufficiency part, let the assumptions stated be valid and
let x* ∈ R^n be a minimal solution of H(x, x*). Let us assume that x* is not
weakly Pareto optimal. Then, there exists some x ∈ R^n such that g_j(x) ≤ 0
for all j = 1, ..., m and f_i(x) < f_i(x*) for all i = 1, ..., k. If g_j(x) < 0 for all
j = 1, ..., m, then

H(x, x*) < 0 = H(x*, x*),

which contradicts the assumption that H(x, x*) attains its minimum at x*.
Otherwise, that is, if g_j(x) = 0 for some index j, it follows from the Slater
constraint qualification that there exists some x̂ ∈ R^n such that g_j(x̂) < 0 for
all j = 1, ..., m. If f_i(x̂) < f_i(x*) for all i = 1, ..., k, then

H(x̂, x*) < 0 = H(x*, x*)
(2.2.1)
(2.2.2)
h(x*) - fi(X) _ ( fi(X*) -h(x»)
hex) _ fi(X) fi(X) + hex) -hex) hex) -
A _ A
According to Theorem 2.2.1 we, on the one hand, know that minimizing an
improvement function produces weakly Pareto optimal solutions. On the other
hand, any weakly Pareto optimal solution of a convex problem can be found
under minor conditions. Since we do not optimize the improvement function
itself but its approximation, the optimality results of the MPB method are somewhat
different. Here we only present some results without proofs, since giving these
would necessitate explicit expression of the MPB algorithm.
Theorem 2.2.3. Let the objective and the constraint functions of the multi-
objective optimization problem be upper semidifferentiable at every XES. If
the MPB method stops with a finite number of iterations, then the solution is a
substationary point. On the other hand, any accumulation point of an infinite
sequence of solutions generated by the MPB method is a substationary point.
Note that only the substationarity of the solutions of the MPB routine is
guaranteed for general multiobjective optimization problems.
The MPB method can be used as a method where no opinions of the decision
maker are sought. In this case, we must select the starting point so that it is not
(weakly) Pareto optimal but that every component of the objective vector can
be improved. The method can also handle other than nonlinear constraints, but
they have not been included here for the sake of the clarity of the presentation.
The MPB routine can also be used as a black-box optimizer within inter-
active multiobjective optimization methods. This is the case with the vector
version of NIMBUS (see Section 5.12).
The accuracy of the computation in the MPB method is an interesting mat-
ter. Accuracy can also be considered in a broader sense, as a factor separating
ordinary scalarizing functions from the internal scalarization used in the MPB
method. If some ordinary scalarizing function is employed, then it
is the accuracy of that additional function that can be followed along with the
solution process. It may happen that when the accuracy of the scalarizing func-
tion has reached the desired level, the values of the actual objective functions
could still change considerably.
Many scalarizing functions have positive features whose importance is not to
be underestimated, such as producing only Pareto optimal solutions. However,
employing some scalarizing function usually brings along extra parameters and
the difficulty of specifying their values. This causes additional stability concerns.
To put it briefly, scalarizing functions add extra characteristics to the problem.
Scalarization cannot completely be avoided even in the MPB routine. How-
ever, the scalarization is carried out under the surface, invisible to the user.
Whatever additional parameters or phases are needed, they cannot be seen
and the user does not have to be bothered with them. The weakness of the
MPB routine is that the Pareto optimality of the solutions obtained cannot
be guaranteed. In theory, only the substationarity of the solutions is certain.
In practice, it is, however, very likely that the solutions are at least weakly
Pareto optimal. As a matter of fact, in the numerical experiments performed,
the final solutions obtained have usually proved to be Pareto optimal at the
final testing.
For problems with nondifferentiable functions the MPB routine represents
an efficient proximal bundle-based solution approach. The implementation of
the MPB routine (called MPBNGC) is described in Mäkelä (1993). It calls a
quadratic solver derived in Kiwiel (1986).
3. A POSTERIORI METHODS
3.1. Weighting Method
In the weighting method, presented, for example, in Gass and Saaty (1955)
and Zadeh (1963), the idea is to associate each objective function with a weight-
ing coefficient and minimize the weighted sum of the objectives. In this way, the
multiple objective functions are transformed into a single objective function.
We suppose that the weighting coefficients w_i are real numbers such that w_i ≥ 0
for all i = 1, ..., k. It is also usually supposed that the weights are normalized,
that is, ∑_{i=1}^{k} w_i = 1. To be more exact, the multiobjective optimization prob-
lem is modified into the following problem, to be called a weighting problem:
(3.1.1)      minimize    ∑_{i=1}^{k} w_i f_i(x)
             subject to  x ∈ S.
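As an illustration only, the sketch below solves weighting problem (3.1.1) for a few normalized weight vectors; the toy bi-objective problem, the box feasible region and the weight values are assumptions made for this example, not data from the text.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
bounds = [(-5.0, 5.0)] * 2                      # feasible region S

def weighting_problem(w):
    """Solve (3.1.1) for a weight vector w >= 0 with sum(w) = 1."""
    res = minimize(lambda x: sum(wi * fi(x) for wi, fi in zip(w, objectives)),
                   np.zeros(2), bounds=bounds)
    return res.x

for w1 in np.linspace(0.1, 0.9, 5):
    x_opt = weighting_problem([w1, 1.0 - w1])
    print(round(w1, 2), x_opt.round(3))
```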
As Theorems 3.1.2 and 3.1.3 state, the solution of the weighting method is
always Pareto optimal if the weighting coefficients are all positive or if the solu-
tion is unique, without any further assumptions. The weakness of the weighting
method is that not all of the Pareto optimal solutions can be found unless the
problem is convex. This drawback can be alleviated to some extent by convexifying
the nonconvex Pareto optimal set, as suggested in Li (1996). The convexifica-
tion is realized by raising the objective functions to a high enough power under
certain assumptions. However, the main result is the following:
Remark 3.1.5. According to Theorem 3.1.4, all the Pareto optimal solutions
of MOLP problems can be found by the weighting method.
Let us have a look at linear cases for a while. In practice, Remark 3.1.5 is
not quite true.
(Figure 3.1.1. Weighting method with convex and nonconvex problems.)
The single objective optimization routines for linear problems
usually find only extreme point solutions. Thus, if some facet of the feasible re-
gion is Pareto optimal, then the infinity of Pareto optimal non-extreme points
must be described in terms of linear combinations of the Pareto optimal ex-
treme solutions. On the other hand, note that if two adjacent Pareto optimal
extreme points for an MOLP problem are found, the edge connecting them is
not necessarily Pareto optimal.
The conditions under which the whole Pareto optimal set can be gener-
ated by the weighting method with positive weighting coefficients are presented
in Censor (1977). The solutions that it is possible to reach by the weighting
method with positive weighting coefficients are characterized in Belkeziz and
Pirlot (1991). Some generalized results are also given. More relations between
nonnegative and positive weighting coefficients, convexity of Sand Z, and
Pareto optimality are studied in Lin (1976b).
If the weighting coefficients in the weighting method are all positive, we
can say more about the solutions than that they are Pareto optimal. The fol-
lowing results concerning proper Pareto optimality were originally presented in
Geoffrion (1968).
for all j such that f_j(x*) < f_j(x). We can now write

w_i (f_i(x*) − f_i(x)) / (k − 1) > w_j (f_j(x) − f_j(x*))   ( ≥ w_l (f_l(x) − f_l(x*)) ),

where l differs from the fixed index i and the indices j, which were specified
earlier. After this reasoning we can sum over all j ≠ i and obtain

w_i (f_i(x*) − f_i(x)) > ∑_{j=1, j≠i}^{k} w_j (f_j(x) − f_j(x*)),

which means

∑_{i=1}^{k} w_i f_i(x*) > ∑_{i=1}^{k} w_i f_i(x).

Here we have a contradiction to the assumption that x* is a solution of the
weighting problem. Thus, x* has to be properly Pareto optimal. □
The ratio of the weighting coefficients gives an upper bound to global trade-
offs.
Some results concerning weak, proper and Pareto optimality of the solutions
obtained by the weighting method are combined in Wierzbicki (1986b). Proper
Pareto optimality and the weighting method are also discussed in Belkeziz and
Pirlot (1991) and Luc (1995). The weighting method is used in Isermann (1974)
in proving that for linear multiobjective optimization problems all the Pareto
optimal solutions are also properly Pareto optimal. More results concerning
MOLP problems and the weighting method are assembled in Chankong and
Haimes (1983a, b, pp. 153-159).
The weighting method can be used so that the decision maker specifies a
weighting vector representing her or his preference information. In this case, the
weighting problem can be considered (a negative of) a value function (remember
that value functions are maximized). Note that according to Remark 2.8.7
of Part I, the weighting coefficients provided by the decision maker are now
nothing but marginal rates of substitution (m_{ij} = w_j/w_i). When the weighting
method is used in this fashion, it can be considered to belong to the class of a
priori methods. Related to this, a method for assisting in the determination of
the weighting coefficients is presented in Batishchev et al. (1991). This method
can also be extended into an interactive form by letting the decision maker
modify the weighting vectors after each iteration.
The objective functions should be normalized or scaled so that their ob-
jective values are approximately of the same magnitude (see Subsection 2.4.3
in Part I). Only in this way can one control and manoeuvre the method to
produce solutions of a desirable nature in proportion to the ranges of the
objective functions. Otherwise the role of the weighting coefficients may be greatly
misleading.
If the weighting method is used as an a priori method one can ask what
the weighting coefficients in fact represent. Often they are said to reflect the
relative importance of the objective functions. However, it is not at all clear
what underlies this notion, as discussed in Roy and Mousseau (1996). It is
remarked in Hobbs (1986) that instead of relative importance, the weighting
coefficients should represent the rate at which the decision maker is willing to
trade off values of the objective functions.
It must be noted that if some of the objective functions correlate with each
other, then seemingly 'good' weighting vectors may produce poor results and
seemingly 'bad' weighting vectors may produce useful results (see Steuer (1986,
pp. 198-199) for an illustrative example).
On the basis of practical experience it is emphasized in Wierzbicki (1986b)
that weighting coefficients are not easy to interpret and understand for average
decision makers.
Using the weighting method can be seen to presume a linear underlying value
function (see Section 2.6 in Part I). This is in many cases a rather simplifying
assumption. In addition, it must be noted that altering the weighting vectors
linearly does not have to mean that the values of the objective functions also
change linearly. It is, moreover, difficult to control the direction of the solutions
by the weighting coefficients, as illustrated in Nakayama (1995).
Proof. Necessity: Let x* ∈ S be Pareto optimal. Let us assume that it does not
solve the ε-constraint problem for some ℓ where ε_j = f_j(x*) for j = 1, ..., k,
j ≠ ℓ. Then there exists a solution x ∈ S such that f_ℓ(x) < f_ℓ(x*) and
f_j(x) ≤ f_j(x*) when j ≠ ℓ. This contradicts the Pareto optimality of x*. In
other words, x* has to solve the problem for any objective function.
Sufficiency: Since x* ∈ S is by assumption a solution of the ε-constraint
problem for every ℓ = 1, ..., k, there is no x ∈ S such that f_ℓ(x) < f_ℓ(x*)
and f_j(x) ≤ f_j(x*) when j ≠ ℓ. This is the definition of Pareto optimality for
x*. □
shown by a bold line. The upper bound level ε¹ is too tight and so the feasible
region is empty. On the other hand, the level ε⁴ does not restrict the region at
all. If it is used as the upper bound, the point z⁴ is obtained as a solution. It
is Pareto optimal according to Theorem 3.2.4. Correspondingly, for the upper
bound ε³ the point z³ is obtained as a Pareto optimal solution. The point z²
is the optimal solution for the upper bound ε². Its Pareto optimality can be
proved according to Theorem 3.2.3. Theorem 3.2.2 can be applied as well.
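The same mechanism can be illustrated on an assumed toy problem: one objective is minimized while the other is bounded from above, and a too tight bound leaves the feasible region empty. The objective functions, the bound values and the solver below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
bounds = [(-5.0, 5.0)] * 2                      # feasible region S

def eps_constraint(eps2):
    """Minimize f1 subject to f2(x) <= eps2 (and x in S)."""
    cons = [{'type': 'ineq', 'fun': lambda x: eps2 - f2(x)}]
    res = minimize(f1, np.array([1.0, 0.5]), method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x if res.success else None       # None if eps2 is too tight

for eps2 in (-0.5, 0.1, 0.5, 2.0):              # -0.5 makes the problem infeasible
    x_opt = eps_constraint(eps2)
    print(eps2, None if x_opt is None else x_opt.round(3))
```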
Proof. Let x* ∈ S be a solution of the weighting problem for some weighting
vector 0 ≤ w ∈ R^k.
(1) We assume that w_ℓ > 0. We have

(3.2.2)      ∑_{i=1}^{k} w_i f_i(x) ≥ ∑_{i=1}^{k} w_i f_i(x*)
minimized and ε_j = f_j(x*) for j = 1, ..., k, j ≠ ℓ, then there exists a weighting
vector 0 ≤ w ∈ R^k, ∑_{i=1}^{k} w_i = 1, such that x* is also a solution of weighting
problem (3.1.1).
Proof. The proof needs a so-called generalized Gordan theorem. See Chankong
and Haimes (1983b, p. 121) and references therein.
We have now appropriate tools for proving Theorem 3.1.4 from the previous
section.
subject to x ∈ S,
where u = (u_1, ..., u_{j−1}, u_{j+1}, ..., u_k)^T and u_i ≥ 0 for all i ≠ j. The jth
Lagrangian method is from a computational viewpoint almost equal to the
weighting method. This is why it is not studied more closely here. Chankong
and Haimes have treated the problems separately to emphasize two ways of
arriving at the same point.
Let us now return to the ε-constraint problem and the proper Pareto op-
timality of its solutions. In Benson and Morin (1977), an auxiliary function,
called the perturbation function, v: R^{k−1} → R, associated with the ε-constraint
problem is defined in the form (modified here for the minimization problem)
After this, a theorem concerning the proper Pareto optimality of the solu-
tions of the ε-constraint problem can be presented.
Proof. See Benson and Morin (1977) or Sawaragi et al. (1985, p. 88).
ε-constraint problem is that there exist vectors 0 ≤ λ ∈ R^{k−1} and 0 ≤ μ ∈ R^m
such that

(1)      ∇f_ℓ(x*) + ∑_{j=1, j≠ℓ}^{k} λ_j ∇(f_j(x*) − ε_j) + ∑_{i=1}^{m} μ_i ∇g_i(x*) = 0

(2)      λ_j (f_j(x*) − ε_j) = 0 for all j ≠ ℓ,   μ_i g_i(x*) = 0 for all i = 1, ..., m.
Note that the (Lagrange) multipliers λ are in what follows called Karush-
Kuhn-Tucker multipliers when they are associated with the Karush-Kuhn-
Tucker optimality condition. The condition states, for example, that if the
constraint concerning f_j is not active, the corresponding multiplier λ_j must be
equal to zero.
We can now present the following theorem.
Theorem 3.2.10. Let all the objective and the constraint functions be con-
tinuously differentiable at x* ∈ S, which is a regular point of the constraints of
the ε-constraint problem. Then the following is valid.
(1) If x* is properly Pareto optimal, then x* solves the ε-constraint prob-
lem for some f_ℓ being minimized and ε_j = f_j(x*) (for j = 1, ..., k,
j ≠ ℓ) with all the Karush-Kuhn-Tucker multipliers associated with the
constraints f_j(x) ≤ ε_j for j = 1, ..., k, j ≠ ℓ, being positive.
(2) If the multiobjective optimization problem is convex, then x* is prop-
erly Pareto optimal if it is a solution of the ε-constraint problem with
all the Karush-Kuhn-Tucker multipliers associated with the constraints
f_j(x) ≤ ε_j for j = 1, ..., k, j ≠ ℓ, being positive.
Proof. The proof is based on the implicit function theorem, see Luenberger
(1984, p. 313).
From the assumption λ_j(f_j(x*) − ε_j) = 0 for all j = 1, ..., k, j ≠ ℓ, of the
Karush-Kuhn-Tucker necessary optimality condition and the nondegeneracy of
the constraints we know that f_j(x*) = ε_j for all j ≠ ℓ. Thus, from Theorem
3.2.12 we have the trade-off rates

λ_{ℓj} = − ∂f_ℓ(x*) / ∂f_j   for all j ≠ ℓ.
An important result concerning the relationship between Karush-Kuhn-
Tucker multipliers and trade-off rates in a more general situation, where zero-
valued multipliers also are accepted, is presented in the following. For notational
simplicity we now suppose that the function to be minimized in the ε-constraint
problem is f_k (i.e., we set f_ℓ = f_k). In addition, the upper bounds ε ∈ R^{k−1}
are assumed to be chosen so that feasible solutions exist. This does not lose
any generality. For details and a more extensive form of the theorem we refer
to Chankong and Haimes (1983b, pp. 161-163).
Let λ_{kj} be the Karush-Kuhn-Tucker multipliers associated with the con-
straints f_j(x) ≤ ε_j, j = 1, ..., k − 1. Without loss of generality we can as-
sume that the first p (1 ≤ p ≤ k − 1) of the multipliers are strictly positive
(i.e., λ_{kj} > 0 for j = 1, ..., p) and the remaining k − 1 − p multipliers equal zero
(i.e., λ_{kj} = 0 for j = p + 1, ..., k − 1). We denote the objective vector corre-
sponding to x* by z* ∈ Z.
Let us now consider the contents of Theorem 3.2.13. Part (1) says that under
the given conditions there are exactly k − 1 degrees of freedom in specifying
a point on the (locally) Pareto optimal surface in the objective space in the
neighbourhood of z*. In other words, when the values for z_1, z_2, ..., z_{k−1} have
been chosen from the neighbourhood of z*, then the value for z_k can be calcu-
lated from the given function and the resulting point z will lie on the (locally)
Pareto optimal surface in the objective space.
Part (2) of Theorem 3.2.13 extends the result of part (1) by relaxing the
assumption that all the constraints f_j(x) ≤ ε_j, j = 1, ..., k − 1, should be
active and nondegenerate, that is, λ_{kj} > 0 for all j = 1, ..., k − 1. When
the number of nondegenerate constraints is p (< k − 1), then the degree of
freedom in specifying a point on the (locally) Pareto optimal surface in the
objective space in the neighbourhood of z* is the number of nondegenerate
active constraints (p). The results of Theorem 3.2.13 will be needed in Section
5.1 when the ε-constraint method is used as a part of an interactive method.
nonconvex parts. The positive feature of the weighting method that the feasi-
ble region is not disturbed in the solution process is utilized in the convex parts,
and the capability of the ε-constraint method to find all the Pareto optimal
solutions is utilized in the nonconvex parts. Therefore, merits of both these
basic methods are exploited.
A method related to the ε-constraint method is presented in Youness (1995).
It generates the Pareto optimal set for problems with quasiconvex (and lower
semicontinuous) objective functions. The method is based on level sets. If we
consider an objective vector z^h and level sets L_i(z_i^h) = {x ∈ S | f_i(x) ≤ z_i^h}
for i = 1, ..., k, and if the intersection ∩_{i=1}^{k} L_i(z_i^h) consists of exactly one
point, then the vector z^h is Pareto optimal.
An entropy-based formulation of the ε-constraint method is suggested in
Sultan and Templeman (1996). The entropy-based objective function to be op-
timized has only one parameter no matter what the number of the original
objective functions is. A representation of the Pareto optimal set can be gener-
ated by varying the value of the single parameter. The entropy-based function
contains logarithms and exponential functions.
(3.3.1)      minimize    ∑_{i=1}^{k} w_i f_i(x)
             subject to  f_j(x) ≤ ε_j for all j = 1, ..., k,
                         x ∈ S,

where w_i > 0 for all i = 1, ..., k.
Notice that problem (3.3.1) is equivalent to problem (2.10.1) in Part I if we
set Wi = 1 for every i = 1, ... , k. In Corley (1980), the problem is formulated in
a more general setting with a pointed convex ordering cone defining efficiency.
Optimality results were already handled in Section 2.10 of Part I. Nevertheless,
we write them down here as well.
Theorem 3.3.1. The solution of hybrid problem (3.3.1) is Pareto optimal for
any upper bound vector ε ∈ R^k. On the other hand, if x* ∈ S is Pareto optimal,
then it is a solution of problem (3.3.1) for ε = f(x*).
The set of Pareto optimal solutions can be found by solving problem (3.3.1)
with methods for parametric constraints (where the parameter is the vector of
upper bounds ε), see, for example, Rao (1984, pp. 418-421). This means that
the weighting coefficients do not have to be altered.
Optimality conditions for the solutions of problem (3.3.1) to be properly
Pareto optimal are presented in Wendell and Lee (1977).
We can say that the positive features of the weighting method and the ε-
constraint method are combined in the hybrid method. Namely, any Pareto
optimal solution can be found independently of the convexity of the problem,
and one does not have to solve several problems or think about uniqueness
to guarantee the Pareto optimality of the solutions. On the other hand, the
specification of the parameter values may still be difficult. Computationally,
the hybrid method is similar to the ε-constraint method (with an increased
number of constraint functions).
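A minimal sketch of hybrid problem (3.3.1) follows; the toy objective functions, the strictly positive weights and the upper bound vector are assumptions chosen only to illustrate the formulation.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
w = np.array([1.0, 1.0])                        # any strictly positive weights
eps = np.array([1.5, 1.5])                      # upper bound vector

cons = [{'type': 'ineq', 'fun': (lambda x, fj=fj, ej=ej: ej - fj(x))}
        for fj, ej in zip(objectives, eps)]

res = minimize(lambda x: sum(wi * fi(x) for wi, fi in zip(w, objectives)),
               np.array([1.0, 0.5]), method='SLSQP',
               bounds=[(-5.0, 5.0)] * 2, constraints=cons)
print(res.x.round(3))
```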
3.4. Method of Weighted Metrics
3.4.1. Introduction
(3.4.1)      minimize    ( ∑_{i=1}^{k} w_i |f_i(x) − z_i*|^p )^{1/p}
             subject to  x ∈ S

for 1 ≤ p < ∞. The weighted Tchebycheff problem is of the form

(3.4.2)      minimize    max_{i=1,...,k} [ w_i |f_i(x) − z_i*| ]
             subject to  x ∈ S.
Problem (3.4.2) was originally introduced in Bowman (1976). Again, denomina-
tors may be included. Further, the absolute value signs can be dropped because
of the definition of the ideal objective vector, if it is known globally. Weighting
vectors can also be used in connection with problems of form (2.1.4).
If p = 1, the sum of weighted deviations is minimized and the problem to be
solved is equal to the weighting problem except for a constant (if z* is known
globally). If p = 2, we have a method of least squares. When p gets larger,
the minimization of the largest deviation becomes more and more important.
Finally, when p = ∞, the only thing that matters is the largest weighted
deviation of a single objective function.
Problem (3.4.2) is nondifferentiable like its unweighted counterpart. Corre-
spondingly, it can be solved in a differentiable form as long as the objective
and the constraint functions are differentiable and z* is known globally. In this
case, instead of problem (3.4.2), the problem
             minimize    α
(3.4.3)      subject to  α ≥ w_i (f_i(x) − z_i*) for all i = 1, ..., k,
                         x ∈ S,

is solved, where both x ∈ R^n and α ∈ R are variables. This formulation will
be utilized later.
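For illustration, the sketch below solves the differentiable form (3.4.3) for a few weight vectors; the toy problem, the assumed global ideal values and the SLSQP solver choice are not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
z_star = np.array([0.0, 0.0])                   # assumed global ideal values

def weighted_tcheby(w):
    cons = [{'type': 'ineq',                    # alpha - w_i (f_i(x) - z_i*) >= 0
             'fun': (lambda y, wi=wi, fi=fi, zi=zi: y[2] - wi * (fi(y[:2]) - zi))}
            for wi, fi, zi in zip(w, objectives, z_star)]
    res = minimize(lambda y: y[2], np.zeros(3), method='SLSQP',
                   bounds=[(-5, 5), (-5, 5), (None, None)], constraints=cons)
    return res.x[:2]

for w1 in (0.2, 0.5, 0.8):
    print(w1, weighted_tcheby([w1, 1.0 - w1]).round(3))
```

Altering the weight vector gives different (weakly) Pareto optimal solutions.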
Theorem 3.4.1. The solution of weighted Lp-problem (3.4.1) (when 1 ≤ p <
∞) is Pareto optimal if either
(i) the solution is unique or
(ii) all the weighting coefficients are positive.
Proof. The proof is not presented here since it follows directly from the proofs
of Theorems 3.1.2, 3.1.3 and 2.1.1. See Chankong and Haimes (1983b, p. 144)
or Yu (1973).
Theorem 3.4.3. Weighted Tchebycheff problem (3.4.2) has at least one Pareto
optimal solution.
Proof. The proof follows directly from the proof of Theorem 2.1.3.
Theorem 3.4.5. Let x* ∈ S be Pareto optimal. Then there exists a weighting
vector 0 < w ∈ R^k such that x* is a solution of weighted Tchebycheff problem
(3.4.2), where the reference point is the utopian objective vector z**.
Proof. Let x* ∈ S be Pareto optimal. Let us assume that there does not exist
a weighting vector w > 0 such that x* is a solution of the weighted Tchebycheff
problem. We know that f_i(x) > z_i** for all i = 1, ..., k and for all x ∈ S. Now
we choose w_i = β/(f_i(x*) − z_i**) for all i = 1, ..., k, where β > 0 is some
normalizing factor.
Thus w_i(f_i(x°) − z_i**) < β for all i = 1, ..., k. This means that

f_i(x°) < f_i(x*)

for all i = 1, ..., k. Here we have a contradiction with the Pareto optimality of
x*, which completes the proof. □
3.4.3. Comments
Theorem 3.4.5 above sounds quite promising for the weighted Tchebycheff
problem. Unfortunately, this is not the whole truth. In addition to the fact that
every Pareto optimal solution can be found, weakly Pareto optimal solutions
may also be included. Auxiliary calculation is needed in order to identify the
weak ones. Remember that as far as the weighted Lp-problem (1 ≤ p < ∞) is
concerned, it produces Pareto optimal solutions but does not necessarily find
all of them.
Selecting the value for the exponent p is treated in Ballestero (1997b) from
the point of view of risk aversion. The conclusion is that for greater risk aversion
we should use greater values for p. Another guideline is that for a smaller
number of objective functions we should select greater p values.
More results concerning the properties of the Lp-metrics (1 ≤ p ≤ ∞) with
and without the weighting coefficients can be found, for example, in Bowman
(1976), Chankong and Haimes (1983b, pp. 144-146), Koski and Silvennoinen
(1987), Nakayama (1985a) and Yu (1973), the first of these treating especially
the Tchebycheff metric. Some results concerning the proper efficiency (in the
sense of Henig) of the solutions of the weighted Lp-problem are presented briefly
in Wierzbicki (1986b).
Useful results concerning trade-off rates and the weighted Tchebycheff prob-
lem are proved in Yano and Sakawa (1987). The approach is closely related to
what was presented in Subsection 3.2.4 in connection with the ε-constraint
problem.
Let us once again suppose that the feasible region is of the form S = {x ∈ R^n |
g(x) = (g_1(x), ..., g_m(x))^T ≤ 0}.
All the objective and the constraint functions are assumed to be twice con-
tinuously differentiable, which is why problem (3.4.3) is the one to be dealt
with.
Problem (3.4.3) is first formulated as an unconstrained minimization prob-
lem with one objective function, the Lagrange function, of the form
(3.4.4)      α + ∑_{i=1}^{k} λ_i ( w_i (f_i(x) − z_i*) − α ) + ∑_{i=1}^{m} μ_i g_i(x),
Thus far, it has been proved that the weighted Tchebycheff problem can find
any Pareto optimal solution. According to Corollary 3.4.4, the unique solution
of the weighted Tchebycheff problem is Pareto optimal. If the solution is not
unique or the uniqueness is difficult to guarantee, the weakness of the problem
is that it may produce weakly Pareto optimal solutions as well. This weakness
can be overcome in different ways. One possibility is to solve some additional
(3.4.5)      minimize    max_{i=1,...,k} [ w_i |f_i(x) − z_i**| ] + ρ ∑_{i=1}^{k} |f_i(x) − z_i**|
             subject to  x ∈ S,

where ρ is a sufficiently small positive scalar.
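A short sketch of a problem of this form follows; the toy objective functions, the assumed utopian vector and the value of ρ are illustrative only, and the derivative-free Powell method is used because the max term is nondifferentiable.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
z_utop = np.array([-1e-3, -1e-3])               # assumed utopian vector z**
w, rho = np.array([0.5, 0.5]), 1e-4             # weights and augmentation term

def augmented_tcheby(x):
    devs = np.array([fi(x) - zi for fi, zi in zip(objectives, z_utop)])
    return np.max(w * np.abs(devs)) + rho * np.sum(np.abs(devs))

res = minimize(augmented_tcheby, np.zeros(2), method='Powell',
               bounds=[(-5.0, 5.0)] * 2)
print(res.x.round(3))
```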
A slightly different modified weighted Tchebycheff metric is used in the mod-
ified weighted Tchebycheff problem
(3.4.6)      minimize    max_{i=1,...,k} [ w_i ( |f_i(x) − z_i**| + ρ ∑_{j=1}^{k} |f_j(x) − z_j**| ) ]
             subject to  x ∈ S.
It is valid for both the augmented and the modified weighted Tchebycheff
problem that they generate only properly Pareto optimal solutions and any
properly Pareto optimal solution can be found. In what follows, the symbol M
is the scalar from Definition 2.9.1 of proper Pareto optimality in Part I.
Some metrics for measuring the distance between the utopian objective
vector and the feasible objective region can be formed in such a way that they
produce solutions with selectively bounded global trade-offs. This is the reverse
of the usual approach, where trade-offs are calculated only after solutions have
been generated.
For the simplicity of notations we here assume the global ideal objective
vector and, thus, the global utopian objective vector to be known. This implies
that we can drop the absolute value signs.
All the properly Pareto optimal solutions produced with modified weighted
Tchebycheff problem (3.4.6) have bounded global trade-offs. Further, we have
a common bound for every global trade-off involved.
Λ_{ij}(x*) ≤ (1 + ρ) / ρ

for every i, j = 1, ..., k, i ≠ j.
Proof. See Kaliszewski (1994, pp. 94-95).
Corresponding results can be proved for other types of problems, see Kaliszewski
(1994, pp. 82-113).
Sometimes the decision maker may wish to set a priori bounds on some
specific global trade-offs. Such a request calls for a scalarizing function of a
special form. These topics are treated in Kaliszewski and Michalowski (1995,
1997). Thus far, the additional term multiplied by ρ was added to guarantee
the proper Pareto optimality of the solutions. If we leave it out, we obtain
weighted Tchebycheff problem (3.4.2) and, thus, weakly Pareto optimal solu-
tions. In what follows, we use metrics without modification or augmentation
terms but use other parameters σ and σ_i > 0 to control the bounds of the
global trade-offs involved. Thus, the following results deal with weak Pareto
optimality.
The next theorem handles a case where we wish to set a priori bounds for
a group of selected global trade-offs. Let us choose a subset of the objective
functions I_0 ⊂ I = {1, ..., k} and define I(i) = {j | j ∈ I_0, j ≠ i}.
if and only if there exist a weighting vector 0 < w ∈ R^k and a number σ > 0
such that x* is a solution of the problem

(3.4.7)      minimize    max [ max_{i∈I_0} [ w_i ( f_i(x) − z_i** + σ ∑_{j∈I(i)} (f_j(x) − z_j**) ) ],
                               max_{i∈I\I_0} [ w_i (f_i(x) − z_i**) ] ]
             subject to  x ∈ S.
Result (3.4.7) of Theorem 3.4.9 is utilized so that the decision maker is asked
to specify upper bounds for the selected global trade-offs. These values are set as
upper bounds on (1 + σ)/σ. A lower bound for the parameter σ is obtained from
these inequalities. By using the calculated σ value, we obtain different weakly
Pareto optimal solutions satisfying the global trade-off bounds by altering the
weighting coefficients. In other words, we avoid generating solutions exceeding
the specified bounds for global trade-offs.
In Theorem 3.4.9 we have a common bound for the selected set of global
trade-offs. This can further be generalized by using several different parameters
σ_j.
(3.4.8)      minimize    max [ max_{i∈I_0} [ w_i ( f_i(x) − z_i** + ∑_{j∈I(i)} σ_j (f_j(x) − z_j**) ) ],
                               max_{i∈I\I_0} [ w_i (f_i(x) − z_i**) ] ]
             subject to  x ∈ S.
Theorem 3.4.10 is applied in the following way. If we want to generate weakly
Pareto optimal solutions such that certain global trade-offs are bounded (each
global trade-off with an individual bound), we form a system of equations from
the global trade-off information. That is, we set (1 + σ_j)/σ_j equal to the specified
upper bound, where desired. If the system is consistent, we solve it and obtain
values for the parameters σ_j. If the system is inconsistent, some equation(s)
must be dropped in order to form a consistent system. In this way, the parameters
σ_j are used to control the bounds of the selected global trade-offs.
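In the simple case where each selected trade-off has its own independent bound B_j, setting (1 + σ_j)/σ_j = B_j gives σ_j = 1/(B_j − 1). The following few lines, with assumed bound values, only illustrate this back-calculation.

```python
# Turn prescribed upper bounds B_j (> 1) on selected global trade-offs into
# parameter values sigma_j via (1 + sigma_j) / sigma_j = B_j.
def sigmas_from_bounds(bounds):
    return {j: 1.0 / (B - 1.0) for j, B in bounds.items()}

print(sigmas_from_bounds({1: 3.0, 2: 5.0}))    # {1: 0.5, 2: 0.25}
```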
(3.4.9)      minimize    max_{i=1,...,k} [ w_i ( f_i(x) − z_i** + ∑_{j=1}^{k} ρ_j (f_j(x) − z_j**) ) ]
             subject to  x ∈ S.

Λ_{ji}(x*) ≤ (1 + ρ_j) / ρ_j
The method of the weighted Tchebycheff metric and its variants are particularly
popular methods for generating Pareto optimal solutions. They work for convex
as well as nonconvex problems (unlike the weighting method), and altering
the parameters is easier than in the ε-constraint method.
3.5. Achievement Scalarizing Function Approach
3.5.1. Introduction
It differs from weighted Tchebycheff problem (3.4.2) only in that the abso-
lute value signs are missing. This change ensures that weakly Pareto optimal
solutions are produced independently of the feasibility or infeasibility of the
reference point.
s_z̄(z̄) = 0.
Proof. Here we only prove the second statement because of the similarity of
the proofs. We assume that s_z̄ is strongly increasing. Let z* ∈ Z be a solution
of the achievement problem. Let us suppose that it is not Pareto optimal. In
this case, there exists an objective vector z ∈ Z such that z_i ≤ z_i* for all
i = 1, ..., k and z_j < z_j* for some j. Because s_z̄ is strongly increasing, we know
that s_z̄(z) < s_z̄(z*), which contradicts the assumption that z* minimizes s_z̄.
Thus z* is Pareto optimal. □
Note that Theorems 3.5.4 and 3.5.5 are valid for any scalarizing function.
Thus, the Pareto optimality and the weak Pareto optimality results proved for
the weighting method, the ε-constraint method and the method of weighted
metrics are explained by the monotonicity properties of the scalarizing func-
tions in question (see, e.g., Vanderpooten (1990)).
We can now rewrite Theorem 3.5.4 so as to be able to characterize Pareto
optimal solutions with the help of order-representing and order-approximating
achievement functions. The proof follows from the proof of Theorem 3.5.4.
Proof. Here, we only prove the statement for Pareto optimality. The proofs
of the other statements are very similar. (The proof of the necessary condition
for ε-proper Pareto optimality can be found in Wierzbicki (1986a).)
Let z* ∈ Z be Pareto optimal. This means that there does not exist any
other point z ∈ Z such that z_i ≤ z_i* for all i = 1, ..., k and z_j < z_j* for some
j. Let us assume that z* is not a solution of the achievement problem when
z̄ = z*. In this case there exists some vector z° ∈ Z such that s_z̄(z°) < s_z̄(z*) =
s_z̄(z̄) = 0 and z° ≠ z*. Since s_z̄ was assumed to be order-representing, we have
z° ∈ z̄ − int R^k_+ = z* − int R^k_+. This means that z_i° < z_i* for all i = 1, ..., k,
which contradicts the assumption that z* is Pareto optimal. Thus, z* is a
solution of the achievement problem. □
Remark 3.5.8. Aided by the results in Theorem 3.5.7, a certain point can be
confirmed not to be weakly, ε-properly or Pareto optimal (if the optimal value
of the achievement function differs from zero).
We are now able to completely characterize the set of weakly Pareto optimal
solutions with the help of order-representing achievement functions. The sets
of Pareto optimal and ε-properly Pareto optimal solutions are characterized al-
most completely (if the closure of the sets of solutions of achievement problem
(3.5.2) for an order-approximating achievement function is taken as ε → 0).
If the solutions of achievement problem (3.5.2) are assumed to be unique, the
theorems above render the characterization of Pareto optimal solutions com-
plete.
3.5.3. Comments
An example of order-representing achievement functions is

s_z̄(z) = max_{i=1,...,k} [ w_i (z_i − z̄_i) ],

where w is some fixed positive weighting vector. Let us briefly convince our-
selves that the above-mentioned function really is order-representing. The con-
tinuity of the function is obvious. If we have z¹ and z² ∈ Z such that z_i¹ < z_i²
for all i = 1, ..., k, then s_z̄(z¹) = max_i [w_i(z_i¹ − z̄_i)] < max_i [w_i(z_i² − z̄_i)] =
s_z̄(z²) and thus the function is strictly increasing. If the inequality s_z̄(z) =
max_i [w_i(z_i − z̄_i)] < 0 holds, then we must have z_i < z̄_i for all i = 1, ..., k, that
is, z ∈ z̄ − int R^k_+.
An example of order-approximating achievement functions is
(3.5.3)      s_z̄(z) = max_{i=1,...,k} [ w_i (z_i − z̄_i) ] + ρ ∑_{i=1}^{k} w_i (z_i − z̄_i),
where w is some fixed positive weighting vector and ρ > 0 is sufficiently small
when compared with ε and large when compared with ε̄. The weighting coeffi-
cients can also be dropped from the latter part. This function is also ε-strongly
increasing. Function (3.5.3) is related to augmented weighted Tchebycheff prob-
lem (3.4.5) and, thus, it can be called an augmented weighted achievement
function.
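The sketch below minimizes an augmented achievement function of the form (3.5.3) over an assumed toy problem for one reference point; the weights, the value of ρ, the reference point and the Powell solver are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]
w, rho = np.array([1.0, 1.0]), 1e-4
z_bar = np.array([0.5, 0.5])                    # reference point (aspiration levels)

def achievement(x):
    z = np.array([fi(x) for fi in objectives])
    return np.max(w * (z - z_bar)) + rho * np.sum(w * (z - z_bar))

res = minimize(achievement, np.zeros(2), method='Powell',
               bounds=[(-5.0, 5.0)] * 2)
print(res.x.round(3), [round(fi(res.x), 3) for fi in objectives])
```

Changing the reference point moves the solution accordingly, whether or not the reference point is attainable.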
An example of a so-called penalty scalarizing function is
where ϱ > 1 is a scalar penalty coefficient and (z − z̄)_+ is a vector with com-
ponents max[0, z_i − z̄_i]. This function is strictly increasing, strongly increas-
ing for all the metrics in R^k except for the Tchebycheff metric, and order-
approximating with ε ≥ 1/ϱ (see Wierzbicki (1980a, 1982)). More examples of
order-representing and order-approximating functions are presented, for exam-
ple, in Wierzbicki (1980b, 1986a, b).
In cases when there exists a weighting vector such that the solution of
weighting problem (3.1.1) is equal to the solution of the achievement problem,
the weighting vector can be obtained from partial derivative information of the
achievement function. See Wierzbicki (1982) for details.
Let us finally mention a subset of reference points, termed dominating
points, considered in Skulimowski (1989). A point is called a dominating point
if it is not dominated by any feasible point and it dominates at least one of the
feasible points.
3.6. Other A Posteriori Methods
Finally, we briefly mention some other methods of the a posteriori type. For
more detailed information, see the references cited.
The so-called hyperplane method is introduced in Yano and Sakawa (1989)
for generating Pareto optimal or properly Pareto optimal solutions. It is
shown that the weighting method, the ε-constraint method and the method
of weighted metrics can be viewed as special cases of the hyperplane method.
A theory concerning trade-off rates in the hyperplane method is provided in
Sakawa and Yano (1990). A generalized hyperplane method for generating all
the efficient solutions (with respect to some ordering cone) is presented in
Sakawa and Yano (1992).
Another method for a general characterization of the Pareto optimal set is
suggested in Soland (1979). For example, the weighting method, the method of
weighted metrics and goal programming (see Section 4.3) can be seen as special
cases of the general scalar problem of Soland. Further, the weighting method
and the ε-constraint method are utilized in a so-called envelope approach for
determining Pareto optimal solutions in Li and Haimes (1987). An application
to dynamic multiobjective programming is also treated.
The noninferior (meaning here Pareto optimal) set estimation (NISE)
method for MOLP problems can also be considered to belong to this class
of a posteriori methods. It is a technique for generating the Pareto optimal
set of two objective functions (see Cohon (1978)). It can be generalized for
convex problems with two objective functions (see, for example, Chankong and
Haimes (1983b, pp. 268-274)). In Balachandran and Cero (1985), the method
is extended to problems with three objective functions. The weighting method
is the basis of the NISE method.
4. A PRIORI METHODS
In the case of a priori methods, the decision maker must specify her or
his preferences, hopes and opinions before the solution process. The difficulty
is that the decision maker does not necessarily know beforehand what it is
possible to attain in the problem and how realistic her or his expectations are.
The working order in these methods is: 1) decision maker, 2) analyst.
Below, we handle three a priori methods. First, we give a short presentation
of the value function method. Then we introduce lexicographic ordering and
goal programming.
4.1. Value Function Method
4.1.1. Introduction
In the value function method, the decision maker must be able to give an
accurate and explicit mathematical form of the value function U: R^k → R that
represents her or his preferences globally. This function provides a complete
ordering in the objective space. Then the value function problem
             maximize    U(f(x))
(4.1.1)      subject to  x ∈ S
is ready to be solved by some method for single objective optimization as
illustrated in Figure 4.1.1. The bold line represents the Pareto optimal set.
Remember Theorem 2.6.2 of Part I, which says that the solution of problem
(4.1.1) is Pareto optimal if the value function is strongly decreasing.
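Purely as an illustration, the sketch below solves a problem of form (4.1.1) for one assumed explicit value function; the additive form of U and the toy objective functions are assumptions made for the example, since in practice eliciting U reliably is the difficult part.

```python
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2]

def U(z):
    """An assumed additive, strongly decreasing value function."""
    return -(2.0 * z[0] + 1.0 * z[1])

# Maximize U(f(x)) over S by minimizing -U(f(x)).
res = minimize(lambda x: -U([fi(x) for fi in objectives]),
               np.zeros(2), bounds=[(-5.0, 5.0)] * 2)
print(res.x.round(3))
```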
The value function method seems to be a very simple method, but the diffi-
culty lies in specifying the mathematical expression of the value function. The
inability to encode the decision maker's underlying value function reliably is
demonstrated by experiments in de Neufville and McCord (1984). It is shown
that encoding methods that should theoretically produce identical value func-
tions fail: the functions may differ from each other by more than 50 %.

(Figure 4.1.1. Contours of the value function U in the objective space; the bold
line represents the Pareto optimal set.)

It is also
pointed out that there is no actual analysis of the accuracy of the value function
assessment. The consistency checks, that is, whether decision makers provide
consistent answers to similar questions, are not adequate: a biased instrument
can provide consistent data.
On the other hand, even if it were possible for the decision maker to express
her or his preferences globally, the resulting preference structure might be too
simple, since value functions cannot represent intransitivity or incomparability
(see Rosinger (1985)). More features and weaknesses were presented in connec-
tion with the definition of the value function (Definition 2.6.1) in Section 2.6
of Part I.
4.1.2. Comments
The value function method could be called an 'optimal' way of solving mul-
tiobjective optimization problems if the decision maker could reliably express
the value function. The use of the value function method is restricted in prac-
tice to multiattribute decision analysis problems with a discrete set of feasible
alternatives. The theory of value and utility functions for multiattribute prob-
lems is examined broadly in Keeney and Raiffa (1976). But, it is believed, for
example, in Rosenthal (1985), that these experiences can also be utilized in
continuous cases.
Important results concerning value functions and the conditions for their ex-
istence are collected in Dyer and Sarin (1981). Two general classes of value func-
tions, additive and multiplicative forms, are presented extensively in Keeney
and Raiffa (1976) and briefly in Rosenthal (1985). The existence of value func-
tions and the nature of additive decreasing value functions are handled in Stam
et al. (1985). These topics and the construction of value functions are pre-
sented more widely in Yu (1985, pp. 95-161). General properties and some
desirable features of certain types of value functions (e.g., additive, max-min,
min-sum and exponential forms) are stated in Bell (1986), Harrison and Rosen-
thal (1988), Soland (1979) and Sounderpandian (1991). More examples of value
functions are given in Tell and Wallenius (1979). Utility compatible measures
of risk are deduced in Bell (1995). Relations between value functions, ordering
cones and (proper) efficiency are studied in Henig (1990).
In some interactive methods, it is assumed that the underlying value func-
tion is of some particular (e.g., additive or exponential) form, after which, its
parameters are fitted according to the decision maker's preferences. Such meth-
ods are presented, for example, in Rothermel and Schilling (1986) and Sakawa
and Seo (1980, 1982a, b) (see Section 5.3).
Three kinds of conditions for value functions under which it is not possi-
ble to exclude any Pareto optimal or properly Pareto optimal solution from
consideration a priori are identified in Soland (1979).
The convergence properties of additive value functions (assuming prefer-
ential independence of the objective functions) are investigated by simulation
experiments in Stewart (1997). One observation is that piecewise linear value
functions perform dramatically better than linear ones.
Relationships between the method of weighted metrics and the value func-
tion method are reported in Ballestero and Romero (1991). It might be imag-
ined that the two methods have nothing in common, since a value function
represents the opinions of the decision maker and the method of weighted met-
rics does not take the decision maker into consideration. However, conditions
can be set on the value function to guarantee that its optimum belongs to the
solution set obtainable by the method of weighted metrics. More relationships
between these two methods, when the value functions are of a certain type,
are presented in Ballestero (1997a). It is demonstrated in Moron et al. (1996)
that there are large families of such well-behaved value functions for bi-criteria
problems where the connection is valid.
One important thing to take into account in practice is that the aspirations
of the decision maker may change during the solution process. Possible expla-
nations of such behaviour are pondered in Steuer and Gardiner (1990). Is it
possible that the decision maker's value function will change considerably over
a short period of time and thus be unstable? Another alternative is that it is
difficult for the decision maker to know the real value function without getting
to know the problem better, that is, without interaction with the solution pro-
cess. More open questions concerning value functions are listed in Nijkamp et
al. (1988).
The weighting method may be regarded as a special case of a value function
where the utilities are linear and additive. If the underlying value function is
assumed to be linear, this means that the marginal rates of substitution of the
decision maker are constant for every solution. See comments on this feature
in Section 4.3.
4.2. Lexicographic Ordering
4.2.1. Introduction
We can now present the following result concerning the Pareto optimality of
the solutions.
obtained for f_i. The assumption f_i(x) ≤ f_i(x*) implies that f_i(x) = f_i(x*),
which is a contradiction. Thus, x* is Pareto optimal. □
4.2.2. Comments
The justification for using lexicographic ordering is its simplicity and the
fact that people usually make decisions successively. However, this method has
several drawbacks. The decision maker may have difficulties in putting the
objective functions into an absolute order of importance. On the other hand,
the method is usually robust. It is very likely that the less important objective
functions are not taken into consideration at all. If the most important objective
function has a unique solution, the other objectives do not have any influence
on the solution. And even if the most important objective function had alternative
optima, so that the second most important objective could be used, it is very
unlikely that this second problem would in turn have alternative optima allowing
the third or further, less important objectives to be used.
Note that lexicographic ordering does not allow a small increment of an
important objective function to be traded off with a great decrement of a less
important objective function. Yet, this kind of trading might often be appealing
to the decision maker.
4.3. Goal Programming
4.3.1. Introduction
The basic idea in goal programming is that the decision maker specifies
(optimistic) aspiration levels for the objective functions and any deviations
from these aspiration levels are minimized. An objective function jointly with
an aspiration level forms a goal. We can say that, for example, minimizing the
price of a product is an objective function, but if we want the price to be less
than 500 dollars, it is a goal (and if the price must be less than 500 dollars, it
is a constraint). We denote the aspiration level of the objective function f_i by z̄_i for i = 1, ..., k.
For minimization problems, goals are of the form f_i(x) ≤ z̄_i (and of the form f_i(x) ≥ z̄_i for maximization problems). Goals may also be represented
as equalities or ranges (for the latter, see Charnes and Cooper (1977)). The
aspiration levels are assumed to be selected so that they are not achievable
simultaneously.
It is worth noticing that the goals are of the same form as the constraints
of the problem. This is why the constraints may be regarded as a subset of the
goals. This way of formulating the problem is called generalized goal program-
ming. In this case, the goals can be thought of as being divided into flexible and
inflexible goals, where the constraints are the inflexible (or rigid) ones. More
detailed presentations and practical applications of generalized goal program-
ming are given, for example, in Ignizio (1983a) and Korhonen (1991a). See also
Section 5.10.
After the aspiration levels have been specified, the following task is to min-
imize the under- and overachievements of the objective function values with
respect to the aspiration levels. It is sufficient to study the deviational vari-
ables δ_i = z̄_i − f_i(x). The deviational variable δ_i may have positive or negative values, depending on the problem. We can present it as the difference of two nonnegative variables, that is, δ_i = δ_i^- − δ_i^+. We can now investigate how well each of the aspiration levels is attained by studying the deviational variables. We can, for example, solve the weighted goal programming problem

(4.3.1)    minimize   Σ_{i=1}^k w_i |f_i(x) − z̄_i|
           subject to x ∈ S.

The overachievement variable can be written as δ_i^+ = max [0, f_i(x) − z̄_i] or, equivalently, δ_i^+ = (1/2)[|z̄_i − f_i(x)| + f_i(x) − z̄_i].
This means that the absolute value signs can be dropped from problem (4.3.1) by introducing the underachievement and the overachievement variables. The resulting weighted goal programming problem is
(4.3.2)    minimize   Σ_{i=1}^k (w_i^- δ_i^- + w_i^+ δ_i^+)
           subject to f_i(x) + δ_i^- − δ_i^+ = z̄_i   for all i = 1, ..., k,
                      δ_i^-, δ_i^+ ≥ 0   for all i = 1, ..., k,
                      x ∈ S,
where we give separate weighting coefficients for underachievements and overachievements, and x ∈ R^n and δ_i^-, δ_i^+, i = 1, ..., k, are the variables. If all the goals are of the form f_i(x) ≤ z̄_i, we can leave out the underachievement variables and write the problem in the form
(4.3.3)    minimize   Σ_{i=1}^k w_i^+ δ_i^+
           subject to f_i(x) − δ_i^+ ≤ z̄_i   for all i = 1, ..., k,
                      δ_i^+ ≥ 0   for all i = 1, ..., k,
                      x ∈ S,
where x ∈ R^n and δ_i^+, i = 1, ..., k, are the variables.
Figure 4.3.1 portrays how problem (4.3.3) is solved. The black spot is the
reference point of the aspiration levels. Every weighting vector produces differ-
ent contours by which the feasible objective region is to be intersected. Thus,
different solutions can be obtained by altering the weights. Contours with two
weighting vectors have been depicted in the figure. The bold line illustrates the
Pareto optimal set.
Even though the constraints δ_i^- · δ_i^+ = 0 for all i = 1, ..., k are not usually
included in the problem formulations, some attention must be paid to guar-
antee that they are valid (see details in Rosenthal (1983)). An example of the
required conditions is given in Sawaragi et al. (1985, p. 253). The weighted goal
programming problem may be solved by standard single objective optimization
methods. If the original problem is linear, then the corresponding weighted goal
programming problem is also linear. The close connection between goal pro-
gramming and MOLP problems explains why the above-mentioned constraint
is usually absent from the problem formulation (it would make the problem
nonlinear).
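As a concrete illustration (not from the book), problem (4.3.2) for a small hypothetical linear example can be written and solved directly as a linear program; the objective data, the aspiration levels and the constraint defining S below are made-up assumptions, chosen so that the two goals are not simultaneously achievable.

# A minimal sketch (not from the book): weighted goal programming (4.3.2)
# as a linear program for two linear objectives f1(x) = x1 + 2*x2 and
# f2(x) = -3*x1 - x2, aspiration levels zbar = (1, -10), and the feasible
# region S = {x >= 0, x1 + x2 <= 4}.
import numpy as np
from scipy.optimize import linprog

F = np.array([[1.0, 2.0], [-3.0, -1.0]])   # rows: gradients of f1, f2
zbar = np.array([1.0, -10.0])              # aspiration levels
w_minus = np.array([1.0, 1.0])             # weights for underachievements
w_plus = np.array([1.0, 1.0])              # weights for overachievements

# Decision vector of the LP: (x1, x2, d1-, d2-, d1+, d2+)
c = np.concatenate([np.zeros(2), w_minus, w_plus])

# Goal constraints f_i(x) + d_i- - d_i+ = zbar_i
A_eq = np.hstack([F, np.eye(2), -np.eye(2)])
b_eq = zbar

# Original constraint x1 + x2 <= 4 (the set S)
A_ub = np.array([[1.0, 1.0, 0.0, 0.0, 0.0, 0.0]])
b_ub = np.array([4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
x, d_minus, d_plus = res.x[:2], res.x[2:4], res.x[4:6]
print("x =", x, "delta- =", d_minus, "delta+ =", d_plus)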
Note that weighted goal programming is closely related to the method of
weighted metrics or compromise programming. This can be seen particularly
well in formulation (4.3.1). Instead of the ideal objective vector, the reference
point of the decision maker is used in goal programming. The distances can be
measured by metrics other than the L1-metric. The L1-metric is widely used in
connection with goal programming because of the origin of the method in linear
programming. (This metric maintains the linearity of the problem.) If some
other Lp-metric is used, there is the additional problem of determining an appropriate
value for p. Note, however, that if we have appropriate solvers available, we can
solve problem (4.3.1) directly without any deviational variables and using any
metric.
In the lexicographic approach, the decision maker must specify a lexico-
graphic order for the goals in addition to the aspiration levels. The goal at the
highest priority level is supposed to be infinitely more important than the goal
at the second priority level, etc. This means that no matter how large a mul-
tiplier is selected, a lower priority goal multiplied by it can never be made as
important as a higher priority goal. After the lexicographic ordering, the prob-
lem with the deviational variables as objective functions and the constraints as
in (4.3.2) is solved as explained in Section 4.2. In order to be able to use the
lexicographic approach, the decision maker's preference order for the objectives
must be definite and rigid.
A combination of the weighted and the lexicographic approaches, to be
called a combined approach, is quite popular. In this case, several objective
functions may belong to the same class of importance in the lexicographic
order. In each priority class, a weighted sum of the deviational variables is
minimized. The same weaknesses presented in connection with lexicographic
ordering are also valid for this combined approach.
It is not necessary to include the original constraints (x E S) in the lex-
icographic optimization problem in the normal way. They can be considered
to belong to the first priority level. In this way, they are taken into account
before any objective function is optimized and the feasibility of the solutions
is guaranteed by the nature of the lexicographic ordering.
Next, we prove a result concerning the Pareto optimality of the solutions of
goal programming.
Proof. For the lexicographic approach, the proof corresponds to that of The-
orem 4.2.1. Here, we only present a proof for the weighted approach. For sim-
plicity of notation, we assume that the problem is of the form (4.3.3). A more
general case is straightforward.
Let x* ∈ S be a solution of the weighted goal programming problem, where the deviational variables (denoted here for clarity by δ_i^*) are positive. Let us assume that x* is not Pareto optimal. In this case, there exists a vector x° ∈ S such that f_i(x°) ≤ f_i(x*) for all i = 1, ..., k and f_j(x°) < f_j(x*) for at least one index j.
We denote f_j(x*) − f_j(x°) = β > 0. Then we set δ_i° = δ_i^* > 0 for i ≠ j and δ_j° = max [0, δ_j^* − β] ≥ 0, where δ_i° is the deviational variable corresponding to x° for i = 1, ..., k.
We have now f_i(x°) − δ_i° ≤ f_i(x*) − δ_i^* ≤ z̄_i for all i ≠ j. If δ_j^* − β > 0, then f_j(x°) − δ_j° = f_j(x°) − δ_j^* + f_j(x*) − f_j(x°) = f_j(x*) − δ_j^* ≤ z̄_j, and if δ_j^* − β ≤ 0, then f_j(x°) − δ_j° = f_j(x°) + f_j(x*) − f_j(x*) = f_j(x*) − β ≤ f_j(x*) − δ_j^* ≤ z̄_j.
This means that x° satisfies the constraints of problem (4.3.3). We have δ_j° < δ_j^* (this is also valid if δ_j° = 0 since δ_i^* > 0 for all i), and δ_i° ≤ δ_i^* for all i ≠ j. As the weighting coefficients are positive, we have Σ w_i^+ δ_i° < Σ w_i^+ δ_i^*, which contradicts the fact that x* is a solution of weighted goal programming problem (4.3.3).
For aspiration levels forming a Pareto optimal point the proof is self-evident. □
Let us briefly mention one more form of goal programming, min-max goal
programming (suggested in Flavell (1976)). It is not as widely used as the
weighted and the lexicographic approaches. For minimization problems the
min-max goal programming problem to be solved is
(4.3.4)    minimize   max_{i=1,...,k} δ_i^+
           subject to f_i(x) − δ_i^+ ≤ z̄_i   for all i = 1, ..., k,
                      δ_i^+ ≥ 0   for all i = 1, ..., k,
                      x ∈ S,
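As a small illustration (not from the book), problem (4.3.4) for hypothetical linear data can be rewritten as a linear program with an auxiliary variable bounding all the overachievements; the data below are made-up assumptions.

# A minimal sketch (not from the book) of min-max goal programming (4.3.4):
# introduce t >= max_i delta_i+, so that f_i(x) - zbar_i <= t and t >= 0.
import numpy as np
from scipy.optimize import linprog

F = np.array([[1.0, 2.0], [-3.0, -1.0]])   # gradients of the linear objectives
zbar = np.array([1.0, -10.0])              # aspiration levels

# LP variables: (x1, x2, t); minimize t
c = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([
    np.hstack([F, -np.ones((2, 1))]),      # f_i(x) - t <= zbar_i
    [[1.0, 1.0, 0.0]],                     # x1 + x2 <= 4 (the set S)
])
b_ub = np.concatenate([zbar, [4.0]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("x =", res.x[:2], "max deviation t =", res.x[2])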
4.3.3. Comments
This idea is even more evident when we look at the marginal rates of
substitution in goal programming problems. In Remark 2.8.7 of Part I it
was mentioned that the marginal rates of substitution may be defined as
m_ij(x) = [∂U(f(x))/∂f_j] / [∂U(f(x))/∂f_i]. Thus, goal programming does not take into con-
sideration the possibility that it is easier for the decision maker to let some-
thing increase a little if (s)he has got little of it than if (s)he has got much of
it. The reason for this is that goal programming implicitly assumes that the
marginal rates of substitution are piecewise constant. This critique also applies
to the lexicographic approach (see details in Rosenthal (1983, 1985)). More
critical observations about goal programming are presented in Romero (1991)
and Rosenthal (1983).
Goal programming is a very widely used and popular solution method for
practical multiobjective optimization problems. One of the reasons is its age.
Another reason is that goal-setting is an understandable and easy way of mak-
ing decisions. The specification of the weighting coefficients or the lexicographic
ordering may be more difficult. The weights do not have so direct an effect on
the solution obtained as in the a priori weighting method. However, they are
relative to each other. This means that only the mutual ratios of the weighting coefficients matter, not their absolute values. It may be difficult to specify the
weights because they have no direct physical meaning. It is demonstrated in
Nakayama (1995) that desirable solutions are very difficult to obtain by ad-
justing the weighting coefficients in the weighted goal programming problem.
In any case, as in the weighting method, it is advisable to normalize the objective functions when weighting coefficients are used.
One must be careful with the selection of the aspiration levels so that the
Pareto optimality of the solutions can be guaranteed. The correct selection may
be difficult for a decision maker who does not know what the feasible region
looks like. Presenting the ranges of the Pareto optimal set, or at least the ideal
objective vector, to the decision maker may help in the selection.
Goal programming is not an appropriate method to use if it is desired to
obtain trade-offs. Another restricting property is the underlying assumption of
a piecewise linear value function and thus piecewise constant marginal rates of
substitution.
Assuming that goal programming follows a traditional product life cycle,
it is inferred in Schniederjans (1995b) that the current stage of productivity
is in decline. It is pointed out that the number of goal programming papers
has been on the decrease for several years. One of the reasons suggested is the
aging of the few active contributors to goal programming.
5. INTERACTIVE METHODS
The class of interactive methods is the most developed of the four classes
of methods presented here. The interest devoted to this class can be explained
by the fact that assuming the decision maker has enough time and capabilities
for co-operation, interactive methods can be presumed to produce the most
satisfactory results. Many of the weak points of the methods in the other three
classes are overcome. Namely, only part of the Pareto optimal points has to be
generated and evaluated, and the decision maker can specify and correct her
or his preferences and selections as the solution process continues and (s)he
gets to know the problem and its potentialities better. This also means that
the decision maker does not have to know any global preference structure. In
addition, the decision maker can be assumed to have more confidence in the
final solution since (s)he is involved throughout the solution process.
In interactive methods, the decision maker works together with an analyst
or an interactive computer program. One can say that the analyst tries to de-
termine the preference structure of the decision maker in an interactive way.
A solution pattern is formed and repeated several times. After every iteration,
some information is given to the decision maker and (s)he is asked to answer
some questions or provide some other type of information. The working or-
der in these methods is: 1) analyst, 2) decision maker, 3) analyst, 4) decision
maker, etc. After a reasonable (finite) number of iterations every interactive
method should yield a solution that the decision maker can be satisfied with
and convinced that no considerably better solution exists. The basic steps in
interactive algorithms can be expressed as
a) find an initial feasible solution,
b) interact with the decision maker, and
c) obtain a new solution (or a set of new solutions). If the new solution
(or one of them) or one of the previous solutions is acceptable to the
decision maker, stop. Otherwise, go to step b).
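A schematic sketch (not from the book) of this general loop is given below; the three callable arguments are hypothetical placeholders for the method-specific parts of steps a)-c).

# A minimal schematic sketch (not from the book) of the basic steps a)-c).
# find_initial_solution, interact_with_dm and generate_solutions are
# hypothetical placeholders; interact_with_dm returns the decision maker's
# reaction to the solutions shown so far.
def interactive_solution_process(problem,
                                 find_initial_solution,
                                 interact_with_dm,
                                 generate_solutions):
    solutions = [find_initial_solution(problem)]        # step a)
    while True:
        reaction = interact_with_dm(solutions)          # step b)
        if reaction.accepted is not None:               # DM accepts a solution
            return reaction.accepted
        # step c): produce a new solution (or solutions) from the preferences
        solutions.extend(generate_solutions(problem, reaction.preferences))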
Interactive methods differ from each other by the form in which information
is given to the decision maker, by the form in which information is provided by
the decision maker, and how the problem is transformed into a single objective
optimization problem. One problem to be solved when designing an interactive
method is what kind of data one should use to interact with the decision maker.
It should be meaningful and easy for the decision maker to comprehend. The
decision maker should understand the meaning of the parameters for which
(s)he is asked to supply values. On the other hand, the data provided to the
decision maker should be easily obtainable by the analyst and contain infor-
mation about the system. Too much information should not be used and the
information obtained from the decision maker should be utilized efficiently. To
ensure that the greatest possible benefit can be obtained from the interactive
method, the decision maker must find the method worthwhile and acceptable,
and (s)he must be able to use it properly. This usually means that the method
must be understandable and sufficiently easy to use. This aim calls for research
in understanding the underlying decision processes and how decisions are made.
As stressed in Kok (1986), experiments in psychology indicate that the
amount of information provided to the decision maker has a crucial role. If more
information is given to the decision maker, the percentage of the information
used decreases. In other words, more information is not necessarily better than
less information. More information may increase the confidence of the decision
maker in the solution obtained but the quality of the solution may nonetheless
be worse.
In addition to the fact that the decision maker has an essential role in
interactive methods, the analyst should not be forgotten either. The analyst
can support the decision maker in many ways and, in the best possible case,
explain the behaviour of the problem to the decision maker. Thus, the analyst
may play a meaningful role in the learning process of the decision maker.
Interactive methods have been classified in many ways, mainly according to
their solution approaches. Here we do not follow any of those classifications. Let
us, however, mention two different conceptions regarding interactive approaches
according to Vanderpooten (1989a, b, 1992). The approaches are searching
and learning. In searching-oriented methods a converging sequence of solution
proposals is presented to the decision maker. It is assumed that the decision
maker provides consistent preference information. In learning-oriented methods
a free exploration of alternatives is possible allowing trial and error. The latter
does not guide the decision maker and convergence is not guaranteed. The best
procedure would be a combination of these two approaches, drawing on their
positive features. Such an approach would support the learning of preferences,
while it would also include guiding properties.
Before we present any methods, some critical comments are in order. Re-
peatedly, it has been and will be assumed that the decision maker makes consis-
tent decisions or that (s)he has an underlying (implicitly known) value function
upon which her or his decisions are made. The purpose is not to go deeply into
the theories of decision making. However, it is worth mentioning that those
assumptions can be called into question because they are difficult to verify.
Consistency of the responses of the decision maker is one of the most im-
portant factors guaranteeing the success of many interactive solution methods.
Because of the subjectivity of the decision makers, different starting points,
different types of questions or interaction styles may lead to different final so-
lutions. Some methods are more sensitive with respect to consistency than oth-
ers. The handling of inconsistency with respect to several interactive methods is
treated in Shin and Ravindran (1991). In general, inconsistency can be reduced
by consistency tests during the solution process or by minimizing the decision
maker's cognitive burden. In other words, interactive methods assuming con-
sistent answers should have built-in mechanisms to deal with inconsistencies.
This is one of the motivations in developing new methods for multiobjective
optimization.
Further, once the existence of an underlying, implicit value function is sup-
posed, several assumptions are set on it. How can one guarantee and verify,
for example, the pseudoconcavity of a function that is not explicitly known?
Naturally, something can be concluded if we find out enough about the deci-
sion maker's preference structure. Steps in that direction are, however, very
laborious and in any case the results are likely to be controversial.
In solving practical problems, knowledge about decision processes and deci-
sion analysis is needed to guarantee fruitful co-operation between the decision
maker and the analyst. An understanding of the behaviour of the decision
maker is important in both developing and applying interactive methods. This
fact has been somewhat underestimated, as emphasized in Korhonen and Wal-
lenius (1996, 1997). Korhonen and Wallenius also handle several behavioural
issues related to interactive methods. Among them are the learning process
of the decision maker, her or his wish to control the search process, and the
permissibility of cyclic behaviour or making errors. Perhaps the behavioural
sciences should be taken more widely into account when designing interactive
methods. A critique of the assumptions underlying interactive methods is also
presented in French (1984). The primary concern is that the assumptions should
be supported by empirical research from the behavioural sciences.
One noteworthy aspect is that it is unrealistic to assume that decision mak-
ers can provide precise information and inputs. After studying 86 reported
applications of decision analysis in the literature, it is concluded in Corner and
Corner (1995) that the methods should become more user-friendly and descrip-
tive in dealing with the input of the decision maker. In Wierzbicki (1997a), it
is stressed that intuition plays an essential role in decision making. Wierzbicki
defines intuitive decisions as "quasiconscious and subconscious information pro-
cessing, leading to an action, utilizing aggregated experience and training and
performed (most probably) by a specialized part of the human mind." To pro-
voke intuitive decision making, analysts should provide information in rich and
multidimensional graphic terms and avoid requiring consistency.
Decision making is appositely described in Zeleny (1989) as "searching for
harmony in chaos." One can criticize the way decision makers are forced into a
priori formulas, patterns or contexts (like wandering around the Pareto optimal
set). Instead, the decision maker should be guided through her or his own
creative search process since decision making can be regarded as a process of
continuous redefinition of the problem.
that the median number of iterations has been between three and eight. One
can ask whether such rapid convergence is the result of getting tired or whether
it is due to some other reason. Possibly the decision makers did not know how
to continue the solution process.
An important factor when using interactive solution methods is the selection
of the starting point. Particularly for nonconvex problems where the objective
functions may have several local optima, the starting point affects greatly the
solutions generated. If the starting point is somehow biased, it may anchor the
desires and the preferences of the decision maker. It is not desirable that the
final solution is affected by the starting point. In general, the starting point
should provide a useful basis for the decision maker in exploring the Pareto
optimal set. The starting point can, for example, be generated by some of the
noninteractive methods.
Nonconvexity is a mathematical aspect. Another aspect related to starting
points from the point of view of human judgment and decision making is the
above-mentioned anchoring. To be more exact, anchoring means that the deci-
sion maker fixes her or his thinking on some (possibly irrelevant) information,
like the starting point, and fails to sufficiently adjust and move away from
that anchor. In other words, the decision maker is unable to move far from
the starting point. This kind of behavioural perspective on interactive decision
making is handled in Buchanan and Corner (1997). On the basis of a number
of experiments it is argued that anchoring effects are connected more to di-
rected and structured solution methods than to methods based on free search.
Buchanan and Corner conclude that whenever an anchoring bias is possible, it
is important that the starting point reflects the initial preferences of the deci-
sion maker. The reasoning is that since any starting point is likely to bias the
decision maker, it is best to bias her or him in the right direction.
Even though interactive methods can be regarded as most promising so-
lution methods for multiobjective optimization problems, there are still cases
where these methods are not practicable regardless of the availability of the de-
cision maker. Such problems include, for instance, many engineering problems
that require extensive and expensive calculations (like large-scale finite element
approximations). One must, however, remember that computational facilities
have developed greatly during the last few years. Thus, the number of problems
that cannot be solved by interactive methods has decreased. See Osyczka and
Zajac (1990) for a suggestion of handling computationally expensive functions.
On the other hand, the large number of objective functions may make interac-
tive methods impractical. In this case, it may be difficult for the decision maker
to absorb the information provided and to give consistent answers in order to
direct the solution process.
Below, we present several interactive methods. Some of them are relatively
old and much tested and developed, whereas some others are new and deserve
further refinement. The methods to be described are the interactive surrogate
worth trade-off method, the Geoffrion-Dyer-Feinberg method, the sequential
proxy optimization technique, the Tchebycheff method, the step method, the
reference point method, the GUESS method, the satisficing trade-off method,
the light beam search, the reference direction approach, the reference direction
method and the NIMBUS method. The first three methods are based on the
existence of an underlying value function, whereas the last eight use reference
points and the classification of the objectives. (In developing the last of these,
attempts have been made to overcome some of the drawbacks observed in the
other methods.)
All the methods to be presented are based on generating mainly weakly,
properly or Pareto optimal solutions. In each method, it is assumed that less
is preferred to more by the decision maker. The same notion could be formu-
lated to require that the underlying value function is strongly decreasing. The
reason for avoiding this wording is that an underlying value function is not
always assumed to exist. The assumption only concerns the form of the general
preference structure of the decision maker.
In connection with the methods, some applications reported in the literature
are mentioned. However, let us keep in mind that the impressions obtained from
such applications may be biased because unsuccessful applications are hardly
ever published. In addition, we give references for extensions and modifications
of the methods. We also indicate whether the methods belong to the class of ad
hoc or non ad hoc methods. (These classes were introduced at the beginning
of this part in Chapter 1.)
Throughout the book the iteration counter is denoted by h and the deci-
sion variable vector at the current iteration by xh. In addition, the number of
alternative objective vectors presented to the decision maker is denoted by P.
5.1. Interactive Surrogate Worth Trade-Off Method
5.1.1. Introduction
The basic idea of the interactive surrogate worth trade-off (ISWT) method is to solve a sequence of ε-constraint problems and to ask the decision maker to select the most satisfactory solution for the continuation. In what follows, appropriate assumptions are assumed to be valid so that the solutions produced by the ε-constraint method are Pareto optimal (see Section 3.2).
It is assumed that
1. The underlying value function U: R^k → R exists and is implicitly known to the decision maker. In addition, U is continuously differentiable and strongly decreasing.
2. The objective and the constraint functions are twice continuously differentiable.
3. The feasible region S is compact (so that a finite solution exists for every feasible ε-constraint problem).
4. The assumptions in Theorem 3.2.13 are satisfied.
The main features of the ISWT method can be presented cursorily with
four steps.
(1) Select the reference function f_ℓ to be minimized and give upper bounds to the other objective functions. Set h = 1.
(2) Solve the current ε-constraint problem to get a Pareto optimal solution x^h. Trade-off rate information is obtained from the associated Karush-Kuhn-Tucker multipliers.
(3) Ask the opinions of the decision maker with respect to the trade-off rates at z^h corresponding to x^h.
(4) If some stopping criterion is satisfied, stop with x^h as the final solution. Otherwise, update the upper bounds of the objective functions with the help of the answers obtained in step (3) and solve several ε-constraint problems (to determine an appropriate step-size). Let the decision maker choose the most preferred alternative. Denote the corresponding decision vector by x^{h+1} and set h = h + 1. Go to step (3).
First, we examine how trade-off rate information is obtained from Karush-
Kuhn-Tucker multipliers. As noted in Theorem 3.2.13 of Section 3.2, the
Karush-Kuhn-Tucker multipliers represent trade-off rates under the specified
assumptions.
Let x^h ∈ S be a solution of the ε-constraint problem at the iteration h, where f_ℓ is the function to be minimized and the upper bounds are ε_i^h for i = 1, ..., k, i ≠ ℓ. We suppose that x^h satisfies the assumptions specified in Theorem 3.2.13. If the Karush-Kuhn-Tucker multipliers λ_ℓi^h associated with the constraints f_i(x) ≤ ε_i^h are strictly positive for all i = 1, ..., k, i ≠ ℓ, then λ_ℓi^h represents the partial trade-off rate at x^h between f_ℓ and f_i. In other words, if the multiplier λ_ℓi^h corresponding to the constraint involving f_i is positive, this particular constraint is active and binds the optimum.
We know now that to move from x^h to some other (locally) Pareto optimal solution in the neighbourhood of x^h, the value of the function f_ℓ decreases by λ_ℓi^h units for every unit of increment in the value of the function f_i (or vice versa), while the values of all the other objective functions remain unaltered. The opinion of the decision maker with regard to this kind of trade-off rate for all i = 1, ..., k, i ≠ ℓ, is found out by posing the following question.
Let an objective vector f(x^h) = z^h be given. If the value of f_ℓ is decreased by λ_ℓi^h units, then the value of f_i is increased by one unit (or vice versa) and the other objective values remain unaltered. How desirable do you find this trade-off?
If the situation is not so convenient as presented above, that is, some of the Karush-Kuhn-Tucker multipliers λ_ℓi^h equal zero, then another type of question is needed. Let us suppose that λ_ℓi^h > 0 for i ∈ N_> and λ_ℓj^h = 0 for j ∈ N_=, where N_> ∪ N_= = {i | i = 1, ..., k, i ≠ ℓ}. As noted in Theorem 3.2.13, increasing the value of f_i, where i ∈ N_>, decreases the value of f_ℓ and, in addition, the values of all f_j also change, where j ∈ N_=. The question to the decision maker for all i ∈ N_> is now of the form
Let an objective vector f(x^h) = z^h be given. If the value of f_ℓ is decreased by λ_ℓi^h units, then the value of f_i is increased by one unit (or vice versa) and the values of f_j for j ∈ N_= change by ∇f_j(x^h)^T ∂x(ε^h)/∂ε_i units. How desirable do you find these trade-offs?
A problem with the question above is that the values of ∂x(ε^h)/∂ε_i for i ∈ N_> are unknown. One of the ways suggested in Chankong and Haimes (1983b) for coping with this is that the values can be approximated by solving the ε-constraint problem with a slightly modified upper bound vector ε(i) = (ε_1^h, ..., ε_{ℓ−1}^h, ε_{ℓ+1}^h, ..., ε_i^h + Δ, ..., ε_k^h), where Δ ≠ 0 is a scalar with a small absolute value. Let the solution of this ε-constraint problem be x(ε(i)). We obtain now the approximation

∂x(ε^h)/∂ε_i ≈ (x(ε(i)) − x^h) / Δ.
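As a small illustration (not from the book), this finite-difference approximation can be sketched as follows; solve_eps_constraint is a hypothetical solver of the ε-constraint problem.

# A minimal sketch (not from the book) of the finite-difference approximation
# above. solve_eps_constraint is a hypothetical solver returning the optimal x
# of the eps-constraint problem for given upper bounds (a dict {i: eps_i}).
import numpy as np

def approximate_dx_deps(solve_eps_constraint, eps, x_h, i, step=1e-4):
    """Approximate the partial derivative of the optimal x with respect to
    the upper bound eps[i] by re-solving with a slightly perturbed bound."""
    eps_perturbed = dict(eps)
    eps_perturbed[i] = eps[i] + step
    x_perturbed = solve_eps_constraint(eps_perturbed)
    return (np.asarray(x_perturbed) - np.asarray(x_h)) / step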
Note that the decision maker's opinions are asked concerning certain amounts
of change in the values of the objective functions, and not of changes in general.
The following problem to be handled is the form of the answers expected from
the decision maker. It is suggested in Chankong and Haimes (1978, 1983b) that
the decision maker must specify an integer between 10 and -10 to indicate her
or his degree of preference. If the decision maker is completely satisfied with the
trade-off suggested, the answer is 10. Positive numbers less than 10 indicate the
degree of satisfaction (less than complete). Correspondingly, negative answers
reflect the decision maker's satisfaction with the trade-off which is converse to
that in the question. The answer 0 means that the decision maker is indifferent
to the given trade-off.
In Tarvainen (1984), it is suggested that far fewer choices are given to the
decision maker. The possible answers are integers from 2 to -2 and their mean-
ing corresponds to that presented above. The justification is that it is easier for
the decision maker to give an answer and maintain some kind of consistency
when there are fewer alternatives. These five alternatives are enough to rep-
resent the direction and rough degree of the decision maker's preferences and
satisfaction.
Regardless of the scale selected, the response of the decision maker is called
a surrogate worth of the trade-off rate between f_ℓ and f_i at x^h and denoted by W_ℓi^h. At each point x^h, a number of k − 1 (or fewer, if N_= ≠ ∅) questions of the previously described form are presented to the decision maker and the values for W_ℓi^h (i = 1, ..., k, i ≠ ℓ) are obtained.
According to Theorem 3.2.13, there exists a Pareto optimal solution in the
neighbourhood of xh when the values of the objective functions are changed
according to the information given in the trade-off rates. The problem is how
much the values of the objective functions can be changed in order to remain on
the Pareto optimal surface and obtain the best possible solution. We must find
a way to update the upper bounds of the objective functions in an appropriate
way.
How to proceed from this point depends on the scale chosen for the surrogate
worth values. The idea is to obtain an estimate for the gradient of the underlying
value function with the help of the surrogate worth values. Then a steepest
ascent-type formulation is used to maximize the value function. The upper
bounds of the ε-constraint problem are revised and a new solution is obtained.
It is assumed to satisfy the preferences of the decision maker indicated by the
surrogate worth values as well as possible.
In the original version by Chankong and Haimes, it is suggested that the upper bounds are updated from iteration h to h + 1 by

ε_i^{h+1} = ε_i^h + t (W_ℓi^h |f_i(x^h)|)

for i ∈ N_> and
5.1.3. Comments
In practice, when the decision maker is asked to express her or his prefer-
ences concerning the trade-off rates, (s)he is implicitly asked to compare the
trade-off rates with her or his marginal rates of substitution. (Naturally, the
decision maker does not have to be able to specify the marginal rates of sub-
stitution explicitly.) If m_ℓi < λ_ℓi, then the surrogate worth value is positive (and correspondingly negative in the opposite case). If m_ℓi = λ_ℓi for all i = 1, ..., k, i ≠ ℓ, meaning W_ℓi = 0, then the stopping criterion (2.8.1) introduced in Subsection 2.8.2 of Part I is valid. Thus, the condition W_ℓi = 0 for all i ≠ ℓ is a common stopping
criterion for the algorithm. Another possible stopping situation is that the de-
cision maker wants to proceed, but only in an infeasible direction. The latter
condition is more difficult to check.
The ISWT method can be classified as non ad hoc in nature. If the value
function is known, then the trade-off rates are easy to compare with the
marginal rates of substitution. Further, when comparing alternatives, it is easy
to select the one with the highest value function value.
The convergence rate of the ISWT method greatly depends on the accuracy
and the consistency of the answers of the decision maker. It was pointed out in
Section 2.8 of Part I that it is important to select the reference function care-
fully. This comment is also valid when considering the convergence properties.
If there is a sharp limit in the values of the reference function where there is a
change in satisfaction from 'very satisfactory' to 'very unsatisfactory,' the so-
lution procedure may stop too early. Further references are cited in Chankong
and Haimes (1978) for convergence results.
A method related to the ISWT method is presented in Chen and Wang
(1984). The method is an interactive version of the SWT method, where new
solution alternatives are generated by Lin's proper equality method (see Section
3.2), and the decision maker has to specify only the sign of the surrogate worth
values.
There are many other modifications of the SWT method in the literature.
Among others, it is generalized for multiple decision makers in Chankong and
Haimes (1983b, pp. 359-366), Haimes (1980) and Hall and Haimes (1976). The
first two handle also the SWT method in stochastic problems.
The role of the decision maker is quite easy to understand in the ISWT
method. (S)he is provided with one solution and has to specify the surrogate
worth values. The complicatedness of giving the answers depends on how ex-
perienced the decision maker is in such specification and which variation of the
method is employed. The set of 21 different alternatives as surrogate worth
values in the original version is quite a lot to select from. It may be difficult for
the decision maker to provide consistent answers throughout the decision pro-
cess. In addition, if there is a large number of objective functions, the decision
maker has to specify a lot of surrogate worth values at each iteration. At least
for some decision makers it may be easier to maintain consistency when there
are fewer alternative values for the surrogate worth available (as suggested by
Tarvainen (1984)).
Trade-off rates play an important role in the ISWT method, and that is
why the decision maker has to understand the concept of trade-off properly.
Attention must also be paid to the ease of understanding and careful formula-
tion of the questions concerning the trade-off rates. Careless formulation may,
for example, cause the sign of the surrogate worth value to be changed.
It is a virtue that all the alternatives during the solution process are Pareto
optimal. Thus, the decision maker is not bothered with any other kind of solu-
tions.
A negative feature is that there are a lot of different assumptions to be
satisfied to guarantee that the algorithm works. It may be difficult (and at
least laborious) in many practical problems to ensure that the assumptions are
satisfied. One can argue that the validity of the assumptions is not always that
important in practice. However, for example, the correctness of the trade-off
rates is crucial for the success of the ISWT method.
5.2. Geoffrion-Dyer-Feinberg Method
5.2.1. Introduction
The basic idea behind the GDF and the ISWT methods is the same. At
each iteration, a local approximation of an underlying value function is gener-
ated and maximized. In the GDF method, the idea is somewhat more clearly
visible. Marginal rates of substitution specified by the decision maker are used
to approximate the direction of steepest ascent of the value function. Then the
value function is maximized by a gradient-based method. A gradient method
of Frank and Wolfe (FW) (see Frank and Wolfe (1956)) has been selected for
optimization because of its simplicity and robust convergence (rapid initial con-
vergence) properties. The GDF method is also sometimes called an interactive
Frank-Wolfe method, because it has been constructed on the basis of the FW
method.
The problem to be solved here is
(5.2.1)    maximize   u(x) = U(f(x))
           subject to x ∈ S.
It is assumed that
1. The underlying value function U: R^k → R exists and is implicitly known to the decision maker. In addition, u: R^n → R is a continuously differentiable and concave function on S (sufficient conditions for the concavity are, for example, that U is a concave decreasing function and the objective functions are convex; or U is concave and the objective functions are linear), and U is strongly decreasing with respect to the reference function (denoted here by f_ℓ) so that ∂U(f(x))/∂f_ℓ < 0.
2. The objective functions are continuously differentiable.
3. The feasible region S is compact and convex.
Let us begin by presenting the main principles of the FW method. Let a point x^h ∈ S be given. The idea of the FW method is that when maximizing some objective function u: R^n → R subject to constraints x ∈ S, instead of u, a linear approximation of it at some point x^h ∈ S is optimized. If the solution obtained is y^h, then the direction d^h = y^h − x^h is a promising direction in which to seek an increased value for the objective function u.
At any feasible point x^h, a linear approximation to u(y) is u(x^h) + ∇u(x^h)^T(y − x^h); maximizing it over y ∈ S amounts to maximizing ∇u(x^h)^T y over y ∈ S.
Below, we shall show that even though we do not know the value function
explicitly, we can obtain a local linear approximation for it or to be more exact,
its gradient, with the help of marginal rates of substitution. This is enough to
permit the FW method to be applied. Before going into details we present the
basic phases of the GDF algorithm.
(1) Ask the decision maker to specify a reference function f_ℓ. Choose a feasible starting point x^1. Set h = 1.
(2) Ask the decision maker to specify marginal rates of substitution between f_ℓ and the other objectives at the current solution point x^h.
(3) Solve problem (5.2.3), where the approximation of the value function is maximized. Denote the solution by y^h ∈ S. Set the direction d^h = y^h − x^h. If d^h = 0, go to step (6).
(4) Determine with the help of the decision maker the appropriate step-size t^h to be taken in the direction d^h. Denote the corresponding solution by x^{h+1} = x^h + t^h d^h.
(5) Set h = h + 1. If the decision maker wants to continue, go to step (2).
(6) Stop. The final solution is x^h.
In the algorithm above we need a local linear approximation of the value
function at the point xh. As explained earlier, we only need to know the gradient
of the value function at xh. According to the chain rule, we know that the
gradient of the objective function of problem (5.2.1) at the point x^h ∈ S can be written in the form

∇u(x^h) = (∂U(f(x^h))/∂f_ℓ) Σ_{i=1}^k m_i^h ∇f_i(x^h),

where m_i^h = (∂U(f(x^h))/∂f_i) / (∂U(f(x^h))/∂f_ℓ) for all i = 1, ..., k, i ≠ ℓ (and m_ℓ^h = 1). The numbers m_i^h are the marginal rates of substitution specified by the decision maker at x^h. Because ∂U(f(x^h))/∂f_ℓ < 0, maximizing the linear approximation of u over S is equivalent to solving

(5.2.3)    maximize   (Σ_{i=1}^k −m_i^h ∇f_i(x^h))^T y
           subject to y ∈ S

with y ∈ R^n being the variable. The solution is denoted by y^h. The existence of the optimal solution is ensured by the compactness of S and the continuity of all the functions.
The search direction is now d^h = y^h − x^h. Provided that the marginal rates
of substitution are reasonably accurate, the search direction should be usable.
Let us mention that a scaling idea presented in Clinton and Troutt (1988) can
be included in the method. Heterogeneous objective functions can be scaled to
have equal effect in problem (5.2.3) by adjusting the norms of the gradients of
the objective functions with scalar coefficients.
The following problem is to find an appropriate step-size for going in the
search direction. The only variable is the step-size. The decision maker can
be offered objective vectors, where z_i = f_i(x^h + t d^h) for i = 1, ..., k, and t varies stepwise between 0 and 1 (e.g., t = (j − 1)/(P − 1), where j = 1, ..., P, and P is the number of the alternative objective vectors to be presented). Another possibility
is to draw the objective values as a function of t, provided no serious scaling
problems exist. An example of the graphical presentation is given in Hwang
and Masud (1979, p. 109). Graphical illustration of the alternative objective
vectors is handled in Chapter 3 of Part III. Note that the alternatives are not
necessarily Pareto optimal. From the information given to the decision maker
(s)he selects the most preferred objective vector and the corresponding value of
t is selected as th. It is obvious that the task of selection becomes more difficult
for the decision maker as the number of objective functions increases.
The opinions of the decision maker and the situation yh = xh are used here
as stopping criteria. Other possible criteria are presented in Hwang and Masud
(1979, pp. 108-110) and Yu (1985, p. 327).
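A minimal sketch (not from the book) of the direction-finding and step-size phases of one GDF iteration is given below; the gradient functions, the marginal rates of substitution and the solver of problem (5.2.3) are assumed to be given and are hypothetical placeholders.

# A minimal sketch (not from the book) of one GDF/Frank-Wolfe iteration.
# grad_f is a list of gradient functions of the objectives, m the marginal
# rates of substitution given by the decision maker (with m[ell] = 1), and
# maximize_linear_over_S a hypothetical solver for problem (5.2.3).
import numpy as np

def gdf_iteration(x_h, grad_f, m, maximize_linear_over_S, P=5):
    x_h = np.asarray(x_h, dtype=float)
    # Coefficient vector of problem (5.2.3): -sum_i m_i * grad f_i(x_h)
    c = -sum(m_i * np.asarray(g(x_h)) for m_i, g in zip(m, grad_f))
    y_h = maximize_linear_over_S(c)          # maximize c^T y subject to y in S
    d_h = np.asarray(y_h) - x_h              # search direction
    # Candidate step-sizes t = (j-1)/(P-1), j = 1, ..., P, shown to the DM
    candidates = [x_h + (j / (P - 1)) * d_h for j in range(P)]
    return d_h, candidates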
5.2.3. Comments
In the GDF method the decision maker is first given one solution where
(s)he has to specify the marginal rates of substitution. After that the decision
maker must select the most preferred solution from a set of alternatives. Thus,
the ways of interaction are versatile.
In spite of the plausible theoretical foundation of the GDF method, it is not
so convincing and powerful in practice. The most important difficulty for the
decision maker is the determining of the k - 1 marginal rates of substitution
at each iteration. Even more difficult is to give consistent and correct marginal
rates of substitution at every iteration. The difficulties of the decision maker in
determining the marginal rates of substitution are demonstrated, for example,
in Wallenius (1975) by comparative tests. The same point can be illustrated
by an example from Hemming (1981) where a politician is asked to specify the
exact marginal rate of substitution between unemployment and a decrease of
1 % in the inflation rate.
A drawback of the GDF method is that the final solution obtained is not nec-
essarily Pareto optimal. Naturally, it can always be projected onto the Pareto
optimal set with an auxiliary problem. A more serious objection is that when
several alternatives are given to the decision maker from which to select the
step-size, it is likely that many of them are not Pareto optimal. They can also
be projected onto the Pareto optimal set before presentation to the decision
maker, but this necessitates extra effort. The projection may be done, for in-
stance, by lexicographic ordering or by the means presented in Section 2.10 of
Part I. The use of achievement functions is demonstrated in the subgradient
GDF method. The weakness in the projection is that the computational burden
increases. It is for the analyst and the decision maker to decide which of the
two shortcomings is less inconvenient.
Theoretically, the Pareto optimality of the final solution is guaranteed if the
value function is strongly decreasing (by Theorem 2.6.2 of Part I). In any case,
marginal rates of substitution are crucial in approximating the value function,
and for many decision makers they are difficult and troublesome to specify.
For many people it is easier to think of desired changes in the objective
function values than to specify indifference relations. This may, especially, be
the case if the objective vector at which the marginal rates of substitution are
to be specified is not particularly desirable. Then it may be frustrating to think
of indifferent solutions instead of the improvements sought.
The Frank-Wolfe gradient method has been selected as the maximization
algorithm for its fast initial convergence. In some cases, other gradient-based
methods may be more appropriate. For example, the subgradient method is
employed in the subgradient GDF method.
There are a lot of assumptions that the problem to be solved must satisfy
in order for the method to work and converge. Several sufficient conditions on the
decision maker's preferences are presented in Sawaragi et al. (1985, pp. 258-
259) to guarantee the differentiability and the concavity of the value function.
Even these conditions are not very easy to check. For more critical discussion
concerning the GDF method, see Sawaragi et al. (1985, pp.257-261).
5.3. Sequential Proxy Optimization Technique
Like the two previous methods, the sequential proxy optimization technique
(SPOT), presented in Sakawa (1982), is based on the idea of maximizing the
decision maker's underlying value function, which is once again assumed to be
known implicitly. SPOT includes some properties of the ISWT and the GDF
methods, and that is why we describe it here briefly.
5.3.1. Introduction
As in the two interactive methods presented thus far, the search direction
in SPOT is obtained by approximating locally the gradient of the underlying
value function, and the step-size is determined according to the preferences
of the decision maker. Here, both marginal rates of substitution and trade-off
rates are used in approximating the value function.
It is assumed that
1. The underlying value function U: R^k → R exists and is implicitly known to the decision maker. In addition, U is a continuously differentiable, strongly decreasing and concave function on the subset of Z where the points are Pareto optimal.
2. The objective and the constraint functions are convex and twice continuously differentiable.
3. The feasible region S is compact and convex (and there exist some upper bounds for the ε-constraint problem so that the solution is finite).
4. The assumptions in Theorem 3.2.13 are satisfied.
The ε-constraint problem is used to generate Pareto optimal solutions. The solution of ε-constraint problem (3.2.1) is denoted by x^h. It is assumed to be unique so that Pareto optimality is guaranteed. Throughout this section it is assumed that all the upper bound constraints are active at the optimum. (If this is not the case, then the upper bounds must be slightly modified.) Then f_j(x^h) = ε_j^h for all j = 1, ..., k, j ≠ ℓ. The optimal value of f_ℓ, that is, f_ℓ(x^h), is denoted by z_ℓ^h. It is also assumed that all the Karush-Kuhn-Tucker multipliers associated with the active constraints are strictly positive. The conditions of Theorem 3.2.13 are assumed to be satisfied so that trade-off rate information can be obtained from the Karush-Kuhn-Tucker multipliers.
Here, the value function is not maximized in form (4.1.1) as before. Instead,
the set of feasible alternatives is restricted to the Pareto optimal set. According
to the assumption above stating that f_j(x^h) = ε_j^h for all j = 1, ..., k, j ≠ ℓ, we have a new formulation:

(5.3.1)    maximize   U(ε_1^h, ..., ε_{ℓ−1}^h, z_ℓ^h, ε_{ℓ+1}^h, ..., ε_k^h).

No constraints are needed here since the formulation includes the original constraints. The optimization is now carried out in the objective space R^{k−1}, where the upper bounds ε_j^h are the variables.
It is proved in Sakawa (1982) that the new function is concave with respect to those ε ∈ R^{k−1} for which the upper bound constraints are all active. Sakawa also claims that the partial derivative of (5.3.1) with respect to ε_j, j = 1, ..., k, j ≠ ℓ, is equivalent to (∂U(f)/∂f_ℓ)(m_ℓj^h − λ_ℓj^h), where m_ℓj^h is the marginal rate of substitution between f_ℓ and f_j at x^h (obtained from the decision maker, see Section 5.2) and λ_ℓj^h is the partial trade-off rate between f_ℓ and f_j at x^h (obtained from the Karush-Kuhn-Tucker multipliers, see Sections 3.2 and 5.1).
Because it was assumed that the value function is strongly decreasing, we know that ∂U(f)/∂f_ℓ < 0 and we can divide by it. We denote now
Σ_{j=1, j≠ℓ}^k λ_ℓj^h (m_ℓj^h − λ_ℓj^h) = Σ_{j=1, j≠ℓ}^k (−λ_ℓj^h) Δε_j^h,

that is, the estimated change in the optimal value z_ℓ^h along the search direction, by Δz_ℓ^h.
After obtaining the search direction, we have to find the step-size t which
in theory maximizes the function
(5.3.2)
respectively, is used. The constants a_i, w_i, n_i and α_i are used to tune the proxy
functions so that they represent the current problem and the preferences of
the decision maker better, and they are derived from the marginal rates of
substitution; see, for example, Sakawa (1982) and Sakawa and Seo (1982b)
for further details. This kind of proxy function is very restrictive globally but
reasonable when assumed locally.
(2) Solve the current (active) ε-constraint problem for ε^h to obtain a solution x^h.
(3) Denote the Pareto optimal objective vector corresponding to x^h by z^h and the corresponding Karush-Kuhn-Tucker multipliers by λ_ℓj^h, j = 1, ..., k, j ≠ ℓ.
(4) Ask the decision maker for the marginal rates of substitution m_ℓj^h for j = 1, ..., k, j ≠ ℓ, at x^h. Test the consistency of the marginal rates of substitution and ask the decision maker to respecify them if necessary.
(5) If |m_ℓj^h − λ_ℓj^h| < θ for all j, where θ is a prespecified positive tolerance, then stop with x^h as the final solution. Otherwise, determine the components Δε_j^h, j ≠ ℓ, of the search direction vector.
(6) Select an appropriate form of the proxy function and calculate its parameters. If the obtained proxy function is not strongly decreasing and concave, then ask the decision maker to specify new marginal rates of substitution.
(7) Determine the step-size by solving the ε-constraint problem with the upper bounds ε_j^h + t Δε_j^h, j = 1, ..., k, j ≠ ℓ, for different values of t. Denote the optimal value of the objective function by z_ℓ^h(t). A step-size t^h maximizing the proxy function is selected. If the new objective vector (ε_1^h + t^h Δε_1^h, ..., z_ℓ^h(t^h), ..., ε_k^h + t^h Δε_k^h)^T is preferred to z^h, denote the corresponding decision vector by x^{h+1}, set h = h + 1 and go to step (3). If the decision maker prefers z^h to the new solution, reduce t^h to (1/2)t^h, (1/4)t^h, ... until an improvement is achieved.
The maximum of the proxy function is determined by altering the step-size t, calculating the corresponding Pareto optimal solution and searching for three t values, t_1, t^h and t_2, so that t_1 < t^h < t_2 and p(t_1) < p(t^h) > p(t_2), where p is the proxy function. When the condition above is satisfied, the local maximum of the proxy function p(t) is in the neighbourhood of t^h.
Under assumptions 1-4 (in Subsection 5.3.1), the optimality condition for problem (5.3.1) at ε^h is that the gradient equals zero at that point. This means that m_ℓj^h = λ_ℓj^h for j = 1, ..., k, j ≠ ℓ. This is the background of the absolute value checking at step (5) (see also (2.8.1) in Part I).
5.3.3. Comments
Ideas from several methods are combined in SPOT and several concepts
are utilized. As far as the role of the decision maker is concerned, (s)he is only
required to determine the marginal rates of substitution. Difficulties related
to this determination were mentioned in Section 5.2 and they are still valid.
However, the consistency of the marginal rates of substitution in SPOT is
even more important than in the GDF method. This is a very demanding
requirement.
A positive feature of SPOT when compared to the GDF method is that only
Pareto optimal solutions are handled. Because the multiobjective optimization
problem was assumed to be convex, globally Pareto optimal solutions are ob-
tained. The burden on the decision maker is decreased by employing a proxy
function when selecting the step-size.
Many assumptions are set to guarantee the proper functioning of the al-
gorithm. Some of these are quite difficult to check in practice (see concluding
remarks concerning the GDF method in Subsection 5.2.5).
5.4. Tchebycheff Method
5.4.1. Introduction
The Tchebycheff method has been designed to be user-friendly for the deci-
sion maker, and, thus, complicated information is not required. To start with,
a utopian objective vector below the ideal objective vector is established. Then
the distance from the utopian objective vector to the feasible objective region,
measured by a weighted Tchebycheff metric, is minimized. Different solutions
are obtained with different weighting vectors in the metric, as introduced in
Section 3.4. The solution space is reduced by working with sequences of smaller
and smaller subsets of the weighting vector space. Thus, the idea is to develop a
sequence of progressively smaller subsets of the Pareto optimal set until a final
solution is located. At each iteration, different alternative objective vectors are
presented to the decision maker and (s)he is asked to select the most preferred
of them. The feasible region is then reduced and alternatives from the reduced
space are presented to the decision maker for selection.
Contrary to the previous interactive methods for multiobjective optimiza-
tion, the Tchebycheff method does not presume many assumptions regarding
the problem to be solved. It is assumed that
1. Less is preferred to more by the decision maker.
2. The objective functions are bounded (from below) over the feasible region
S.
In what follows we assume that the global ideal objective vector and, thus, the global utopian objective vector z^{**} are known, and we can leave out the absolute value signs from the metrics. The metric to be used for measuring the distances is the weighted Tchebycheff metric, and the problem to be solved is of the form

           minimize   max_{i=1,...,k} [ w_i (f_i(x) − z_i^{**}) ]
           subject to x ∈ S.
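As an illustration (not from the book), the weighted Tchebycheff problem above can be solved for a small hypothetical bi-objective example by introducing an auxiliary variable for the max term; the objective functions, the utopian vector and the box-shaped feasible region below are made-up assumptions.

# A minimal sketch (not from the book) of the weighted Tchebycheff
# scalarization: minimize max_i w_i (f_i(x) - z_i**) over S, rewritten with
# an auxiliary variable t so that standard smooth solvers apply.
import numpy as np
from scipy.optimize import minimize

objectives = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
              lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2]
z_utopian = np.array([-0.001, -0.001])   # slightly below the ideal point (0, 0)
w = np.array([0.5, 0.5])

# Variables y = (x1, x2, t); minimize t subject to w_i (f_i(x) - z_i**) <= t.
cons = [{"type": "ineq",
         "fun": lambda y, i=i: y[2] - w[i] * (objectives[i](y[:2]) - z_utopian[i])}
        for i in range(2)]
res = minimize(lambda y: y[2], x0=np.array([0.5, 0.5, 1.0]),
               bounds=[(0.0, 1.0), (0.0, 1.0), (None, None)], method="SLSQP",
               constraints=cons)
print("x =", res.x[:2], "objective values =", [f(res.x[:2]) for f in objectives])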
(Figure: the utopian objective vector, the solution z^1 of the first minimization and a subsequent second minimization of the weighted Tchebycheff metric.)
On the other hand, f_i(x°) − z_i^{**} ≤ f_i(x*) − z_i^{**} for all i = 1, ..., k and at least one of the inequalities is strict. That is why Σ_{i=1}^k (f_i(x°) − z_i^{**}) < Σ_{i=1}^k (f_i(x*) − z_i^{**}). Here we have a contradiction with x* being a solution of (5.4.2). Thus, x* is Pareto optimal. □
w_i = [1 / (f_i(x*) − z_i^{**})] (Σ_{j=1}^k 1 / (f_j(x*) − z_j^{**}))^{-1}
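A small sketch (not from the book) of this weight computation for a given Pareto optimal objective vector and utopian vector:

# A minimal sketch (not from the book) of the weight formula above.
import numpy as np

def tchebycheff_weights(f_star, z_utopian):
    inv = 1.0 / (np.asarray(f_star) - np.asarray(z_utopian))
    return inv / inv.sum()

print(tchebycheff_weights([0.5, 0.5], [-0.001, -0.001]))  # -> [0.5, 0.5]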
for making errors and changing her or his mind concerning her or his desires
during the process. The correct selection of r is thus important. It is suggested
in Steuer (1986) and Steuer and Choo (1983) that
(1/P)^{1/k} ≲ r ≲ v^{1/(H−1)},
where v is the final interval length of the weighting vectors with 1/(2k) ≲ v ≲ 2/(3k), H is the number of iterations to be carried out and ≲ stands for 'approximately equal or less.'
5.4.3. Comments
to use the scaling only in the calculations and present the alternatives to the
decision maker in the original form. More suggestions for modifications of the
algorithm are presented in Steuer (1989a).
The convergence rate of the Tchebycheff method is very difficult to estab-
lish. It is stressed in Steuer (1989a) that the Tchebycheff method is able to
converge to any Pareto optimal solution. The reduction factor r can be regarded as a convergence factor because it determines how fast the reduction
takes place. The weighting vector space is reduced until a solution is obtained
that is satisfactory enough to be a final solution (see Steuer and Choo (1983)).
The Tchebycheff method can be characterized as a non ad hoc method. If
the value function is known, it is easy to select from the set of P alternatives
the one maximizing the value function.
We do not here go into details of the alternative version of the Tchebycheff
method. We only mention that the possibility of getting weakly Pareto optimal
solutions may be overcome by using augmented weighted Tchebycheff problem
(3.4.5) (see Figure 3.4.2). This means that properly Pareto optimal solutions
are handled instead of Pareto optimal ones (see Theorem 3.4.6). In this way, the
lexicographic optimization is avoided, but the Tchebycheff algorithm is more
complicated in other ways. For example, the determination of the correct value
for the augmentation parameter p brings additional problems. It is proved in
Steuer (1986, pp. 440-444) and Steuer and Choo (1983) that the augmented
weighted Tchebycheff problem can be used to characterize Pareto optimal solu-
tions if the feasible region is finite or all the constraints are linear. A numerical
illustration of the algorithm is presented in Steuer (1986, pp. 468-472).
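For comparison, a commonly used augmented form of the weighted Tchebycheff function adds a small multiple of the sum of deviations to the max term. The sketch below is a generic illustration of that idea (the value of rho and the inputs are illustrative) and is not copied from problem (3.4.5).

    def augmented_tchebycheff(f_x, z_utopian, weights, rho=1e-4):
        # max term plus a small multiple of the sum of deviations from the
        # utopian objective vector; rho > 0 discourages weakly Pareto optimal points
        dev = [fi - zi for fi, zi in zip(f_x, z_utopian)]
        return max(w * d for w, d in zip(weights, dev)) + rho * sum(dev)

    print(augmented_tchebycheff([2.0, 4.0], [0.0, 0.0], [0.5, 0.5]))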
Implementing the Tchebycheff method in a spreadsheet (Excel) environ-
ment is suggested in Steuer (1997). The Tchebycheff method in its augmented
form is applied in Wood et al. (1982) to water allocation problems of a river
basin and in Silverman et al. (1988) to manpower supply forecasting. The augmented form of the method is also used in Agrell et al. (1998) when solving an MOLP
problem of reservoir management. In Olson (1993), the Tchebycheff method is
applied to a sausage blending problem and in Kaliszewski (1987) it is proposed
that modified weighted Tchebycheff problem (3.4.6) is used to minimize the
distances in the Tchebycheff method.
A positive feature of the Tchebycheff method is that the role of the decision
maker is quite easy to understand. (S)he does not need to learn new concepts
or specify numerical answers as, for example, in the ISWT and the GDF meth-
ods. All (s)he has to do is to compare several alternative objective vectors and
select the most preferred one. The ease of the comparison depends on the mag-
nitude of P and on the number of objective functions. The personal capabilities
of the decision makers also play an important role. It is also positive that all
the alternatives are Pareto optimal.
The flexibility of the method is reduced by the fact that the discarded parts
of the weighting vector space cannot be restored if the decision maker changes
her or his mind. Thus, some consistency is required.
The weakness of the Tchebycheff method is that a great deal of calculation
is needed at each iteration and many of the results are discarded. For large
and complex problems, where the evaluation of the values of the objective
functions may be laborious, the Tchebycheff method is not a realistic choice.
On the other hand, it is possible to utilize parallel computing since all the
lexicographic problems can be solved independently.
Although no absolute superiority can be attributed, it is worth mentioning
that the Tchebycheff method performed best in the comparative evaluation of
four methods (the ZW, the SWT, the Tchebycheff and the GUESS methods) in
Buchanan and Daellenbach (1987) (see Subsection 1.2.3 of Part III). However, a
difficulty was encountered in comprehending the information provided. The test
example had only three objective functions and six alternatives were presented
at each iteration. And the cognitive burden only becomes larger when the
number of the objective functions is increased.
The step method (STEM), presented in Benayoun et al. (1971), contains el-
ements somewhat similar to the Tchebycheff method, but is based on a different
idea. STEM is one of the first interactive methods developed for multiobjec-
tive optimization problems. It was originally designed for the maximization of
MOLP problems but can be extended for nonlinear problems, as described,
for example, in Eschenauer et al. (1990b) and Sawaragi et al. (1985, pp. 268-
269). It can be considered to aim at finding satisfactory solutions instead
of optimizing an underlying value function. We describe the method for the
minimization of nonlinear problems.
5.5.1. Introduction
It is assumed that
1. Less is preferred to more by the decision maker.
2. The objective functions are bounded over the feasible region S.
Information concerning the ranges of the Pareto optimal set is needed in
determining the weighting vector for the metric. The idea is to make the scales
of all the objective functions similar with the help of the weighting coefficients.
The nadir objective vector znad is approximated from the payoff table as
explained in Subsection 2.4.2 of Part I. Thus, the maximal element of the
column $i$ is called $z_i^{\mathrm{nad}}$. The weighting vector is calculated by the formula
$$w_i = \frac{e_i}{\sum_{j=1}^{k} e_j}, \qquad i = 1, \dots, k,$$
the decision maker. Then the decision maker is asked to specify those objective
function(s) whose value(s) (s)he is willing to relax (i.e., weaken) to decrease
the values of some other objective functions. (S)he must also determine the
amount(s) of acceptable relaxation. Ways of helping the decision maker in this
phase are presented in Benayoun et al. (1971).
The feasible region is restricted according to the information of the decision
maker and the weights of the relaxed objective functions are set equal to zero,
that is, $w_i = 0$ for $i \in I^>$. Then a new distance minimization problem
$$(5.5.1)\qquad \begin{array}{ll} \text{minimize} & \displaystyle\max_{i=1,\dots,k} \left[ w_i \left| f_i(\mathbf{x}) - z_i^{\star} \right| \right] \\ \text{subject to} & f_i(\mathbf{x}) \le \varepsilon_i \ \text{ for all } i \in I^>, \\ & f_i(\mathbf{x}) \le f_i(\mathbf{x}^h) \ \text{ for all } i \in I^<, \\ & \mathbf{x} \in S \end{array}$$
is solved. The first new constraint set allows the relaxed (acceptable) objective
function values to increase up till the specified level and the second new con-
straint set makes sure that the unsatisfactory objective function values do not
increase, that is, get worse. The procedure continues until the decision maker
does not want to change any component of the current objective vector. If the
decision maker is not satisfied with any of the components, then the procedure
must also be stopped. In this case, STEM fails to find a satisfactory solution.
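The following sketch imitates one STEM-type step of problem (5.5.1) on a finite set of precomputed feasible objective vectors instead of the feasible region S itself (this restriction, and all names, are assumptions of the example): relaxed objectives may grow up to the given levels, the other objectives may not grow, and among the remaining candidates the weighted Tchebycheff distance to the ideal objective vector is minimized with the weights of the relaxed objectives set to zero.

    import numpy as np

    def stem_iteration(candidates, f_current, z_ideal, weights, relaxed, eps):
        # relaxed: indices the decision maker agreed to relax (class I^>)
        # eps[i]:  accepted upper level for a relaxed objective i
        cand = np.asarray(candidates, dtype=float)
        w = np.asarray(weights, dtype=float).copy()
        w[list(relaxed)] = 0.0                       # relaxed objectives get zero weight
        feasible = [z for z in cand
                    if all(z[i] <= (eps[i] if i in relaxed else f_current[i])
                           for i in range(len(z)))]
        if not feasible:
            return None
        feasible = np.asarray(feasible)
        dist = (w * np.abs(feasible - np.asarray(z_ideal, dtype=float))).max(axis=1)
        return feasible[int(np.argmin(dist))]

    print(stem_iteration([[3.0, 2.0], [2.5, 2.6], [4.0, 1.5]],
                         f_current=[3.0, 2.0], z_ideal=[1.0, 1.0],
                         weights=[0.5, 0.5], relaxed={0}, eps={0: 3.5}))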
Different versions of the method vary in the formulation of the constraint
set. In some versions, a new constraint set is generated at every iteration and
in some other versions new constraints are included to accompany the old ones.
In the latter model the decision maker must be somewhat consistent in her or
his actions because it is not possible to withdraw the restrictions set on the
feasible region.
5.5.3. Comments
STEM does not assume the existence of an underlying value function. Even
if one were available, it would not help in answering the questions. Thus STEM
can be characterized as an ad hoc method. Naturally, nothing can be said
about the convergence of STEM with respect to a value function. However, the
developers of the method mention that the algorithm produces a final solution
fast if the new constraints constructed during the solution process become
ineligible for further relaxations.
A linear numerical application example of STEM is given in Hwang and
Masud (1979, pp. 174-182). The properties of the solution set of STEM are
studied in Crama (1983). A so-called exterior branching algorithm is presented
in Aubin and Naslund (1972). It is another kind of extension of STEM into
nonlinear problems. There are several differences when compared with the orig-
inal method. For example, the decision maker does not need to specify any
amounts of change and an implicit value function is assumed to exist. Some
Because we are moving around the (weakly) Pareto optimal set, a decrement
in some objective function values can be achieved only by paying the price of
an increment in some other objective function values. The idea of specifying
objective functions whose values should be decreased or can be increased seems
quite simple and appealing. However, it may be difficult to estimate appropriate
amounts of increment that would allow the desired amount of improvement in
those functions whose values should be decreased. In other words, the control
of the solution is somewhat indirect. On the other hand, a positive feature is
that the information handled is easy to understand. No complicated concepts
are introduced to the decision maker.
According to the results presented in Section 3.4, the solutions of STEM
are not necessarily Pareto optimal, but weakly Pareto optimal solutions may
be obtained. It must also be kept in mind that the global ideal objective vector
has to be known.
STEM was the first interactive method to be based on the classification
idea. Numerous other methods adapting this idea in one way or the other have
appeared since. In what follows, we present several methods where the decision
maker can specify both the amounts of relaxation and desirable aspiration
levels. In this way the decision maker can control the solution process in a
more direct way than in STEM.
5.6.1. Introduction
The basic idea behind the reference point method is to reconsider how
decision makers make decisions. It is doubted in Wierzbicki (1980a, b) that
individuals make everyday decisions by maximizing a certain value function.
Instead, Wierzbicki claims that decision makers want to attain certain aspi-
ration levels (e.g., when making purchases according to a shopping list). He
suggests that, while thousands of consumers may behave on the average as
if they were maximizing a value function, no individual behaves in that way.
The basic idea is satisficing (introduced in Section 2.6 of Part I) rather than
optimizing. In addition, reference points are intuitive and easy for the decision
maker to specify and their consistency is not an essential requirement.
Classifying the objective functions into acceptable and unacceptable ones
(at a current objective vector) was mentioned in connection with STEM. Spec-
ifying a reference point can be considered a way of classifying the objective
functions. If the aspiration level is lower than the current objective value, that
objective function is currently unacceptable, and if the aspiration level is equal
to or higher than the current objective value, that function is acceptable. The
difference here is that the reference point can be infeasible in every component.
In other words, where the set of acceptable objective functions is empty, the
reference point-based approach can still be utilized. Naturally, this does not
mean that all the objective values could be decreased but a different solution
can be generated.
Further information concerning the matters addressed in this section can be
found in Wierzbicki (1977, 1980b, 1981, 1982, 1986a, b). By a reference point
method we here mean that of Wierzbicki's. The reference point method relies
heavily on the properties of achievement functions, which were dealt with in
Section 3.5. Of particular interest are Corollary 3.5.6 and Theorem 3.5.7. As
far as the preference structure of the decision maker is concerned, it is assumed
that
1. Less is preferred to more by the decision maker.
(2) Ask the decision maker to specify a reference point $\bar{\mathbf{z}}^h \in \mathbf{R}^k$ (an aspiration level for every objective function).
(3) Minimize the achievement function and obtain a (weakly, $\varepsilon$-properly or) Pareto optimal solution $\mathbf{x}^h$ and the corresponding $\mathbf{z}^h$. Present $\mathbf{z}^h$ to the decision maker.
(4) Calculate $k$ other (weakly, $\varepsilon$-properly or) Pareto optimal solutions by minimizing the achievement function with perturbed reference points
$$\mathbf{z}^{(i)} = \bar{\mathbf{z}}^h + d^h \mathbf{e}^i,$$
where $d^h = \|\bar{\mathbf{z}}^h - \mathbf{z}^h\|$ and $\mathbf{e}^i$ is the $i$th unit vector for $i = 1, \dots, k$.
(5) Present the alternatives to the decision maker. If (s)he finds one of the $k+1$ solutions satisfactory, the corresponding $\mathbf{x}^h$ is the final solution. Otherwise, ask the decision maker to specify a new reference point $\bar{\mathbf{z}}^{h+1}$. Set $h = h + 1$ and go to step (3).
The reason for writing the words weakly or $\varepsilon$-properly in parentheses in the algorithm is that it depends on the achievement function selected whether the solutions are weakly, $\varepsilon$-properly or Pareto optimal.
The advantage of perturbing the reference point in step (4) is that the de-
cision maker gets a better conception of the possible solutions. If the reference
point is far from the Pareto optimal set, the decision maker gets a wider de-
scription of the Pareto optimal set and if the reference point is near the Pareto
optimal set, then a finer description of the Pareto optimal set is given. The
effects of the perturbation and close and distant reference points are illustrated
in Figure 5.6.1.
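A minimal sketch of the perturbation used in step (4) (function and variable names are illustrative): the distance between the reference point and the objective vector obtained from it determines the step taken along each unit vector.

    import numpy as np

    def perturbed_reference_points(z_ref, z_obtained):
        # d = ||z_ref - z_obtained||; the i-th perturbed point is z_ref + d * e_i
        z_ref = np.asarray(z_ref, dtype=float)
        d = np.linalg.norm(z_ref - np.asarray(z_obtained, dtype=float))
        return [z_ref + d * e for e in np.eye(len(z_ref))]

    for p in perturbed_reference_points([1.0, 2.0, 0.5], [1.5, 2.5, 1.0]):
        print(p)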
5.6.3. Comments
5.6.4. Implementation
Analysis (IIASA) in Austria and the Warsaw Technical University have been
involved. The latest version is called IAC-DIDASN++. There is a lot of lit-
erature describing the various phases in the development work (see Granat et
al. (1994a, b), Grauer (1983a, b), Grauer et al. (1984), Kreglewski (1989), Kre-
glewski et al. (1987, 1991), Lewandowski and Grauer (1982), Lewandowski et
al. (1987) and Rogowski et al. (1987)).
DIDAS is a dynamic decision support system which aims at helping to
achieve better decisions. The ideology has been extended from the reference
point method with reservation levels. Reservation levels Zi are objective func-
tion values the user wants to avoid. For the objective functions to be minimized
they must be above the aspiration levels forming the reference point z. In DI-
DAS, the user is asked to specify both aspiration and reservation levels for each
objective function. The achievement function has to be reformulated to take
the reservation levels into account. Several achievement functions have been
suggested in different versions of the system.
The user can easily obtain different Pareto optimal solutions by changing
the aspiration levels and the reservation levels. The objective functions are
scaled and the user is assumed to specify aspiration levels between the ideal
objective vector and the nadir objective vector. In this setting, the user can
implicitly attach more importance to attaining a particular aspiration level
by placing it near the ideal objective value. In that case, the corresponding
objective function is weighted stronger in the achievement function.
We give an example of achievement functions, including both aspiration
and reservation levels. If all the objective functions are to be minimized, an
order-approximating achievement function to be maximized can be of the form
where $z_i^{\star}$ are components of the ideal objective vector, $\rho > 0$ is an augmentation term and
$$v = \min_{i=1,\dots,k} \frac{\bar{z}_i - z_i}{\bar{z}_i - \hat{z}_i}$$
(with $\hat{z}_i$ denoting the aspiration and $\bar{z}_i$ the reservation level of the $i$th objective function).
Instead of considering all the deviations to be equally important (which is the case in the reference point method), predefined priorities between the goals are also handled. The
reference point method is modified for problems with homogeneous and anony-
mous objective functions in Ogryczak (1997b). Here, anonymity stands for sym-
metry with respect to permutations of the objective functions.
Wierzbicki's reference point method is quite easy for the decision maker to
understand. The decision maker only has to specify appropriate aspiration lev-
els and compare objective vectors. What has been said about the comparison of
alternatives in connection with the previous methods is also valid here. The so-
lutions are weakly, $\varepsilon$-properly or Pareto optimal depending on the achievement
function employed.
The freedom of the decision maker has both positive and negative aspects.
The decision maker can direct the solution process and is free to change her or
his mind during the process. However, the convergence is not necessarily fast
if the decision maker is not purposeful. There is no clear strategy to produce
the final solution since the method does not help the decision maker to find
improved solutions.
Wierzbicki's method can be regarded as a generalization of goal program-
ming. Aspiration levels are central in both methods, but unlike goal program-
ming Wierzbicki's method is able to handle both feasible and infeasible aspi-
ration levels.
Methods based on reference points are widely regarded as efficient for the
solution of practical problems. They are easy to understand and to implement.
Further, they do not necessitate consistency from the decision maker. One can
say that controlling a method with reference points is a more direct and a more
explicit way than, for example, with weighting coefficients.
5.7.1. Introduction
The GUESS method does not involve any special assumptions. The only
requirement is that the ideal objective vector z* and the nadir objective vector
znad are available. Thus, it is assumed that
1. Less is preferred to more by the decision maker.
2. The objective functions are bounded over the feasible region S.
The objective function values are scaled according to their deviation from the nadir objective vector,
$$\frac{z_i^{\mathrm{nad}} - f_i(\mathbf{x})}{z_i^{\mathrm{nad}} - z_i^{\star}} \qquad \text{for all } i = 1, \dots, k.$$
Let us once again emphasize that the global ideal objective vector and the nadir
objective vector are assumed to be known.
The weighted max-min problem to be solved is
$$(5.7.1)\qquad \begin{array}{ll} \text{maximize} & \displaystyle\min_{i=1,\dots,k} \left[ \frac{z_i^{\mathrm{nad}} - f_i(\mathbf{x})}{w_i \left( z_i^{\mathrm{nad}} - z_i^{\star} \right)} \right] \\ \text{subject to} & \mathbf{x} \in S, \end{array}$$
where the weighting coefficients $w_i$, $i = 1, \dots, k$, are positive and the denominators must not equal zero.
We have the following result.
The weighting coefficients are not arbitrary positive numbers but normalized aspiration levels. In other words, we have
$$w_i^h = \frac{z_i^{\mathrm{nad}} - \bar{z}_i^h}{z_i^{\mathrm{nad}} - z_i^{\star}} \qquad \text{for all } i = 1, \dots, k.$$
With the specified weighting coefficients we can write the problem to be
solved in the form
$$(5.7.2)\qquad \begin{array}{ll} \text{maximize} & \displaystyle\min_{i=1,\dots,k} \left[ \frac{z_i^{\mathrm{nad}} - f_i(\mathbf{x})}{z_i^{\mathrm{nad}} - \bar{z}_i^h} \right] \\ \text{subject to} & \mathbf{x} \in S. \end{array}$$
Notice that the aspiration levels specified by the decision maker have to be strictly lower than the nadir objective vector, that is, $\bar{\mathbf{z}}^h < \mathbf{z}^{\mathrm{nad}}$. If all the
objective functions are differentiable, problem (5.7.2) can be written in a dif-
ferentiable form with the help of an additional variable, whereas the nondiffer-
entiable formulation can be solved with appropriate single objective optimizers.
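A small sketch of the scalarization in (5.7.2), evaluated over a finite set of candidate objective vectors rather than over S (that simplification and the names are assumptions of the example): the candidate maximizing the minimum scaled deviation from the nadir objective vector is selected.

    import numpy as np

    def guess_choice(candidates, z_nadir, z_ref):
        # Pick the candidate maximizing min_i (z_nadir_i - f_i) / (z_nadir_i - z_ref_i);
        # the reference point must lie strictly below the nadir objective vector.
        cand = np.asarray(candidates, dtype=float)
        z_nadir = np.asarray(z_nadir, dtype=float)
        z_ref = np.asarray(z_ref, dtype=float)
        scaled = (z_nadir - cand) / (z_nadir - z_ref)
        return cand[int(np.argmax(scaled.min(axis=1)))]

    print(guess_choice([[2.0, 4.0], [3.0, 2.5], [1.5, 5.0]],
                       z_nadir=[5.0, 6.0], z_ref=[1.0, 2.0]))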
We can prove that any Pareto optimal solution can be found with problem
(5.7.2).
Proof. Let $\mathbf{x}^* \in S$ be Pareto optimal and let us suppose that it is not a solution of (5.7.2) with $\bar{\mathbf{z}} = \mathbf{f}(\mathbf{x}^*)$. In this case there exists another $\mathbf{x}^0 \in S$ such that
$$\min_{i=1,\dots,k} \left[ \frac{z_i^{\mathrm{nad}} - f_i(\mathbf{x}^0)}{z_i^{\mathrm{nad}} - f_i(\mathbf{x}^*)} \right] > \min_{i=1,\dots,k} \left[ \frac{z_i^{\mathrm{nad}} - f_i(\mathbf{x}^*)}{z_i^{\mathrm{nad}} - f_i(\mathbf{x}^*)} \right] = 1.$$
This means that $f_i(\mathbf{x}^0) < f_i(\mathbf{x}^*)$ for every $i = 1, \dots, k$, which is a contradiction with the Pareto optimality of $\mathbf{x}^*$. In other words, $\mathbf{x}^*$ must be a solution of (5.7.2). □
According to Theorems 5.7.1 and 5.7.2 we know that all the solutions gener-
ated are weakly Pareto optimal and any Pareto optimal solution can be found.
5.7.3. Comments
The GUESS method is based on trial and error. The decision maker can
examine what kind of an effect her or his input has on the solution obtained and
then modify the input, if necessary. The system does not provide any additional
or supporting information about the problem to be solved.
As long as no additional constraints are included in the problem, the com-
ponents of the solution obtained are in equal proportion with the components
of the reference point specified. In other words, when the solution obtained
and the corresponding reference point are normalized, the quotients of their
components are the same for each component. The reason for this behaviour is
that the reference point is contained in the weighting vector.
The GUESS method is an ad hoc method. The existence of a value function
would not help in determining new reference points or upper or lower bounds
for the objective functions.
An interesting practical observation is mentioned in Buchanan (1997).
Namely, decision makers are easily satisfied if there is a small difference between
the reference point and the solution obtained. Somehow they feel a need to be
satisfied when they have almost achieved what they wanted. In this case they
may stop iterating 'too early.' The decision maker is naturally allowed to stop
the solution process if the solution really is satisfactory. But, the coincidence
of setting the reference point near an attainable solution may unnecessarily
increase the decision maker's satisfaction.
The GUESS method is simple to use and does not set any specific assump-
tions on the behaviour or the preference structure of the decision maker. The
decision maker can change her or his mind since no consistency is required.
The only information required from the decision maker is a reference point and
possible upper and lower bounds.
The method has been compared to several other interactive methods in dif-
ferent comparative evaluations (to be described in Subsection 1.2.3 of Part III).
It has been received relatively well in the experiments reported. The reasons
may be its simplicity and flexibility.
The optional upper or lower bounds specified by the decision maker are
not checked in any way in the method. Inappropriate lower bounds may lead
to solutions that are not weakly Pareto optimal. In other words, additional
constraints may invalidate the result of Theorem 5.7.1. This can be avoided,
for example, by allowing only upper bounds.
The weakness of the GUESS method is its heavy reliance on the availabil-
ity of the nadir objective vector. As mentioned in Subsection 2.4.2 of Part I,
the nadir objective vector is not easy to determine and it is usually only an
approximation.
5.8.1. Introduction
where $\rho$ is some sufficiently small positive scalar, for example, of the order $10^{-6}$. Both these scalarizing functions presume that the ideal objective vector and, thus, the utopian objective vector are known globally. However, if some objective function $f_j$ is not bounded from below in $S$, then some small scalar value can be selected as $z_j^{\star\star}$.
If the problem is bounded, then the solutions obtained by function (5.4.1)
are guaranteed to be weakly Pareto optimal (see Theorem 3.4.2) and every
Pareto optimal solution can be found (see Theorem 3.4.5). Further, it is proved
in Nakayama (1985a) and Sawaragi et al. (1985, pp. 271-272) that the solution
obtained is satisficing (i.e., $f_i(\mathbf{x}^*) \le \bar{z}_i$ for all $i = 1, \dots, k$) if the reference point
is feasible and weighting coefficients (5.8.1) are employed. For function (5.8.2)
all the solutions are properly Pareto optimal and any properly Pareto optimal
solution can be found. Even though the formulation slightly differs from (3.4.5),
the results of Theorem 3.4.6 are still valid. Unfortunately, function (5.8.2) does
not satisfy the third requirement concerning satisficing decision making (see
Nakayama (1985a)).
Other forms of weighting coefficients can also be used. The selection affects
the results obtained. This is demonstrated in Nakayama (1995). The reference
point method-type achievement functions can be used as well. This means that
the utopian objective vector is replaced by the reference point.
Both the scalarizing functions mentioned are nondifferentiable but they can
be written in a differentiable form assuming the differentiability of the functions
involved. This is carried out by introducing a scalar variable $\alpha$ as in (3.4.3).
In what follows, we refer to the differentiable form where all the objective
functions have been transformed into constraints.
As mentioned in Subsection 3.4.4, trade-off rate information can be obtained
with the help of differentiable formulation (3.4.3). Both weighting coefficients
and Karush-Kuhn-Tucker multipliers are then utilized. That is why it must be
assumed that
1. Less is preferred to more by the decision maker.
2. The objective and the constraint functions are twice continuously differ-
entiable.
The availability of trade-off rates also necessitates the fulfillment of other
assumptions mentioned in Subsection 3.4.4. They are parallel to those in The-
orem 3.2.13; see also Yano and Sakawa (1987). This fact has not earlier been
sufficiently emphasized when introducing the method.
from sensitivity analysis on the basis of staying in the Pareto optimal set (see
Nakayama (1991b, 1992a, 1995)). We set for each $i \in I^>$
$$\bar{z}_i^{h+1} = f_i(\mathbf{x}^h) + \frac{1}{N (\lambda_i^h + \rho) w_i^h} \sum_{j \in I^<} (\lambda_j^h + \rho) w_j^h \left( f_j(\mathbf{x}^h) - \bar{z}_j^{h+1} \right),$$
where $N$ is the number of the objective functions in the class $I^>$. If no augmentation term is used in the scalarizing function, we set $\rho = 0$ in the formula above. Automatic trade-off increases all the objective functions in $I^>$ in equal proportion to $(\lambda_i^h + \rho) w_i^h$. If the amounts of change are large or the problem is nonlinear, the aspiration levels produced by automatic trade-off may not be large enough to allow the desired improvements to the other objective functions (see Nakayama (1992b)).
5.8.3. Comments
If the problem is linear or quadratic, we can go even further than the au-
tomatic trade-off. In this case parametric optimization is used in generating
so-called exact trade-off. This means that we can calculate exactly how much
the objective function values must be relaxed in order to stay in the Pareto
optimal set. Thus, we get a new Pareto optimal solution without having to
re-optimize the scalarizing function (see Nakayama (1991b, 1992a, b)).
Trade-off information can also be used to check the feasibility of the refer-
ence point specified by the decision maker. If it is not feasible, the number of
minimizations of the scalarizing function can be reduced by directly specify-
ing higher aspiration levels (remember that satisficing solutions are obtained
when the reference point is feasible in scalarizing function (5.4.1)). See details in Nakayama (1985a, 1989), Nakayama and Furukawa (1985) and Nakayama
and Sawaragi (1984).
Trade-off information is valuable even if some Karush-Kuhn-Tucker multi-
pliers are equal to zero. For example, if all the Karush-Kuhn-Tucker multipliers
of the functions to be relaxed equal zero, we know that it is not possible to
improve the desired objective function values with this classification. In other
words, the functions to be relaxed cannot compensate for the improvement de-
sired. The reason is that the objective functions to be relaxed are positively
affected by other objective function(s) to be improved (see Nakayama (1995».
Note that STOM can be used even in the absence of trade-off rate informa-
tion. This may be the case if the differentiability and the regularity assumptions are not all satisfied. If trade-off rates are not used, no special assumptions
need to be set on the problem to be solved. In this form STOM is almost the
same as the GUESS method - only the achievement function used is different.
Because no specific assumptions are set on the underlying value function,
convergence results based on it are not available. Even if a value function
existed, it could not be directly used to determine the functions to be decreased
5.8.4. Implementation
STOM contains identical elements with STEM, the reference point method
and the GUESS method. Therefore, the comments given there are not repeated
here. The role of the decision maker is easy to understand. STOM requires even
less input from the decision maker than the above-mentioned methods because
only a part of the aspiration levels need to be given. The solutions obtained are
properly Pareto optimal or weakly Pareto optimal depending on the scalarizing
function used.
As said before, in practice, classifying the objective functions into three
classes and specifying the amounts of increment and decrement for their values
is a subset of specifying a new reference point. A new reference point is implic-
itly formed. Either the new aspiration levels are larger, smaller, or the same as
in the current solution. Thus the same outcome can be obtained with different
reasoning. A positive differentiating feature in STOM when compared to other
classification-based methods is the automatic or exact trade-off. This decreases
the amount of information inquired from the decision maker. STOM is in a
sense opposite to STEM. In STOM, only desired improvements are specified,
whereas only amounts of relaxation are used in STEM.
Because the method is based on satisficing decision making, the decision
maker can freely search for a satisficing solution and change her or his mind,
if necessary. No convergence based on value functions has even been intended.
5.9.1. Introduction
The basic setting in the light beam search is identical to the reference point
method of Wierzbicki in the spirit of satisficing decision making. The achieve-
ment function to be minimized is function (3.5.3) where weighting coefficients
are used only in the maximum part. They take into account the ideal and the
nadir objective values. This achievement function means that $\varepsilon$-properly Pareto
optimal solutions are generated. The reference point is here assumed to be an
infeasible objective vector.
It is assumed that
1. Less is preferred to more by the decision maker.
2. The objective and the constraint functions are continuously differen-
tiable.
3. The objective functions are bounded over the feasible region S.
4. None of the objective functions is more important than all the others
together.
Assumption 3 is needed in order to have the ideal and the nadir objec-
tive vectors available. The other assumptions are related to the generation of
alternative solutions.
In the light beam search it is acknowledged that reference points provide a
practical and an easy way for the decision maker to direct the solution process.
However, the learning process of the decision maker is supported better if the
decision maker receives additional information about the Pareto optimal set
at each iteration. This means that other solutions in the neighbourhood of
the current solution (based on the reference point) are displayed. Thus far,
the motivation is the same as in the reference point method. But what if the
comparison of even a small number of alternative solutions is difficult for the
decision maker? Or what if all the alternatives provided are indifferent to the
decision maker? In such cases the decision maker may even stop the solution
process and never get as far as the satisfactory solutions.
An attempt is made to avoid frustration on the part of the decision maker
in the light beam search by the help of concepts used in multiattribute decision
analysis and particularly in ELECTRE methods (see, for example, Roy (1990)
and Vincke (1992, pp. 56-69)). The idea is to establish outranking relations
between alternatives. It is said that the alternative Zl outranks the alternative
Z2, denoted by ZlSZ2, if Zl is at least as good as Z2. In the light beam search,
additional alternatives near the current solution are generated so that they
outrank the current one. Incomparable or indifferent alternatives are not shown
to the decision maker.
To be able to compare alternatives and to define outranking relations, we
need several thresholds from the decision maker. Assumption 4 is related to
this. Because of the just noticeable difference or for some other reasons it is
not always possible for the decision maker to distinguish between different
alternatives. This means that there is an interval where indifference prevails.
For this reason the decision maker is asked to provide indifference thresholds
qi for each objective function (i = 1, ... , k). In fact the thresholds should be
functions of the objective values, that is qi(Zi), but in the light beam search
they are assumed to provide only local information and are thus constants.
The line between indifference and preference does not have to be sharp
either. The hesitation between indifference and preference can be expressed by
preference thresholds Pi for i = 1, ... ,k. Applying the same reasoning as above,
we assume here that Pi is not a function of the values of the objective function
but constant. In addition, we must have $p_i \ge q_i \ge 0$ for $i = 1, \dots, k$.
Given these thresholds we can distinguish three preference relations between
pairs of alternative objective vectors (Zl and Z2) for each component, that is,
each objective function. We can say that as far as the ith components (i =
1, ... ,k) of the two objective vectors are concerned,
$\mathbf{z}^1$ and $\mathbf{z}^2$ are indifferent if $|z_i^1 - z_i^2| \le q_i$,
$\mathbf{z}^1$ is weakly preferred to $\mathbf{z}^2$ if $q_i < z_i^2 - z_i^1 < p_i$, and
$\mathbf{z}^1$ is preferred to $\mathbf{z}^2$ if $z_i^2 - z_i^1 \ge p_i$.
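A sketch of these componentwise relations for one (minimized) objective, with constant thresholds as assumed in the text (names are illustrative):

    def component_relation(z1_i, z2_i, q_i, p_i):
        # diff > 0 means z1 has the smaller (better) value on this component
        diff = z2_i - z1_i
        if abs(diff) <= q_i:
            return "indifference"
        if q_i < diff < p_i:
            return "weak preference of z1 over z2"
        if diff >= p_i:
            return "preference of z1 over z2"
        return "z2 is better on this component"

    print(component_relation(1.0, 1.4, q_i=0.1, p_i=0.5))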
One more type of threshold, namely a veto threshold Vi for i = 1, ... , k can
be defined. It prevents a good performance in some components from compen-
sating for poor values on some other components. As earlier, we assume the
threshold to be constant and have the relation $v_i \ge p_i$ for $i = 1, \dots, k$. In this case $\mathbf{z}^2$ cannot be preferred to $\mathbf{z}^1$ if $z_i^2 - z_i^1 \ge v_i$.
We can now define outranking relations on the basis of the number of components for which indifference, weak preference or preference holds, or for which preference cannot hold. Let us compare the objective vector of the current iteration $\mathbf{z}^h$ and
some other objective vector z. Below, #i denotes the number of components,
that is, objective functions, for which the condition mentioned holds. We define
$m_s(\mathbf{z}, \mathbf{z}^h)$ as $\#i$ where $\mathbf{z}$ is indifferent, weakly preferred or preferred to $\mathbf{z}^h$,
$m_q(\mathbf{z}^h, \mathbf{z})$ as $\#i$ where $\mathbf{z}^h$ is weakly preferred to $\mathbf{z}$,
$m_p(\mathbf{z}^h, \mathbf{z})$ as $\#i$ where $\mathbf{z}^h$ is preferred to $\mathbf{z}$,
$m_v(\mathbf{z}^h, \mathbf{z})$ as $\#i$ where $\mathbf{z}$ cannot be preferred to $\mathbf{z}^h$.
The outranking relations are defined according to the numbers above. If
the decision maker has specified all the thresholds, that is the indifference, the
preference and the veto thresholds, it is proposed in Jaszkiewicz and Slowinski
(1994, 1995) that
$\mathbf{z}\,S\,\mathbf{z}^h$ if $m_v(\mathbf{z}^h, \mathbf{z}) = 0$, $m_p(\mathbf{z}^h, \mathbf{z}) \le 1$ and $m_q(\mathbf{z}^h, \mathbf{z}) + m_p(\mathbf{z}^h, \mathbf{z}) \le m_s(\mathbf{z}, \mathbf{z}^h)$
be defined. This definition must be modified if no veto thresholds are available.
In this case
5.9.3. Comments
The idea of the light beam search is analogous to projecting a focused beam
of light from the reference point onto the Pareto optimal set. The lighted part
of the Pareto optimal set changes if the location of the spotlight, that is, the
reference point or the point of interest in the Pareto optimal set, is changed.
This connection explains the name of the method. An implementation of the
light beam search is available from its developers (see Section 2.2 in Part III).
The light beam search can be characterized as an ad hoc method. If a value
function were available, it could not directly determine new reference points. It
could, however, be used in comparing the set of alternatives. Yet, the thresholds
are important in the method and they must come from the decision maker.
This method combines elements of multiobjective optimization and multi-
attribute decision analysis in an interesting way. An extension is suggested in
Wierzbicki (1997b), where both aspiration levels forming a reference point and
reservation levels (to be avoided) are used. In this case the reference point still
determines the source of light but the reservation levels are used to generate a
cone of light. Some convergence ideas are put forward in Wierzbicki (1997b) as
well.
The light beam search is a rather versatile solution method where the deci-
sion maker can specify reference points, compare a set of alternatives and affect
the set of alternatives in different ways. Thresholds are used to try to make
sure that the alternatives generated are not worse than the current solution.
In addition, they are different enough to be compared and comparable on the
whole. This should decrease the burden on the decision maker.
Specifying different thresholds is a new aspect when compared to the meth-
ods presented earlier. This may be demanding for the decision maker. Anyway,
it is positive that the thresholds are not assumed to be global but can be al-
tered at any time. In other words, outranking relations based on the threshold
values are only used as local preference models in the neighbourhood of the
current solution.
The idea of combining strengths from different areas certainly deserves fur-
ther study. Nevertheless, this approach also has its weaknesses. As noted in
Jaszkiewicz and Slowinski (1994, 1995), it may be computationally rather de-
manding to find the exact characteristic neighbours in a general case. Parallel
computing is one solution. If this is not possible, one can at least present differ-
ent neighbours as soon as they are calculated instead of waiting till all of them
have been generated. The visualization of alternatives is handled in Chapter 3
of Part III.
5.10.1. Introduction
the projection the decision maker can examine this Pareto optimal curve or a
representation of it by the means of computer graphics.
An interesting feature in the reference direction approach is that no ex-
plicit knowledge is assumed about the properties of the value function during
the solution process. However, sufficient conditions for optimality can be es-
tablished for the termination point of the algorithm, if the decision maker's
underlying value function is assumed to be pseudo concave (and differentiable)
at that point (and several other assumptions to be listed later are fulfilled).
The optimality conditions are necessary only for MOLP problems.
$$(5.10.1)\qquad s_{\bar{\mathbf{z}},\mathbf{w}}(\mathbf{z}) = \max_{i \in I} \frac{z_i - \bar{z}_i}{w_i},$$
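A sketch of evaluating an achievement function of the form (5.10.1) for candidate objective vectors while the reference point moves along a reference direction (the scan, the candidate set and all names are illustrative assumptions):

    import numpy as np

    def achievement(z, z_ref, w):
        # s(z) = max_i (z_i - z_ref_i) / w_i for a (minimized) objective vector z
        return max((zi - ri) / wi for zi, ri, wi in zip(z, z_ref, w))

    def picks_along_direction(candidates, z_ref, direction, w, steps=5):
        # For a few reference points along the direction, pick the candidate
        # with the smallest achievement value.
        picks = []
        for t in np.linspace(0.0, 1.0, steps):
            ref_t = np.asarray(z_ref, dtype=float) + t * np.asarray(direction, dtype=float)
            values = [achievement(z, ref_t, w) for z in candidates]
            picks.append(candidates[int(np.argmin(values))])
        return picks

    print(picks_along_direction([[2.0, 4.0], [3.0, 2.5], [1.5, 5.0]],
                                z_ref=[1.0, 3.0], direction=[2.0, -1.0], w=[1.0, 1.0]))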
Theorem 5.10.1. Let assumptions 2-4 be satisfied. Let Zh+1 E Z and let C
be a cone containing all the feasible directions at zh+1 (as in (5.10.2)). Let us
assume that
For MOLP problems we know that if the current solution is not optimal,
then one of the feasible directions of cone C must be a direction of improvement.
This direction is then used as a new reference direction in step (3). In other
words, to be able to apply Theorem 5.10.1 at a certain point, the decision
maker must first check every feasible direction at that point for improvement.
This increases both the computational costs and the burden on the decision
maker. It is demonstrated in Halme and Korhonen (1989) and Korhonen and
Laakso (1986a, b) how the number of search directions can be reduced. For
nonlinear problems the cone containing all the feasible directions may consist
of an infinite number of generators. In this case, the optimality cannot be
checked in practice (an infinite number of checks would be needed).
5.10.3. Comments
decision maker. If (s)he finds the end point to be the most satisfactory one,
then the next piece can be presented. If the number of objective functions is
large, the quality of graphical illustration suffers. For this reason, it is advisable
not to have more than ten objective functions at a time.
If it is not desired to check the optimality of the final result, the problem
to be solved does not have to satisfy any special assumptions. This means that
the reference direction approach can be applied to more general problems. The
reverse is valid as well. If the assumptions set are not satisfied, the optimality
cannot be checked, but the method can, of course, be used in any other way.
A similar interactive line search algorithm for MOLP problems is presented
in Benson and Aksoy (1991). The procedure generates only Pareto optimal
points and is able to automatically correct possible errors in the decision
maker's judgement.
The ideas of the reference direction approach are adapted to the goal pro-
gramming environment in Korhonen and Laakso (1986b). The intention is to
relax the predetermined roles of the objective functions and the constraints,
that is, to enable the roles to be interchanged. For that reason, the problem to
be solved is now assumed to be in the generalized goal programming form (see
Section 4.3). The objective functions are considered to be flexible goals and the
constraint functions inflexible goals. At each iteration, the decision maker can
easily convert flexible goals into inflexible ones and vice versa. This increases
the freedom of the decision maker. Combining achievement functions into goal
programming also eliminates the problems caused by feasible aspiration levels
(see Section 4.3).
The idea of changing the roles of the functions is refined in Korhonen and
Narula (1993). A systematic way of changing the roles of the objective functions
and the constraints is described therein. The presentation examines where and
how the changes can be carried out. This systematic handling concerns MOLP problems, but the idea can in principle be generalized to other problems.
A dynamic user interface to the reference direction approach and its adapta-
tion to generalized goal programming is introduced in Korhonen and Wallenius
(1988). This method has been designed for MOLP problems and is called the
Pareto race. The software system implementing the Pareto race is called VIG
(Visual Interactive Goal programming) and it is described in Korhonen (1987,
1990, 1991a) and Korhonen and Wallenius (1989c, 1990). VIG is a dynamic,
visual and interactive solution system for MOLP problems with the emphasis
on graphical illustration.
The Pareto race develops reference directions in a dynamic way. In VIG,
the reference directions and the step-sizes are updated according to the actions
of the decision maker who can thus feel that (s)he is in control. The decision
maker can travel around the (weakly) Pareto optimal set as if driving a car.
The pioneering ideas of realizing user interfaces in VIG are supported by a
comparison of five MOLP programs in Korhonen and Wallenius (1989b). VIG
was found to be superior. The main reason was that the decision makers found
the aspiration levels to be a comfortable way of expressing preference relations.
The Pareto race is extended into a computer graphics-based decision sup-
port system in Korhonen et al. (1992b). The new method is especially useful
for large-scale MOLP problems.
In the reference direction approach the role of the decision maker is reminis-
cent of the reference point method. (S)he has to both specify reference points
and select the most preferred alternatives. In the reference point methods, how-
ever, there are fewer choices to select from. If the problem is set in a generalized
goal programming form, the decision maker can also interchange the roles of
the objective and the constraint functions. By the reference direction approach,
the decision maker can explore a wider part of the weakly Pareto optimal set
than by the reference point method, even by providing similar reference point
information. This possibility brings with it the task of comparing the alternatives and selecting the most preferred of them.
The reference direction approach works best for MOLP problems, as it has
basically been designed for them. It is interesting that the method requires no
additional assumptions about the problem and the underlying value function
until the optimality of the final solution is to be examined. The optimality can
be guaranteed under certain assumptions and with some effort.
The performance of the method depends greatly on how well the decision
maker manages to specify the reference directions that lead to improved solu-
tions. Korhonen and Laakso (1986a) mention that particularly when the num-
ber of objective functions is large, the specification of reference points may
be quite laborious for the decision maker. In this case, they suggest that ran-
dom directions in conjunction with decision maker-defined reference directions
should be used. See Korhonen and Laakso (1986a) for a discussion concerning
other ways of specifying the reference directions. Naturally, the choice of the
weighting coefficients affects the direction of the projection even though the
selection of their values has not been stressed here.
The consistency of the decision maker's answers is not important and it is
not checked in the algorithm. Thus the algorithm may cycle. This can also be
seen as a positive feature, since the decision maker is able to return to such
parts that (s)he already has examined, if (s)he changes her or his mind.
5.11.1. Introduction
$$(5.11.1)\qquad \begin{array}{ll} \text{minimize} & \displaystyle\max_{i \in I^<} \frac{f_i(\mathbf{x}) - \hat{z}_i^h}{z_i^h - \hat{z}_i^h} \\ \text{subject to} & f_i(\mathbf{x}) \le \varepsilon_i^h + \alpha (z_i^h - \varepsilon_i^h) \ \text{ for all } i \in I^>, \\ & f_i(\mathbf{x}) \le z_i^h \ \text{ for all } i \in I^=, \\ & \mathbf{x} \in S, \end{array}$$
where $\mathbf{z}^h$ is the current solution, $0 \le \alpha < 1$ is the step-size in the reference direction, $\hat{z}_i^h < z_i^h$ for $i \in I^<$ and $\varepsilon_i^h > z_i^h$ for $i \in I^>$. The problem is nondiffer-
entiable but it can be transformed into a differentiable form by introducing an
additional variable as described earlier (see, e.g., problem (3.4.3». If some of
the objective or the constraint functions are nondifferentiable, a single objective
solver applicable to nondifferentiable problems is needed.
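A sketch of evaluating the RD problem for one candidate point under a given classification, following the constraint structure reconstructed in (5.11.1) above (the structure, the finite evaluation and the names are assumptions of this example):

    def rd_value(f_x, z_current, z_aspir, eps, alpha, I_lt, I_gt, I_eq):
        # Returns the RD objective value, or None if the point violates the
        # classification constraints of problem (5.11.1).
        for i in I_gt:                       # relaxed functions: bounded relaxation
            if f_x[i] > eps[i] + alpha * (z_current[i] - eps[i]):
                return None
        for i in I_eq:                       # functions kept at their current level
            if f_x[i] > z_current[i]:
                return None
        return max((f_x[i] - z_aspir[i]) / (z_current[i] - z_aspir[i]) for i in I_lt)

    print(rd_value(f_x=[1.8, 2.2, 3.0], z_current=[2.0, 2.0, 3.0],
                   z_aspir={0: 1.0}, eps={1: 2.5}, alpha=0.0,
                   I_lt=[0], I_gt=[1], I_eq=[2]))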
Theorem 5.11.1. The RD problem produces weakly Pareto optimal solutions.
Proof. Let $\mathbf{x}^* \in S$ be a solution of the RD problem for some $0 \le \alpha < 1$. Let us assume that it is not weakly Pareto optimal. In this case there exists some point $\mathbf{x}^0 \in S$ such that $f_i(\mathbf{x}^0) < f_i(\mathbf{x}^*)$ for every $i = 1, \dots, k$.
Because $\mathbf{x}^*$ is feasible in problem (5.11.1) and $f_i(\mathbf{x}^0) < f_i(\mathbf{x}^*)$ for every $i$, the point $\mathbf{x}^0$ must also be feasible. In addition, $z_i^h - \hat{z}_i^h > 0$ for every $i \in I^<$ and that is why
$$\frac{f_i(\mathbf{x}^0) - \hat{z}_i^h}{z_i^h - \hat{z}_i^h} < \frac{f_i(\mathbf{x}^*) - \hat{z}_i^h}{z_i^h - \hat{z}_i^h} \qquad \text{for every } i \in I^<.$$
Thus the objective function value of the RD problem is smaller at $\mathbf{x}^0$ than at $\mathbf{x}^*$, which contradicts the assumption that $\mathbf{x}^*$ is a solution of the RD problem. Hence $\mathbf{x}^*$ must be weakly Pareto optimal. □
A result concerning the opposite direction and Pareto optimality can also be established.
Theorem 5.11.2. Any Pareto optimal solution can be found as a solution of the RD problem with an appropriate classification and appropriate parameter values.
Proof. Let $\mathbf{x}^* \in S$ be Pareto optimal. Let us assume that there do not exist $\hat{\mathbf{z}}$ and $\alpha$ such that $\mathbf{x}^*$ is a solution of the RD problem. Let us suppose that
$$\max_{i \in I^<} \frac{f_i(\mathbf{x}^0) - \hat{z}_i^h}{z_i^h - f_i(\mathbf{x}^*)} < \max_{i \in I^<} \frac{f_i(\mathbf{x}^*) - \hat{z}_i^h}{z_i^h - f_i(\mathbf{x}^*)} = 1.$$
Thus $f_i(\mathbf{x}^0) - \hat{z}_i^h < -(\hat{z}_i^h - f_i(\mathbf{x}^*))$, that is, $f_i(\mathbf{x}^0) < f_i(\mathbf{x}^*)$ for all $i \in I^<$.
Because $\mathbf{x}^0$ is a solution of problem (5.11.1), it must be feasible. In other words, we have $f_i(\mathbf{x}^0) \le f_i(\mathbf{x}^*)$ for $i \in I^>$ and $f_i(\mathbf{x}^0) \le f_i(\mathbf{x}^*)$ for $i \in I^=$. Here we have a contradiction to the assumption that $\mathbf{x}^*$ is Pareto optimal. This completes the proof and $\mathbf{x}^*$ must be a solution of the RD problem. □
According to Theorem 5.11.2 we know that any Pareto optimal solution can
be found with an appropriate classification.
An augmented formulation of the RD problem is presented in Narula et
al. (1994a, b) in order to produce only Pareto optimal solutions.
5.11.2. RD Algorithm
5.11.3. Comments
the weaknesses detected in the older methods. Most of the methods previously
described have had an effect on the development of NIMBUS. Either they have
offered useful ideas to adopt or unsatisfactory properties to avoid.
Trade-off rate information cannot be exploited in nondifferentiable problems in the way it is used in the ISWT method and in SPOT and STOM. The natural reason is that obtaining trade-off information from the Karush-Kuhn-Tucker multipliers necessitates that the functions are twice continuously differentiable. How to obtain trade-off information in nondifferentiable cases
needs and deserves more research.
The ideas of reference points and satisficing decision making seem to be
generalizable to nondifferentiable problems. We can adopt the ideas of classify-
ing the objective functions and reference points and mix them with some ideas
from nondifferentiable analysis. The outcome is described in the next section.
5.12.1. Introduction
The starting point in developing the NIMBUS method has been somewhat the opposite of emphasizing theoretical soundness. Emphasizing theoretical aspects may lead to difficulties on the decision maker's side and more or less unstable results, not to mention higher computational costs. In the NIMBUS method, the idea has been to overcome the difficulties encountered with many other interactive methods. The most important aspects have been the effectiveness and the comfort of the decision maker. Thus, the interaction phase has been designed to be comparatively simple and easy to understand for the decision maker. NIMBUS offers flexible ways of performing interactive evaluation of
the problem and determining the preferences of the decision maker during
the solution process. At each iteration of the interactive solution process the
decision maker can direct the search according to her or his wishes.
Aspiration levels and classification have been selected as the means of inter-
action between the decision maker and the algorithm. It has been emphasized
on several occasions (e.g., in Nakayama (1995)) that an aspiration level-based
approach is effective in practical fields. Among the validating facts for this
statement are the following. Aspiration levels do not require consistency from
the decision maker and they reflect her or his wishes well. In addition, they are
easy to implement. Using aspiration levels as a way of receiving information
from the decision maker means avoiding difficult and artificial concepts.
It is assumed that
1. Less is preferred to more by the decision maker.
2. The objective and the constraint functions are locally Lipschitzian.
3. The objective functions are bounded (from below) over the feasible region
S.
The second assumption comes from nondifferentiable analysis, and the third
assumption from the requirement of having the ideal objective vector available.
In the classification of the objective functions, the decision maker can easily
indicate what kind of improvements are desirable and what kind of impairments
are tolerable. The idea is that the decision maker examines at every iteration h
the values of the objective functions calculated at the current solution xh and
divides the objective functions into up to five classes. The classes are functions $f_i$ whose values
• should be decreased ($i \in I^<$),
• should be decreased to a certain aspiration level $\bar{z}_i^h$ ($i \in I^{\le}$),
• are satisfactory at the moment ($i \in I^=$),
• are allowed to increase to a certain upper bound $\varepsilon_i^h$ ($i \in I^>$), and
• are allowed to change freely ($i \in I^{\circ}$).
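A minimal sketch of representing and checking such a classification (names are illustrative; the requirement that something must be allowed to improve and something to get worse is stated here as an assumption, in line with the optimality results discussed below):

    def check_classification(k, I_lt, I_le, I_eq, I_gt, I_free):
        # Every objective index 0..k-1 must belong to exactly one class, and the
        # classification must allow a trade-off: at least one function to improve
        # and at least one allowed to get worse.
        classes = [set(I_lt), set(I_le), set(I_eq), set(I_gt), set(I_free)]
        union = set().union(*classes)
        disjoint = sum(len(c) for c in classes) == len(union)
        covers = union == set(range(k))
        tradeoff = bool(classes[0] | classes[1]) and bool(classes[3] | classes[4])
        return disjoint and covers and tradeoff

    print(check_classification(4, I_lt=[0], I_le=[1], I_eq=[], I_gt=[2], I_free=[3]))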
After the decision maker has classified the objective functions, one of the two
alternative subproblems, called vector and scalar subproblems, is formed. Thus,
the original multiobjective optimization problem is transformed into either a
new multiobjective or a single objective optimization problem, accordingly.
The subproblems lead to two different versions of NIMBUS, to be called vector
version and scalar version. We first introduce the older, that is, the vector
version.
$$\max_{j \in I^{\le}} \bigl[\max[f_j(\mathbf{x}^1) - \bar{z}_j,\, 0]\bigr] - \max_{j \in I^{\le}} \bigl[\max[f_j(\mathbf{x}^2) - \bar{z}_j,\, 0]\bigr], \quad f_i(\mathbf{x}^1) - f_i(\mathbf{x}^h) \ (i \in I^=), \quad f_i(\mathbf{x}^1) - \varepsilon_i^h \ (i \in I^>), \quad g_l(\mathbf{x}^1) \ (l = 1, \dots, m) \bigr\}$$
is formed (see also Miettinen and Mäkelä (1996b, 1998a) and Miettinen et al. (1996b)), where $z_i^{\star}$ for $i \in I^<$ are components of the ideal objective vector (assumed to be known globally).
Notice that problem (5.12.2) is nondifferentiable but has one objective func-
tion. It can be solved by any method for nondifferentiable single objective
optimization, for example, by efficient bundle methods (see Mäkelä and Neittaanmäki (1992, pp. 112-137)).
Scalar subproblem (5.12.2) can be formulated in an alternative form:
$$(5.12.5)\qquad \begin{array}{ll} \text{minimize} & \dots \\ \text{subject to} & \mathbf{x} \in S \end{array}$$
for every $j = 2, \dots, P-1$. This treatment works for convex as well as nonconvex
problems. An alternative method can be applied if the vector subproblem is
used and the problem is convex (see Miettinen and Mäkelä (1995)). In this case
weak Pareto optimality can be guaranteed by solving the vector subproblem
with $I^< = \{1, \dots, k\}$ starting from each intermediate solution.
Since the Pareto optimality of the solutions produced cannot be guaranteed
(see Subsection 5.12.5), we check the final solution in the end by solving an
additional problem introduced in Theorem 2.10.3 of Part I. As the decision
maker was assumed to prefer less to more, we can presume that (s)he is satisfied
with the Pareto optimal final solution even where it was not her or his choice.
For clarity of notation, it is not stated in the algorithm that the decision maker
may check Pareto optimality at any time during the solution process. Then,
problem (2.10.2) of Part I is solved with the current solution as $\mathbf{x}^*$.
Note that, if scalar subproblem (5.12.2) is employed in the algorithm, we
have to calculate the components of the ideal objective vector z* in the first
step. However, presenting z* to the decision maker gives valuable information
about the problem in both NIMBUS versions.
We must remember that we cannot guarantee global optimality. If the solu-
tion obtained is not completely satisfactory, one can always solve the problem
again from a different starting point. This action is also advised if the decision
maker has to stop the solution process with $\bar{\mathbf{x}}^h = \mathbf{x}^h$ after step (3).
5.12. NIMBUS Method 201
It is also possible to improve the algorithm in step (3) to avoid the case $\bar{\mathbf{x}}^h = \mathbf{x}^h$. If the upper bounds specified by the decision maker are too tight,
one can use them as a reference point and project them with (5.12.5) onto
the (weakly) Pareto optimal set. Showing the new solution to the decision
maker provides her or him with information concerning the possibilities and
the limitations of the problem, and some dead ends can be avoided as well.
Unlike some other methods based on classification, the success of the solu-
tion process does not depend entirely on how well the decision maker manages
in specifying the classification and the appropriate parameter values. It is im-
portant that the classification is not irreversible. Thus, no irrevocable damage
is caused in NIMBUS if the solution f(xh) is not what was expected. The
decision maker is free to go back or explore intermediate points. (S)he can eas-
ily get to know the problem and its possibilities by specifying, for example,
loose upper bounds and examining intermediate solutions. NIMBUS is indeed
learning-oriented.
First, we state two theoretical results concerning the optimality of the so-
lutions of vector subproblem (5.12.1) and scalar subproblem (5.12.2).
While, in addition,
for all $i \in I^< \ne \emptyset$, the point $\mathbf{x}^*$ cannot be a Pareto optimal solution of the vector subproblem. This contradiction implies that $\mathbf{x}^*$ must be weakly Pareto optimal. The proof is also valid if some of the classes $I^{\le}$, $I^=$, $I^>$ or $I^{\circ}$ are empty as long as $I^> \cup I^{\circ} \ne \emptyset$. □
The maximum can be attained either in the class $I^<$ or in $I^{\le}$ (or, naturally, in both of them). In the first case we have
$$\bar{f}(\mathbf{x}^0) = w_i (f_i(\mathbf{x}^0) - z_i^{\star}) < w_i (f_i(\mathbf{x}^*) - z_i^{\star}) \le \bar{f}(\mathbf{x}^*)$$
for some $i \in I^<$. The latter case has two different alternatives. Firstly,
$$\bar{f}(\mathbf{x}^0) = w_j \max[f_j(\mathbf{x}^0) - \bar{z}_j,\, 0] = 0 < w_i (f_i(\mathbf{x}^*) - z_i^{\star}) \le \bar{f}(\mathbf{x}^*)$$
for some $j \in I^{\le}$ and for all $i \in I^<$. Secondly,
$$\bar{f}(\mathbf{x}^0) = w_j \max[f_j(\mathbf{x}^0) - \bar{z}_j,\, 0] = w_j (f_j(\mathbf{x}^0) - \bar{z}_j) < w_j (f_j(\mathbf{x}^*) - \bar{z}_j) \le \bar{f}(\mathbf{x}^*)$$
for some $j \in I^{\le}$.
In conclusion, we can state that the point $\mathbf{x}^*$ cannot be a solution of the scalar subproblem. This contradiction implies that $\mathbf{x}^*$ must be weakly Pareto optimal. The proof is also valid if some of the classes $I^{\le}$, $I^=$, $I^>$ or $I^{\circ}$ are empty as long as $I^> \cup I^{\circ} \ne \emptyset$. □
The following optimality result is common for the scalar and the vector
subproblem (even the proofs can be combined).
Theorem 5.12.3. Any Pareto optimal solution can be found with an appro-
priate classification in problems (5.12.1) and (5.12.2).
Proof. Let x· E S be Pareto optimal. Let us assume that there does not
exist a classification such that x* is a solution of the vector or the scalar
subproblem. Let us suppose that we have the current NIMBUS solution xh
and the corresponding zh available.
Let us choose f(x") as a reference point. This means that we choose Zi =
j;(x") for those indices where zf > fi(X*) and set i E ]<:;. Further, we set
/Oi = j;(x*) for indices i E J> satisfying zf < j;(x*). Finally, the set J=
consists of indices where zf = fi (x*). This setting is possible because x* is
assumed to be Pareto optimal and xh is weakly Pareto optimal according to
Theorems 5.12.1 and 5.12.2 and the structure of the NIMBUS algorithm. That
is why J<:; =f. 0 and J= U J> =f. 0. In addition, we set Wi = 1 for i E ]<:;.
Because x" is not a solution of the vector or the scalar subproblem, there
exists another point XO E S that is a solution, meaning that
~~~ [ max [Jj(XO) - iJ(x*), OJ] < ~~~ [max [fj(x*) - iJ(x"), OJ] = o.
Thus, max [iJ(XO) - hex"), OJ < 0 for every j E ]<:;. In other words, we have
iJ(XO) < h(x*) for every j E J<:;. Because XO is a solution of problems 5.12.1
and 5.12.2, it must be feasible. In other words, we have fi(XO) ~ fi(X*) for
i E J= U J>. Here we have a contradiction to the assumption that x* is Pareto
optimal. This completes the proof and x· must be a solution of the vector and
the scalar subproblem. 0
The vector and the scalar versions of NIMBUS differ in the form of the
subproblem used. The origin of the development of the scalar version lies in
the drawbacks discovered in the vector version.
Theoretically, the solution of the vector subproblem has to be Pareto opti-
mal in order to guarantee weakly Pareto optimal solutions to the original mul-
tiobjective optimization problem. This assumption is quite demanding. With
the scalar subproblem we do not have problems of this kind. Further, the vector
version needs a special solution tool - MPB. In addition to this limitation, the
role of the weighting coefficients is not commensurable between the classes $I^<$ and $I^{\le}$. This implies that the controllability of the method suffers.
The advantage of having a single objective function in the scalar version
is that we can employ any efficient optimization routine of nondifferentiable
optimization. This gives more generality and applicability to the method. Fur-
thermore, in the scalar subproblem, we treat the functions in $I^<$ and $I^{\le}$ in a
consistent way and, thus, the roles of the weighting coefficients are also iden-
tical. In all, this means that the decision maker can better direct the solution
process.
Notice that in addition to the difference in the objective functions of the
subproblems, there is also deviation in the constraint part. Due to the goal of
the classes I^< and I^≤, we have to make sure that the values of these func-
tions do not increase. This is the reason for modifying the constraints of scalar
subproblem (5.12.2).
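To make the structure of this kind of scalarization concrete, the following minimal sketch (in Python) evaluates an objective of the max-form discussed above and checks the additional constraints. It is only an illustration of a NIMBUS-type scalarized subproblem under our own naming conventions, not the exact formulation (5.12.2); z_star stands for the ideal values z^⋆, z_asp for the aspiration levels ẑ, and the index lists correspond to the classes I^<, I^≤ and I^>.

    def nimbus_scalar_objective(f_x, I_lt, I_leq, z_star, z_asp, w):
        # Objective of a NIMBUS-type scalar subproblem at the objective vector f_x:
        # functions in the class I^< are measured against the ideal values z_star,
        # functions in I^<= against the aspiration levels z_asp; w are positive weights.
        terms = [w[i] * (f_x[i] - z_star[i]) for i in I_lt]
        terms += [w[j] * max(f_x[j] - z_asp[j], 0.0) for j in I_leq]
        return max(terms)

    def nimbus_scalar_feasible(f_x, f_xh, I_lt, I_leq, I_gt, eps):
        # Constraint part: the functions classified into I^< and I^<= may not
        # increase above their values at the current solution x^h, and the
        # functions in I^> must stay below the upper bounds eps.
        no_increase = all(f_x[i] <= f_xh[i] for i in I_lt + I_leq)
        within_bounds = all(f_x[j] <= eps[j] for j in I_gt)
        return no_increase and within_bounds

Any efficient routine of nondifferentiable single objective optimization can then be applied to minimize the former subject to the latter and x ∈ S.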
In the vector subproblem, the MPB method does not allow increment in
I^<. However, there is no guarantee that the values of the functions in I^≤ could
not increase. It is clear that including additional constraints in an optimization
problem increases its computational complexity. Because the increasing feature
occurs very rarely in the vector version, no additional constraints have been
used in order to emphasize computational efficiency. Thus, either we pay the
price of additional computational costs or take the risk of increment (depending
on the classification).
On the one hand, the calculation of the ideal objective vector used in the
scalar version also needs computational effort. On the other hand, the ideal
objective vector can provide supporting information for the decision maker in
any kind of multiobjective solution process.
A numerical comparison of the two versions of NIMBUS is reported in Miet-
tinen and Mäkelä (1996b, 1998a) with versatile multiobjective optimization
problems. The standards of comparison chosen are computational efficiency
and the opinion of the decision maker concerning the controllability of the dif-
ferent versions. The efficiency can be measured by the number of times the
subroutine containing the objective functions is called. The controllability side
must be elicited from the decision maker. It is measured in the form of a rating
(between 1 and 5).
The numerical tests indicate that the scalar version obeys the decision maker
better, whereas the vector version is computationally more efficient. However,
it is important to note that the classifications employed affect considerably
the performance of the NIMBUS versions. In any case, the user has to choose
between controllability and computational efficiency when selecting a solution
method.
5.12. NIMBUS Method 205
5.12.7. Comments
The NIMBUS method has not been developed to converge in the traditional
sense. Because the method does not assume the existence of any underlying value
function, no explicit convergence results can be put forward on the basis of
assumptions about the properties of such a function. In particular, the intention
has been to release the decision maker from the assumption of an underlying
value function. What is important is that the method satisfies two desirable
properties of interactive methods: not to place too demanding assumptions
on the decision maker or the information exchanged, and to be able to find
(weakly) Pareto optimal solutions quickly and efficiently.
The aim has been to formulate a method where the decision maker can
easily explore the (weakly) Pareto optimal set. When the decision maker no
longer wants to change any objective function value and the solution process
stops, the solution is then optimal.
An important factor is that the final solution is always Pareto optimal
because of the structure of the algorithm. In addition, all the intermediate
points are at least substationary points and they can be projected onto the
Pareto optimal set, if so desired.
The method is ad hoc in nature, since the existence of a value function
would not directly advise the decision maker how to act to attain her or his
desires. A value function could only be used to compare different alternatives.
The possibility of interchanging the roles of the objective and the constraint
functions has been mentioned thus far in connection with some methods. This
is easy to carry out also in NIMBUS because the class I^> is nothing but
constraints with upper bounds. One can even go so far as to formulate all
the constraint functions as objective functions and modify their upper bounds
or roles during the solution process from one iteration to the other.
5.12.8. Implementations
5.12.9. Applications
problem. One of the goals is then to find a solution as close to the feasible
region as possible.
ciency. Thus, the user has to choose between these aspects when selecting a
solution method.
Eventually, it is up to the user interface to make the most of the possibilities
of the method and provide them to the user. When the first WWW version of
the NIMBUS algorithm was implemented in 1995, it was a pioneering interactive
optimization system on the Internet. The realization is based on the ideas of
centralized computing and a distributed interface.
Naturally, there are many challenges in the further development of the NIM-
BUS method and its implementations. One of the challenges, applicable to
software development in general, is to create illustrative and easy-to-use user
interfaces. If the interface is able to adapt to the decision maker's style of mak-
ing decisions and is of help in analyzing the alternatives and results, and can
perhaps give suggestions or advice, then the interface may even overcome some
of the deficiencies of the method itself.
The idea of the method in Moldavskiy (1981) is to form a grid in the space
of the weighting vectors and to map this grid onto the Pareto optimal set.
Weighted Lp-metrics are used as scalarizing functions to produce a represen-
tation of the Pareto optimal set. The decision maker can contract the space of
the weighting vectors until the most satisfactory solution is obtained.
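The general idea of mapping a grid of weighting vectors onto the Pareto optimal set can be sketched as follows; the weighted Tchebycheff scalarization, the test functions, the solver call and all names are illustrative assumptions of ours and do not reproduce the procedure of Moldavskiy (1981).

    import itertools
    import numpy as np
    from scipy.optimize import minimize

    def weight_grid(k, steps):
        # All weighting vectors with positive components on a regular grid,
        # normalized to sum to one (duplicates are harmless in a sketch).
        grid = []
        for c in itertools.product(range(1, steps + 1), repeat=k):
            w = np.array(c, dtype=float)
            grid.append(w / w.sum())
        return grid

    def chebyshev_solution(objectives, z_star, w, x0):
        # Minimize the weighted Tchebycheff metric max_i w_i (f_i(x) - z*_i).
        value = lambda x: max(w[i] * (f(x) - z_star[i]) for i, f in enumerate(objectives))
        return minimize(value, x0, method='Nelder-Mead').x

    # Hypothetical two-objective example; z_star plays the role of the ideal objective vector.
    objectives = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
                  lambda x: x[0] ** 2 + (x[1] - 3.0) ** 2]
    z_star = [0.0, 0.0]
    representation = [chebyshev_solution(objectives, z_star, w, np.array([0.5, 1.5]))
                      for w in weight_grid(2, 4)]

Contracting the grid around the weights of the most preferred solution then refines the representation, in the spirit of the interactive step described above.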
A method based on sensitivity analysis and the weighted Tchebycheff metric
is presented in Diaz (1987), where the effects of changing aspiration levels are
studied by sensitivity analysis. The method in Sunaga et al. (1988) utilizes
also the weighted Tchebycheff metric. It transforms the constrained min-max
problem into a series of (differentiable) unconstrained problems by penalty
functions.
The interactive cutting-plane algorithm is presented in Loganathan and
Sherali (1987) with applications. The idea is to maximize the underlying value
function. The weighted Tchebycheff metric is utilized with marginal rates of
substitution as weighting coefficients. The convergence of the algorithm is also
treated.
The method proposed in M'silti and Tolla (1993) combines features from
the ε-constraint method and the augmented weighted Tchebycheff metric. The
global Pareto optimality of the solutions obtained is checked.
The method of the displaced ideal for MOLP problems, described in Zeleny
(1973, 1974, 1976), can be characterized as an interactive extension of the
method of weighted metrics. A subset of the Pareto optimal set is obtained
by minimizing the distance between the ideal objective vector and the feasible
and value functions and convex feasible regions in Roy and Wallenius (1992).
A more general case of nonlinear objective functions, nonconvex feasible re-
gions and concave value functions is also discussed. This approach uses the
generalized reduced gradient method instead of the original simplex.
The method in Kim and Gal (1993) is intended for MOLP problems. It is
based on the concept of a maximally changeable dominance cone and marginal
rates of substitution. The effectiveness of the method is illustrated by a numer-
ical example.
Ideas for reducing the burden on the decision maker in interactive methods
are introduced in Korhonen et al. (1984) and further developed in Ramesh
et al. (1988). An underlying quasiconcave value function is assumed to exist.
Convex cones are formed according to the preference relations of the decision
maker. The cones are formed so that the solutions in them can be dropped from
further consideration, because they are dominated by some other solutions.
Thus, fewer questions have to be put to the decision maker in charting the
preferences. These ideas concerning convex cones can be applied equally to
multiobjective optimization as to multiattribute decision analysis. The ideas
are utilized, for example, in Ramesh et al. (1989a, b).
A method for complex problems with high dimensionality is proposed in
Baba et al. (1988). The method uses a random optimization method and is
also applicable to nondifferentiable objective functions.
The parameter space investigation (PSI) method is described briefly in
Lieberman (1991b) and in more detail in Statnikov and Matusov (1996) and
Steuer and Sun (1995). It has been developed for complicated nonlinear prob-
lems involving possible differential equations. Such problems occur, for exam-
ple, in engineering. The method is very simple and intended to be applicable
to problems where more sophisticated methods are useless. The PSI method is
a naïve sampling technique rather than an optimization method. Both the con-
straint functions and variables are assumed to have upper and lower bounds.
Thus, the feasible region is a parallelepiped. The Pareto optimal set is approx-
imated by generating randomly uniformly distributed points between the vari-
able bounds. Infeasible solutions are dropped as well as solutions not satisfying
the upper bounds specified by the decision maker. Pareto optimal solutions are
selected from this set. The sample size can be altered and the decision maker
can adjust the upper bounds. The method does not assume differentiability.
It works for nonconvex problems since its structure enables global search. The
method contains a random number generator of its own, but it is claimed in
Steuer and Sun (1995) that any generator can equally well be used. Conver-
gence properties and the accuracy of the approximation of the Pareto optimal
set assuming general Lipschitz conditions are handled in Sobol' and Levitan
(1997) and Statnikov and Matusov (1996). The PSI method has its origins in
the former Soviet Union, which is why most of the information about it has
been published in Russian. It is said to have been applied in many fields of the
national economy in Russia. One engineering application is described in Sobol
(1992).
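The sampling-and-filtering idea behind the PSI method can be illustrated with the following sketch. It is only a schematic reading of the description above (uniform sampling in the variable box, rejection of points violating the constraint bounds or the decision maker's upper bounds, and selection of the nondominated points); the routine and its arguments are our own and do not reproduce the implementation of Statnikov and Matusov (1996).

    import numpy as np

    def psi_style_sample(objectives, feasible, lb, ub, f_upper, n=10000, seed=None):
        # objectives: x -> vector of objective values (to be minimized)
        # feasible:   x -> True if the constraint bounds are satisfied
        # lb, ub:     arrays of lower and upper variable bounds (the parallelepiped)
        # f_upper:    upper bounds on the objective values given by the decision maker
        rng = np.random.default_rng(seed)
        points = rng.uniform(lb, ub, size=(n, len(lb)))
        candidates = [(x, objectives(x)) for x in points if feasible(x)]
        candidates = [(x, f) for x, f in candidates if np.all(f <= f_upper)]
        nondominated = []
        for x, f in candidates:
            if not any(np.all(g <= f) and np.any(g < f) for _, g in candidates):
                nondominated.append((x, f))
        return nondominated

Enlarging the sample size or adjusting f_upper corresponds to the interactive adjustments mentioned above.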
1. COMPARING METHODS
As has been stressed many times thus far, a large variety of methods exists
for multiobjective optimization problems and none of them can be claimed to be
superior to the others in every aspect. Selecting a multiobjective optimization
method is a problem with multiple objectives itself. Thus some matters of
comparison and selection between the methods are worth considering.
The theoretical properties of the methods can rather easily be compared. We
summarize some of the features of the interactive methods treated in this book
in a comparative table at the beginning of this chapter. However, in addition
to theoretical properties, practical applicability also plays an important role
in the selection of an appropriate method for the problem to be solved. The
difficulty is that practical applicability is hard to determine without experience
and experimentation.
More fruitful information relating to the question of method selection would
likely emerge if computational applications were more extensively reported. Un-
fortunately, not too many actual computational applications of multiobjective
optimization techniques have been published. Instead, methods have mainly
been presented without computational experiences or with simple academic
test problems. As it is aptly remarked in Bischoff (1986), most of the applica-
tions presented are merely proposals for applications or they deal with highly
idealized problems. For most interactive methods a natural reason is the diffi-
culty (in finding and) in testing with real decision makers. A complicating fact
is also the enormous diversity of decision makers.
One more thing to keep in mind is that for the most part only successful
applications are published. This means that we cannot draw a complete picture
of the applicability of a method on the basis of the experiences reported.
The evident lack of benchmark-type test problems for nonlinear multiobjec-
tive optimization complicates the comparison of different methods. Naturally,
some methods are useful for some problems and other methods for other types
of problems. However, benchmark problems could be used to point out such
behaviour.
In this section we outline some comparisons of methods reported in the
literature. We also consider selected issues in deciding upon a method, including
a decision tree.
[Table: a comparison of the interactive methods treated in Part II with respect to the following properties: (+) final solution Pareto optimal; ad hoc nature; objective functions assumed to be bounded; (−) sensitive, needing consistent answers; (−) computationally expensive; (−) difficult questions posed; (+) trade-off rates provided; comparison of alternatives used; classification of objective functions used; reference points used; marginal rates of substitution used; thresholds used; implementation mentioned.]
1.2.1. Introduction
The comparisons have been carried out with respect to a variety of crite-
ria. Among them are ease of use and confidence in both the solution obtained
and the method used from the viewpoint of the decision maker. The rapidity
of convergence and CPU time are among the criteria from the mathematical
point of view. The number of Pareto optimal solutions needed to solve a prob-
lem could also serve as a comparison criterion, as pointed out in Ferreira and
Machado (1996). However, such a measure of effectiveness has not generally
been reported in the comparative evaluations available.
Some caution is in order when trying to judge something from the com-
parisons. The comparisons have been performed according to different criteria
and under varied circumstances. Thus they are not fully proportional. Which
method is the most suitable for a certain problem depends highly on the per-
sonality of the decision maker and on the problem to be solved.
Practical experience is especially important in evaluating the techniques
with respect to criteria related to the decision maker. It is important to com-
pare a method under a variety of circumstances so that the conclusions can be
generalized. As emphasized, for example, in Hobbs et al. (1992), the appropri-
ateness, ease of use and validity of a method must be tested with real decision
makers.
Using a human decision maker does not, however, mean that the practi-
cal applicability of the method has been fully investigated. Unfortunately, few
experiments have been reported with problem-related decision makers. Most
of the comparative evaluations with human decision makers have involved stu-
dents as the decision makers. This is understandable for practical reasons. How-
ever, this kind of setting can be called into question. The results might have
been different with real decision makers who are actually responsible for the
solution obtained. Another aspect is the wide range of different problem ar-
eas and their different decision makers. Obvious examples are business-related
problems that typically involve less than ten objective functions and engineer-
ing design problems with hundreds of objective functions. Further aspects to be
kept in mind when testing several methods with human decision makers are the
effects of learning and anchoring. Learning is related both to getting to know
the problem better and to the order of the methods used, whereas anchoring is
related to the selection of starting points.
Instead of a human decision maker one can sometimes employ value func-
tions in the comparisons. Value functions may be useful in evaluating theo-
retical performance, but such tests do not fully reflect the real usefulness of
the methods. One can try to compensate for the lack of a real decision maker
by employing several different value functions. If, for example, marginal rates
of substitution are desired, the inconsistency and inaccurate responses of a
decision maker can be imitated by multiplying them with different random
numbers. These means are employed in Shin and Ravindran (1992). On the
other hand, value functions cannot really help in testing ad hoc methods.
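As a rough illustration of how a decision maker can be imitated, the sketch below derives 'true' marginal rates of substitution from a linear value function and perturbs them with random multipliers. It follows the general idea mentioned above only loosely; the functional form, the noise model and the names are our own assumptions rather than the procedure of Shin and Ravindran (1992).

    import numpy as np

    def imitated_mrs(weights, reference=0, noise=0.2, seed=None):
        # Marginal rates of substitution implied by the linear value function
        # v(f) = -sum_i weights[i] * f_i, taken with respect to the objective
        # 'reference' and multiplied by random factors in [1 - noise, 1 + noise]
        # to imitate the inconsistent responses of a human decision maker.
        rng = np.random.default_rng(seed)
        weights = np.asarray(weights, dtype=float)
        true_mrs = weights / weights[reference]
        return true_mrs * rng.uniform(1.0 - noise, 1.0 + noise, size=weights.size)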
One crucial factor that can affect the performance of the methods in the
comparisons is the user interface. Nothing is usually mentioned concerning the
realization of the user interface in the comparisons reported. It is important
to remember that one can spoil a 'good' method with a poor user interface or
support a 'poor' method with a good interface. In addition to the illustration
of the (intermediate) results, a good user interface also means a clear and
intelligible input phase.
It is interesting to observe that most of the multiobjective optimization
problems solved when testing the methods (and reported in the literature)
have been linear. It is true that complex nonlinear functions cause difficulties of
their own and the characteristics of the solution methods may be disturbed. On
the other hand, features concerning nonlinear problems may remain unnoticed
with MOLP problems. On the whole, the comparisons available are not of too
much help if one is looking for a method for a nonlinear problem, and more
contributions in this area are needed. Nevertheless, we review some of the
comparisons published.
stated whether it was feasible or not. The decision makers were assumed to ex-
plore the feasible objective region until they were unable to find more preferred
solutions.
The criteria in the evaluation were the ease of using the method and the
confidence in the solution obtained. The results obtained favoured the GDF
method. Thus, a conclusion could be drawn that the GDF method can success-
fully be used by untrained decision makers.
The performance of the GDF method, STEM and the trial-and-error pro-
cedure (the same as that used by Dyer) is compared from the point of view of
a decision maker in Wallenius (1975). A total of 36 business school students
and managers from industry were employed as decision makers. The following
aspects of the methods were compared: the decision maker's confidence in the
solution obtained, ease of use and understanding of the method, usefulness of
the information provided, and rapidity of convergence. The linear management
problem to be solved contained three objective functions.
The results are analysed statistically in Wallenius (1975). One interesting
conclusion was how well the trial-and-error procedure competed with the more
sophisticated methods. Nevertheless, Wallenius points out that its performance
might be weakened if the problems were more complex. Difficulties in estimating
the marginal rates of substitution weakened the overall performance of the GDF
method. Thus, Wallenius suggests that research should be directed to finding
ways of better adjusting methods to suit the characteristics of a human decision
maker.
The results of Dyer and Wallenius concerning the GDF method differ re-
markably. Some trials analysing the reasons are presented in Wallenius (1975).
The capabilities of the ZW, the SWT, the Tchebycheff and the GUESS
methods (without the upper and lower bounds) are compared in Buchanan and
Daellenbach (1987) from the point of view of the user in solving a linear three-
objective optimization problem. The problem concerned the production of the
electrical components of lamps. A total of 24 decision makers (students and
academic staff) were employed. The criteria in the comparison were partly the
same as those used by Wallenius. In addition to confidence in the final solution,
ease of use and ease of understanding the logic of the method, CPU and elapsed
time were compared. The most important criterion was the relative preference
for using each method. The conclusions are that the Tchebycheff method was
clearly preferred to the other methods and the ZW method came out the worst
in relation to the first four criteria. The SWT method was in the middle. The
GUESS method performed surprisingly well. On the basis of this experiment
one can say that decision makers seem to prefer solution methods where they
can feel that they are in control.
Experimental evaluations of interactive methods with 24 decision makers
(students) and two three-objective MOLP problems are reported in Buchanan
(1994). The methods involved were the Tchebycheff method, the GUESS
method and the simplified interactive multiple objective linear programming
The method of Steuer and STEM are tested in Brockhoff (1985). A total of
147 decision makers were employed to solve six problems involving purchasing
cars. The results and progress are analysed according to several criteria, with
the method of Steuer emerging with the best outcomes on the average.
An experiment on the differences in the philosophies of methods for con-
tinuous compared to discrete problems is presented in Corner and Buchanan
(1995,1997). In Corner and Buchanan (1997), the continuous GUESS method,
a modified ZW method and a discrete SMART method (based on construct-
ing a value function) were used to solve a production planning problem with
three objective functions by 84 undergraduate students as decision makers. The
problem was nonlinear and had continuous variables. The main interest was to
determine the ability of the methods to capture the preferences of the decision
maker. In other words, how well the methods were able to find desirable solu-
tions and how much the decision makers liked the methods. The time spent on
each solution process was also recorded.
One of the conclusions is that the continuous methods were better and
faster than the discrete method. The GUESS method was rated easiest to use
and to understand. All the methods produced different solutions of which the
one generated by the GUESS method was ranked the best. The order of the
methods used was found to have no effect on the results. The exception was
the case when SMART was used first. Then the solutions obtained with the
other methods were statistically the same. In addition, it was observed that a
weighted additive value function explaining their ranking behaviour could be
found for most decision makers.
Another experimental test involving the ZW method and the GUESS
method is reported in Buchanan and Corner (1997). The emphasis was on test-
ing whether any anchoring effect can be explained by the structure of the
solution method. A total of 84 students acted as decision makers and solved
a nonlinear problem with three objective functions. The conclusion was that
an anchoring effect could be seen with the structured ZW method but less so
with the free search method GUESS. Thus, it can be deduced that the selection
of the starting point is even more crucial with more structured methods than
with less structured methods.
Some comparisons of continuous and discrete methods are also presented
in Korhonen and Wallenius (1989b). A continuous MOLP problem with five
objective functions concerning the allocation of a student's time between study,
work and leisure was solved by 65 student decision makers. The five methods
compared were all based on the reference direction approach. Only the speci-
fication of the reference direction varied. The original way of using aspiration
levels was found to be clearly superior to the others.
A more detailed review of the above-described and some other empirical
studies involving real decision makers is given in Olson (1992). However, no
final conclusions can be drawn from the experiments. The reason is that the
test settings and the samples are not similar enough.
The role of the decision maker is important and should be taken seriously.
Many experiments have shown that decision makers prefer simpler methods
because they can more easily understand such methods and they feel more in
control. The valuation placed on some methods may increase if the decision
makers can practice using them or obtain advice. An important fact to keep
in mind is that theoretically irrelevant aspects, such as question phrasing, may
affect the confidence that the decision maker feels in the method. The concept
of the decision maker's confidence is analysed further in Bischoff (1986).
Other important criteria for the decision maker in selecting the solution
method are, for example, the simplicity of the concepts involved, possibilities of
interaction, the ease with which the results can be interpreted and the chances
of choosing the most preferred solution from a wide enough set of alternatives.
The method must also fit the decision maker's way of thinking. The language of
communication between the decision maker and the method (solution system)
must be understandable to the decision maker. (S)he also wants to see that the
information (s)he provides has a (desirable) effect on the solutions obtained.
One more element, not mentioned thus far, in the selection of a method is
how well the decision maker knows the problem to be solved. If (s)he does not
know its limitations, possibilities and potentialities well, (s)he needs a method
that can provide support in getting acquainted with the problem. In the oppo-
site case, a method that makes it possible to focus directly on some interesting
sector is advisable. Ways of identifying appropriate methods for different types
of decision makers are needed.
Few universally applicable guidelines have been given for the method se-
lection problem in the literature. Let us mention some of them including even
approaches for discrete problems.
An attempt to assist in the selection of a solution method is presented in
Gershon and Duckstein (1983). The selection problem is modelled as a multiob-
jective optimization problem. A set of 28 criteria for the selection are suggested
and they are divided into four groups. Only the criteria in the last group have to
be considered every time the selection algorithm is applied. The criteria take
into account the characteristics of the problem, the decision maker and the
methods. Many types of problems are taken into consideration in the criteria
(e.g., discrete and continuous variables). The model contains 13 solution meth-
ods from which to select. The set of methods can naturally be modified. The
number of selection criteria can also be varied to include only those relevant
to the problem to be solved. Finally, after the methods have been evaluated
according to the selection criteria, the resulting multiobjective optimization
problem is solved by the method of the global criterion (e.g., L1-metric).
A related procedure is suggested in Tecle and Duckstein (1992). There, a
set of 15 methods is evaluated with respect to 24 criteria in four classes. The
weighted Lp-metric is used in each class and another weighted Lp-metric is used
to combine the classes and obtain the best method. For the example problem
provided, the weighted Lp-metric turns out to be the best method. One may
wonder whether the weighted Lp-metric favours itself or whether this is a mere
coincidence. Some critique of the approach is also expressed in Romero (1997).
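As a small numerical illustration of this kind of distance-based scoring, the sketch below ranks hypothetical candidate methods by their weighted Lp-distance from ideal criterion scores. The criteria, weights and scores are invented, and the sketch does not reproduce the two-level procedure of Tecle and Duckstein (1992).

    import numpy as np

    def weighted_lp_distance(values, ideal, weights, p=2.0):
        # Weighted Lp-distance of a vector of criterion scores from the ideal scores.
        values, ideal, weights = map(np.asarray, (values, ideal, weights))
        return float(np.sum(weights * np.abs(values - ideal) ** p) ** (1.0 / p))

    # Hypothetical example: three candidate methods scored on four criteria in [0, 1].
    ideal = [1.0, 1.0, 1.0, 1.0]
    weights = [0.4, 0.3, 0.2, 0.1]
    scores = {'method A': [0.9, 0.7, 0.8, 0.5],
              'method B': [0.6, 0.9, 0.9, 0.9],
              'method C': [0.8, 0.8, 0.6, 0.7]}
    ranking = sorted(scores, key=lambda m: weighted_lp_distance(scores[m], ideal, weights))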
Different decision trees and rules for providing assistance in selecting a
method for multiattribute decision analysis problems are described in Hwang
and Yoon (1981) and Teghem et al. (1989). However, as criticized in Ozernoy
(1992a), to design a comprehensive and versatile decision tree usually results
in an explosion in the number of nodes. Another problem with decision tree
diagrams is what to do when the user answers 'I do not know.'
An expert system for advising in the selection of solution methods for prob-
lems with discrete alternatives is proposed in Jelassi and Ozernoy (1989). Steps
in the development of another expert system for selecting the most appropri-
ate method for discrete problems are described in Ozernoy (1992a, b). The
questions posed by the system are based on if/then rules. They lead to recom-
mending a method or stating that no method can be recommended. The user
of the system can also always ask why a particular question is posed.
Little advice exists for selecting a method for nonlinear and continuous prob-
lems. Therefore, despite the above-mentioned pitfalls and faults in decision-tree
diagrams, we nevertheless present one in Figure 1.3.1. The tree has primarily
been created on the basis of plain theoretical facts concerning the assumptions
imposed by the methods on the problem to be solved and secondarily according
to the preferences of the decision maker. Because of space limitations it has not
been possible to include all the properties.
The decision tree includes the twelve interactive methods described in Part
II. Only those methods are included that have been presented in more detail
or whose main features have been introduced. Remember that in practice, the
functioning of a method may not always require that all its technical assump-
tions are satisfied (as stated, for example, in Zionts (1997a, b)). Or it may even
be impossible to verify all the assumptions. If some of the assumptions are
not valid, some of the results may be incorrect, but this does not necessarily
mean that the method will not work in some contexts. The results may still be
adequate for practical purposes. This must be kept in mind when studying the
decision tree. Nonetheless, the assumptions provide some guidelines to follow.
The starting node is situated on the left. The tree diagram has been created
in such a way that only the answers 'yes' or 'no' are possible. Whenever the
immediate answer is 'I do not know,' the answer 'no' can be given. In order to
avoid confusing the picture any further, the words yes and no have been replaced
by arrows of different types. Continuous lines represent positive answers and
broken lines stand for negative ones. In addition, 'no' arrows always leave a
node to the right of the 'yes' arrows.
[Figure 1.3.1. A decision tree for selecting an interactive method. The nodes pose yes/no questions such as whether the objective functions are continuously differentiable, whether marginal rates of substitution are to be specified and whether opinions are available about trade-off rates; the leaves name the methods, among them the NIMBUS method.]
The nodes containing only capital letters are used in two different cases.
The first is to avoid repetition. In the second, no method can be found along
the path followed. In that case, one can try another path. The aim has been to
allow as many previous answers as possible to be exploited. Thus, some dead
ends may be avoided.
The same method may be reached by following different paths. In this case
varying questions may be needed in order to separate the methods. That is
why either general or more detailed questions leading to the same method are
used.
As already repeated several times, selecting the solution method is a difficult
and important task. After describing each method (see Part II), we have tried
to indicate what it has in its favour and what its drawbacks are. These matters,
of course, are always more or less subjective.
2. SOFTWARE
2.1. Introduction
The role and the requirements of the model, the optimizer and the interface
in the multiobjective optimization environment are outlined, for example, in
Jelassi et al. (1985). It is useful to have capabilities of self-learning and model
updating in a decision support system. The interface is an important factor in
relation to the user-friendliness of the system.
One can state that developing software for multiobjective optimization
problems is once again a multiobjective optimization problem in itself, and
proper planning is essential. Several (conflicting) objectives to be taken into
consideration in multiobjective software design and realization are collected
in Olkucu (1989). Among them are a short development time, long product
life, easy and cheap maintenance, reliable implementation of the algorithm,
an efficient user interface and a large number of potential users. Of no mi-
nor importance in this regard are the selection of the realization environment
(including the operating system) and development tools.
Features to be taken into consideration when designing decision support
systems are also handled in Lewandowski (1986). Different definitions of user-
friendliness and rules for dialogue design are given.
It should be pointed out that while a great deal of effort has gone into
developing the methodological and computational aspects of the systems, the
interface between the system and its user is often of poor quality. This is a
serious weakness, since no matter how brilliant the methodology and its imple-
mentation are, it will be discarded if the interface does not suit the user. In any
case, the algorithms must be implemented in such a manner that computer-
technical requirements do not overshadow the real problem and non-skilled
persons can also use the programs. One way to try to improve the situation
is to provide different interface possibilities for the same system for computer
specialists, trained users and average users.
An effort of measuring the effectiveness of decision support systems is de-
scribed in Sainfort et al. (1990). Even though it is widely assumed that decision
support systems really do help in decision making and problem solving, research
results in this important area are few. Added to this is the fact that there is
currently no general theory about problem solving because of its complexity.
Group decision support systems are mostly handled in Sainfort et al. (1990),
but the conclusions favouring decision support systems are general. It is demon-
strated that decision support systems increase the understanding of the prob-
lem, reduce frustration in the problem solving and contribute to progress in
the solution process.
To put it briefly, a decision support system should be easy to use, it should
capture the thinking procedure of the decision maker, it should support differ-
ent decision styles and it should help the decision maker to structure different
situations. Other desirable characteristics of decision support systems are listed
in Weistroffer and Narula (1997).
The user of the decision support system may need guidance and training
to be able fully to make the most of it. Such a step may increase usability and
render the system more user-friendly than before.
An interesting question is raised in Verkama and Heiskanen (1996) about
how research concerning both methodology and decision support system soft-
ware should be reported in the literature. Verkama and Heiskanen suggest that
numerical examples should accompany both the algorithms proposed and the
software described. This would enable any interested reader to use the example
to understand details not presented in the paper.
2.2. Review
into commercial products. Yet, as emphasized in Buede (1996), even those soft-
ware developers concentrate too closely on features of analysis at the expense
of user-friendliness, as mentioned earlier.
Among software products for solving MOLP problems is VIG by P. Kor-
honen, Finland (see Subsection 5.10.3 in Part II). Let us also mention a pack-
age of subroutines, called ADBASE, by R.E. Steuer, USA (see Steuer (1986,
pp. 254-267)). ADBASE contains, for example, tools for generating Pareto op-
timal extreme points. These are examples of the generally available products
for linear problems.
The situation is worse for continuous nonlinear problems. Most of the soft-
ware implementing the extensive amount of existing multiobjective optimiza-
tion methods is neither commonly available nor widely known.
One explanation sometimes mentioned is the lack of a free and reliable
nonlinear solver that could be integrated and distributed with the software.
Most software products have been implemented for academic testing purposes
and have not been updated along with the development of computer facilities.
Consequently, their existence is not advertised. Simply designing and realizing
a functional user interface is demanding. One must assume that the need has
not been large enough to motivate the work, or the need has not been realized
because good solution tools have not been available.
However, some implementations were mentioned in connection with the
method descriptions in Part II. No detailed information was given, since the
implementations are under continuous development and the details may be
out-of-date at any moment.
Software comparisons reported in the literature mainly concern programs
for multiattribute decision analysis. We simply mention that seven microcom-
puter implementations are presented and compared in Colson and de Bruyn
(1987). Five of them are intended for multiattribute decision analysis. An im-
plementation of STEM is also reported. In addition, the main features and
requirements of eight microcomputer software packages are introduced in Lotfi
and Teich (1989). One of them is VIG and the other seven are for discrete
alternatives.
There exist several software packages for general single objective optimiza-
tion problems that also contain some possibilities for noninteractive multiobjec-
tive optimization. Let us briefly indicate some of them. The implementation of
the MPB method (see Section 2.2 in Part II), called MPBNGC by M. Mäkelä,
Finland, is among them.
CAMOS (Computer Aided Multicriterion Optimization System) has been
developed to treat especially nonlinear computer aided optimal design problems
(see Osyczka (1989b, 1992) and for an earlier version Osyczka (1984)). CAMOS
produces Pareto optimal solutions with different generating methods.
The methods for identifying (weakly) Pareto optimal solutions are the
weighting method with or without normalizing the objective function, the
ε-constraint method, the method of the global criterion and the method of
the weighted Tchebycheff metric. Problem (2.1.4) of Part II is also used. For more
details, see, for example, Osyczka (1984, 1992). Different underlying single ob-
jective optimization algorithms may be used.
The functioning of CAMOS is illustrated by two practical problems in Osy-
czka (1992, pp. 93-125). They are the optimal design of multiple clutch brakes
and the optimal counterweight balancing of robot arms.
NOA, a collection of subroutines for minimizing nondifferentiable functions
subject to linear and nonlinear (nondifferentiable) constraints, is described in
Kiwiel and Stachurski (1989). NOA is applicable to multiobjective optimization
problems since the single objective function to be minimized is assumed to be a
maximum of several functions. Thus, for example, some achievement functions
can be optimized.
Let us finally mention the optimization toolbox of the MATLAB system
including the weighting method, the ε-constraint method and a modification
of goal programming. Naturally, other multiobjective optimization algorithms
may be coded within the MATLAB environment, taking advantage of the pow-
erful single objective solvers and graphics available.
3. GRAPHICAL ILLUSTRATION
3.1. Introduction
too much information should be allowed to be lost and, on the other hand, no
extra unintentional information should be included in the presentation.
in the Pareto optimal set. If the ranges are known, they give additional infor-
mation about the possibilities and limitations of the objective functions. Note
that each objective function can have a scale of its own. Examples are suggested
in Torn (1983) of how to display the scales of the objective functions.
Value paths are a recommendable method of illustration because they are
easy to interpret. For example, it is easy to distinguish non-Pareto optimal
alternatives if they are included. Further, even a large number of objective
functions or alternatives causes no problems. Value paths are used, for example,
in WWW-NIMBUS (see Subsection 5.12.8 of Part II) and the visual interactive
sensitivity analysis system VISA, see Belton and Vickers (1990).
[Figure: an example of value paths for a set of alternatives with respect to three objective functions z1, z2 and z3.]
In value path illustrations the roles of the lines and the bars can also be
interchanged. Then bars denote alternatives and lines represent objective func-
tions. In this case, possible different scales of the objective functions have to
be interpreted differently (see, e.g., Hwang and Masud (1979, p. 109)). This
reversal of roles has been utilized, for instance, in the first implementations of
the reference direction approach (described in Section 5.10 of Part II), and its
counterpart for discrete problems, called VIMDA, see Korhonen (1986, 1991a).
The idea in VIMDA is that when the user horizontally moves the cursor to a
bar representing an alternative, the corresponding numerical objective values
are presented.
[Figure: a bar chart illustration of the same alternatives for the objective functions z1, z2 and z3.]
Naturally the roles of the alternatives and the objective functions can be
interchanged in bar charts as well as in value paths. This, of course, means
that the order of the bars is altered. This is possible, for example, in WWW-
NIMBUS.
An alternative to using separate ranges for the objective functions is to
provide bar charts and value paths using both absolute and relative scales.
This is advisable in particular if the ranges of the objective functions vary
widely. This option is also available in WWW-NIMBUS.
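A minimal sketch of drawing value paths on a relative scale (each objective normalized by its own range, as discussed above) could look as follows; the objective values are invented and the plotting details are only one possible realization.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical objective values: rows are alternatives, columns are z1, z2, z3.
    alternatives = np.array([[45.0, 30.0, 62.0],
                             [55.0, 22.0, 48.0],
                             [38.0, 41.0, 55.0]])
    lo, hi = alternatives.min(axis=0), alternatives.max(axis=0)
    relative = (alternatives - lo) / (hi - lo)   # each objective on its own relative scale

    for k, path in enumerate(relative, start=1):
        plt.plot(['z1', 'z2', 'z3'], path, marker='o', label='alternative %d' % k)
    plt.ylabel('relative objective value')
    plt.legend()
    plt.show()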
One can say that in the star coordinate system an alternative is better the
smaller the area of the star. If the order of the objective functions is altered,
the shape of the star changes. This can be considered a weakness of the system,
as stated in Tan and Fraser (1998). Tan and Fraser also suggest a modified star
graph to include the weight information of the decision maker (if available) in
the same display with the objective values.
objective value at the centre. In this case the larger the area the better. If this
is the setting, the ideal objective values can be replaced with, for example,
average objective values. This means that the figures can extend beyond the
circumference, stressing values better than the average.
[Figure: three alternatives plotted pairwise with respect to the objective functions z1, z2 and z3 (circles: alternative 1, crosses: alternative 2, filled squares: alternative 3).]
curves with the aid of Fourier series. In this way all the vectors can be plotted
on the same coordinate system for comparison.
Other proposals for the graphical illustration of alternatives are given, for
example, in Vetschera (1992). They are based on indifference regions and linear
underlying value functions.
Let us finally mention a projection idea called GAIA (Geometrical Anal-
ysis for Interactive Aid). It is a part of the discrete multiattribute decision
analysis method PROMETHEE and it is described, for example, in Brans and
Mareschal (1990) and Mareschal and Brans (1988). The objective functions are
first modified to include some preference information of the decision maker and
then normalized. These objective functions have some benefits when compared
to the original ones. Namely, they are in the same scales, big differences in the
objective values are emphasized and small differences are lessened.
Principal component analysis is used in order to find a plane (two dimen-
sions) in which the new objective functions can be projected. The idea is to lose
as little information and variation as possible. In other words, the two largest
principal components are selected to form the projection plane. The weakness
here is that if the objective functions have nonlinear relations, principal com-
ponent analysis cannot find them.
If selecting the plane is managed well enough, the relations between the new
objective functions and the alternative solutions can be seen in their projec-
tions. Objective functions are depicted as vectors and alternatives as points on
the plane. For example, if two objective functions are highly conflicting, their
vectors go in opposite directions, whereas independent objective functions are
orthogonal and similar objective functions are oriented approximately in the
same direction. From the location of the alternatives one can see how well they
perform with respect to each objective function, that is, how near or far they
are from each other.
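The projection step can be sketched with plain principal component analysis as follows. The sketch only mimics the geometric idea (alternatives as points and objective functions as loading vectors on the plane of the two largest principal components) and omits the preference modification that GAIA performs first; the normalization and the data are illustrative assumptions.

    import numpy as np

    # Hypothetical objective values: rows are alternatives, columns are objectives.
    F = np.array([[45.0, 30.0, 62.0],
                  [55.0, 22.0, 48.0],
                  [38.0, 41.0, 55.0],
                  [50.0, 35.0, 40.0]])
    Z = (F - F.mean(axis=0)) / F.std(axis=0)     # bring the objectives to the same scale

    # Principal component analysis via the singular value decomposition.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    plane = Vt[:2]                   # the two largest principal components
    points = Z @ plane.T             # alternatives as points on the projection plane
    loadings = plane.T               # objective functions as vectors on the plane

    # Highly conflicting objectives have loading vectors pointing in roughly opposite
    # directions; nearly orthogonal vectors indicate independent objectives.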
It seems that this GAIA plane ideology is a rather clear method of illustra-
tion. However, it has two main limitations. Firstly, the plane contains only a
part of the information available. Secondly, the conflict characteristics of the
objective functions are not absolute but depend on the alternatives considered.
The problem of how we can determine a priori whether the graphical formats
used will aid rather than hinder decision making is examined in Jarvenpaa
(1989) by comparative studies. The conclusion is that knowledge concerning
the relationship between the presentation format and the decision strategy can
facilitate the selection of the presentation format. Special attention is given to
the benefits of bar charts and grouped bar charts.
Similar matters are handled in connection with visual interactive simulation
in Bell and O'Keefe (1995). For example, it is concluded that the use of visual
displays generates solutions that are demonstrably better than those that make
limited use of such displays. This means that different levels of usage of spe-
cific displays have an impact on the quality of the solutions generated. In the
experiments, bar charts were the most favoured visual displays.
Several existing studies on the applicability of graphs versus tables are anal-
ysed in Vessey (1991). According to the theory developed, it is concluded that
tables perform better in information acquisition tasks in both time and accu-
racy of performance. Thus tables are in order when specific data values must
be extracted, since they represent discrete data values. If information must be
viewed at a glance, evaluated or relationships in the data are of interest, graphs
are recommendable. Thus, graphs and tables emphasize different characteristics
of the same data.
Using colours in illustrations has advantages and disadvantages. Above all,
the colours must be easy to discriminate. Another important issue is that some
colours may have specific connotations to the user. Such colours should be
avoided as far as possible.
An experimental evaluation of graphical and colour-enhanced information
presentation is given in Benbasat and Dexter (1985). Colours improve the read-
ability and understandability of both symbolic and graphical displays. Colours
make it easier for the decision maker to visually associate information belong-
ing to the same context or unit since such data are coded in the same colour.
Encouraging results with multi-colour reports are mentioned by Benbasat and
Dexter (1985), who also stress that tabular representation is the best when
a simple retrieval of data is important and a graphical representation is the
best when relationships among the data have to be shown. Graphs are visu-
ally appealing but sometimes tables are easier to read since they provide exact
values.
A recommended way of presenting information to the decision maker is to
offer the same data in different forms. In this way, the decision maker can choose
the most illustrative and informative representations. The illustrations may also
supplement each other. The decision maker can change her or his attention
from one figure to another and possibly skip undesirable alternatives before
making the final selection. A simple tabular format may be one of the figures.
Corresponding ideas are suggested, for instance, in Silverman et al. (1985) and
Steuer (1986, pp. 520-522).
An interesting alternative to graphics and numerical values is suggested
in Matos and Borges (1997). The idea is to illustrate alternatives in natural
language phrases. In this approach, fuzzy membership functions are formed
for every objective function defining fuzzy bounds, for example, 'very little',
'little', 'medium', 'very' and 'most' values. An example of alternatives in a
washing machine selection problem could be 'most cheap, medium power saver
and little fast.' The decision maker is asked for some descriptive information
as the basis of the membership functions before the solution process. This is a
promising way of illustrating data and it deserves further development.
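A simple way to attach such linguistic labels to objective values is sketched below with triangular membership functions. The labels, bounds and membership shapes are illustrative assumptions of ours and do not follow the construction of Matos and Borges (1997).

    import numpy as np

    LABELS = ('very little', 'little', 'medium', 'very', 'most')

    def triangular(x, a, b, c):
        # Triangular membership function with support [a, c] and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def linguistic_label(value, lo, hi, labels=LABELS):
        # Return the label whose membership function gives the highest degree
        # for an objective value known to lie between lo and hi.
        peaks = np.linspace(lo, hi, len(labels))
        step = peaks[1] - peaks[0]
        if value <= peaks[0]:
            return labels[0]
        if value >= peaks[-1]:
            return labels[-1]
        degrees = [triangular(value, p - step, p, p + step) for p in peaks]
        return labels[int(np.argmax(degrees))]

    # linguistic_label(300.0, lo=200.0, hi=800.0) gives 'little' for the price
    # of a hypothetical washing machine.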
Finally, one must concede that where a great number of alternatives co-
exists, the decision maker may get confused no matter how the alternatives
are illustrated. In this case, statistical tools, for example, principal component
analysis, may be useful.
4. FUTURE DIRECTIONS
In this chapter, we outline some challenging topics for the future devel-
opment of multiobjective optimization, mainly from a mathematical point of
view. In addition, we give examples of promising ideas for research where the
first steps have been taken but further work is needed. All the issues mentioned
and many others merit further research and examination.
Multiobjective optimization is important, and improved solution methods
can bring about change in many areas and aspects of life. Even though mul-
tiobjective optimization methods have been applied to solving a variety of
problems in many areas of life, such as design problems in engineering, produc-
tion problems in economics, and environmental control problems in ecology,
there continue to exist many new problem types which could benefit highly
from multiobjective optimization. Particularly challenging in this respect are
real-life problems. There is clearly a need for more contributions reporting on
practical applications (making good use of more developed methods).
One interesting type of problem is so-called multidisciplinary re-engineer-
ing. It means that old engineering problems, for example, in optimal design,
whose solutions have been revised one feature at a time over the course of
years, are solved again from the very beginning, taking various aspirations and
aspects into consideration at the same time. Obviously this requires tools of
multiobjective optimization.
An important challenge for the developers of interactive methods and ap-
proaches is how to approach the decision maker. For example, a real experi-
ment with problem-related decision makers in Hobbs et al. (1992) shows that
the decision makers were sceptical of the value of multiobjective optimization
methods and they in some cases preferred unaided decision making. This means
that the methods should not only be user-friendly but also of real help to deci-
sion makers. Combining knowledge from the behavioural sciences with method
development could usefully serve in this direction.
The methodology of multiobjective optimization must be improved. This
means, for example, creating computationally efficient ways of generating trade-
off information for more general problem types under less restricting assump-
tions than those employed thus far. Another aspect is the structure of the
methods. On the one hand, providing the decision maker with the opportunity
for free search is important. On the other hand, guidance and support must
In general, one can say that the theory and the methods of multiobjective
optimization have been extensively developed during the past couple of decades.
Software implementations are considerably less in evidence. There is also a
lack of documentation in solving real-life multiobjective optimization problems
(using more developed methods). The reasons for this may be ignorance of
the full range of possibilities contained in existing methods as well as the lack
of suitable methods. For our part, we have filled a gap in the literature by
collecting several nonlinear multiobjective optimization methods between the
same covers.
In the development of methods the obvious conclusion is that it is important
to continue in the direction of user-friendliness. Methods must be even better
able to correspond to the characteristics of the decision maker. If the aspirations
of the decision maker change during the solution process, the algorithm must
be able to cope with this situation. Computational tests have confirmed the
idea that decision makers want to feel in control of the solution process, and
Agrell P.J., On Redundancy in Multi Criteria Decision Making, European Journal of Oper-
ational Research 98, No.3 (1997), 571-586.
Agrell P.J., Lence B.J., Stam A., An Interactive Multicriteria Decision Model for Mul-
tipurpose Reservoir Management: the Shellmouth Reservoir, Journal of Multi-Criteria
Decision Analysis 1, No.2 (1998), 61-86.
Aksoy Y., Interactive Multiple Objective Decision Making: A Bibliography (1965-1988),
Management Research News 13, No.2 (1990), 1-8.
Aksoy Y., Butler T.W., Minor E.D. III, Comparative Studies in Interactive Multiple Objective
Mathematical Programming, European Journal of Operational Research 89, No.2 (1996),
408-422.
Al-alvani J.E., Hobbs B.F., Malakooti B., An Interactive Integrated Multiobjective Opti-
mization Approach for Quasiconcave/Quasiconvex Utility Functions, Multiple Criteria
Decision Making: Proceedings of the Ninth International Conference: Theory and Appli-
cations in Business, Industry, and Government, Edited by A. Goicoechea, L. Duckstein,
S. Zionts, Springer-Verlag, New York, 1992, pp. 45-60.
Alekseichik M.I., Naumov G.E., Existence of Efficient Points in Infinite-Dimensional Vector
Optimization Problem, Automation and Remote Control 42, No.5, Part 2 (1981), 666-
670.
Angehrn A.A., Supporting Multicriteria Decision Making: New Perspectives & New Systems,
Working Paper, INSEAD, Fontainebleau (1990a).
Angehrn A.A., "Triple C" A Visual Interactive MCDSS, Working Paper, INSEAD, Fontain-
ebleau (1990b).
Antunes C.H., Tsoukias A., Against Fashion: A Travel Survival Kit in "Modern" MCDA,
Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997,
pp. 378-389.
Antunes C.H., Alves M.J., Silva A.L., Climaco J.N., An Integrated MOLP Method Base
Package - A Guided Tour of TOMMIX, Computers & Operations Research 19, No.7
(1992a), 609-625.
Antunes C.H., Melo M.P., Climaco J.N., On the Integration of an Interactive MOLP Pro-
cedure Base and Expert System Technique, European Journal of Operational Research
61, No. 1-2 (1992b), 135-144.
Arbel A., Generating Interior Search Directions for Multiobjective Linear Programming Us-
ing Approximate Gradients and Efficient Anchoring Points, Optimization 28, No.2
(1993), 149-164.
Arbel A., An Interior Multiobjective Primal-Dual Linear Programming Algorithm Using
Approximated Gradients and Sequential Generation of Anchor Points, Optimization 30,
No.2 (1994a), 137-150.
Arbel A., Anchoring Points and Cones of Opportunities in Interior Multiobjective Linear
Programming, Journal of the Operational Research Society 45, No.1 (1994b), 83-96.
Arbel A., Using Efficient Anchoring Points for Generating Search Directions in Interior
Multiobjective Linear Programming, Journal of the Operational Research Society 45,
No.3 (1994c), 330-344.
Arbel A., An Interior Multiple Objective Primal-Dual Linear Programming Algorithm Us-
ing Efficient Anchoring Points, Journal of the Operational Research Society 46, No.9
(1995), 1121-1132.
Arbel A., An Interior Multiobjective Primal-Dual Linear Programming Algorithm Based
on Approximated Gradients and Efficient Anchoring Points, Computers & Operations
Research 24, No.4 (1997), 353-365.
Arbel A., Korhonen P., Using Aspiration Levels in an Interactive Interior Multiobjective
Linear Programming Algorithm, European Journal of Operational Research 89, No.1
(1996a), 193-201.
Arbel A., Korhonen P., Using Aspiration Levels in an Interior Primal-Dual Multiobjective
Linear Programming Algorithm, Journal of Multi-Criteria Decision Analysis 5, No. 1
(1996b), 61-71.
Arbel A., Korhonen P., Generating Interior Search Directions in Multiple Objective Lin-
ear Programming Problems Using Aspiration Levels, Multicriteria Analysis, Edited by
J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997a, pp. 146-155.
Arbel A., Korhonen P., An Interior Multiobjective Linear Programming Algorithm Using
Aspirations, Multiple Criteria Decision Making: Proceedings of the Twelfth International
Conference, Hagen (Germany), Edited by G. Fandel, T. Gal, Lecture Notes in Economics
and Mathematical Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997b, pp. 245-254.
Arbel A., Oren S.S., A Modification of Karmarkar's Algorithm to Multiple Objective Linear
Programming Problems, Multiple Criteria Decision Making - Proceedings of the Tenth
International Conference: Expand and Enrich the Domains of Thinking and Application,
Edited by G.H. Tzeng, H.F. Wand, U.P. Wen, P.L. Yu, Springer-Verlag, New York, 1994,
pp. 37-46.
Arbel A., Oren S.S., Using Approximate Gradients in Developing an Interactive Interior
Primal-Dual Multiobjective Linear Programming Algorithm, European Journal of Oper-
ational Research 89, No.1 (1996), 202-211.
Armann R., Solving Multiobjective Programming Problems by Discrete Representation, Op-
timization 20, No.4 (1989), 483-492.
Athan T.W., Papalambros P.Y., Multicriteria Optimization of ABS Control Algorithms Us-
ing a Quasi-Monte Carlo Method, Multiple Criteria Decision Making: Proceedings of
the Twelfth International Conference, Hagen (Germany), Edited by G. Fandel, T. Gal,
Lecture Notes in Economics and Mathematical Systems 448, Springer-Verlag, Berlin,
Heidelberg, 1997, pp. 457-466.
Aubin J.-P., Näslund B., An Exterior Branching Algorithm, Working Paper 72-42, European
Institute for Advanced Studies in Management, Brussels (1972).
Baba N., Takeda H., Miyake T., Interactive Multi-Objective Programming Technique Us-
ing Random Optimization Method, International Journal of Systems Science 19, No.1
(1988), 151-159.
Balachandran M., Gero J.S., The Noninferior Set Estimation (NISE) Method for Three
Objective Problem, Engineering Optimization 9, No.2 (1985), 77-88.
Balbas A., Guerra P.J., Sensitivity Analysis for Convex Multiobjective Programming in Ab-
stract Spaces, Journal of Mathematical Analysis and Applications 202, No.2 (1996),
645-658.
Ballestero E., Utility Functions: A Compromise Programming Approach to Specification and
Optimization, Journal of Multi-Criteria Decision Analysis 6, No.1 (1997a), 11-16.
Ballestero E., Selecting the CP Metric: A Risk Aversion Approach, European Journal of
Operational Research 97, No.3 (1997b), 593-596.
Ballestero E., Romero C., A Theorem Connecting Utility Function Optimization and Com-
promise Programming, Operations Research Letters 10, No.7 (1991), 421-427.
Bard J.F., A Multiobjective Methodology for Selecting Subsystem Automation Options, Man-
agement Science 32, No. 12 (1986), 1628-1641.
Bardossy A., Bogardi I., Duckstein L., Composite Programming as an Extension of Compro-
mise Programming, Mathematics of Multi Objective Optimization, Edited by P. Serafini,
Springer-Verlag, Wien, New York, 1985, pp. 375-408.
Batishchev D.I., Anuchin V.F., Shaposhnikov D.E., The Use of the Qualitative Information
on the Importance of Particular Criteria for the Computation of Weighting Coefficients,
Multiobjective Problems of Mathematical Programming, Edited by A. Lewandowski, V.
Volkovich, Lecture Notes in Economics and Mathematical Systems 351, Springer-Verlag,
1991, pp. 2-7.
Bazaraa M.S., Sherali H.D., Shetty C.M., Nonlinear Programming: Theory and Algorithms,
Second Edition, John Wiley & Sons, Inc., New York, 1993.
Belkeziz K., Piriot M., Proper Efficiency in Nonconvex Vector-Maximization-Problems, Eu-
ropean Journal of Operational Research 54, No.1 (1991), 74-80.
Bell D.E., Double-Exponential Utility Functions, Mathematics of Operations Research 11,
No.2 (1986), 351-361.
Bell D.E., Risk, Return, and Utility, Management Science 41, No.1 (1995), 23-30.
Bell P.C., O'Keefe R.M., An Experimental Investigation into the Efficacy of Visual Interac-
tive Simulation, Management Science 41, No.6 (1995), 1018-1038.
Belton V., Vickers S., Use of a Simple Multi-Attribute Value Function Incorporating Vi-
sual Interactive Sensitivity Analysis for Multiple Criteria Decision Making, Readings in
Multiple Criteria Decision Aid, Edited by C.A. Bana e Costa, Springer-Verlag, Berlin,
Heidelberg, 1990, pp. 319-334.
Benayoun R., de Montgolfier J., Tergny J., Laritchev O., Linear Programming with Multiple
Objective Functions: Step Method (STEM), Mathematical Programming 1, No.3 (1971),
366-375.
Benbasat I., Dexter A.S., An Experimental Evaluation of Graphical and Color-Enhanced
Information Presentation, Management Science 31, No. 11 (1985), 1348-1364.
Benito-Alonso M.A., Devaux P., Location and Size of Day Nurseries - a Multiple Goal
Approach, European Journal of Operational Research 6, No.2 (1981), 195-198.
Benoist J., Connectedness of the Efficient Set for Strictly Quasiconcave Sets, Journal of
Optimization Theory and Applications 96, No.3 (1998), 627-654.
Benson H.P., Existence of Efficient Solutions for Vector Maximization Problems, Journal of
Optimization Theory and Applications 26, No.4 (1978), 569-580.
Benson H.P., An Improved Definition of Proper Efficiency for Vector Maximization with
Respect to Cones, Journal of Mathematical Analysis and Applications 71, No.1 (1979a),
232-241.
Benson H.P., Vector Maximization with Two Objective Functions, Journal of Optimization
Theory and Applications 28, No.2 (1979b), 253-257.
Benson H.P., On a Domination Property for Vector Maximization with Respect to Cones,
Journal of Optimization Theory and Applications 39, No.1 (1983), 125-132; Errata
Corrige, Journal of Optimization Theory and Applications 43, No.3 (1984), 477-479.
Benson H.P., Complete Efficiency and the Initialization of Algorithms for Multiple Objective
Programming, Operations Research Letters 10, No.8 (1991), 481-487.
Benson H.P., Aksoy Y., Using Efficient Feasible Directions in Interactive Multiple Objective
Linear Programming, Operations Research Letters 10, No.4 (1991), 203-209.
Benson H.P., Morin T.L., The Vector Maximization Problem: Proper Efficiency and Stability,
SIAM Journal on Applied Mathematics 32, No.1 (1977), 64-72.
Benson H.P., Sayin S., Optimization over the Efficient Set: Four Special Cases, Journal of
Optimization Theory and Applications 80, No.1 (1994), 3-18.
Benson H.P., Sayin S., Towards Finding Global Representations of the Efficient Set in Mul-
tiple Objective Mathematical Programming, Naval Research Logistics 44, No.1 (1997),
47-67.
Ben-Tal A., Characterization of Pareto and Lexicographic Optimal Solutions, Multiple Cri-
teria Decision Making Theory and Application, Edited by G. Fandel, T. Gal, Lecture
Notes in Economics and Mathematical Systems 177, Springer-Verlag, Berlin, Heidelberg,
1980, pp. 1-11.
Berbel J., Zamora R., Application of MOP and GP to Wildlife Management (Deer). A
Case in the Mediterranean Ecosystem, Multi-Objective Programming and Goal Pro-
gramming: Theories and Applications, Edited by M. Tamiz, Lecture Notes in Economics
and Mathematical Systems 432, Springer-Verlag, Berlin, Heidelberg, 1996, pp. 300-308.
Bestle D., Eberhard P., Dynamic System Design via Multicriteria Optimization, Multiple
Criteria Decision Making: Proceedings of the Twelfth International Conference, Hagen
(Germany), Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Mathematical
Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 467-478.
Bhatia D., Aggarwal S., Optimality and Duality for Multiobjective Nonsmooth Programming,
European Journal of Operational Research 57, No.3 (1992), 360-367.
Bhatia D., Datta N., Necessary Conditions and Subgradient Duality for Non-Differentiable
and Non-Convex Multi-Objective Programming Problem, Cahiers du Centre d'Etudes de
Recherche Operationelle 27, No. 1-2 (1985), 131-139.
Bischoff E., Two Empirical Tests with Approaches to Multiple-Criteria Decision Making, Plu-
ral Rationality and Interactive Decision Processes, Edited by M. Grauer, M. Thompson,
A.P. Wierzbicki, Lecture Notes in Economics and Mathematical Systems 248, Springer-
Verlag, Berlin, Heidelberg, 1985, pp. 344-347.
Bischoff E., Multi-Objective Decision Analysis - The Right Objectives?, Large-Scale Mod-
elling and Interactive Decision Analysis, Edited by G. Fandel, M. Grauer, A. Kurzhanski,
A.P. Wierzbicki, Lecture Notes in Economics and Mathematical Systems 273, Springer-
Verlag, 1986, pp. 155-160.
Bitran G.R., Duality for Nonlinear Multiple-Criteria Optimization Problems, Journal of
Optimization Theory and Applications 35, No.3 (1981), 367-401.
Bogetoft P., Hallefjord A., Kok M., On the Convergence of Reference Point Methods in Mul-
tiobjective Programming, European Journal of Operational Research 34, No.1 (1988),
56-68.
Borwein J.M., Proper Efficient Points for Maximizations with Respect to Cones, SIAM Jour-
nal on Control and Optimization 15, No.1 (1977), 57-63.
Borwein J.M., On the Existence of Pareto Efficient Points, Mathematics of Operations Re-
search 8, No.1 (1983), 64-73.
Borwein J.M., Zhuang D.M., Super Efficiency in Convex Vector Optimization, ZOR - Meth-
ods and Models of Operations Research 35, No.3 (1991), 175-184.
Borwein J.M., Zhuang D.M., Super Efficiency in Vector Optimization, Transactions of the
American Mathematical Society 338, No.1 (1993), 105-122.
Bowman V.J. Jr., On the Relationship of the Tchebycheff Norm and the Efficient Frontier of
Multiple-Criteria Objectives, Multiple Criteria Decision Making, Edited by H. Thiriez,
S. Zionts, Lecture Notes in Economics and Mathematical Systems 130, Springer-Verlag,
Berlin, Heidelberg, 1976, pp. 76-85.
Brans J.-P., The Space of Freedom of the Decision Maker - Modelling the Human Brain,
European Journal of Operational Research 92, No.3 (1996), 593-602.
Brans J.P., Mareschal B., The PROMETHEE Methods for MCDM, the PROMCALC, GAIA
and BANKADVISER Software, Readings in Multiple Criteria Decision Aid, Edited by
C.A. Bana e Costa, Springer-Verlag, Berlin, Heidelberg, 1990, pp. 216-252.
Brauer D.C., Naadimuthu G., A Goal Programming Model for Aggregate Inventory and
Distribution Planning, Mathematical and Computer Modelling 16, No.3 (1992), 81-90.
Brockhoff K., Experimental Test of MCDM Algorithms in a Modular Approach, European
Journal of Operational Research 22, No.2 (1985), 159-166.
Brosowski B., da Silva A.R., Simple Tests for Multi-Criteria Optimality, OR Spektrum 16,
No.4 (1994), 243-247.
Chen T., Wang B.-I., An Interactive Method for Multiobjective Decision Making, Large Scale
Systems: Theory and Applications 1983, Edited by A. Straszak, Pergamon Press, 1984,
pp. 277-282.
Chew K.L., Choo E.U., Pseudolinearity and Efficiency, Mathematical Programming 28, No.
2 (1984), 226-239.
Choo E.U., Atkins D.R., Proper Efficiency in Nonconvex Multicriteria Programming, Math-
ematics of Operations Research 8, No.3 (1983), 467-470.
Chou J.H., Hsia W.-S., Lee T.-Y., On Multiple Objective Programming Problems with Set
Functions, Journal of Mathematical Analysis and Applications 105, No.2 (1985), 383-
394.
Clarke F.H., Optimization and Nonsmooth Analysis, John Wiley & Sons, Inc., 1983.
Cleveland W.S., The Elements of Graphing Data, Revised Edition, AT&T Bell Laboratories,
Murray Hill, New Jersey, 1994.
Climaco J.N., Antunes C.H., Flexible Method Bases and Man-Machine Interfaces as Key
Features in Interactive MOLP Approaches, Multiple Criteria Decision Support, Edited
by P. Korhonen, A. Lewandowski, J. Wallenius, Lecture Notes in Economics and Math-
ematical Systems 356, Springer-Verlag, 1991, pp. 207-216.
Climaco J.N., Antunes C.H., Man-Machine Interfacing in MCDA, Multiple Criteria Decision
Making - Proceedings of the Tenth International Conference: Expand and Enrich the
Domains of Thinking and Application, Edited by G.H. Tzeng, H.F. Wand, U.P. Wen,
P.L. Yu, Springer-Verlag, New York, 1994, pp. 239-253.
Climaco J.N., Antunes C.H., Alves M.J., From TRIMAP to SOMMIX - Building Effective
Interactive MOLP Computational Tools, Multiple Criteria Decision Making: Proceedings
of the Twelfth International Conference, Hagen (Germany), Edited by G. Fandel, T. Gal,
Lecture Notes in Economics and Mathematical Systems 448, Springer-Verlag, Berlin,
Heidelberg, 1997, pp. 285-296.
Clinton R.J., Troutt M.D., Toward Optimal Scaling in the Method of Abstract Forces for
Interactive Multiple Criteria Optimization, Mathematical and Computer Modelling 11
(1988), 589-594.
Cohon J.L., Multiobjective Programming and Planning, Academic Press, Inc., New York,
1978.
Cohon J.L., Multicriteria Programming: Brief Review and Application, Design Optimization,
Edited by J.S. Gero, Academic Press, Inc., 1985, pp. 163-191.
Colson G., de Bruyn C., Multiple Criteria Supports on Micro-Computers, Belgian Journal
of Operations Research, Statistics and Computer Science 27, No.1 (1987), 29-58.
Corley H.W., A New Scalar Equivalence for Pareto Optimization, IEEE Transactions on
Automatic Control 25, No.4 (1980), 829-830.
Corner J.L., Buchanan J.T., Experimental Consideration of Preference in Decision Making
under Certainty, Journal of Multi-Criteria Decision Analysis 4, No.2 (1995), 107-121.
Corner J.L., Buchanan J.T., Capturing Decision Maker Preference: Experimental Compar-
ison of Decision Analysis and MCDM Techniques, European Journal of Operational
Research 98, No.1 (1997), 85-97.
Corner J.L., Corner P.D., Characteristics of Decisions in Decision Analysis Practice, Journal
of the Operational Research Society 46, No.3 (1995), 304-314.
Costa J.P., Climaco J.N., A Multiple Reference Point Parallel Approach in MCDM, Multiple
Criteria Decision Making - Proceedings of the Tenth International Conference: Expand
and Enrich the Domains of Thinking and Application, Edited by G.H. Tzeng, H.F. Wand,
U.P. Wen, P.L. Yu, Springer-Verlag, New York, 1994, pp. 255-263.
Crama Y., Analysis of STEM-Like Solutions to Multi-Objective Programming Problems,
Multi-Objective Decision Making, Edited by S. French, R. Hartley, L.C. Thomas, D.J.
White, Academic Press, 1983, pp. 208-213.
Craven B.D., On Sensitivity Analysis for Multicriteria Optimization, Optimization 19, No.
4 (1988), 513-523.
Dyer J.S., A Time-Sharing Computer Program for the Solution of the Multiple Criteria
Problem, Management Science 19, No. 12 (1973a), 1379-1383.
Dyer J.S., An Empirical Investigation of a Man-Machine Interactive Approach to the So-
lution of the Multiple Criteria Problem, Multiple Criteria Decision Making, Edited by
J.L. Cochrane, M. Zeleny, University of South Carolina Press, Columbia, South Carolina,
1973b, pp. 202-216.
Dyer J.S., The Effects of Errors in the Estimation of the Gradient on the Frank-Wolfe
Algorithm, with Implications for Interactive Programming, Operations Research 22, No.
1 (1974), 160-174.
Dyer J.S., Sarin R.K., Multicriteria Decision Making, Mathematical Programming for Op-
erations Researchers and Computer Scientists, Edited by A.G. Holzman, Marcel Dekker,
Inc., 1981, pp. 123-148.
Dyer J.S., Fishburn P.C., Steuer R.E., Wallenius J., Zionts S., Multiple Criteria Decision
Making, Multiattribute Utility Theory: The Next Ten Years, Management Science 38,
No.5 (1992), 645-654.
Ecker J.G., Kouada I.A., Finding Efficient Points for Linear Multiple Objective Programs,
Mathematical Programming 8, No.3 (1975), 375-377.
Edgeworth F.Y., Mathematical Psychics, University Microfilms International (Out-of-Print
Books on Demand), 1987 (the original edition in 1881).
Egudo R.R., A Duality Theorem for a Fractional Multiobjective Problem with Square Root
Terms, Recent Developments in Mathematical Programming, Edited by S. Kumar, Gor-
don and Breach Science Publishers, 1991, pp. 101-113.
Ehrgott M., Klamroth K., Connectedness of Efficient Solutions in Multiple Criteria Com-
binatorial Optimization, European Journal of Operational Research 97, No.1 (1997),
159-166.
Eiduks J., Control of Complicated Systems in Case of Vectorial Indicator of Quality, Fourth
Formator Symposium on Mathematical Methods for the Analysis of Large-Scale Systems,
Edited by J. Bebes, L. Bakule, Academia, Prague, 1983, pp. 165-180.
Eiselt H.A., Pederzoli G., Sandblom C.-L., Continuous Optimization Models, Operations
Research: Theory, Techniques, Applications, Walter de Gruyter & Co., Berlin, New York,
1987.
El Abdouni B., Thibault L., Lagrange Multipliers for Pareto Nonsmooth Programming Prob-
lems in Banach Spaces, Optimization 26, No. 3-4 (1992), 277-285.
Eom H.B., The Current State of Multiple Criteria Decision Support Systems, Human Sys-
tems Management 8 (1989), 113-119.
Eschenauer H.A., Multicriteria Optimization Procedures in Application on Structural Me-
chanics Systems, Recent Advances and Historical Developments of Vector Optimization,
Edited by J. Jahn, W. Krabs, Lecture Notes in Economics and Mathematical Systems
294, Springer-Verlag, Berlin, Heidelberg, 1987, pp. 345-376.
Eschenauer H.A., Schafer E., Bernau H., Application of Interactive Vector Optimization
Methods with Regard to Problems in Structural Mechanics, Discretization Methods and
Structural Optimization - Procedures and Applications, Edited by H.A. Eschenauer, G.
Thierauf, Lecture Notes in Engineering 42, Springer-Verlag, Berlin, Heidelberg, 1989,
pp. 95-101.
Eschenauer H., Koski J., Osyczka A. (Eds.), Multicriteria Design Optimization Procedures
and Applications, Springer-Verlag, Berlin, Heidelberg, 1990a.
Eschenauer H.A., Osyczka A., Schafer E., Interactive Multicriteria Optimization in Design
Process, Multicriteria Design Optimization Procedures and Applications, Edited by H.
Eschenauer, J. Koski, A. Osyczka, Springer-Verlag, Berlin, Heidelberg, 1990b, pp. 71-
114.
Ester J., About the Stability of the Solution of Multiple Objective Decision Making Problems
with Generalized Dominating Sets, Systems Analysis Modelling Simulation 1, No.4
(1984), 287-291.
Ester J., Holzmüller R., About Some Methods and Applications in Multicriteria Decision
Making, Systems Analysis Modelling Simulation 3, No.5 (1986), 425-438.
Ester J., Tröltzsch F., On Generalized Notions of Efficiency in Multicriteria Decision Mak-
ing, Systems Analysis Modelling Simulation 3, No.2 (1986), 153-161.
Evans G.W., An Overview of Techniques for Solving Multiobjective Mathematical Programs,
Management Science 30, No. 11 (1984), 1268-1282.
Federer H., Geometric Measure Theory, Springer-Verlag, Berlin, Heidelberg, 1969.
Ferreira P.A.V., Geromel J.C., An Interactive Projection Method for Multicriteria Optimiza-
tion Problems, IEEE Transactions on Systems, Man, and Cybernetics 20, No.3 (1990),
596-605.
Ferreira P.A.V., Machado M.E.S., Solving Multiple-Objective Problems in the Objective
Space, Journal of Optimization Theory and Applications 89, No.3 (1996), 659-680.
Fishburn P.C., Lexicographic Orders, Utilities and Decision Rules: A Survey, Management
Science 20, No. 11 (1974), 1442-1471.
Flavell R.B., A New Goal Programming Formulation, OMEGA, The International Journal
of Management Science 4, No.6 (1976), 731-732.
Frank M., Wolfe P., An Algorithm for Quadratic Programming, Naval Research Logistics
Quarterly 3, No. 1-2 (1956), 95-110.
French S., Interactive Multi-Objective Programming: Its Aims, Applications and Demands,
Journal of the Operational Research Society 35, No.9 (1984), 827-834.
Friedman L., Mehrez A., Towards a Theory of Determining Size in Multi-Objective Service
Systems, European Journal of Operational Research 63, No.3 (1992), 398-408.
Friesz T.L., Multiobjective Optimization in Transportation: The Case of Equilibrium Network
Design, Organizations: Multiple Agents with Multiple Criteria, Edited by N.N. Morse,
Lecture Notes in Economics and Mathematical Systems 190, Springer-Verlag, Berlin,
Heidelberg, 1981, pp. 116-127.
Gal T., On Efficient Sets in Vector Maximum Problems - A Brief Survey, European Journal
of Operational Research 24, No.2 (1986), 253-264.
Gal T., Postoptimal Analyses, Parametric Programming, and Related Topics: Degeneracy,
Multicriteria Decision Making, Redundancy, Second Edition, Walter de Gruyter & Co.,
Berlin, 1995.
Gal T., Hanne T., On the Development and Future Aspects of Vector Optimization and
MCDM, Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidel-
berg, 1997, pp. 130-145.
Gal T., Leberling H., Redundant Objective Functions in Linear Vector Maximum Problems
and Their Determination, European Journal of Operational Research 1, No.3 (1977),
176-184.
Gal T., Wolf K., Stability in Vector Maximization - A Survey, European Journal of Opera-
tional Research 25, No.2 (1986), 169-182.
Gardiner L.R., Steuer R.E., Unified Interactive Multiple Objective Programming, European
Journal of Operational Research 74, No.3 (1994a), 391-406.
Gardiner L.R., Steuer R.E., Unified Interactive Multiple Programming: An Open Architecture
for Accommodating New Procedures, Journal of the Operational Research Society 45, No.
12 (1994b), 1456-1466.
Gardiner L.R., Vanderpooten D., Interactive Multiple Criteria Procedures: Some Reflections,
Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997,
pp. 290-301.
Gass S., Saaty T., The Computational Algorithm for the Parametric Objective Function,
Naval Research Logistics Quarterly 2 (1955), 39-45.
Geoffrion A.M., Proper Efficiency and the Theory of Vector Maximization, Journal of Math-
ematical Analysis and Applications 22, No.3 (1968), 618-630.
Geoffrion A.M., Dyer J.S., Feinberg A., An Interactive Approach for Multi-Criterion Opti-
mization, with an Application to the Operation of an Academic Department, Manage-
ment Science 19, No.4 (1972), 357-368.
Geromel J.C., Ferreira P.A.V., An Upper Bound on Properly Efficient Solutions on Multi-
objective Optimization, Operations Research Letters 10, No.2 (1991), 83-86.
Gershon M., Duckstein L., An Algorithm for Choosing of a Multiobjective Technique, Essays
and Surveys on Multiple Criteria Decision Making, Edited by P. Hansen, Lecture Notes
in Economics and Mathematical Systems 209, Springer-Verlag, Berlin, Heidelberg, 1983,
pp. 53-62.
Ghosh D., Pal B.B., Basu M., Implementation of Goal Programming in Long-Range Resource
Planning in University Management, Optimization 24, No. 3-4 (1992), 373-383.
Giannikos I., El-Darzi E., Lees P., An Integer Goal Programming Model to Allocate Offices
to Staff in an Academic Institution, Journal of the Operational Research Society 46, No.
6 (1995), 713-720.
Gibson M., Bernardo J.J., Chung C., Badinelli R., A Comparison of Interactive Multiple-
Objective Decision Making Procedures, Computers & Operations Research 14, No. 2
(1987), 97-105.
Giokas D., Vassiloglou M., A Goal Programming Model for Bank Assets and Liabilities
Management, European Journal of Operational Research 50, No.1 (1991), 48-60.
Goh C.J., Yang X.Q., Analytic Efficient Solution Set for Multi-Criteria Quadratic Programs,
European Journal of Operational Research 92, No.1 (1996), 166-181.
Göpfert A., Multicriterial Duality, Examples and Advances, Large-Scale Modelling and In-
teractive Decision Analysis, Edited by G. Fandel, M. Grauer, A. Kurzhanski, A.P.
Wierzbicki, Lecture Notes in Economics and Mathematical Systems 273, Springer-Verlag,
1986, pp. 52-58.
Gouljashki V.G., Kirilov L.M., Narula S.C., Vassilev V.S., A Reference Direction Inter-
active Algorithm of the Multiple Objective Nonlinear Integer Programming, Multiple
Criteria Decision Making: Proceedings of the Twelfth International Conference, Hagen
(Germany), Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Mathematical
Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 308-317.
Granat J., Kreglewski K., Paczynski J., Stachurski A., IAC-DIDASN++ Modular Modeling
and Optimization System Theoretical Foundations, Report of the Institute of Automatic
Control, Warsaw University of Technology (1994a).
Granat J., Kreglewski K., Paczynski J., Stachurski A., IAC-DIDASN++ Modular Modeling
and Optimization System Users Guide, Report of the Institute of Automatic Control,
Warsaw University of Technology (1994b).
Grauer M., A Dynamic Interactive Decision Analysis and Support System (DIDASS), User's
Guide, Working Paper WP-83-60, IIASA, Laxenburg (1983a).
Grauer M., Reference Point Optimization - The Nonlinear Case, Essays and Surveys on
Multiple Criteria Decision Making, Edited by P. Hansen, Lecture Notes in Economics
and Mathematical Systems 209, Springer-Verlag, Berlin, Heidelberg, 1983b, pp. 126-135.
Grauer M., Merten U., Analysis of Multimedia Concepts for Decision Support, Arbeits-
berichte Nr. 20, Universität - Gesamthochschule - Siegen, 1995.
Grauer M., Lewandowski A., Wierzbicki A., DIDASS - Theory, Implementation and Expe-
riences, Interactive Decision Analysis, Edited by M. Grauer, A.P. Wierzbicki, Lecture
Notes in Economics and Mathematical Systems 229, Springer-Verlag, 1984, pp. 22-30.
Graves S.B., Ringuest J.L., Bard J.F., Recent Developments in Screening Methods for Non-
dominated Solutions in Multiobjective Optimization, Computers & Operations Research
19, No.7 (1992), 683-694.
Guddat J., Guerra Vasquez F., Tammer K., Wendler K., Multiobjective and Stochastic Op-
timization Based on Parametric Optimization, Akademie-Verlag, Berlin, 1985.
Gulati T.R., Islam M.A., Proper Efficiency in a Linear Fractional Vector Maximum Problem
with Generalized Convex Constraints, European Journal of Operational Research 36, No.
3 (1988), 339-345.
Gulati T.R., Islam M.A., Efficiency and Proper Efficiency in Nonlinear Vector Maximum
Problems, European Journal of Operational Research 44, No.3 (1990), 373-382.
Haimes Y.Y., The Surrogate Worth Trade-off (SWT) Method and Its Extensions, Multiple
Criteria Decision Making Theory and Application, Edited by G. Fandel, T. Gal, Lecture
Notes in Economics and Mathematical Systems 177, Springer-Verlag, Berlin, Heidelberg,
1980, pp. 85-108.
Haimes Y.Y., Multiple-Criteria Decisionmaking: A Retrospective Analysis, IEEE Transac-
tions on Systems, Man, and Cybernetics 15, No.3 (1985), 313-315.
Haimes Y.Y., Chankong V., Kuhn-Tucker Multipliers as Trade-Offs in Multiobjective Deci-
sion-Making Analysis, Automatica 15, No.1 (1979), 59-72.
Haimes Y.Y., Hall W.A., Multiobjectives in Water Resource Systems Analysis: The Surrogate
Worth Trade Off Method, Water Resources Research 10, No.4 (1974), 615-624.
Haimes Y.Y., Li D., Hierarchical Multiobjective Analysis for Large-Scale Systems: Review
and Current Status, Automatica 24, No.1 (1988), 53-69.
Haimes Y.Y., Lasdon L.S., Wismer D.A., On a Bicriterion Formulation of the Problems
of Integrated System Identification and System Optimization, IEEE Transactions on
Systems, Man, and Cybernetics 1 (1971), 296-297.
Haimes Y.Y., Hall W.A., Freedman H.T., Multiobjective Optimization in Water Resources
Systems, Elsevier Scientific Publishing Company, Amsterdam, 1975.
Haimes Y.Y., Tarvainen K., Shima T., Thadathil J., Hierarchical Multiobjective Analysis of
Large-Scale Systems, Hemisphere Publishing Corporation, New York, 1990.
Hall W.A., Haimes Y.Y., The Surrogate Worth Trade-Off Method with Multiple Decision-
Makers, Multiple Criteria Decision Making Kyoto 1975, Edited by M. Zeleny, Lecture
Notes in Economics and Mathematical Systems 123, Springer-Verlag, Berlin, Heidelberg,
1976, pp. 207-233.
Hallefjord A., Jörnsten K., An Entropy Target-Point Approach to Multiobjective Program-
ming, International Journal of Systems Science 17, No.4 (1986), 639-653.
Halme M., Korhonen P., Nondominated Tradeoffs and Termination in Interactive Multiple
Objective Linear Programming, Improving Decision Making in Organisations, Edited
by A.G. Lockett, G. Islei, Lecture Notes in Economics and Mathematical Systems 335,
Springer-Verlag, Berlin, Heidelberg, 1989, pp. 410-423.
Harrison T.P., Rosenthal R.E., Optimization of Utility and Value Functions, Naval Research
Logistics 35, No.3 (1988), 411-418.
Haslinger J., Neittaanmaki P., Finite Element Approximation for Optimal Shape Design:
Theory and Applications, John Wiley & Sons, Inc., Chichester, 1988.
Hazen G.B., Differential Characterizations of Nonconical Dominance in Multiple Objective
Decision Making, Mathematics of Operations Research 13, No.1 (1988), 174-189.
Helbig S., On the Connectedness of the Set of Weakly Efficient Points of a Vector Optimiza-
tion Problem in Locally Convex Spaces, Journal of Optimization Theory and Applications
65, No.2 (1990), 257-270.
Helbig S., Approximation of the Efficient Point Set by Perturbation of the Ordering Cone,
ZOR - Methods and Models of Operations Research 35, No.3 (1991), 197-220.
Hemaida R.S., Kwak N.K., A Linear Goal Programming Model for Trans-Shipment Prob-
lems with Flexible Supply and Demand Constraints, Journal of the Operational Research
Society 45, No.2 (1994), 215-224.
Hemming T., Multiobjective Decision Making under Certainty, Doctoral Thesis, The Eco-
nomic Research Institute at the Stockholm School of Economics, Stockholm, 1978.
Hemming T., Some Modifications of a Large Step Gradient Method for Interactive Multi-
criterion Optimization, Organizations: Multiple Agents with Multiple Criteria, Edited
by N.N. Morse, Lecture Notes in Economics and Mathematical Systems 190, Springer-
Verlag, Berlin, Heidelberg, 1981, pp. 128-139.
Henig M.I., Existence and Characterization of Efficient Decisions with Respect to Cones,
Mathematical Programming 23, No.1 (1982a), 111-116.
Henig M.l., Proper Efficiency with Respect to Cones, Journal of Optimization Theory and
Applications 36, No.3 (1982b), 387-407.
Jahn J., Scalarization in Vector Optimization, Mathematical Programming 29, No.2 (1984),
203-218.
Jahn J., Scalarization in Multi Objective Optimization, Mathematics of Multi Objective
Optimization, Edited by P. Serafini, Springer-Verlag, Wien, New York, 1985, pp. 45-88.
Jahn J., Mathematical Vector Optimization in Partially Ordered Linear Spaces, Methoden
und Verfahren der mathematischen Physik, Band 31, Verlag Peter Lang GmbH, Frankfurt
am Main, 1986a.
Jahn J., Existence Theorems in Vector Optimization, Journal of Optimization Theory and
Applications 50, No.3 (1986b), 397-406.
Jahn J., Parametric Approximation Problems Arising in Vector Optimization, Journal of
Optimization Theory and Applications 54, No.3 (1987), 503-516.
Janssen R., Multiobjective Decision Support for Environmental Management, Kluwer Aca-
demic Publishers, Dordrecht, 1992.
Jarvenpaa S.L., The Effect of Task Demands and Graphical Format on Information Pro-
cessing Strategies, Management Science 35, No.3 (1989), 285-303.
Jaszkiewicz A., Slowinski R., The Light Beam Search over a Non-Dominated Surface of a
Multiple-Objective Programming Problem, Multiple Criteria Decision Making - Proceed-
ings of the Tenth International Conference: Expand and Enrich the Domains of Thinking
and Application, Edited by G.H. Tzeng, H.F. Wand, U.P. Wen, P.L. Yu, Springer-Verlag,
New York, 1994, pp. 87-99.
Jaszkiewicz A., Slowinski R., The Light Beam Search - Outranking Based Interactive Proce-
dure for Multiple-Objective Mathematical Programming, Advances in Multicriteria Anal-
ysis, Edited by P.M. Pardalos, Y. Siskos, C. Zopounidis, Kluwer Academic Publishers,
Dordrecht, 1995, pp. 129-146.
Jedrzejowicz P., Rosicka L., Multicriterial Reliability Optimization Problem, Foundations of
Control Engineering 8, No. 3-4 (1983), 165-173.
Jelassi M.T., Ozernoy V.M., A Framework for Building an Expert System for MCDM Models
Selection, Improving Decision Making in Organisations, Edited by A.G. Lockett, G. Islei,
Lecture Notes in Economics and Mathematical Systems 335, Springer-Verlag, Berlin,
Heidelberg, 1989, pp. 553-562.
Jelassi M.T., Jarke M., Stohr E.A., Designing a Generalized Multiple Criteria Decision
Support System, Decision Making with Multiple Objectives, Edited by Y.Y. Haimes, V.
Chankong, Lecture Notes in Economics and Mathematical Systems 242, Springer-Verlag,
1985, pp. 214-235.
Jendo S., Multicriteria Optimization of Single-Layer Cable System, Archives of Mechanics
38, No. 3 (1986), 219-234.
Jeyakumar V., Yang X.Q., Convex Composite Multi-Objective Nonsmooth Programming,
Mathematical Programming 59, No. 3 (1993), 325-343.
Jurkiewicz E., Stability of Compromise Solution in Multicriteria Decision-Making Problems,
Journal of Optimization Theory and Applications 40, No.1 (1983), 77-83.
Kacprzyk J., Orlovski S.A. (Eds.), Optimization Models Using Fuzzy Sets and Possibility
Theory, Theory and Decision Library, Series B: Mathematical and Statistical Methods,
D. Reidel Publishing Company, 1987.
Kaliszewski I., A Characterization of Properly Efficient Solutions by an Augmented Tcheby-
cheff Norm, Bulletin of the Polish Academy of Sciences - Technical Sciences 33, No. 7-8
(1985), 415-420.
Kaliszewski I., Norm Scalarization and Proper Efficiency in Vector Optimization, Founda-
tions of Control Engineering 11, No.3 (1986), 117-131.
Kaliszewski I., A Modified Weighted Tchebycheff Metric for Multiple Objective Programming,
Computers & Operations Research 14, No.4 (1987), 315-323.
Kaliszewski I., Quantitative Pareto Analysis by Cone Separation Technique, Kluwer Aca-
demic Publishers, 1994.
Kaliszewski I., A Theorem on Nonconvex Functions and Its Application to Vector Optimiza-
tion, European Journal of Operational Research 80, No.2 (1995), 439-445.
Kaliszewski I., Michalowski W., Generation of Outcomes with Selectively Bounded Trade-
Offs, Foundations of Computing and Decision Sciences 20, No.2 (1995), 113-122.
Kaliszewski I., Michalowski W., Efficient Solutions and Bounds on Tradeoffs, Journal of
Optimization Theory and Applications 94, No.2 (1997), 381-394.
Kaliszewski I., Michalowski W., Kersten G., A Hybrid Interactive Technique for the MCDM
Problems, Essays in Decision Making: A Volume in Honour of Stanley Zionts, Edited
by M.H. Karwan, J. Spronk, J. Wallenius, Springer-Verlag, Berlin, Heidelberg, 1997,
pp. 48-59.
Kasanen E., Ostermark R., Zeleny M., Gestalt System of Holistic Graphics: New Manage-
ment Support View of MCDM, Computers & Operations Research 18, No.2 (1991),
233-239.
Kaul R.N., Lyall V., Kaur S., Semilocal Pseudolinearity and Efficiency, European Journal
of Operational Research 36, No.3 (1988), 402-409.
Keeney R.L., Raiffa H., Decisions with Multiple Objectives: Preferences and Value Tradeoffs,
John Wiley & Sons, Inc., 1976.
Kim S.H., Gal T., A New Interactive Algorithm for Multi-Objective Linear Programming Us-
ing Maximally Changeable Dominance Cone, European Journal of Operational Research
64, No.1 (1993), 126-137.
Kim S.-H., Ahn B.-S., Choi S.-H., An Efficient Force Planning System Using Multi-Objective
Linear Goal Programming, Computers & Operations Research 24, No.6 (1997), 569-580.
Kirilov L.M., Vassilev V.S., A Method for Solving Multiple Objective Linear Programming
Problems, Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidel-
berg, 1997, pp. 302-309.
Kitagawa H., Watanabe N., Nishimura Y., Matsubara M., Some Pathological Configura-
tions of Noninferior Set Appearing in Multicriteria Optimization Problems of Chemical
Processes, Journal of Optimization Theory and Applications 38, No.4 (1982), 541-563.
Kiwiel K.C., An Aggregate Subgradient Descent Method for Solving Large Convex Nonsmooth
Multiobjective Minimization Problems, Large Scale Systems: Theory and Applications
1983, Edited by A. Straszak, Pergamon Press, 1984, pp. 283-288.
Kiwiel K.C., A Descent Method for Nonsmooth Convex Multiobjective Minimization, Large
Scale Systems 8, No.2 (1985a), 119-129.
Kiwiel K.C., An Algorithm for Linearly Constrained Nonsmooth Convex Multiobjective Min-
imization, Systems Analysis and Simulation 1985 Part I: Theory and Foundations, Edited
by A. Sydow, S.M. Thoma, R. Vichnevetsky, Akademie-Verlag, Pergamon Press, Berlin,
1985b, pp. 236-238.
Kiwiel K.C., Methods of Descent for Nondifferentiable Optimization, Lecture Notes in Math-
ematics 1133, Springer-Verlag, Berlin, Heidelberg, 1985c.
Kiwiel K.C., A Method for Solving Certain Quadratic Programming Problems Arising in
Nonsmooth Optimization, IMA Journal of Numerical Analysis 6, No.2 (1986), 137-152.
Kiwiel K.C., Proximity Control in Bundle Methods for Convex Nondifferentiable Minimiza-
tion, Mathematical Programming 46, No.1 (1990), 105-122.
Kiwiel K.C., Stachurski A., Issues of Effectiveness Arising in the Design of a System of
Nondifferentiable Optimization Algorithms, Aspiration Based Decision Support Systems:
Theory, Software and Applications, Edited by A. Lewandowski, A. Wierzbicki, Lecture
Notes in Economics and Mathematical Systems 331, Springer-Verlag, 1989, pp. 180-192.
Klimberg R., GRADS: A New Graphical Display System for Visualizing Multiple Criteria
Solutions, Computers & Operations Research 19, No.7 (1992), 707-711.
Klinger A., Improper Solutions of the Vector Maximum Problem, Operations Research 15,
No.3 (1967), 570-572.
Kok M., Scalarization and the Interface with Decision Makers in Interactive Multi Objec-
tive Linear Programming, Mathematics of Multi Objective Optimization, Edited by P.
Serafini, Springer-Verlag, Wien, New York, 1985, pp. 433-438.
Kok M., The Interface with Decision Makers and Some Experimental Results in Interactive
Multiple Objective Programming Method, European Journal of Operational Research 26,
No.1 (1986), 96-107.
Kok M., Lootsma F.A., Pairwise-Comparison Methods in Multiple Objective Programming,
with Applications in a Long-Term Energy-Planning Model, European Journal of Opera-
tional Research 22, No.1 (1985), 44-55.
Köksalan M.M., Moskowitz H., Solving the Multiobjective Decision Making Problem Using
a Distance Function, Multiple Criteria Decision Making - Proceedings of the Tenth
International Conference: Expand and Enrich the Domains of Thinking and Application,
Edited by G.H. Tzeng, H.F. Wand, U.P. Wen, P.L. Yu, Springer-Verlag, New York, 1994,
pp. 101-107.
Koopmans T.C., Analysis of Production as an Efficient Combination of Activities, Activity
Analysis of Production and Allocation, Edited by T.C. Koopmans, Yale University Press,
New Haven, London, 1971 (originally published in 1951), pp. 33-97.
Korhonen P.J., Solving Discrete Multiple Criteria Problems by Using Visual Interaction,
Large-Scale Modelling and Interactive Decision Analysis, Edited by G. Fandel, M. Grauer,
A. Kurzhanski, A.P. Wierzbicki, Lecture Notes in Economics and Mathematical Systems
273, Springer-Verlag, 1986, pp. 176-185.
Korhonen P.J., VIG (A Visual Interactive Approach to Goal Programming), User's Guide,
NumPlan (1987).
Korhonen P.J., The Multiobjective Linear Programming Decision Support System VIG and
Its Applications, Readings in Multiple Criteria Decision Aid, Edited by C.A. Bana e
Costa, Springer-Verlag, Berlin, Heidelberg, 1990, pp. 471-491.
Korhonen P.J., Two Decision Support Systems for Continuous and Discrete Multiple Criteria
Decision Making: VIG and VIMDA, Methodology, Implementation and Applications
of Decision Support Systems, Edited by A. Lewandowski, P. Serafini, M.G. Speranza,
International Centre for Mechanical Sciences, Courses and Lectures No. 320, Springer-
Verlag, Wien, New York, 1991a, pp. 85-103.
Korhonen P.J., Using Harmonious Houses for Visual Pairwise Comparison of Multiple Cri-
teria Alternatives, Decision Support Systems 7, No.1 (1991b), 47-54.
Korhonen P.J., Reference Direction Approach to Multiple Objective Linear Programming:
Historical Overview, Essays in Decision Making: A Volume in Honour of Stanley Zionts,
Edited by M.H. Karwan, J. Spronk, J. Wallenius, Springer-Verlag, Berlin, Heidelberg,
1997, pp. 74-92.
Korhonen P., Halme M., Using Lexicographic Parametric Programming for Searching a Non-
dominated Set in Multiple Objective Linear Programming, Journal of Multi-Criteria De-
cision Analysis 5, No.4 (1996), 291-300.
Korhonen P., Laakso J., A Visual Interactive Method for Solving the Multiple-Criteria Prob-
lem, Interactive Decision Analysis, Edited by M. Grauer, A.P. Wierzbicki, Lecture Notes
in Economics and Mathematical Systems 229, Springer-Verlag, 1984, pp. 146-153.
Korhonen P., Laakso J., On Developing a Visual Interactive Multiple Criteria Method -
An Out/ine, Decision Making with Multiple Objectives, Edited by Y.Y. Haimes, V.
Chankong, Lecture Notes in Economics and Mathematical Systems 242, Springer-Verlag,
Berlin, Heidelberg, 1985, pp. 272-281.
Korhonen P., Laakso J., A Visual Interactive Method for Solving the Multiple Criteria Prob-
lem, European Journal of Operational Research 24, No.2 (1986a), 277-287.
Korhonen P., Laakso J., Solving Generalized Goal Programming Problems Using a Visual
Interactive Approach, European Journal of Operational Research 26, No.3 (1986b),
355-363.
Korhonen P., Narula S.C., An Evolutionary Approach to Support Decision Making with
Linear Decision Models, Journal of Multi-Criteria Decision Analysis 2, No.2 (1993),
111-119.
Korhonen P., Wallenius J., A Pareto Race, Naval Research Logistics 35 (1988), 615-623.
Korhonen P., Wallenius J., A Careful Look at Efficiency and Utility in Multiple Criteria
Decision Making: A Tutorial, Asia-Pacific Journal of Operational Research 6, No. 1
(1989a), 46-62.
Korhonen P., Wallenius J., Observations Regarding Choice Behaviour in Interactive Multiple
Criteria Decision-Making Environments: An Experimental Investigation, Methodology
and Software for Interactive Decision Support, Edited by A. Lewandowski, I. Stanchev,
Lecture Notes in Economics and Mathematical Systems 337, Springer-Verlag, 1989b,
pp. 163-170.
Korhonen P., Wallenius J., VIG - A Visual and Dynamic Decision Support System for
Multiple Objective Linear Programming, Multiple Criteria Decision Making and Risk
Analysis Using Microcomputers, Edited by B. Karpak, S. Zionts, Springer-Verlag, Berlin,
Heidelberg, 1989c, pp. 251-281.
Korhonen P., Wallenius J., A Multiple Objective Linear Programming Decision Support Sys-
tem, Decision Support Systems 6, No.3 (1990), 243-251.
Korhonen P., Wallenius J., Behavioural Issues in MCDM: Neglected Research Questions,
Journal of Multi-Criteria Decision Analysis 5, No.3 (1996), 178-182.
Korhonen P., Wallenius J., Behavioral Issues in MCDM: Neglected Research Questions,
Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997,
pp. 412-422.
Korhonen P., Wallenius J., Zionts S., Solving the Discrete Multiple Criteria Problem Using
Convex Cones, Management Science 30, No. 11 (1984), 1336-1345.
Korhonen P., Moskowitz H., Wallenius J., Choice Behavior in Interactive Multiple-Criteria
Decision Making, Annals of Operations Research 23, No. 1-4 (1990), 161-179.
Korhonen P., Moskowitz H., Wallenius J., Multiple Criteria Decision Support - A Review,
European Journal of Operational Research 63, No.3 (1992a), 361-375.
Korhonen P., Wallenius J., Zionts S., A Computer Graphics-Based Decision Support System
for Multiple Objective Linear Programming, European Journal of Operational Research
60, No.3 (1992b), 280-286.
Korhonen P., Salo S., Steuer R.E., A Heuristic for Estimating Nadir Criterion Values in
Multiple Objective Linear Programming, Operations Research 45, No.5 (1997), 751-
757.
Kornbluth J.S.H., A Survey of Goal Programming, OMEGA 1, No.2 (1973), 193-205.
Koski J., Silvennoinen R., Norm Methods and Partial Weighting in Multicriterion Optimiza-
tion of Structures, International Journal for Numerical Methods in Engineering 24, No.
6 (1987), 1101-1121.
Kostreva M.M., Ordoyne T.J., Wiecek M., Multiple-Objective Programming with Polynomial
Objectives and Constraints, European Journal of Operational Research 57, No.3 (1992),
381-394.
Kostreva M.M., Zheng Q., Zhuang D., A Method for Approximating Solutions of Multicri-
terial Nonlinear Optimization Problems, Optimization Methods and Software 5, No.3
(1995), 209-226.
Kreglewski T., Nonlinear Optimization Techniques in Decision Support Systems, Aspira-
tion Based Decision Support Systems: Theory, Software and Applications, Edited by A.
Lewandowski, A. Wierzbicki, Lecture Notes in Economics and Mathematical Systems
331, Springer-Verlag, 1989, pp. 158-171.
Kreglewski T., Paczynski J., Wierzbicki A.P., IAC-DIDAS-N A Dynamic Interactive De-
cision Analysis and Support System for Multicriteria Analysis of Nonlinear Models on
Professional Microcomputers, Theory, Software and Testing Examples for Decision Sup-
port Systems, Edited by A. Lewandowski, A. Wierzbicki, Working Paper WP-87-26,
IIASA, Laxenburg, Austria, 1987, pp. 177-192.
Kreglewski T., Granat J., Wierzbicki A.P., IAS-DIDAS-N: A Dynamic Interactive Decision
Analysis and Support System for Multicriteria Analysis of Nonlinear Models, v. 4.0,
Collaborative Paper CP-91-010, IIASA, Laxenburg (1991).
Kuhn H.W., Tucker A.W., Nonlinear Programming, Proceedings of the Second Berkeley
Symposium on Mathematical Statistics and Probability, Edited by J. Neyman, University
of California Press, Berkeley, Los Angeles, 1951, pp. 481-492.
Kuk H., Tanino T., Tanaka M., Sensitivity Analysis in Parametrized Convex Vector Opti-
mization, Journal of Mathematical Analysis and Applications 202, No.2 (1996), 511-522.
Kumar P., Singh N., Tewari N.K., A Nonlinear Goal Programming Model for Multistage,
Multiobjective Decision Problems with Application to Grouping and Loading Problem in
a Flexible Manufacturing System, European Journal of Operational Research 53, No.2
(1991), 166-171.
Larichev O.I., Polyakov O.A., Nikiforov A.O., Multicriterion Linear Programming Problems,
Journal of Economic Psychology 8 (1987), 389-407.
Lazimy R., Solving Multiple Criteria Problems by Interactive Decomposition, Mathematical
Programming 35, No.3 (1986a), 334-361.
Lazimy R., Interactive Relaxation Method for a Broad Class of Integer and Continuous Non-
linear Multiple Criteria Problems, Journal of Mathematical Analysis and Applications
116, No.2 (1986b), 553-573.
Lee G.M., On Efficiency in Nonlinear Fractional Vector Maximization Problem, Optimiza-
tion 25, No.1 (1992), 47-52.
Levary R.R., Optimal Control Problems with Multiple Goal Objectives, Optimal Control
Applications & Methods 1, No.2 (1986), 201-207.
Lewandowski A., Man-Machine Dialogue Interfaces in Decision Support Systems, Large-
Scale Modelling and Interactive Decision Analysis, Edited by G. Fandel, M. Grauer, A.
Kurzhanski, A.P. Wierzbicki, Lecture Notes in Economics and Mathematical Systems
273, Springer-Verlag, 1986, pp. 161-175.
Lewandowski A., Granat J., Dynamic BIPLOT as the Interaction Interface for Aspiration
Based Decision Support Systems, Multiple Criteria Decision Support, Edited by P. Kor-
honen, A. Lewandowski, J. Wallenius, Lecture Notes in Economics and Mathematical
Systems 356, Springer-Verlag, 1991, pp. 229-241.
Lewandowski A., Grauer M., The Reference Point Optimization Approach - Methods of
Efficient Implementation, Working Paper WP-82-26, IIASA, Laxenburg (1982).
Lewandowski A., Rogowski T., Kreglewski T., A Trajectory-Oriented Extension of DIDASS
and Its Applications, Plural Rationality and Interactive Decision Processes, Edited by M.
Grauer, M. Thompson, A.P. Wierzbicki, Lecture Notes in Economics and Mathematical
Systems 248, Springer-Verlag, Berlin, Heidelberg, 1985a, pp. 261-268.
Lewandowski A., Rogowski T., Kreglewski T., Application of DIDASS Methodology to a
Flood Control Problem - Numerical Experiments, Plural Rationality and Interactive
Decision Processes, Edited by M. Grauer, M. Thompson, A.P. Wierzbicki, Lecture Notes
in Economics and Mathematical Systems 248, Springer-Verlag, Berlin, Heidelberg, 1985b,
pp. 325-328.
Lewandowski A., Kreglewski T., Rogowski T., Wierzbicki A.P., Decision Support Systems of
DIDAS Family (Dynamic Interactive Decision Analysis & Support), Theory, Software
and Testing Examples for Decision Support Systems, Edited by A. Lewandowski, A.
Wierzbicki, Working Paper WP-87-26, IIASA, Laxenburg, 1987, pp. 4-26.
Li D., Convexification of a Noninferior Frontier, Journal of Optimization Theory and Ap-
plications 88, No.1 (1996), 177-196.
Li D., Haimes Y.Y., The Envelope Approach for Multiobjective Optimization Problems, IEEE
Transactions on Systems, Man, and Cybernetics 17, No.6 (1987), 1026-1038; Correction,
IEEE Transactions on Systems, Man, and Cybernetics 18, No.2 (1988), 332.
Lieberman E.R., Soviet Multi-Objective Programming Methods: An Overview, Multiobjec-
tive Problems of Mathematical Programming, Edited by A. Lewandowski, V. Volkovich,
Lecture Notes in Economics and Mathematical Systems 351, Springer-Verlag, 1991a,
pp. 21-31.
Lieberman E.R., Soviet Multi-Objective Mathematical Programming Methods: An Overview,
Management Science 37, No.9 (1991b), 1147-1165.
Luenberger D.G., Linear and Nonlinear Programming, Second Edition, Addison-Wesley Pub-
lishing Company, Inc., 1984.
MacCrimmon K.R., An Overview of Multiple Objective Decision Making, Multiple Criteria
Decision Making, Edited by J .L. Cochrane, M. Zeleny, University of South Carolina
Press, Columbia, South Carolina, 1973, pp. 18-44.
Majumdar A.A.K., Optimality Conditions in Differentiable Multiobjective Programming,
Journal of Optimization Theory and Applications 92, No.2 (1997), 419-427.
Makela M.M., Issues of Implementing a Fortran Subroutine Package NSOLIB for Nonsmooth
Optimization, Report 5/1993, University of Jyväskylä, Department of Mathematics, Lab-
oratory of Scientific Computing, Jyväskylä (1993).
Makela M.M., Neittaanmaki P., Nonsmooth Optimization: Analysis and Algorithms with
Applications to Optimal Control, World Scientific Publishing Co., Singapore, 1992.
Manas M., Graphical Methods of Multicriterial Optimization, Zeitschrift für Angewandte
Mathematik und Mechanik 62, No.5 (1982), 375-377.
Mangasarian O.L., Nonlinear Programming, McGraw-Hill, Inc., 1969.
Mareschal B., Brans J.-P., Geometrical Representation for MCDA, European Journal of
Operational Research 34, No.1 (1988), 69-77.
Martein L., Some Results on Regularity in Vector Optimization, Optimization 20, No.6
(1989), 787-798.
Martel J.-M., Aouni B., Incorporating the Decision-Maker's Preferences in the Goal-Pro-
gramming Model, Journal of the Operational Research Society 41, No. 12 (1990), 1121-
1132.
Martel J.-M., Aouni B., Diverse Imprecise Goal Programming Model Formulations, Journal
of Global Optimization 12, No.2 (1998), 127-138.
Martinez-Legaz J.-E., Lexicographical Order and Duality in Multiobjective Programming,
European Journal of Operational Research 33, No.3 (1988), 342-348.
Martinez-Legaz J.-E., Singer I., Surrogate Duality for Vector Optimization, Numerical Func-
tional Analysis and Optimization 9, No. 5-6 (1987), 547-568.
Marusciac I., On Fritz John Type Optimality Criterion in Multi-Objective Optimization,
Revue d'Analyse Numerique et de Theorie de l'Approximation 11, No. 1-2 (1982), 109-
114.
Masud A.S.M., Hwang C.L., Interactive Sequential Goal Programming, Journal of the Oper-
ational Research Society 32 (1981), 391-400.
Masud A.S.M., Zheng X., An Algorithm for Multiple-Objective Non-Linear Programming,
Journal of the Operational Research Society 40, No. 10 (1989), 895-906.
Matos M.A., Borges P., A Flexible Interface for Decision-Aid in Multicriteria Decision Prob-
lems, Multicriteria Analysis, Edited by J. Climaco, Springer-Verlag, Berlin, Heidelberg,
1997, pp. 390-400.
Meisel W.S., Tradeoff Decision in Multiple Criteria Decision Making, Multiple Criteria De-
cision Making, Edited by J.L. Cochrane, M. Zeleny, University of South Carolina Press,
Columbia, South Carolina, 1973, pp. 461-476.
Michalowski W., Evaluation of a Multiple Criteria Interactive Programming Approach: An
Experiment, INFOR 25, No.2 (1987), 165-173.
Michalowski W., MCDM at the Crossroads, Multicriteria Analysis, Edited by J. Climaco,
Springer-Verlag, Berlin, Heidelberg, 1997, pp. 579-584.
Michalowski W., Szapiro T., A Bi-Reference Procedure for Interactive Multiple Criteria
Programming, Operations Research 40, No.2 (1992), 247-258.
Miettinen K., On the Methodology of Multiobjective Optimization with Applications, Doctoral
Thesis, Report 60, University of Jyväskylä, Department of Mathematics, Jyväskylä,
1994.
Miettinen K., Makela M.M., Nonsmooth Multicriteria Optimization Applied to Optimal Con-
trol, Reports on Applied Mathematics and Computing, No.6, University of Jyväskylä,
Jyväskylä (1991).
Miettinen K., Makela M.M., An Interactive Method for Nonsmooth Multiobjective Opti-
mization with an Application to Optimal Control, Optimization Methods and Software
2 (1993), 31-44.
Miettinen K., Makela M.M., A Nondifferentiable Multiple Criteria Optimization Method
Applied to Continuous Casting Process, Proceedings of the Seventh European Confer-
ence on Mathematics in Industry, Edited by A. Fasano, M. Primicerio, B.G. Teubner,
Stuttgart, 1994, pp. 255-262.
Miettinen K., Makela M.M., Interactive Bundle-Based Method for Nondifferentiable Multi-
objective Optimization: NIMBUS, Optimization 34, No.3 (1995), 231-246.
Miettinen K., Makela M.M., NIMBUS - Interactive Method for Nondifferentiable Multiobjec-
tive Optimization Problems, Multi-Objective Programming and Goal Programming: The-
ories and Applications, Edited by M. Tamiz, Lecture Notes in Economics and Mathe-
matical Systems 432, Springer-Verlag, Berlin, Heidelberg, 1996a, pp. 50-57.
Miettinen K., Makela M.M., Comparing Two Versions of NIMBUS Optimization System,
Report 23/1996, University of Jyväskylä, Department of Mathematics, Laboratory of
Scientific Computing, Jyväskylä, 1996b.
Miettinen K., Makela M.M., Interactive Method NIMBUS for Nondifferentiable Multiob-
jective Optimization Problems, Multicriteria Analysis, Edited by J. Climaco, Springer-
Verlag, Berlin, Heidelberg, 1997, pp. 310-319.
Miettinen K., Makela M.M., Theoretical and Computational Comparison of Multiobjective
Optimization Methods NIMBUS and RD, Report 5/1998, University of Jyväskylä, De-
partment of Mathematics, Laboratory of Scientific Computing, Jyväskylä, 1998a.
Miettinen K., Makela M.M., Interactive MCDM Support System in the Internet, Trends
in Multicriteria Decision Making: Proceedings of the 13th International Conference on
Multiple Criteria Decision Making, Edited by T. Stewart, R. van den Honert, Springer-
Verlag, 1998b, pp. 419-428.
Miettinen K., Makela M.M., Makinen R.A.E., Interactive Multiobjective Optimization Sys-
tem NIMBUS Applied to Nonsmooth Structural Design Problems, System Modelling and
Optimization, Proceedings of the 17th IFIP Conference on System Modelling and Op-
timization, Prague, Czech Republic, Edited by J. Dolezal, J. Fidler, Chapman & Hall,
London, 1996a, pp. 379-385.
Miettinen K., Makela M.M., Mannikko T., Nondifferentiable Multiobjective Optimizer NIM-
BUS Applied to an Optimal Control Problem of Continuous Casting, Report 22/1996,
University of Jyväskylä, Department of Mathematics, Laboratory of Scientific Comput-
ing, Jyväskylä, 1996b.
Miller G.A., The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity
for Processing Information, Psychological Review 63, No.2 (1956), 81-97.
Minami M., Weak Pareto Optimality of Multiobjective Problem in a Banach Space, Bulletin
of Mathematical Statistics 19, No. 3-4 (1980-81), 19-23.
Minami M., Weak Pareto Optimality of Multiobjective Problems in a Locally Convex Linear
Topological Space, Journal of Optimization Theory and Applications 34, No.4 (1981),
469-484.
Minami M., Weak Pareto-Optimal Necessary Conditions in a Nondifferentiable Multiobjec-
tive Program on a Banach Space, Journal of Optimization Theory and Applications 41,
No.3 (1983), 451-461.
Mishra S.K., Lagrange Multipliers Saddle Point and Scalarizations in Composite Multiob-
jective Nonsmooth Programming, Optimization 38, No.2 (1996), 93-105.
Mishra S.K., Mukherjee R.N., Generalized Convex Composite Multi-Objective Nonsmooth
Programming and Conditional Proper Efficiency, Optimization 34, No.1 (1995), 53-66.
Mitani K., Nakayama H., A Multiobjective Diet Planning Support System Using the Satis-
ficing Trade-Off Method, Journal of Multi-Criteria Decision Analysis 6, No.3 (1997),
131-139.
Mitra A., Patankar J.G., A Multi-Objective Model for Warranty Estimation, European Jour-
nal of Operational Research 45, No. 2-3 (1990), 347-355.
Miyaji I., Ohno K., Mine H., Solution Method for Partitioning Students into Groups, Euro-
pean Journal of Operational Research 33, No.1 (1988), 82-90.
Mocci U., Primicerio L., Ring Network Design: An MCDM Approach, Multiple Criteria De-
cision Making: Proceedings of the Twelfth International Conference, Hagen (Germany),
Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Mathematical Systems
448, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 491-500.
Moldavskiy M.A., Singling Out a Set of Undominated Solutions in Continuous Vector Op-
timization Problems, Soviet Automatic Control 14, No.5 (1981), 47-53.
Monarchi D.E., Kisiel C.C., Duckstein L., Interactive Multiobjective Programming in Water
Resources: A Case Study, Water Resources Research 9, No.4 (1973), 837-850.
Monarchi D.E., Weber J.E., Duckstein L., An Interactive Multiple Objective Decision-Making
Aid Using Nonlinear Goal Programming, Multiple Criteria Decision Making Kyoto 1975,
Edited by M. Zeleny, Lecture Notes in Economics and Mathematical Systems 123,
Springer-Verlag, Berlin, Heidelberg, 1976, pp. 235-253.
Morón M.A., Romero C., Ruiz del Portal F.R., Generating Well-Behaved Utility Functions
for Compromise Programming, Journal of Optimization Theory and Applications 91,
No.3 (1996), 643-649.
Mote J., Olson D.L., Venkataramanan M.A., A Comparative Multiobjective Programming
Study, Mathematical and Computer Modelling 10, No. 10 (1988), 719-729.
M'silti A., Tolla P., An Interactive Multiobjective Nonlinear Programming Procedure, Euro-
pean Journal of Operational Research 64, No.1 (1993), 115-125.
Mukai H., Algorithms for Multicriterion Optimization, IEEE Transactions on Automatic
Control 25, No.2 (1980), 177-186.
Musselman K., Talavage J., A Tradeoff Cut Approach to Multiple Objective Optimization,
Operations Research 28, No.6 (1980), 1424-1435.
Nakayama H., Geometric Consideration of Duality in Vector Optimization, Journal of Op-
timization Theory and Applications 44, No.4 (1984), 625-655.
Nakayama H., On the Components in Interactive Multiobjective Programming Methods, Plu-
ral Rationality and Interactive Decision Processes, Edited by M. Grauer, M. Thompson,
A.P. Wierzbicki, Lecture Notes in Economics and Mathematical Systems 248, Springer-
Verlag, Berlin, Heidelberg, 1985a, pp. 234-247.
Nakayama H., Lagrange Duality and Its Geometric Interpretation, Mathematics of Multi
Objective Optimization, Edited by P. Serafini, Springer-Verlag, Wien, New York, 1985b,
pp. 105-127.
Nakayama H., Duality Theory in Vector Optimization: An Overview, Decision Making with
Multiple Objectives, Edited by Y.Y. Haimes, V. Chankong, Lecture Notes in Economics
and Mathematical Systems 242, Springer-Verlag, 1985c, pp. 109-125.
Nakayama H., Sensitivity and Trade-Off Analysis in Multiobjective Programming, Method-
ology and Software for Interactive Decision Support, Edited by A. Lewandowski, I.
Stanchev, Lecture Notes in Economics and Mathematical Systems 337, Springer-Ver-
lag, 1989, pp. 86-93.
Nakayama H., Satisficing Trade-Off Method for Problems with Multiple Linear Fractional
Objectives and Its Applications, Multiobjective Problems of Mathematical Programming,
Edited by A. Lewandowski, V. Volkovich, Lecture Notes in Economics and Mathematical
Systems 351, Springer-Verlag, 1991a, pp. 42-50.
Nakayama H., Trade-off Analysis Based upon Parametric Optimization, Multiple Criteria
Decision Support, Edited by P. Korhonen, A. Lewandowski, J. Wallenius, Lecture Notes
in Economics and Mathematical Systems 356, Springer-Verlag, 1991b, pp. 42-52.
Nakayama H., Trade-Off Analysis Using Parametric Optimization Techniques, European
Journal of Operational Research 60, No.1 (1992a), 87-98.
Nakayama H., Theoretical Remarks on Dynamic Trade-Off, Multiple Criteria Decision Mak-
ing: Proceedings of the Ninth International Conference: Theory and Applications in
Business, Industry, and Government, Edited by A. Goicoechea, L. Duckstein, S. Zionts,
Springer-Verlag, New York, 1992b, pp. 297-309.
Ogryczak W., Preemptive Reference Point Method, Multicriteria Analysis, Edited by J. Cli-
maco, Springer-Verlag, Berlin, Heidelberg, 1997a, pp. 156-167.
Ogryczak W., Reference Distribution - An Interactive Approach to Multiple Homogeneous
and Anonymous Criteria, Multiple Criteria Decision Making: Proceedings of the Twelfth
International Conference, Hagen (Germany), Edited by G. Fandel, T. Gal, Lecture Notes
in Economics and Mathematical Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997b,
pp. 156-165.
Ohkubo S., Dissanayake P.B.R., Taniwaki K., An Approach to Multicriteria Fuzzy Optimiza-
tion of a Prestressed Concrete Bridge System Considering Cost and Aesthetic Feeling,
Structural Optimization 15, No.2 (1998), 132-140.
Ohta H., Yamaguchi T., Linear Fractional Goal Programming in Consideration of Fuzzy
Solution, European Journal of Operational Research 92, No.1 (1996), 157-165.
Olbrisch M., The Interactive Reference Point Approach as a Solution Concept for Econo-
metric Decision Models, X. Symposium on Operations Research, Part 1, Sections 1-5,
Edited by M.J. Beckmann, K.-W. Gaede, K. Ritter, H. Schneeweiss, Verlag Anton Hain
Meisenheim GmbH, Königstein/Ts., 1986, pp. 611-619.
Olkucu A., Conflicting Objectives in Software System Design, Multiple Criteria Decision
Making and Risk Analysis Using Microcomputers, Edited by B. Karpak, S. Zionts,
Springer-Verlag, Berlin, Heidelberg, 1989, pp. 357-394.
Olson D.L., Review of Empirical Studies in Multiobjective Mathematical Programming: Sub-
ject Reflection of Nonlinear Utility and Learning, Decision Sciences 23, No. 1 (1992),
1-20.
Olson D.L., Tchebycheff Norms in Multi-Objective Linear Programming, Mathematical and
Computer Modelling 17, No.1 (1993), 113-124.
Oppenheimer K.R., A Proxy Approach to Multi-Attribute Decision Making, Management
Science 24, No.6 (1978), 675-689.
Osman M.S.A., Ragab A.M., A Unified Approach for Characterizing the Noninferior Solu-
tions of Multiobjective Nonlinear Programming Problems, X. Symposium on Operations
Research, Part 1, Sections 1-5, Edited by M.J. Beckmann, K.-W. Gaede, K. Ritter, H.
Schneeweiss, Verlag Anton Hain Meisenheim GmbH, Königstein/Ts., 1986a, pp. 133-141.
Osman M.S.A., Ragab A.M., An Algorithm for Solving Multiobjective Nonlinear Program-
ming Problems, X. Symposium on Operations Research, Part 1, Sections 1-5, Edited by
M.J. Beckmann, K.-W. Gaede, K. Ritter, H. Schneeweiss, Verlag Anton Hain Meisenheim
GmbH, Königstein/Ts., 1986b, pp. 143-147.
Osyczka A., Multicriterion Optimization in Engineering with FORTRAN Programs, Ellis
Horwood Limited, 1984.
Osyczka A., Multicriterion Decision Making with Min-Max Approach, Optimization Methods
in Structural Design, Edited by H. Eschenauer, N. Olhoff, Euromech-Colloquium 164,
Wissenschaftsverlag, 1989a.
Osyczka A., Computer Aided Multicriterion Optimization System (CAMOS), Discretization
Methods and Structural Optimization - Procedures and Applications, Edited by H.A.
Eschenauer, G. Thierauf, Lecture Notes in Engineering, Springer-Verlag, Berlin, Heidel-
berg, 1989b, pp. 263-270.
Osyczka A., Computer Aided Multicriterion Optimization System (CAMOS): Software Pack-
age in FORTRAN, International Software Publishers, 1992.
Osyczka A., Koski J., Selected Works Related to Multicriterion Optimization Methods for En-
gineering Design, Optimization Methods in Structural Design, Edited by H. Eschenauer,
N. Olhoff, Euromech-Colloquium 164, Wissenschaftsverlag, 1989.
Osyczka A., Kundu S., A New Method to Solve Generalized Multicriteria Optimization Prob-
lems Using the Simple Genetic Algorithm, Structural Optimization 10, No. 2 (1995),
94-99.
Rao S.S., Optimization Theory and Applications, Second Edition, Wiley Eastern Limited,
1984.
Rao S.S., Game Theory Approach for Multiobjective Structural Optimization, Computers
and Structures 25, No.1 (1987), 119-127.
Rarig H.M., Haimes Y.Y., Risk/Dispersion Index Method, IEEE Transactions on Systems,
Man, and Cybernetics 13, No.3 (1983), 317-328.
Reeves G.R., Gonzalez J.J., A Comparison of Two Interactive MCDM Procedures, European
Journal of Operational Research 41, No.2 (1989), 203-209.
Reeves G.R., Reid R.C., Minimum Values over the Efficient Set in Multiple Objective Deci-
sion Making, European Journal of Operational Research 36, No.3 (1988), 334-338.
ReVelle C., Equalizing Superpower Force Disparities with Optimized Arms Control Choices:
A Multi-Objective Approach, European Journal of Operational Research 33, No.1 (1988),
46-53.
Rietveld P., Multiple Objective Decision Methods and Regional Planning, North-Holland
Publishing Company, 1980.
Ringuest J.L., Multiobjective Optimization: Behavioral and Computational Considerations,
Kluwer Academic Publishers, 1992.
Rogowski T., Sobczyk J., Wierzbicki A., IAC-DIDAS-L: A Dynamic Interactive Decision
Analysis and Support System for Multicriteria Analysis of Linear and Dynamic Lin-
ear Models of Professional Microcomputers, Theory, Software and Testing Examples for
Decision Support Systems, Edited by A. Lewandowski, A. Wierzbicki, Working Paper
WP-87-26, IIASA, Laxenburg, 1987, pp. 106-124.
Romero C., Handbook of Critical Issues in Goal Programming, Pergamon Press, 1991.
Romero C., Goal Programming and Multiple Criteria Decision Making: Some Reflections,
Multiple Criteria Decision Making: Proceedings of the Twelfth International Conference,
Hagen (Germany), Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Math-
ematical Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 192-198.
Rosenthal R.E., Goal Programming - A Critique, New Zealand Operational Research 11,
No.1 (1983), 1-7.
Rosenthal R.E., Principles of Multiobjective Optimization, Decision Sciences 16, No. 2
(1985), 133-152.
Rosinger E.E., Interactive Algorithm for Multiobjective Optimization, Journal of Optimiza-
tion Theory and Applications 35, No.3 (1981), 339-365; Errata Corrige, Journal of
Optimization Theory and Applications 38, No.1 (1982), 147-148.
Rosinger E.E., Aids for Decision Making with Conflicting Objectives, Mathematics of Multi
Objective Optimization, Edited by P. Serafini, Springer-Verlag, Wien, New York, 1985,
pp. 275-315.
Rothermel M.A., Schilling D.A., Conjoint Measurement in Multiple Objective Decision Mak-
ing: A New Approach, European Journal of Operational Research 23, No.3 (1986),
310-319.
Roy A., Mackin P., Multicriteria Optimization (Linear and Nonlinear) Using Proxy Value
Functions, Multiple Criteria Decision Support, Edited by P. Korhonen, A. Lewandowski,
J. Wallenius, Lecture Notes in Economics and Mathematical Systems 356, Springer-
Verlag, 1991, pp. 128-134.
Roy A., Wallenius J., Nonlinear Multiobjective Optimization: An Algorithm and Some The-
ory, Mathematical Programming 55, No.2 (1992), 235-249.
Roy B., The Outranking Approach and the Foundations of ELECTRE Methods, Readings in
Multiple Criteria Decision Aid, Edited by C.A. Bana e Costa, Springer-Verlag, Berlin,
Heidelberg, 1990, pp. 155-183.
Roy B., Mousseau V., A Theoretical Framework for Analysing the Notion of Relative Impor-
tance of Criteria, Journal of Multi-Criteria Decision Analysis 5, No.2 (1996), 145-159.
Ruíz-Canales P., Rufián-Lizana A., A Characterization of Weakly Efficient Points, Mathe-
matical Programming 68, No.2 (1995), 205-212.
Saber H.M., Ravindran A., A Partitioning Gradient Based (PGB) Algorithm for Solving
Nonlinear Goal Programming Problems, Computers & Operations Research 23, No.2
(1996), 141-152.
Sadagopan S., Ravindran A., Interactive Algorithms for Multiple Criteria Nonlinear Pro-
gramming Problems, European Journal of Operational Research 25, No.2 (1986), 247-
257.
Sadek I.S., Bruch J.C. Jr., Sloss J.M., Adali S., Structural Control of a Variable Cross-
Section Beam by Distributed Forces, Mechanics of Structures and Machines 16, No.3
(1988-89), 313-333.
Sainfort F.C., Gustafson D.H., Bosworth K., Hawkins R.P., Decision Support Systems Ef-
fectiveness: Conceptual Framework and Empirical Evaluation, Organizational Behavior
and Human Decision Processes 45 (1990), 232-252.
Sakawa M., Interactive Multiobjective Decision Making by the Sequential Proxy Optimization
Technique: SPOT, European Journal of Operational Research 9, No.4 (1982), 386-396.
Sakawa M., Mori N., Interactive Multiobjective Decisionmaking for Nonconvex Problems
Based on the Weighted Tchebycheff Norm, Large Scale Systems 5, No.1 (1983), 69-82.
Sakawa M., Mori N., Interactive Multiobjective Decisionmaking for Nonconvex Problems
Based on the Penalty Scalarizing Functions, European Journal of Operational Research
17, No.3 (1984), 320-330.
Sakawa M., Seo F., Interactive Multiobjective Decisionmaking for Large-Scale Systems and
Its Application to Environmental Systems, IEEE Transactions on Systems, Man, and
Cybernetics 10, No. 12 (1980), 796-806.
Sakawa M., Seo F., Interactive Multiobjective Decision Making by the Sequential Proxy Op-
timization Technique (SPOT) and Its Application to Environmental Systems, Control
Science and Technology for the Progress of Society, Edited by H. Akashi, IFAC, vol. 2,
Pergamon Press, 1982a, pp. 1527-1532.
Sakawa M., Seo F., Interactive Multiobjective Decision Making in Environmental Systems
Using Sequential Proxy Optimization Techniques (SPOT), Automatica 18, No.2 (1982b),
155-165.
Sakawa M., Yano H., Interactive Multiobjective Reliability Design of a Standby System by
the Fuzzy Sequential Proxy Optimization Technique (FSPOT), International Journal of
Systems Science 16, No.2 (1985), 177-195.
Sakawa M., Yano H., Trade-Off Rates in the Hyperplane Method for Multiobjective Optimiza-
tion Problems, European Journal of Operational Research 44, No.1 (1990), 105-118.
Sakawa M., Yano H., Generalized Hyperplane Methods for Characterizing A-Extreme Points
and Trade-Off Rates for Multiobjective Optimization Problems, European Journal of
Operational Research 57, No.3 (1992), 368-380.
Sankaran S., Multiple Objective Decision Making Approach to Cell Formation: A Goal Pro-
gramming Model, Mathematical and Computer Modelling 13, No.9 (1990), 71-81.
Sarma G.V., Merouani H.F., Some Difficulties in Applying Interactive Compromise Pro-
gramming Illustrated by a Recent Method and a Case Study, Journal of the Operational
Research Society 46, No.1 (1995), 9-19.
Sawaragi Y., Nakayama H., Tanino T., Theory of Multiobjective Optimization, Academic
Press, Inc., Orlando, Florida, 1985.
Schilling D.A., ReVelle C., Cohon J., An Approach to the Display and Analysis of Multiob-
jective Problems, Socio-Economic Planning Sciences 17, No.2 (1983), 57-63.
Schniederjans M.J., Goal Programming: Methodology and Applications, Kluwer Academic
Publishers, Boston, Dordrecht, London, 1995a.
Schniederjans M.J., The Life Cycle of Goal Programming Research as Recorded in Journal
Articles, Operations Research 43, No.3 (1995b), 551-557.
Schniederjans M.J., Hoffman J., Multinational Acquisition Analysis: A Zero-One Goal Pro-
gramming Model, European Journal of Operational Research 62, No.2 (1992), 175-185.
Soloveychik D., The Multiple Criteria Decision-Making Process Using the Methods of Mul-
tivariate Statistical Analysis, Fifth International Conference on Collective Phenomena,
Edited by J.L. Lebowitz, The New York Academy of Sciences, New York, 1983, pp. 205-
212.
Song A., Cheng W.-M., A Method for Multihuman and Multi-Criteria Decision Making,
Systems Analysis and Simulation 1988 I: Theory and Foundations, Edited by A. Sydow,
S.G. Tzafestas, R. Vichnevetsky, Akademie-Verlag, Berlin, 1988, pp. 213-216.
Sounderpandian J., Value Functions When Decision Criteria Are Not Totally Substitutable,
Operations Research 39, No.4 (1991), 592-600.
Soyibo A., Goal Programming Methods and Applications: A Survey, Journal of Information
and Optimization Sciences 6, No.3 (1985), 247-264.
Spronk J., Interactive Multifactorial Planning: State of the Art, Readings in Multiple Criteria
Decision Aid, Edited by C.A. Bana e Costa, Springer-Verlag, Berlin, Heidelberg, 1990,
pp. 512-534.
Stadler W., A Survey of Multicriteria Optimization or the Vector Maximum Problem, Part
I: 1776-1960, Journal of Optimization Theory and Applications 29, No.1 (1979), 1-52.
Stadler W. (Ed.), Multicriteria Optimization in Engineering and in the Sciences, Plenum
Press, New York, 1988a.
Stadler W., Fundamentals of Multicriteria Optimization, Multicriteria Optimization in En-
gineering and in the Sciences, Edited by W. Stadler, Plenum Press, New York, 1988b,
pp. 1-25.
Staib T., On Necessary and Sufficient Optimality Conditions for Multicriterial Optimization
Problems, ZOR - Methods and Models of Operations Research 35, No.3 (1991), 231-248.
Stam A., Lee Y.-R., Yu P.L., Value Functions and Preference Structures, Mathematics of
Multiobjective Optimization, Edited by P. Serafini, CISM Courses and Lectures 289,
1985, pp. 1-22.
Stam A., Kuula M., Cesar H., Transboundary Air Pollution in Europe: An Interactive Multi-
criteria Tradeoff Analysis, European Journal of Operational Research 56, No.2 (1992),
263-277.
Stancu-Minasian I.M., Stochastic Programming with Multiple Objective Functions, Math-
ematics and Its Applications (East European Series), D. Reidel Publishing Company;
Editura Academiei, 1984.
Statnikov R.B., Matusov J., Use of Pr-Nets for the Approximation of the Edgeworth-Pareto
Set in Multicriteria Optimization, Journal of Optimization Theory and Applications 91,
No.3 (1996), 543-560.
Sterna-Karwat A., Continuous Dependence of Solutions on a Parameter in a Scalarization
Method, Journal of Optimization Theory and Applications 55, No.3 (1987), 417-434.
Steuer R.E., Multiple Criteria Optimization: Theory, Computation, and Applications, John
Wiley & Sons, Inc., 1986.
Steuer R.E., The Tchebycheff Procedure of Interactive Multiple Objective Programming, Mul-
tiple Criteria Decision Making and Risk Analysis Using Microcomputers, Edited by B.
Karpak, S. Zionts, Springer-Verlag, Berlin, Heidelberg, 1989a, pp. 235-249.
Steuer R.E., Trends in Interactive Multiple Objective Programming, Methodology and Soft-
ware for Interactive Decision Support, Edited by A. Lewandowski, I. Stanchev, Lecture
Notes in Economics and Mathematical Systems 337, Springer-Verlag, 1989b, pp. 107-119.
Steuer R.E., Implementing the Tchebycheff Method in a Spreadsheet, Essays in Decision
Making: A Volume in Honour of Stanley Zionts, Edited by M.H. Karwan, J. Spronk,
J. Wallenius, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 93-103.
Steuer R.E., Choo E.-U., An Interactive Weighted Tchebycheff Procedure for Multiple Ob-
jective Programming, Mathematical Programming 26, No.3 (1983), 326-344.
Steuer R.E., Gardiner L.R., Interactive Multiple Objective Programming: Concepts, Current
Status, and Future Directions, Readings in Multiple Criteria Decision Aid, Edited by
C.A. Bana e Costa, Springer-Verlag, Berlin, Heidelberg, 1990, pp. 413-444.
Steuer R.E., Gardiner L.R., On the Computational Testing of Procedures for Interactive
Multiple Objective Linear Programming, Operations Research, Edited by G. Fandel, H.
Gehring, Springer-Verlag, Berlin, Heidelberg, 1991, pp. 121-131.
Steuer R.E., Sun M., The Parameter Space Investigation Method of Multiple Objective Non-
linear Programming: A Computational Investigation, Operations Research 43, No. 4
(1995), 641-648.
Steuer R.E., Whisman A.W., Toward the Consolidation of Interactive Multiple Objective
Programming Procedures, Large-Scale Modelling and Interactive Decision Analysis, Ed-
ited by G. Fandel, M. Grauer, A. Kurzhanski, A.P. Wierzbicki, Lecture Notes in Eco-
nomics and Mathematical Systems 273, Springer-Verlag, 1986, pp. 232-241.
Steuer R.E., Silverman J., Whisman A.W., A Combined Tchebycheff/Aspiration Criterion
Vector Interactive Multiobjective Programming Procedure, Management Science 39, No.
10 (1993), 1255-1260.
Steuer R.E., Gardiner L.R., Gray J., A Bibliographic Survey of the Activities and Interna-
tional Nature of Multiple Criteria Decision Making, Journal of Multi-Criteria Decision
Analysis 5, No.3 (1996), 195-217.
Stewart T.J., A Critical Survey on the Status of Multiple Criteria Decision Making Theory
and Practice, OMEGA 20, No. 5-6 (1992), 569-586.
Stewart T.J., Convergence and Validation of Interactive Methods in MCDM: Simulation
Studies, Essays in Decision Making: A Volume in Honour of Stanley Zionts, Edited by
M.H. Karwan, J. Spronk, J. Wallenius, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 7-
18.
Sultan A.M., Templeman A.B., Generation of Pareto Solutions by Entropy-Based Meth-
ods, Multi-Objective Programming and Goal Programming: Theories and Applications,
Edited by M. Tamiz, Lecture Notes in Economics and Mathematical Systems 432, Spring-
er-Verlag, Berlin, Heidelberg, 1996, pp. 164-195.
Sun M., Stam A., Steuer R.E., Solving Multiple Objective Programming Problems Using Feed-
Forward Artificial Neural Networks: The Interactive FFANN Procedure, Management
Science 42, No.6 (1996), 835-849.
Sunaga T., Mazeed M.A., Kondo E., A Penalty Function Formulation for Interactive Mul-
tiobjective Programming Problems, System Modelling and Optimization, Edited by M.
Iri, K. Yajima, Lecture Notes in Control and Information Sciences 113, Springer-Verlag,
1988, pp. 221-230.
Szidarovszky F., Szenteleki K., A Multiobjective Optimization Model for Wine Production,
Applied Mathematics and Computation 22, No. 2-3 (1987), 255-275.
Tabucanon M.T., Multiple Criteria Decision Making in Industry, Elsevier Science Publishers
B.V., Amsterdam, 1988.
Tamiz M., Jones D.F., Algorithmic Improvements to the Method of Martel and Aouni, Jour-
nal of the Operational Research Society 46, No.2 (1995), 254-257.
Tamiz M., Jones D.F., An Overview of Current Solution Methods and Modelling Practices
in Goal Programming, Multi-Objective Programming and Goal Programming: Theories
and Applications, Edited by M. Tamiz, Lecture Notes in Economics and Mathematical
Systems 432, Springer-Verlag, Berlin, Heidelberg, 1996, pp. 198-211.
Tamiz M., Jones D.F., Interactive Frameworks for Investigation of Goal Programming Mod-
els: Theory and Practice, Journal of Multi-Criteria Decision Analysis 6, No.1 (1997a),
52-60.
Tamiz M., Jones D.F., A General Interactive Goal Programming Algorithm, Multiple Crite-
ria Decision Making: Proceedings of the Twelfth International Conference, Hagen (Ger-
many), Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Mathematical
Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997b, pp. 433-444.
Tamiz M., Hasham R., Jones D.F., Hesni B., Fargher E.K., A Two Staged Goal Program-
ming Model for Portfolio Selection, Multi-Objective Programming and Goal Program-
ming: Theories and Applications, Edited by M. Tamiz, Lecture Notes in Economics and
Mathematical Systems 432, Springer-Verlag, Berlin, Heidelberg, 1996, pp. 286-299.
Tamura K., Arai S., On Proper and Improper Efficient Solutions of Optimal Problems with
Multicriteria, Journal of Optimization Theory and Applications 38, No.2 (1982), 191-
205.
Tan Y.S., Fraser N.M., The Modified Star Graph and the Petal Diagram: Two New Visual
Aids for Discrete Alternative Multicriteria Decision Making, Journal of Multi-Criteria
Decision Analysis 7, No.1 (1998), 20-33.
Tanino T., Sensitivity Analysis in Multiobjective Optimization, Journal of Optimization The-
ory and Applications 56, No.3 (1988a), 479-499.
Tanino T., Stability and Sensitivity Analysis in Convex Vector Optimization, SIAM Journal
on Control and Optimization 26, No.3 (1988b), 521-536.
Tanino T., Stability and Sensitivity Analysis in Multiobjective Nonlinear Programming, An-
nals of Operations Research 27, No. 1-4 (1990), 97-114.
Tanino T., Sawaragi Y., Stability of Nondominated Solutions in Multicriteria Decision-
Making, Journal of Optimization Theory and Applications 30, No.2 (1980), 229-253.
Tapia C.G., Murtagh B.A., The Use of Preference Criteria in Interactive Multiobjective
Mathematical Programming, Asia-Pacific Journal of Operational Research 6, No.2 (1989),
131-147.
Tapia C.G., Murtagh B.A., A Markovian Process in Interactive Multiobjective Decision-
Making, European Journal of Operational Research 57, No.3 (1992), 421-428.
Tarvainen K., On the Implementation of the Interactive Surrogate Worth Trade-Off (ISWT)
Method, Interactive Decision Analysis, Edited by M. Grauer, A.P. Wierzbicki, Lecture
Notes in Economics and Mathematical Systems 229, Springer-Verlag, Berlin, Heidelberg,
1984, pp. 154-161.
Tarvainen K., Duality Theory for Preferences in Multiobjective Decisionmaking, Journal of
Optimization Theory and Applications 88, No.1 (1996), 237-245.
Tecle A., Duckstein L., A Procedure for Selecting MCDM Techniques for Forest Resources
Management, Multiple Criteria Decision Making: Proceedings of the Ninth International
Conference: Theory and Applications in Business, Industry, and Government, Edited by
A. Goicoechea, L. Duckstein, S. Zionts, Springer-Verlag, New York, 1992, pp. 19-32.
Teghem J. Jr., Delhaye C., Kunsch P.L., An Interactive Decision Support System (IDSS)
for Multicriteria Decision Aid, Mathematical and Computer Modelling 12, No. 10-11
(1989), 1311-1320.
Tell R., Wallenius J., A Survey of Multiple-Criteria Decision Methods and Applications: Util-
ity Theory and Mathematical Programming, The Finnish Journal of Business Economics
I, No.1 (1979), 3-22.
Tenhuisen M.L., Wiecek M.M., On the Structure of the Non-Dominated Set for Bicriteria
Programmes, Journal of Multi-Criteria Decision Analysis 5, No.3 (1996), 232-243.
Thach P.T., Konno H., Yokota D., Dual Approach to Minimization on the Set of Pareto-
Optimal Solutions, Journal of Optimization Theory and Applications 88, No.3 (1996),
689-707.
Thore S., Nagurney A., Pan J., Generalized Goal Programming and Variational Inequalities,
Operations Research Letters 12, No.4 (1992), 217-226.
Törn A., A Sampling-Search-Clustering Approach for Solving Scalar (Local, Global) and
Vector Optimization Problems, Theory and Practice of Multiple Criteria Decision Mak-
ing, Edited by C. Carlsson, Y. Kochetkova, North-Holland Publishing Company, 1983,
pp. 119-141.
Udink ten Cate A., On the Determination of the Optimal Temperature for the Growth of an
Early Cucumber Crop in a Greenhouse, Plural Rationality and Interactive Decision Pro-
cesses, Edited by M. Grauer, M. Thompson, A.P. Wierzbicki, Lecture Notes in Economics
and Mathematical Systems 248, Springer-Verlag, Berlin, Heidelberg, 1985, pp. 311-318.
Vanderpooten D., The Use of Preference Information in Multiple Criteria Interactive Pro-
cedures, Improving Decision Making in Organisations, Edited by A.G. Lockett, G. Islei,
Lecture Notes in Economics and Mathematical Systems 335, Springer-Verlag, Berlin,
Heidelberg, 1989a, pp. 390-399.
Vanderpooten D., The Interactive Approach in MCDA: A Technical Framework and Some
Basic Conceptions, Mathematical and Computer Modelling 12, No. 10-11 (1989b), 1213-
1220.
Vanderpooten D., Multiobjective Programming: Basic Concepts and Approaches, Stochastic
versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncer-
tainty, Edited by R. Slowinski, J. Teghem, Kluwer Academic Publishers, Dordrecht,
1990, pp. 7-22.
Vanderpooten D., Three Basic Conceptions Underlying Multiple Criteria Interactive Proce-
dures, Multiple Criteria Decision Making: Proceedings of the Ninth International Con-
ference: Theory and Applications in Business, Industry, and Government, Edited by A.
Goicoechea, L. Duckstein, S. Zionts, Springer-Verlag, New York, 1992, pp. 441-448.
Vanderpooten D., Vincke P., Description and Analysis of Some Representative Interactive
Multicriteria Procedures, Mathematical and Computer Modelling 12, No. 10-11 (1989),
1221-1238.
Vassilev V., Sgurev V., Atanassov A., Deianov A., Kichovich M., Kirilov L., Djambov V.,
Vachkov G., Software Product for Multiobjective Nonlinear Programming: MONP-16,
Version 1.1, General Description; User's Guide, Software Products and Systems Corpo-
ration, Institute of Industrial Cybernetics and Robotics, Sofia, 1990.
Verkama M., Heiskanen P., Comment on a Decision Support Approach for Negotiation: Soft-
ware vs. Methodology, European Journal of Operational Research 96, No.1 (1996), 202-
204.
Vessey I., Cognitive Fit: A Theory-Based Analysis of the Graphs versus Tables Literature,
Decision Sciences 22 (1991), 219-240.
Vetschera R., Feedback-Oriented Group Decision Support in a Reference Point Framework,
Multiobjective Problems of Mathematical Programming, Edited by A. Lewandowski, V.
Volkovich, Lecture Notes in Economics and Mathematical Systems 351, Springer-Verlag,
1991a, pp. 309-314.
Vetschera R., A Note on Scalarizing Functions under Changing Sets of Criteria, European
Journal of Operational Research 52, No.1 (1991b), 113-118.
Vetschera R., A Preference-Preserving Projection Technique for MCDM, European Journal
of Operational Research 61, No. 1-2 (1992), 195-203.
Vincke P., Multicriteria Decision-Aid, John Wiley & Sons, Inc., Chichester, 1992.
Wallenius J., Comparative Evaluation of Some Interactive Approaches to Multicriterion Op-
timization, Management Science 21, No. 12 (1975), 1387-1396.
Wallenius J., Zionts S., A Research Project on Multicriteria Decision Making, Conflicting
Objectives in Decision, Edited by D.E. Bell, R. Keeney, H. Raiffa, John Wiley & Sons,
Inc., New York, 1977.
Wan Y.-H., On Local Pareto Optima, Journal of Mathematical Economics 2, No.1 (1975),
35-42.
Wang S., Lagrange Conditions in Nonsmooth and Multiobjective Mathematical Programming,
Mathematics in Economics 1 (1984), 183-193.
Wang S., Algorithms for Multiobjective and Nonsmooth Optimization, Methods of Oper-
ations Research 58, Edited by P. Kleinschmidt, F.J. Radermacher, W. Schweitzer, H.
Wildermann, Athenäum Verlag, Frankfurt am Main, 1989, pp. 131-142.
Wang S., Second-Order Necessary and Sufficient Conditions in Multiobjective Programming,
Numerical Functional Analysis and Application 12, No. 1-2 (1991), 237-252.
Wang S., An Interactive Method for Multicriteria Decision Making, Optimization: Tech-
niques and Applications, vol. 1, Proceedings of the International Conference (ICOTA),
Edited by K.H. Phua, C.M. Wang, W.Y. Yeong, T.Y. Leong, H.T. Loh, K.C. Tan, F.S.
Chou, World Scientific Publishing Co., Inc., 1992, pp. 307-316.
Wang S., Li Z., Scalarization and Lagrange Duality in Multiobjective Optimization, Opti-
mization 26, No. 3-4 (1992), 315-324.
Wang S., Zhou X., Multiobjective Optimization in Spillway Profile, Approximation, Opti-
mization and Computing: Theory and Applications, Edited by A.G. Law, C.L. Wang,
Elsevier Science Publishers B.V., 1990, pp. 309-312.
Warburton A., Quasiconcave Vector Maximization: Connectedness of the Sets of Pareto-
Optimal and Weak Pareto-Optimal Alternatives, Journal of Optimization Theory and
Applications 40, No.4 (1983), 537-557.
Weber M., Decision Making with Incomplete Information, European Journal of Operational
Research 28, No.1 (1987), 44-57.
Weck M., Fortsch F., Application of Multicriteria Optimization to Structural Systems, Sys-
tem Modelling and Optimization, Edited by M. Iri, K. Yajima, Lecture Notes in Control
and Information Sciences 113, Springer-Verlag, 1988, pp. 471-483.
Weidner P., On the Characterization of Efficient Points by Means of Monotone Functionals,
Optimization 19, No.1 (1988), 53-69.
Weidner P., Complete Efficiency and Interdependencies Between Objective Functions in Vec-
tor Optimization, ZOR - Methods and Models of Operations Research 34, No.2 (1990),
91-115.
Weir T., Proper Efficiency and Duality for Vector Valued Optimization Problems, Journal
of the Australian Mathematical Society, Series A 43, No.1 (1987), 21-34.
Weir T., On Efficiency, Proper Efficiency and Duality in Multiobjective Programming, Asia-
Pacific Journal of Operational Research 7, No.1 (1990), 46-54.
Weistroffer H.R., Multiple Criteria Decision Making with Interactive Over-Achievement Pro-
gramming, Operations Research Letters 1, No.6 (1982), 241-245.
Weistroffer H.R., An Interactive Goal-Programming Method for Non-Linear Multiple-Cri-
teria Decision-Making Problems, Computers & Operations Research 10, No.4 (1983),
311-320.
Weistroffer H.R., A Combined Over- and Under-Achievement Programming Approach to
Multiple Objective Decision-Making, Large Scale Systems 7, No.1 (1984), 47-58.
Weistroffer H.R., Careful Usage of Pessimistic Values is Needed in Multiple Objective Opti-
mization, Operations Research Letters 4, No.1 (1985), 23-25.
Weistroffer H.R., A Flexible Model for Multi-Objective Optimization, Recent Advances and
Historical Developments of Vector Optimization, Edited by J. Jahn, W. Krabs, Lecture
Notes in Economics and Mathematical Systems 294, Springer-Verlag, Berlin, Heidelberg,
1987, pp. 311-316.
Weistroffer H.R., Narula S., The Current State of Nonlinear Multiple Criteria Decision
Making, Operations Research, Edited by G. Fandel, H. Gehring, Springer-Verlag, Berlin,
Heidelberg, 1991, pp. 109-119.
Weistroffer H.R., Narula S.C., The State of Multiple Criteria Decision Support Software,
Annals of Operations Research 72 (1997), 299-313.
Wendell R.E., Lee D.N., Efficiency in Multiple Objective Optimization Problems, Mathemat-
ical Programming 12, No.3 (1977), 406-414.
White D.J., Concepts of Proper Efficiency, European Journal of Operational Research 13,
No.2 (1983a), 180-188.
White D.J., A Selection of Multi-Objective Interactive Programming Methods, Multi-Ob-
jective Decision Making, Edited by S. French, R. Hartley, L.C. Thomas, D.J. White,
Academic Press, 1983b, pp. 99-126.
White D.J., A Bibliography on the Applications of Mathematical Programming Multiple-
Objective Methods, Journal of the Operational Research Society 41, No.8 (1990), 669-
691.
Wierzbicki A.P., Basic Properties of Scalarization Functionals for Multiobjective Optimiza-
tion, Mathematische Operationsforschung und Statistik, Ser. Optimization 8, No. 1
(1977), 55-60.
Youness E.A., A Direct Approach for Finding All Efficient Solutions for Multiobjective Pro-
gramming Problems, European Journal of Operational Research 81, No.2 (1995), 440-
443.
Yu P.L., A Class of Solutions for Group Decision Problems, Management Science 19, No.8
(1973), 936-946.
Yu P.L., Cone Convexity, Cone Extreme Points, and Nondominated Solutions in Decision
Problems with Multiobjectives, Journal of Optimization Theory and Applications 14, No.
3 (1974), 319-377.
Yu P.L., Multiple-Criteria Decision Making: Concepts, Techniques, and Extensions, Plenum
Press, New York, 1985.
Yu P.L., Habitual Domains, Operations Research 39, No.6 (1991), 869-876.
Yu P.L., Toward Expanding and Enriching Domains of Thinking and Application, Multiple
Criteria Decision Making - Proceedings of the Tenth International Conference: Expand
and Enrich the Domains of Thinking and Application, Edited by G.H. Tzeng, H.F. Wang,
U.P. Wen, P.L. Yu, Springer-Verlag, New York, 1994, pp. 1-9.
Yu P.-L., Habitual Domains: Freeing Yourself from the Limits on Your Life, Highwater
Editions, Shawnee Mission, Kansas, 1995.
Yu P.-L., Liu L., A Foundation of Principles for Expanding Habitual Domains, Essays in De-
cision Making: A Volume in Honour of Stanley Zionts, Edited by M.H. Karwan, J. Spronk,
J. Wallenius, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 185-200.
Zadeh L., Optimality and Non-Scalar-Valued Performance Criteria, IEEE Transactions on
Automatic Control 8 (1963), 59-60.
Zangwill W.I., Nonlinear Programming: A Unified Approach, Prentice-Hall, Inc., 1969.
Zeleny M., Compromise Programming, Multiple Criteria Decision Making, Edited by J.L.
Cochrane, M. Zeleny, University of South Carolina Press, Columbia, South Carolina,
1973, pp. 262-301.
Zeleny M., Linear Multiobjective Programming, Lecture Notes in Economics and Mathemat-
ical Systems 95, Springer-Verlag, Berlin, Heidelberg, 1974.
Zeleny M., The Theory of Displaced Ideal, Multiple Criteria Decision Making Kyoto 1975,
Edited by M. Zeleny, Lecture Notes in Economics and Mathematical Systems 123,
Springer-Verlag, Berlin, Heidelberg, 1976, pp. 153-206.
Zeleny M., Multiple Criteria Decision Making, McGraw-Hill, Inc., 1982.
Zeleny M., Stable Patterns from Decision-Producing Networks: New Interfaces of DSS and
MCDM, MCDM WorldScan 3, No. 2-3 (1989), 6-7.
Zeleny M., Towards the Tradeoffs-Free Optimality in MCDM, Multicriteria Analysis, Edited
by J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997, pp. 596-601.
Zhou X., Mokhtarian F.S., Zlobec S., A Simple Constraint Qualification in Convex Program-
ming, Mathematical Programming 61, No.3 (1993), 385-397.
Zionts S., Methods for Solving Management Problems Involving Multiple Objectives, Multiple
Criteria Decision Making Theory and Application, Edited by G. Fandel, T. Gal, Lecture
Notes in Economics and Mathematical Systems 177, Springer-Verlag, Berlin, Heidelberg,
1980, pp. 540-558.
Zionts S., Multiple Criteria Mathematical Programming: An Updated Overview and Several
Approaches, Multiple Criteria Decision Making and Risk Analysis Using Microcomputers,
Edited by B. Karpak, S. Zionts, Springer-Verlag, Berlin, Heidelberg, 1989, pp. 7-60.
Zionts S., The State of Multiple Criteria Decision Making: Past, Present, and Future, Multi-
ple Criteria Decision Making: Proceedings of the Ninth International Conference: Theory
and Applications in Business, Industry, and Government, Edited by A. Goicoechea, L.
Duckstein, S. Zionts, Springer-Verlag, New York, 1992, pp. 33-43.
Zionts S., Some Thoughts on MCDM: Myths and Ideas, Multicriteria Analysis, Edited by
J. Climaco, Springer-Verlag, Berlin, Heidelberg, 1997a, pp. 602-607.
Zionts S., Decision Making: Some Experiences, Myths and Observations, Multiple Criteria
Decision Making: Proceedings of the Twelfth International Conference, Hagen (Ger-
many), Edited by G. Fandel, T. Gal, Lecture Notes in Economics and Mathematical
Systems 448, Springer-Verlag, Berlin, Heidelberg, 1997b, pp. 233-241.
Zionts S., Wallenius J., An Interactive Programming Method for Solving the Multiple Criteria
Problem, Management Science 22, No.6 (1976), 652-663.
Zionts S., Wallenius J., An Interactive Multiple Objective Linear Programming Method for a
Class of Underlying Nonlinear Utility Functions, Management Science 29, No.5 (1983),
519-529.
Zlobec S., Two Characterizations of Pareto Minima in Convex Multicriteria Optimization,
Aplikace Matematiky 29, No.5 (1984), 342-349.
Zubiri J., Scalarization of Vector Optimization Problems via Generalized Chebyshev Norm,
Mathematical Analysis and Systems Theory V., Edited by J. Szép, P. Tallós, Department
of Mathematics, Karl Marx University of Economics, Budapest, 1988, pp. 39-42.
INDEX