Chapter 1
INTERACTIVE NONLINEAR
MULTIOBJECTIVE OPTIMIZATION METHODS
Dmitry Podkopaev
University of Jyväskylä
Department of Biological and Environmental Science
P.O. Box 35 (YA), FI-40014 University of Jyväskylä, Finland
[email protected]
1. Introduction
Nonlinear multiobjective optimization means multiple criteria decision
making involving nonlinear functions of (continuous) decision variables.
In these problems, the best possible compromise, that is, a Pareto optimal
solution, is to be found among an (infinite) number of alternatives represented
by decision variables restricted by constraint functions. Thus,
enumerating the solutions is impossible.
Solving multiobjective optimization problems usually requires the par-
ticipation of a human decision maker who is supposed to have insight
into the problem and who can express preference relations between alter-
native solutions or objective functions or some other type of preference
information. Multiobjective optimization methods can be divided into
four classes according to the role of the decision maker in the solution
process [79, 136]. If the decision maker is not involved, we use meth-
ods where no articulation of preference information is used, in other
words, no-preference methods. If the decision maker expresses prefer-
ence information after the solution process, we speak about a posteriori
methods whereas a priori methods require articulation of preference in-
formation before the solution process. The most extensive class is inter-
active methods, where the decision maker specifies preference informa-
tion progressively during the solution process. Here we concentrate on
this last-mentioned class and introduce several examples of interactive
methods.
In the literature, interactive methods have proven useful for various
reasons. They have been found efficient from both computational and
cognitive points of view. Because the decision maker directs the solution
process with one’s preferences, only those Pareto optimal solutions that
are interesting to her or him need to be calculated. This means savings
in computational cost when compared to a situation where a big set of
Pareto optimal solutions should be calculated. On the other hand, the
amount of new information generated per iteration is limited and, in this
way, the decision maker does not need to compare too many solutions
at a time. An important advantage of interactive methods is learning.
Once the decision maker has provided preferences, (s)he can see from
the Pareto optimal solutions generated how attainable or feasible the
preferences were. In this way, the decision maker gains insight into the
problem. (S)he learns about the interdependencies between the objective
functions and also about her or his own preferences. The decision maker can
also change her or his mind after the learning, if so desired.
Many real-world phenomena behave in a nonlinear way. Besides, lin-
ear problems can always be solved using methods created for nonlinear
problems but not vice versa. For these reasons, we here devote ourselves
to nonlinear problems. We assume that all the information involved is
deterministic and that we have a single decision maker.
In this presentation, we concentrate on general-purpose interactive
methods and, thus, methods tailored for some particular problem type
are not included. In recent years, interactive approaches have been de-
veloped in the field of evolutionary multiobjective optimization (see, for
example, [15]), but we do not consider them here. The literature survey
covering the years since 2000 has been limited to journal articles (in English).
We describe in more detail methods with published applications.
2. Concepts
Let us begin by introducing several concepts and definitions. We study
multiobjective optimization problems of the form

minimize   {f_1(x), . . . , f_k(x)}
subject to x ∈ S,        (1.1)

where we have k (≥ 2) conflicting objective functions f_i to be minimized simultaneously
and the decision vectors x belong to the (nonempty) feasible region S. The
objective vectors z = f(x) = (f_1(x), . . . , f_k(x))^T form the feasible objective set Z.
A feasible solution and the corresponding objective vector are called Pareto optimal if
none of the objective function values can be improved without impairing at least one of
the others. Correspondingly, they are called weakly Pareto optimal if there does not exist
any other objective vector for which all the components are smaller.
Weakly Pareto optimal solutions are sometimes computationally easier
to generate than Pareto optimal solutions. Thus, they have relevance
from a technical point of view. On the other hand, a vector is properly
Pareto optimal if unbounded trade-offs are not allowed. For a collection
of different definitions of proper Pareto optimality, see, for example,
[136].
Multiobjective optimization problems are usually solved by scalar-
ization which means that the problem is converted into one or a fam-
ily of single (scalar) objective optimization problems. This produces a
new scalarized problem with a real-valued objective function, possibly
depending on some parameters. The resulting new problem must be
solved with a single objective optimization method which is appropriate
to the characteristics of the problem in question (taking into account,
for example, differentiability and convexity). When scalarization is done
properly, it can be guaranteed that the solution obtained is Pareto optimal
for the original multiobjective optimization problem. For further
details see, for example, [136, 206].
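To make the idea of scalarization concrete, the following minimal sketch (not part of the original text) converts a small bi-objective problem into a single objective one with a weighted sum and solves it with SciPy; the toy objective functions, the weights and the box constraints standing in for S are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem used only for illustration (an assumption, not from the text):
# f1(x) = x1^2 + x2^2 and f2(x) = (x1 - 2)^2 + (x2 - 1)^2, with box constraints as S.
def f(x):
    return np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2 + (x[1] - 1.0)**2])

def weighted_sum(x, w):
    # Scalarization: the objective vector is replaced by a single real value.
    return float(np.dot(w, f(x)))

S = [(-5.0, 5.0), (-5.0, 5.0)]   # feasible region (simple bounds here)
w = np.array([0.7, 0.3])         # strictly positive weights

res = minimize(weighted_sum, x0=np.zeros(2), args=(w,), bounds=S)
print("decision vector:", res.x, "objective vector:", f(res.x))
```

With strictly positive weights, the solution of the scalarized problem is Pareto optimal for the toy problem; for nonconvex problems, however, the weighted sum cannot reach all Pareto optimal solutions, which is one reason why scalarizations of the max-min type, such as (1.4) and (1.5) below, are popular in interactive methods.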
Interactive methods differ from each other by the way the problem is
transformed into a single objective optimization problem, by the form
in which information is provided by the decision maker and by the form
in which information is given to the decision maker at each iteration of
the solution process.
One way of eliciting the decision maker’s opinions is to ask for satisfactory
or desirable objective function values. These values are called aspiration
levels and denoted by z̄_i, i = 1, . . . , k. They form a vector z̄ ∈ R^k
called a reference point.
The ranges of the objective functions in the set of Pareto optimal
solutions give valuable information to the decision maker about the pos-
sibilities and restrictions of the problem (assuming the objective func-
tions are bounded over S). The components of the ideal objective vector
z⋆ ∈ Rk are the individual optima of the objective functions. This vector
represents the lower bounds of the Pareto optimal set. (In nonconvex
problems, we need a global solver for minimizing the k functions.) Note
that we sometimes need a vector that is strictly better than the ideal
objective vector. This vector is called a utopian objective vector and
denoted by z⋆⋆ .
The upper bounds of the Pareto optimal set, that is, the components
of a nadir objective vector znad , are much more difficult to obtain. Actu-
ally, there is no constructive method for calculating the nadir objective
vector for nonlinear problems. However, a rough estimate can be ob-
tained by taking the solutions where each objective function attains its
lowest value and calculating the values of the other objective functions
at these solutions. The highest value obtained for each objective can be
selected as the estimated component of znad. This approach was originally
proposed in [10] and later named the pay-off table method. Some approaches for
estimating the nadir objective vector for nonlinear multiobjective opti-
mization are summarized in [136]. Examples of latest approaches include
[37, 38].
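The pay-off table idea can be sketched in a few lines; the sketch below assumes a hypothetical bi-objective problem and uses SciPy as the single objective solver. Each objective is minimized individually, the other objectives are evaluated at the resulting solutions, and the column-wise maxima give the nadir estimate. As noted above, the result is only a rough estimate, and the ideal values are exact only if the solver finds global optima.

```python
import numpy as np
from scipy.optimize import minimize

def payoff_table(objectives, x0, bounds):
    """Estimate the ideal objective vector and the nadir objective vector
    from a pay-off table (rough estimate only, as discussed in the text)."""
    k = len(objectives)
    table = np.empty((k, k))
    for i, fi in enumerate(objectives):
        # Minimize the i-th objective function individually over the feasible region.
        res = minimize(fi, x0, bounds=bounds)
        # Row i: values of all objective functions at the minimizer of f_i.
        table[i, :] = [fj(res.x) for fj in objectives]
    ideal = table.diagonal().copy()       # individual optima (ideal objective vector)
    nadir_estimate = table.max(axis=0)    # highest value obtained in each column
    return ideal, nadir_estimate

# Hypothetical objective functions, starting point and bounds (assumptions of the sketch).
f1 = lambda x: x[0]**2 + x[1]**2
f2 = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
ideal, nadir_est = payoff_table([f1, f2], x0=np.zeros(2), bounds=[(-5, 5), (-5, 5)])
print("ideal:", ideal, "estimated nadir:", nadir_est)
```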
It is sometimes assumed that the decision maker makes decisions on
the basis of an underlying value function U : Rk → R representing
her or his preferences among the objective vectors [92]. Even though
value functions are seldom explicitly known, they have been important
in the development of multiobjective optimization methods and as a
theoretical background. Thus, the value function is sometimes presumed
to be known implicitly.
The value function is usually assumed to be strongly decreasing. In
other words, the preference of the decision maker is assumed to increase
if the value of one objective function decreases while all the other
objective values remain unchanged. In brief, we can say that less is
preferred to more. In that case, the maximal solution of U is assured
to be Pareto optimal. Note that regardless of the existence of a value
function, in what follows, we shall assume that lower objective function
values are preferred to higher, that is, less is preferred to more by the
decision maker.
An alternative to the idea of maximizing some value function is sat-
isficing decision making [206]. In this approach, the decision maker
tries to achieve certain aspirations. If the aspirations are achieved, the
solution is called a satisficing solution.
The main steps of a general interactive method are the following: (1) initialize
(for example, calculate ideal and nadir objective vectors), (2) generate a Pareto
optimal starting point, (3) ask the decision maker for preference information,
(4) generate one or several new Pareto optimal solutions according to the
preferences and show them, possibly with some other information, to the decision
maker, (5) if several solutions were generated, ask the decision maker to select
the best solution so far, and (6) stop, if the decision maker wants to.
Otherwise, go to step (3).
Three main stopping criteria can be identified in interactive methods.
In the best situation, the decision maker finds a desirable solution and
wants to stop. Alternatively, the decision maker gets tired and stops or
some algorithmic stopping rule is fulfilled. In the last-mentioned case,
one must check that the decision maker agrees to stop.
As a matter of fact, as stated in [153], solving a multiobjective optimization
problem with an interactive method can be regarded as a
constructive process where, while learning, the decision maker builds a
conviction of what is possible (that is, what kind of solutions are available
and attainable) and confronts this knowledge with her or his
preferences, which also evolve. Based on this understanding, in interactive
methods we should pay attention to psychological convergence, rather
than to mathematical convergence (like, for example, optimizing some
value function).
Sometimes, two different phases can be identified in interactive solu-
tion processes: learning phase and decision phase [153]. In the learning
phase, the decision maker learns about the problem and gains under-
standing of what kind of solutions are attainable whereas the most pre-
ferred solution is found in the decision phase in the region identified in
the first phase. Naturally, the two phases can also be used iteratively.
In what follows, we present several interactive methods. The idea is
to describe a collection of methods based on different approaches. In
addition, plenty of references are included. Note that although all the
calculations take place in the decision variable space, we mostly speak
about the corresponding objective vectors and refer to both as solutions
since the space is apparent from the context.
When presenting the methods we apply the classification given in
[125, 201] according to the type of preference information that the meth-
ods utilize. This is an important aspect because a reliable and understandable
way of extracting preference information from the decision
maker is essential for the success of applying interactive methods. The
decision maker must feel in control and must understand the questions
posed. Otherwise, the answers cannot be relied on in the solution
process. It is also important to pay attention to the cognitive load set on
the decision maker, as discussed in [112]. Applying the method should
not set too much cognitive load on the decision maker.
In the first class, the decision maker specifies aspiration levels (in
other words, a reference point) representing desirable objective function
values. In the second class, the decision maker provides a classification
of the objective functions.
The reference point method is very simple. Before the solution pro-
cess starts, some information is given to the decision maker about the
problem. If possible, the ideal objective vector and the (approximated)
nadir objective vector are presented. Another possibility is to minimize
and maximize the objective functions individually in the feasible region
(if it is bounded). Naturally, the maximized objective function values
do not typically represent components of the nadir objective vector but
they can give some information to the decision maker in any case.
The basic steps of the reference point algorithm are the following:
1 Select the achievement function. Present information about the
problem to the decision maker. Set h = 1.
reference point. Then the decision maker specifies a new reference point
and so on.
The general idea is to maximize the minimum weighted deviation from
the nadir objective vector. The scalarized problem to be solved is
maximize   min_{i=1,...,k} [ (z_i^nad − f_i(x)) / (z_i^nad − z̄_i^h) ]        (1.4)
subject to x ∈ S.
Notice that the aspiration levels have to be strictly lower than the com-
ponents of the nadir objective vector.
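A minimal sketch of solving the scalarized problem (1.4) is shown below; the toy objective functions, the nadir components and the aspiration levels are assumptions made only for illustration, and since SciPy minimizes, the max-min objective is negated. In practice the nonsmooth min term would typically be removed with an auxiliary variable and extra constraints, or a nondifferentiable solver would be used.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  # hypothetical bi-objective problem (an assumption for this sketch)
    return np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2 + (x[1] - 1.0)**2])

z_nad = np.array([10.0, 10.0])   # (estimated) nadir objective vector, assumed known
z_bar = np.array([1.0, 2.0])     # aspiration levels given by the decision maker
                                 # (strictly below the nadir components, as required)

def guess_objective(x):
    # Problem (1.4): maximize the minimum weighted deviation from the nadir,
    # written here as minimizing its negative.
    return -np.min((z_nad - f(x)) / (z_nad - z_bar))

res = minimize(guess_objective, x0=np.zeros(2), bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print("solution:", res.x, "objective vector:", f(res.x))
```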
minimize   max_{i=1,...,k} [ w_i (f_i(x) − z̄_i^h) ] + ρ Σ_{i=1}^{k} (f_i(x) − z̄_i^h)        (1.5)
subject to x ∈ S,
lighted part of the Pareto optimal set changes if the location of the
spotlight, that is, the reference point or the point of interest in the
Pareto optimal set are changed.
In the light beam search, the decision maker specifies reference points,
compares alternatives and affects the set of alternatives in different ways.
Specifying different thresholds may be demanding for the decision maker.
Note, however, that the thresholds are not constant but can be altered at
any time. The developers of the method point out that it may be compu-
tationally rather demanding to find the exact characteristic neighbours
in a general case. It is, however, noteworthy that the neighbours can be
generated in parallel.
The light beam search is an ad hoc method because a value function
could not directly determine new reference points. It could, however,
be used in comparing alternatives. Remember that the thresholds are
important here and they must come from the decision maker.
A modification of the method is described in [260].
where 0 ≤ α < 1 is the step-size in the reference direction, z̄_i^h < z_i^h for
i ∈ I^< and ε_i^h > z_i^h for i ∈ I^>.
objective vector is used in the reference point and in the class I^⋄, the
component of the nadir objective vector is used. In this way, we can
get a k-dimensional reference point and can solve reference point based
scalarized problems. In the synchronous NIMBUS, the problems (1.4)
of GUESS, (1.3) of the reference point method and (1.8) of the STOM
method are used.
The decision maker can also ask for intermediate solutions between
any two Pareto optimal solutions xh and x̂h to be generated. This
means that we calculate a search direction dh = x̂h − xh and provide
more solutions by taking steps of different sizes in this direction. In other
words, we generate P − 1 new vectors f(x^h + t_j d^h), j = 2, . . . , P − 1,
where t_j = (j − 1)/(P − 1). Their Pareto optimal counterparts (by setting each of
the new vectors at a time as a reference point for (1.3)) are presented to
the decision maker, who then selects the most satisfying solution among
the alternatives.
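The stepping part of this idea is easy to sketch; in the code below the two decision vectors and P are placeholder values, and the projection of each intermediate vector onto the Pareto optimal set (solving (1.3) with it as a reference point) is deliberately left out because it requires the scalarized problem and a single objective solver.

```python
import numpy as np

def intermediate_points(x_h, x_hat, P):
    """Intermediate vectors between two solutions x^h and x^hat along d^h = x^hat - x^h,
    with steps t_j = (j - 1)/(P - 1) for j = 2, ..., P - 1 as described in the text."""
    d = x_hat - x_h
    return [x_h + (j - 1.0) / (P - 1.0) * d for j in range(2, P)]

# Hypothetical Pareto optimal decision vectors and number of steps (assumptions).
x_h, x_hat = np.array([0.0, 0.0]), np.array([2.0, 1.0])
for x in intermediate_points(x_h, x_hat, P=6):
    # In synchronous NIMBUS, f(x) would be used as a reference point in problem (1.3)
    # to obtain a Pareto optimal counterpart shown to the decision maker.
    print(x)
```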
The NIMBUS algorithm is given below. The solution process stops
if the decision maker does not want to improve any objective function
value or is not willing to impair any objective function value.
We denote the set of saved solutions by A. At the beginning, we
set A = ∅. The starting point of the solution process can come from
the decision maker or it can be some neutral compromise [261] between
the objectives. The nadir and utopian objective vectors must be
calculated or estimated before starting the solution process.
The main steps of the synchronous NIMBUS algorithm are the fol-
lowing.
5 If the decision maker wants to save one or more of the new solutions
to A, include it/them in A.
8 Ask the decision maker to choose the most preferred one among
the new and/or the intermediate solutions or the solutions in A.
Denote it as the current Pareto optimal solution. If the decision
maker wants to continue, go to step 2. Otherwise, stop.
different simulation and modelling tools like Matlab and GAMS. Sev-
eral local and global single objective optimization methods and their
hybrids are available. It is also possible to utilize, for example, the opti-
mization methods of GAMS. IND-NIMBUS has different tools for sup-
porting graphical comparison of selected solutions and it also contains
implementations of the Pareto Navigator method and the NAUTILUS
method (see Subsections 8.2 and 6.2, respectively).
Applications and modifications of the NIMBUS method can be found
in [47, 66, 67, 68, 71, 72, 114, 115, 138, 142, 145, 146, 148, 151, 152, 202,
209, 218].
Problem (1.12) is solved more than P times so that solutions very close
to each other do not have to be presented to the decision maker. On the
other hand, the predetermined number of iterations is not necessarily
conclusive. The decision maker can stop iterating when (s)he obtains a
satisfactory solution or continue the solution process longer if necessary.
In this method, the decision maker is only asked to compare Pareto
optimal objective vectors. The number of these alternatives and the
number of objective functions affect the ease of the comparison. The
personal capabilities of the decision maker are also important. Note
that some consistency is required from the decision maker because the
discarded parts of the weighting vector space cannot be restored.
It must be mentioned that a great deal of calculation is needed in the
method. That is why it may not be applicable for large and complex
problems. However, parallel computing can be utilized when generating
the alternatives.
The Chebyshev method is a non ad hoc method. It is easy to compare
the alternative solutions with the help of a value function.
Applications and modifications of the Chebyshev method are given in
[2, 87, 93, 126, 185, 192, 208, 221, 230, 265].
may be more satisfied with a given solution if the previous one was very
undesirable and this lays the foundation of the NAUTILUS method.
The method utilizes the scalarized problem (1.3) of the reference point
method. However, unlike in other methods utilizing this problem, where
the weights are kept unaltered during the whole solution process and their
purpose is mainly to normalize the different ranges of the objectives, in
NAUTILUS the weights have a different role, as proposed in [124]. In NAUTILUS, the
weights are varied to get different Pareto optimal solutions and some
preference information is included in the weights. As mentioned earlier,
the optimal solution of problem (1.3) is assured to be Pareto optimal for
any reference point (see, for example, [136]).
As said, the NAUTILUS method starts from the nadir objective vec-
tor and at every iteration the decision maker gets a solution where all
objective function values improve from the previous iteration. Thus,
only the solution of the last iteration is Pareto optimal. To get started,
the decision maker is asked to give the number of iterations (s)he plans
to carry out, denoted by itn. This is an initial estimate and can be
changed at any time.
As before, we denote by zh the objective vector corresponding to
the iteration h. We set z0 = znad . Therefore, z0 (except in trivial
problems) is not Pareto optimal. Furthermore, we denote by ith the
number of iterations left (including iteration h). Thus, it1 = itn. At
each iteration, the range of reachable values that each objective function
can have without impairment in any other objective function (in this
and further iterations) will shrink. Lower and upper bounds on these
reachable values will be calculated when possible. For iteration h, we
denote by zh,lo = (z1h,lo , . . . , zkh,lo )T and zh,up = (z1h,up , . . . , zkh,up )T these
lower and upper bounds, respectively. Initially, z1,lo = z⋆ and z1,up =
znad . This information can be regarded as an actualization of the pay-
off table (see, for example, [136]) indicating new ideal and nadir values
at each iteration, thus informing the decision maker of what values are
achievable for each objective function.
For iteration h − 1, the objective vector zh−1 = (z1h−1 , . . . , zkh−1 )T is
shown to the decision maker, who has two possibilities to provide her or
his preference information:
1 Ranking the objective functions according to the relative impor-
tance of improving current objective function values. Here the
decision maker is not asked to give any global preference ranking
of the objectives, but the local importance of improving each of
the current objective function values. (S)he is asked to assign ob-
jective functions to classes in an increasing order of importance
for improving the corresponding objective value zih−1 . With this
feasible objective set Z for problem (1.1) or there is some Pareto optimal
objective vector where each objective function has a better value. On
the other hand, each objective vector zh produced has better objective
function values than the corresponding values in all previous iterations.
In addition, at each iteration, a part of the Pareto optimal set is elim-
inated from consideration in the sense that it is not reachable unless a
step backwards is taken.
Vectors zh,lo providing bounds for the objective values that can be
attained at the next iteration can be calculated by solving k problems of
the ε-constraint method so that each objective function is optimized in
turn and the upper bounds for the other objective functions are taken
from the corresponding components of zh−1 .
Thus, the attainable values of z^h are bounded in the following way:
z_i^h ∈ [z_i^{h,lo}, z_i^{h−1}] (i = 1, . . . , k). By denoting z^{h,up} = z^{h−1}, we have

z_i^h ∈ [z_i^{h,lo}, z_i^{h,up}]  (i = 1, . . . , k).        (1.16)
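A sketch of computing the lower bounds z^{h,lo} by the k ε-constraint problems described above is given next; the toy objective functions, the previous iteration point and the solver settings are assumptions of the sketch. Each subproblem minimizes one objective while constraining the others by the corresponding components of z^{h−1}; in the nonconvex case each of these problems may need a global solver.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f(x):  # hypothetical bi-objective problem (an assumption for this sketch)
    return np.array([x[0]**2 + x[1]**2, (x[0] - 2.0)**2 + (x[1] - 1.0)**2])

def nautilus_lower_bounds(z_prev, x0, bounds, k=2):
    """z^{h,lo}: for each i, minimize f_i subject to f_j(x) <= z_prev[j] for all j != i."""
    z_lo = np.empty(k)
    for i in range(k):
        others = np.array([j for j in range(k) if j != i])
        # epsilon-constraint problem for objective i
        cons = NonlinearConstraint(lambda x, o=others: f(x)[o], -np.inf, z_prev[others])
        res = minimize(lambda x, i=i: f(x)[i], x0, bounds=bounds, constraints=[cons])
        z_lo[i] = res.fun
    return z_lo

z_prev = np.array([6.0, 4.0])   # current (not yet Pareto optimal) iteration point z^{h-1}
print(nautilus_lower_bounds(z_prev, x0=np.zeros(2), bounds=[(-5.0, 5.0), (-5.0, 5.0)]))
```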
1 Ask the decision maker to give the number of iterations, itn. Set
h = 1, z0 = f 1,up = znad , f 1,lo = z⋆ and it1 = itn.
2 Ask the decision maker to provide preference information in either
of the two ways and calculate weights wih (i = 1, . . . , k).
3 Set the reference point and the weights and solve problem (1.3) to
get xh and the corresponding f h .
4 Calculate zh according to (1.15).
5 Given zh , find f h+1,lo by solving k ε-constraint problems. Fur-
thermore, set f h+1,up = zh . Calculate the distance to the Pareto
optimal set.
6 Show the decision maker the current objective values z_i^h (i = 1, . . . , k),
together with the additional information [f_i^{h+1,lo}, f_i^{h+1,up}] (i = 1, . . . , k)
and the distance to the Pareto optimal set.
7 Set a new value for ith if the decision maker wants to change the
number of remaining iterations.
8 Ask the decision maker whether (s)he wants to take a step back-
wards. If so, go to step 10. Otherwise, continue.
9 If ith = 1, stop with the last solution xh and f h . Otherwise, set
ith+1 = ith − 1 and h = h + 1. If the decision maker wants to
give new preference information, go to step 1. Alternatively, the
decision maker can take a new step in the same direction (using
the preference information of the previous iteration). Then, set
f h = f h−1 , and go to step 4.
10 Ask the decision maker whether (s)he would like to provide new
preference information starting from zh−1 . If so, go to step 2.
Alternatively, the decision maker can take a shorter step with the
same preference information given in step 2. Then, set z^h = (1/2) z^h +
(1/2) z^{h−1} and go to step 5.
The algorithm looks more complicated than it actually is. There are
many steps because the decision maker is offered different options of how to
continue the solution process. A good user interface plays an important
role in making the available options intuitive.
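To illustrate the bookkeeping in the steps above, here is a small sketch. It assumes, as an assumption of the sketch rather than a reproduction of the chapter, that formula (1.15) moves the iteration point from z^{h−1} towards the Pareto optimal vector f^h by the fraction 1/it_h; it also shows the shorter step of step 10.

```python
import numpy as np

def nautilus_iteration_point(z_prev, f_h, it_h):
    """New iteration point z^h. Assumed form of (1.15): move the fraction 1/it_h of the
    way from z^{h-1} towards the Pareto optimal objective vector f^h obtained from (1.3)."""
    return ((it_h - 1.0) / it_h) * z_prev + (1.0 / it_h) * f_h

def shorter_step(z_h, z_prev):
    # Step 10: take a shorter step with the same preference information,
    # that is, z^h <- (1/2) z^h + (1/2) z^{h-1}.
    return 0.5 * z_h + 0.5 * z_prev

# Hypothetical data: start from the nadir objective vector with it_h = 4 iterations left.
z_prev = np.array([10.0, 8.0])   # z^{h-1}
f_h = np.array([2.0, 3.0])       # Pareto optimal objective vector from problem (1.3)
z_h = nautilus_iteration_point(z_prev, f_h, it_h=4)
print("z^h:", z_h, "shorter step:", shorter_step(z_h, z_prev))
```

With this kind of update every component of z^h improves on z^{h−1} whenever f^h dominates z^{h−1}, which matches the description of the method given above.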
The NAUTILUS method has been located in this class of methods
because the decision maker must compare at each iteration the solution
generated to the solution of the previous iteration and decide whether to
proceed or to go backwards. Naturally, preference information indicating
As far as stopping criteria are concerned, one can always stop when
the decision maker wants to do so. A common stopping criterion is the
situation where all the surrogate worth values equal zero. One more
criterion is the case when the decision maker wants to proceed only in
an infeasible direction.
In the ISWT method, the decision maker is asked to specify surrogate
worth values and compare Pareto optimal alternatives. It may be diffi-
cult for the decision maker to provide consistent surrogate worth values
throughout the solution process. In addition, if there is a large number
of objective functions, the decision maker has to specify a lot of surro-
gate worth values at each iteration. On the other hand, the ease of
comparing alternatives depends on the number of objectives and
on the personal abilities of the decision maker.
The ISWT method can be regarded as a non ad hoc method. The sign
of the surrogate worth values can be judged by comparing trade-off rates
with marginal rates of substitution (obtainable from the value function).
Furthermore, when comparing alternatives, it is easy to select the one
with the highest value function value.
Modifications of the ISWT method are presented in [24, 28, 49, 63, 69].
8. Navigation Methods
By navigation we refer to methods where new Pareto optimal solution
alternatives are generated in real time, or in a fashion imitating real time,
along directions that are derived from the information the decision maker has specified.
In this way, the decision maker can learn about the interdependencies
among the objective functions. The decision maker can either continue
the movement along the current direction or change the direction, that
is, one’s preferences. Increased interest has been devoted to navigation
based methods in the literature in recent years. In these methods, the
user interface plays a very important role in enabling the navigation.
minimize   max_{i∈I} [ (f_i(x) − z̄_i^h) / w_i ]        (1.19)
subject to z̄^h = z^h + t d^{h+1},
           x ∈ S,
2 Show the current objective vector to the decision maker and ask
her or him whether a preferred solution has been found. If yes, go
to step 6. Otherwise, continue.
3 Ask the decision maker whether (s)he would like to proceed to some
other direction. If the decision maker does not want to change the
direction, go to step 5.
4 Ask the decision maker to specify how the current objective vector
should be improved by giving aspiration levels for the objectives.
To aid her or him, show the ideal and the nadir objective vectors.
Based on the resulting reference point z̄ and the current objective
vector zc , set a search direction.
This allows replacing the feasible region of the original problem with the
set of convex combination coefficients v1 , . . . , vm in the definition of X .
The current state of the navigation process is represented by the cur-
rent Pareto optimal solution xh and the vector of current upper bounds
b ∈ Rk on objective function values. Using the surrogate problem with
these bounds as additional constraints, the ideal objective vector is cal-
culated and the nadir objective vector is estimated via a pay-off table.
They define ranges of objective function values for Pareto optimal solu-
tions. These ranges together with the current solution are displayed in
a radar chart also known as a spider-web chart.
By moving sliders on the radar chart with the mouse, the decision
maker can provide two types of preference information: upper bounds
on objective values and a desired value (aspiration level) of any objective
function. Changes made by the decision maker are immediately reflected
in the current state of the navigation process and shown in the radar
chart. Setting the upper bounds influences the objective function ranges
as described above. Setting the value of any objective function f_{i*} to a
desired value τ results in updating the current solution with the solution of
the following problem:
minimize    max_{i=1,...,k, i≠i*} ( y_i − f_i(x^h) )
subject to  y = f( Σ_{j=1}^{m} v_j x^(j) ) + s,
            y_i ≤ b_i,  i = 1, . . . , k,
            y_{i*} = τ,
            Σ_{j=1}^{m} v_j = 1,
            v and s are non-negative.
By using the two above-described mechanisms of expressing prefer-
ences the decision maker explores the set of Pareto optimal solutions
of the surrogate problem until a most preferred or satisfactory solution
is found. Because the decision maker must provide upper bounds and
aspiration levels, the method is ad hoc by nature.
The method has been developed and implemented for intensity mod-
ulated radiation therapy treatment planning. Therefore, in addition to
the radar chart, some application-specific information about the cur-
rent solution (treatment plan) is displayed. Nevertheless, there are no
obstacles to adapting the method elsewhere when the multiobjective
optimization problem is convex and the convex hull of some finite set of
pre-calculated Pareto optimal solutions serves as a good enough
approximation of the Pareto optimal set.
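As a rough sketch of such an adaptation (not the method's own implementation), the problem above can be simplified by working directly with convex combinations of pre-calculated Pareto optimal objective vectors and dropping the slack variable s; under these assumptions, fixing f_{i*} to τ reduces to the linear program below, solved here with SciPy. The data, the bound vector b and the current objective vector are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def navigation_step(Y, y_cur, b, i_star, tau):
    """One navigation update on a convex-combination surrogate. Y has shape (m, k) and
    stores pre-calculated Pareto optimal objective vectors; the surrogate objective
    vector is y = Y^T v with v a convex combination (slack s omitted in this sketch)."""
    m, k = Y.shape
    n = m + 1                                  # variables: v_1, ..., v_m and the max-term t
    c = np.zeros(n); c[-1] = 1.0               # minimize t

    A_ub, b_ub = [], []
    for i in range(k):
        if i != i_star:
            # y_i - t <= y_cur_i linearizes the max over i != i*.
            A_ub.append(np.append(Y[:, i], -1.0)); b_ub.append(y_cur[i])
        # upper bound y_i <= b_i set by the decision maker
        A_ub.append(np.append(Y[:, i], 0.0)); b_ub.append(b[i])

    A_eq = [np.append(Y[:, i_star], 0.0),      # y_{i*} = tau
            np.append(np.ones(m), 0.0)]        # sum of v_j equals 1
    b_eq = [tau, 1.0]

    bounds = [(0.0, None)] * m + [(None, None)]   # v >= 0, t free
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    v = res.x[:m]
    return Y.T @ v                             # updated current objective vector

# Hypothetical pre-calculated Pareto optimal objective vectors and navigation state.
Y = np.array([[0.0, 5.0], [1.0, 2.0], [4.0, 0.5]])
print(navigation_step(Y, y_cur=np.array([2.0, 2.0]), b=np.array([5.0, 5.0]),
                      i_star=0, tau=1.5))
```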
Tests with human decision makers are described in [16, 18, 20, 21, 33,
34, 41, 111, 134, 184, 247] while tests with value functions are reported
in [3, 59, 161, 191]. Finally, comparisons based on intuition are provided
in [45, 98, 99, 113, 131, 135, 185, 193, 207, 243, 246].
11. Conclusions
We have outlined several interactive methods for solving nonlinear multi-
objective optimization problems and indicated references to many more.
One of the challenges in this area is spreading the word about the exist-
ing methods to those who solve real-world problems. Another challenge
is to develop methods that support the decision maker even better. The
importance of user-friendliness cannot be overestimated because interactive methods
must be able to correspond to the characteristics of the decision maker. Spe-
cific methods for different areas of application that take into account the
characteristics of the problems are also important.
An alternative to creating new methods is to use different methods in
different phases of the solution process. This hybridization means that
the positive features of various methods can be exploited to their best
advantage in appropriate phases. In this way, it may also be possible to
overcome some of the weaknesses of the methods. Ways
to enable changing the type of preference information specified, that is,
the method used during the solution process are presented in [125, 201].
The decision maker can be supported by using visual illustrations
and further development of such tools is essential. For instance, one
may visualize (parts of) the Pareto optimal set and, for example, use 3D
slices of the feasible objective region (see [122, 123], among others) and
other tools. On the other hand, one can illustrate sets of alternatives by
means of bar charts, value paths, spider-web charts and petal diagrams
etc. For more details see, for example, [136] and references therein as
well as [139] for a more detailed survey.
References
[1] IND-NIMBUS website. http://ind-nimbus.it.jyu.fi/.
[2] P. J. Agrell, B. J. Lence, and A. Stam. An interactive multicri-
teria decision model for multipurpose reservoir management: The
Shellmouth reservoir. Journal of Multi-Criteria Decision Analysis,
7:61–86, 1998.
[3] Y. Aksoy, T. W. Butler, and E. D. Minor III. Comparative stud-
ies in interactive multiple objective mathematical programming.
European Journal of Operational Research, 89:408–422, 1996.