Linear and Multiobjective Programming With Fuzzy Stochastic Extensions
Introduction
$$2x_1 + 6x_2 \le 27$$
$$3x_1 + 2x_2 \le 16$$
$$4x_1 + x_2 \le 18$$
$$x_1 \ge 0, \quad x_2 \ge 0.$$
For convenience in our subsequent discussion, let the opposite of the total profit be
$$z = -3x_1 - 8x_2,$$
and convert the profit maximization problem to the problem of minimizing z under the above constraints.
It is easy to see that, in the x1-x2 plane, the linearly constrained set of points (x1, x2) satisfying the above constraints consists of the boundary lines and interior points of the convex pentagon ABCDE shown in Fig. 1.1.
The set of points satisfying $-3x_1 - 8x_2 = z$ for a fixed value of z is a line in the x1-x2 plane. As z is varied, the line moves parallel to itself. The optimal value of this problem is the smallest value of z for which the corresponding line has at least one point in common with the linearly constrained set ABCDE. As can be seen from Fig. 1.1, this occurs at the point D. Hence, the optimal solution to this problem is
$$x_1 = 3, \quad x_2 = 3.5, \quad z = -37.$$
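For readers who want to check this graphical result numerically, the following is a minimal sketch using SciPy's linprog (our own illustration, not part of the text; the variable names are ours):

```python
from scipy.optimize import linprog

# Minimize z = -3*x1 - 8*x2 (i.e., maximize the total profit 3*x1 + 8*x2)
# subject to 2*x1 + 6*x2 <= 27, 3*x1 + 2*x2 <= 16, 4*x1 + x2 <= 18, x >= 0.
c = [-3, -8]
A_ub = [[2, 6], [3, 2], [4, 1]]
b_ub = [27, 16, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # expected: [3.  3.5] and -37.0, the point D above
```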
Fig. 1.1 Feasible region and optimal solution for the production planning problem (the convex pentagon with extreme points A(0, 0), B(4.5, 0), C(4, 2), D(3, 3.5), and E(0, 4.5))
Fig. 1.2 Feasible region X and Pareto optimal solutions for the production planning problem with the two objectives z1 = -3x1 - 8x2 and z2 = 5x1 + 4x2
P1 and P2 yield 5 and 4 units of pollution, respectively. Thus, the manager should not only maximize the total profit but also minimize the amount of pollution. For simplicity, assume that the amount of pollution is a linear function of the two decision variables x1 and x2 such as
$$z_2 = 5x_1 + 4x_2.$$
♦
Before discussing the solution concepts of multiobjective linear programming problems, it is instructive to consider the geometric interpretation of the two-objective linear programming problem in Example 1.2. The feasible region X for this problem in the x1-x2 plane is composed of the boundary lines and interior points of the convex pentagon ABCDE in Fig. 1.2. Among the five extreme points A, B, C, D, and E, observe that z1 is minimized at the extreme point D(3, 3.5) while z2 is minimized at the extreme point A(0, 0).
As will be discussed in Chap. 3, these two extreme points D and A are obviously Pareto optimal solutions, since the respective objective functions z1 and z2 cannot be improved any further. In addition to the extreme points A and D, the extreme point E and all of the points of the segments AE and ED are Pareto optimal solutions, since they can be improved only at the expense of either z1 or z2. However, all of the remaining feasible points are not Pareto optimal, since there always exist other feasible points which improve at least one of the objective functions without worsening the other.
From another perspective, recalling the imprecision or fuzziness inherent in human judgments, two types of imprecision should be incorporated in
multiobjective optimization problems. One is the fuzzy goals of the decision maker
for each of the objective functions, and the other is the experts’ ambiguous under-
standing of the nature of the parameters in the problem-formulation process. The
motivation for multiobjective optimization under imprecision or fuzziness comes
from this observation. In Chap. 4, we will deal with fuzzy linear programming
and fuzzy multiobjective linear programming. Multiobjective linear programming problems involving fuzzy parameters will also be discussed.
From a viewpoint of uncertainty different from fuzziness, linear programming problems with random variable coefficients, called stochastic programming problems, have been developed. Typical formulations are two-stage models and chance constrained programming. In two-stage models, a shortage or an excess arising from the violation of the constraints is penalized, and then the expectation of the amount of the penalties for the constraint violation is minimized. In chance constrained programming, from the observation that the stochastic constraints need not always be satisfied, the problem is formulated so as to permit constraint violations up to specified probability levels.
Chapter 5 will discuss such stochastic programming techniques.
Problems
1.1 A manufacturing company desires to maximize the total profit from producing
two products P1 and P2 utilizing three different materials M1 , M2 , and M3 . The
company knows that to produce 1 ton of product P1 requires 2 tons of material
M1 , 8 tons of material M2 , and 3 tons of material M3 , while to produce 1 ton of
product P2 requires 6 tons of material M1 , 6 tons of material M2 , and 1 ton of
material M3 . The total amounts of available materials are limited to 27, 45, and
15 tons for M1 , M2 , and M3 , respectively. It also knows that product P1 yields a
profit of 2 million yen per ton, while P2 yields 5 million yen. Given these limited
materials, the company is trying to figure out how many units of products P1 and
P2 should be produced to maximize the total profit.
(1) Let x1 and x2 denote decision variables for the numbers of tons produced
of products P1 and P2 , respectively. Formulate the problem as a linear
programming problem.
(2) Graph the problem in the x1-x2 plane and find an optimal solution.
1.2 (Transportation problem)
Consider a planning problem of transporting goods from m warehouses to n
retail stores. Assume that a quantity ai is available at warehouse i , a quantity
Since G.B. Dantzig first proposed the simplex method around 1947, linear
programming, as an optimization method of maximizing or minimizing a linear
objective function subject to linear constraints, has been extensively studied and,
with the significant advances in computer technology, widely used in the fields of
operations research, industrial engineering, systems science, management science,
and computer science.
In this chapter, after an overview of the basic concepts of linear programming
via a simple numerical example, the standard form of linear programming and
fundamental concepts and definitions are introduced. The simplex method and the
two-phase method are presented with the details of the computational procedures.
By reviewing the procedure of the simplex method, the revised simplex method,
which provides a computationally efficient implementation, is also discussed. Associated with linear programming problems, dual problems are formulated, and duality theory, which also leads to the dual simplex method, is discussed.
In Sect. 1.1, we have presented a graphical method for solving the two-dimensional
production planning problem of Example 1.1.
Minimize the opposite of the linear total profit
$$z = -3x_1 - 8x_2$$
subject to the constraints
$$2x_1 + 6x_2 \le 27, \quad 3x_1 + 2x_2 \le 16, \quad 4x_1 + x_2 \le 18, \quad x_1 \ge 0, \; x_2 \ge 0.$$
Since the graphical method used in Sect. 1.1 cannot be applied in more than two dimensions, it becomes necessary to develop an algebraic method. In this section, as a prelude to the development of the general theory, consider an algebraic approach to two-dimensional linear programming problems for understanding the basic ideas of linear programming. To do so, by introducing the amounts $x_3\,(\ge 0)$, $x_4\,(\ge 0)$, and $x_5\,(\ge 0)$ of unused (idle) materials for M1, M2, and M3, respectively, and converting the inequalities into equalities, the problem with the equation $-3x_1 - 8x_2 - z = 0$ for the objective function can then be stated as follows:
Find values of $x_j \ge 0$, $j = 1, 2, 3, 4, 5$, so as to minimize z, satisfying the augmented system of linear equations
$$\left.\begin{aligned}
2x_1 + 6x_2 + x_3 &= 27\\
3x_1 + 2x_2 + x_4 &= 16\\
4x_1 + x_2 + x_5 &= 18\\
-3x_1 - 8x_2 - z &= 0.
\end{aligned}\right\} \tag{2.1}$$
$$\left.\begin{aligned}
\tfrac{1}{3}x_1 + x_2 + \tfrac{1}{6}x_3 &= 4.5\\
\tfrac{7}{3}x_1 - \tfrac{1}{3}x_3 + x_4 &= 7\\
\tfrac{11}{3}x_1 - \tfrac{1}{6}x_3 + x_5 &= 13.5\\
-\tfrac{1}{3}x_1 + \tfrac{4}{3}x_3 - z &= 36.
\end{aligned}\right\} \tag{2.2}$$
For the general production planning problem with decision variables x1, ..., xn, the total amount of material i used must not exceed the available quantity bi, i.e.,
$$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n \le b_i.$$
Thus, the problem is to maximize the total profit
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n \tag{2.4}$$
subject to these constraints and the nonnegativity conditions
$$x_j \ge 0, \quad j = 1, 2, \ldots, n. \tag{2.6}$$
♦
Compared with such a production planning problem, which maximizes the linear objective function of the total profit subject to linear inequality constraints in the direction of the less-than-or-equal-to symbol ≤, the following diet problem, which minimizes the linear objective function of the total cost subject to linear inequality constraints in the direction of the greater-than-or-equal-to symbol ≥, is well known as a nearly symmetric one. It should be noted here that both problems have the nonnegativity conditions $x_j \ge 0$, j = 1, ..., n, for all decision variables.
Example 2.2 (Diet problem). How can we determine the most economical diet that satisfies the basic minimum nutritional requirements for good health? Assume that n different foods are available at the market and that the selling price of food j is cj per unit. Moreover, there are m basic nutritional ingredients for the human body, and at least bi units of nutrient i are required every day to achieve a balanced diet for good health. In addition, assume that each unit of food j contains aij units of nutrient i. The problem is to determine the most economical diet that satisfies the basic minimum nutritional requirements.
For this problem, let xj, j = 1, ..., n, denote a decision variable for the number of units of food j in the diet; it is then required that $x_j \ge 0$, j = 1, ..., n. The total amount of nutrient i
$$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n$$
contained in the purchased foods must be greater than or equal to the daily requirement bi of nutrient i. Thus, the economical diet can be represented as a linear programming problem where the linear cost function
$$c_1x_1 + c_2x_2 + \cdots + c_nx_n \tag{2.7}$$
is minimized subject to these constraints and the nonnegativity conditions
$$x_j \ge 0, \quad j = 1, 2, \ldots, n. \tag{2.9}$$
♦
To develop a better understanding, as a simple numerical example of the diet
problem, we present the following diet problem with two decision variables and
three constraints.
Example 2.3 (Diet problem with 2 decision variables and 3 constraints). A house-
wife is planning a menu by utilizing two foods F1 and F2 containing three nutrients
N1 , N2 , and N3 in order to meet the nutritional requirements at a minimum cost.
Each 1 g (gram) of the food F1 contains 1 mg (milligram) of N1, 1 mg of N2, and 2 mg of N3; each 1 g of the food F2 contains 3 mg of N1, 2 mg of N2, and 1 mg of N3. The recommended amounts of the nutrients N1, N2, and N3 are known to be at least 12 mg, 10 mg, and 15 mg, respectively. Also, it is known that the costs per gram of the foods F1 and F2 are, respectively, 4 and 3 thousand yen. These data concerning the nutrients and foods are summarized in Table 2.1.
The housewife’s problem is to determine the purchase volumes of foods F1 and
F2 which minimize the total cost satisfying the nutritional requirements for the
nutrients N1 , N2 , and N3 .
Let xj denote a decision variable for the number of units of food Fj to be purchased. We can then formulate the corresponding linear programming problem minimizing the linear cost function
$$z = 4x_1 + 3x_2 \tag{2.10}$$
subject to the constraints
$$x_1 + 3x_2 \ge 12, \quad x_1 + 2x_2 \ge 10, \quad 2x_1 + x_2 \ge 15, \tag{2.11}$$
$$x_1 \ge 0, \quad x_2 \ge 0. \tag{2.12}$$
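As a numerical cross-check of this formulation, here is a minimal sketch assuming SciPy (linprog takes ≤ constraints, so the ≥ rows are negated):

```python
from scipy.optimize import linprog

# Diet problem (2.10)-(2.12): minimize 4*x1 + 3*x2 subject to
# x1 + 3*x2 >= 12, x1 + 2*x2 >= 10, 2*x1 + x2 >= 15, x1, x2 >= 0.
c = [4, 3]
A_ub = [[-1, -3], [-1, -2], [-2, -1]]  # ">= b" rewritten as "-a x <= -b"
b_ub = [-12, -10, -15]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # expected: [6.6 1.8] and 31.8
```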
In order to deal with such nearly symmetrical production planning problems and
diet problems in a unified way, the standard form of linear programming is defined
as follows:
The standard form of linear programming is to minimize the linear objective function
$$z = c_1x_1 + c_2x_2 + \cdots + c_nx_n \tag{2.13}$$
subject to the linear equality constraints
$$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i, \quad i = 1, 2, \ldots, m, \tag{2.14}$$
and the nonnegativity conditions
$$x_j \ge 0, \quad j = 1, 2, \ldots, n, \tag{2.15}$$
where the aij, bi, and cj are fixed real constants. In particular, bi is called a right-hand side constant, and cj is sometimes called a cost coefficient in a minimization problem and a profit coefficient in a maximization one.
In this book, the standard form of linear programming is written in the following
form:
$$\left.\begin{aligned}
\text{minimize} \quad & z = c_1x_1 + c_2x_2 + \cdots + c_nx_n\\
\text{subject to} \quad & a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1\\
& a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2\\
& \qquad\qquad\vdots\\
& a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m\\
& x_j \ge 0, \quad j = 1, 2, \ldots, n,
\end{aligned}\right\} \tag{2.16}$$
or, equivalently, in vector-matrix notation,
$$\text{minimize } z = cx \quad \text{subject to } Ax = b, \; x \ge 0, \tag{2.18}$$
where
$$c = (c_1, c_2, \ldots, c_n), \tag{2.19}$$
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad x = \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{pmatrix}, \tag{2.20}$$
and 0 is an n-dimensional column vector with zero components.
Moreover, by denoting the jth column of the m × n matrix A by
$$p_j = \begin{pmatrix} a_{1j}\\ a_{2j}\\ \vdots\\ a_{mj} \end{pmatrix}, \quad j = 1, 2, \ldots, n, \tag{2.21}$$
the equality constraints Ax = b can also be written as
$$p_1x_1 + p_2x_2 + \cdots + p_nx_n = b. \tag{2.22}$$
Viewing the objective function $z = c_1x_1 + c_2x_2 + \cdots + c_nx_n$ as one more equation, it can be rewritten as
$$-z + c_1x_1 + c_2x_2 + \cdots + c_nx_n = 0. \tag{2.23}$$
It should be noted here that the standard form of linear programming deals with a linear minimization problem with nonnegative decision variables and linear equality constraints. A less-than-or-equal-to inequality constraint can be transformed into an equality by adding a nonnegative slack variable, and by subtracting a nonnegative surplus variable we can also transform the inequality (2.27) into the equality (2.28). It should be noted here that both the slack variables and the surplus variables must be nonnegative in order that the inequalities (2.25) and (2.27) are satisfied for all i = 1, 2, ..., m.
If, in the original formulation of the problem, some decision variable xk is not restricted to be nonnegative, it can be replaced with the difference of two nonnegative variables, i.e.,
$$x_k = x_k^+ - x_k^-, \quad x_k^+ \ge 0, \; x_k^- \ge 0.$$
Similarly, for the diet problem with n decision variables, introducing the m nonnegative surplus variables $x_{n+i}\,(\ge 0)$, i = 1, ..., m, yields the following standard form of linear programming:
$$\left.\begin{aligned}
\text{minimize} \quad & c_1x_1 + c_2x_2 + \cdots + c_nx_n\\
\text{subject to} \quad & a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n - x_{n+1} = b_1\\
& a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n - x_{n+2} = b_2\\
& \qquad\qquad\vdots\\
& a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n - x_{n+m} = b_m\\
& x_j \ge 0, \quad j = 1, 2, \ldots, n, n+1, \ldots, n+m.
\end{aligned}\right\} \tag{2.32}$$
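The mechanical part of this conversion is easy to express in code. The sketch below (our own, assuming NumPy) builds the standard-form data of the production planning problem of Example 1.1 by appending an identity block for the slack variables:

```python
import numpy as np

# Inequality data of Example 1.1: minimize -3*x1 - 8*x2, A_ineq x <= b.
A_ineq = np.array([[2.0, 6.0], [3.0, 2.0], [4.0, 1.0]])
b = np.array([27.0, 16.0, 18.0])
c_ineq = np.array([-3.0, -8.0])

m, n = A_ineq.shape
A = np.hstack([A_ineq, np.eye(m)])          # slack columns x3, x4, x5
c = np.concatenate([c_ineq, np.zeros(m)])   # slacks have zero cost
print(A, c)  # rows read 2x1+6x2+x3 = 27, 3x1+2x2+x4 = 16, 4x1+x2+x5 = 18
```

For a diet-type problem, the identity block would enter with a minus sign instead, as in (2.32).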
The basic ideas of linear programming are to first detect whether solutions
satisfying equality constraints and nonnegativity conditions exist and, if so, to find
a solution yielding the minimum value of z.
However, in the standard form of linear programming (2.16) or (2.18), if there is no solution satisfying the equality constraints, or if there exists only one, optimization is unnecessary. Also, if any of the equality constraints is redundant, i.e., a linear combination of the others, it can be deleted without changing any solutions of the system. Therefore, we are mostly interested in the case where the system of linear equations (2.16) is nonredundant and has an infinite number of solutions.
For that purpose, assume that the number of variables exceeds the number of equality constraints, i.e.,
$$n > m, \tag{2.33}$$
and that the rank of A equals the number of constraints,¹ i.e.,
$$\mathrm{rank}(A) = m. \tag{2.34}$$

¹ These assumptions, introduced to establish the principal theoretical results, will be relaxed in Sect. 2.5 and are no longer necessary when solving general linear programming problems.
The number of basic solutions is therefore at most the number of ways of choosing m basic variables from the n variables, namely
$${}_{n}C_{m} = \frac{n!}{(n-m)!\,m!}.$$
Example 2.4 (Basic solutions). Consider the basic solutions of the standard form of the linear programming problem (2.30) discussed in Example 1.1.
Choosing x3, x4, and x5 as basic variables, we have the corresponding basic solution (x1, x2, x3, x4, x5) = (0, 0, 27, 16, 18), which is a nondegenerate basic feasible solution and corresponds to the extreme point A in Fig. 1.1. After making another choice of x1, x2, and x4 as basic variables, solving
$$2x_1 + 6x_2 = 27, \quad 3x_1 + 2x_2 + x_4 = 16, \quad 4x_1 + x_2 = 18$$
yields the basic solution (x1, x2, x3, x4, x5) = (81/22, 36/11, 0, -35/22, 0), which is not feasible since x4 < 0.
² In this book, the superscript T denotes the transpose operation for a vector or a matrix.
Similarly, choosing x1, x2, and x5 as basic variables and solving
$$2x_1 + 6x_2 = 27, \quad 3x_1 + 2x_2 = 16, \quad 4x_1 + x_2 + x_5 = 18,$$
we have a basic feasible solution (x1, x2, x3, x4, x5) = (3, 3.5, 0, 0, 2.5). It corresponds to the extreme point D in Fig. 1.1, which is an optimal solution. ♦
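Since a basic solution is obtained by fixing the nonbasic variables at zero and solving for the rest, the whole enumeration of Example 2.4 can be reproduced with a few lines of NumPy (a sketch of ours, not the book's code):

```python
import numpy as np
from itertools import combinations

# Standard-form data of Example 2.4 (production planning with slacks).
A = np.array([[2.0, 6, 1, 0, 0],
              [3, 2, 0, 1, 0],
              [4, 1, 0, 0, 1]])
b = np.array([27.0, 16, 18])

for basis in combinations(range(5), 3):          # at most 5C3 = 10 choices
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-10:            # singular basis: skip
        continue
    x = np.zeros(5)
    x[list(basis)] = np.linalg.solve(B, b)       # nonbasic variables stay 0
    tag = "feasible" if np.all(x >= -1e-10) else "infeasible"
    print(basis, np.round(x, 3), tag)
```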
For generalizing the basic ideas of linear programming grasped in the algebraic approach to the two-dimensional production planning problem of Example 1.1, consider the following linear programming problem with basic variables x1, x2, ..., xm:
Find values of $x_1 \ge 0, x_2 \ge 0, \ldots, x_n \ge 0$ so as to minimize z, satisfying the augmented system of linear equations
$$\left.\begin{aligned}
x_1 + \bar a_{1,m+1}x_{m+1} + \bar a_{1,m+2}x_{m+2} + \cdots + \bar a_{1n}x_n &= \bar b_1\\
x_2 + \bar a_{2,m+1}x_{m+1} + \bar a_{2,m+2}x_{m+2} + \cdots + \bar a_{2n}x_n &= \bar b_2\\
&\;\;\vdots\\
x_m + \bar a_{m,m+1}x_{m+1} + \bar a_{m,m+2}x_{m+2} + \cdots + \bar a_{mn}x_n &= \bar b_m\\
-z + \bar c_{m+1}x_{m+1} + \bar c_{m+2}x_{m+2} + \cdots + \bar c_nx_n &= -\bar z.
\end{aligned}\right\} \tag{2.35}$$
As in the previous section, here it is assumed that n > m and that the system of m equality constraints is nonredundant. As with the augmented system of equations (2.35), a system of linear equations in which each of the variables x1, x2, ..., xm has a coefficient of unity in one equation and zeros elsewhere is called a canonical form or a basic form. In a canonical form, the variables x1, x2, ..., xm and (-z) are called basic variables, and the remaining variables x(m+1), x(m+2), ..., xn are called nonbasic variables. Since (-z) is always a basic variable in such a canonical form, with no further notice, only x1, x2, ..., xm are called basic variables.
It is useful to set up such a canonical form (2.35) in tableau form as shown in
Table 2.2. This table is called a simplex tableau, in which only the coefficients of
the algebraic representation in (2.35) are given.
From the canonical form (2.35) or the simplex tableau given in Table 2.2, it follows directly that the basic solution with basic variables x1, x2, ..., xm becomes
$$x_i = \bar b_i, \quad i = 1, \ldots, m; \qquad x_j = 0, \quad j = m+1, \ldots, n, \tag{2.36}$$
and the corresponding value of the objective function is
$$z = \bar z. \tag{2.37}$$
If $\bar b_i \ge 0$ for all i = 1, ..., m, this basic solution is feasible, and the canonical form is said to be a feasible canonical form.
In this formulation, by using the m slack variables x(n+1), x(n+2), ..., x(n+m) as basic variables, it is evident that (2.39) is a canonical form, and the corresponding basic solution is
$$(x_1, \ldots, x_n, x_{n+1}, \ldots, x_{n+m}) = (0, \ldots, 0, b_1, \ldots, b_m).$$
From the fact that the right-hand side constant bi means the available amount of resource i, it should be nonnegative, i.e., $b_i \ge 0$, i = 1, 2, ..., m, and therefore this canonical form is feasible.
In contrast, for the diet problem of Example 2.2, introducing m surplus variables $x_{n+i} \ge 0$, i = 1, 2, ..., m, and then multiplying both sides of the resulting constraints by -1 yields a basic solution
$$(x_1, \ldots, x_n, x_{n+1}, \ldots, x_{n+m}) = (0, \ldots, 0, -b_1, \ldots, -b_m),$$
which is not feasible since $b_i \ge 0$.
If
$$\bar c_j \ge 0, \quad j = m+1, m+2, \ldots, n, \tag{2.42}$$
holds, the current basic feasible solution is optimal, as can be seen as follows. The nonbasic variables x(m+1), x(m+2), ..., xn are presently zero, and they are restricted to be nonnegative. If $\bar c_j \ge 0$ for j = m+1, m+2, ..., n, then from $\bar c_jx_j \ge 0$, j = m+1, m+2, ..., n, increasing any xj cannot decrease the objective function z. Thus, since no change in the nonbasic variables can decrease z, the present solution must be optimal.
The coefficient $\bar c_j$ of xj in (2.35) represents the rate of change of z with respect to the nonbasic variable xj. From this observation, the coefficient $\bar c_j$ is called the relative cost coefficient or, alternatively, the reduced cost coefficient.
Since the coefficient $\bar c_s$ of the last equation in (2.44) is negative, i.e., $\bar c_s < 0$, increasing the value of xs decreases the value of z. The only factor limiting the increase of xs is that all of the variables x1, x2, ..., xm must remain nonnegative. In other words, keeping the feasibility of the solution requires
$$x_i = \bar b_i - \bar a_{is}x_s \ge 0, \quad i = 1, 2, \ldots, m. \tag{2.45}$$
If
$$\bar a_{is} \le 0, \quad i = 1, 2, \ldots, m, \tag{2.46}$$
then xs can increase without limit. Hence, since $\bar c_s < 0$, from the last equation of (2.44) it follows that
$$z = \bar z + \bar c_sx_s \to -\infty.$$
Otherwise, each i with $\bar a_{is} > 0$ limits the increase of xs to
$$x_s = \frac{\bar b_i}{\bar a_{is}}, \quad \bar a_{is} > 0. \tag{2.48}$$
The largest value of xs preserving the nonnegativity of the basic variables xi, i = 1, 2, ..., m, is therefore given by
$$\min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}} = \frac{\bar b_r}{\bar a_{rs}} = \theta. \tag{2.49}$$
The basic variable xr determined by (2.49) then becomes nonbasic, and instead the nonbasic variable xs becomes basic. That is, xr becomes zero while xs increases from zero to $\bar b_r/\bar a_{rs} = \theta\,(\ge 0)$. Also, from the last equation of (2.44), the value of the objective function z decreases by $|\bar c_sx_s| = |\bar c_s\theta|$.
A new canonical form in which xs is selected as a basic variable in place of xr can easily be obtained by pivoting on $\bar a_{rs}$, which is called the pivot element determined by (2.43) and (2.49). That is, finding $\bar c_s = \min_{\bar c_j<0} \bar c_j$ tells us that the pivot element is in column s, and finding the minimum $\bar b_r/\bar a_{rs}$ of all the ratios $\bar b_i/\bar a_{is}$ such that $\bar a_{is} > 0$ tells us that it is in row r.
Fundamental to linear programming is a pivot operation defined as follows.
$$\left.\begin{aligned}
x_1 + \bar a_{1,m+1}x_{m+1} + \cdots + \bar a_{1s}x_s + \cdots + \bar a_{1n}x_n &= \bar b_1\\
x_2 + \bar a_{2,m+1}x_{m+1} + \cdots + \bar a_{2s}x_s + \cdots + \bar a_{2n}x_n &= \bar b_2\\
&\;\;\vdots\\
x_r + \bar a_{r,m+1}x_{m+1} + \cdots + \bar a_{rs}x_s + \cdots + \bar a_{rn}x_n &= \bar b_r\\
&\;\;\vdots\\
x_m + \bar a_{m,m+1}x_{m+1} + \cdots + \bar a_{ms}x_s + \cdots + \bar a_{mn}x_n &= \bar b_m\\
-z + \bar c_{m+1}x_{m+1} + \cdots + \bar c_sx_s + \cdots + \bar c_nx_n &= -\bar z,
\end{aligned}\right\} \tag{2.50}$$
where $\bar b_i \ge 0$, i = 1, 2, ..., m. Pivoting on $\bar a_{rs}$ then gives the new canonical form
$$\left.\begin{aligned}
x_1 + \bar a^*_{1r}x_r + \bar a^*_{1,m+1}x_{m+1} + \cdots + 0 + \cdots + \bar a^*_{1n}x_n &= \bar b^*_1\\
x_2 + \bar a^*_{2r}x_r + \bar a^*_{2,m+1}x_{m+1} + \cdots + 0 + \cdots + \bar a^*_{2n}x_n &= \bar b^*_2\\
&\;\;\vdots\\
\bar a^*_{rr}x_r + \bar a^*_{r,m+1}x_{m+1} + \cdots + x_s + \cdots + \bar a^*_{rn}x_n &= \bar b^*_r\\
&\;\;\vdots\\
\bar a^*_{mr}x_r + x_m + \bar a^*_{m,m+1}x_{m+1} + \cdots + 0 + \cdots + \bar a^*_{mn}x_n &= \bar b^*_m\\
\bar c^*_rx_r - z + \bar c^*_{m+1}x_{m+1} + \cdots + 0 + \cdots + \bar c^*_nx_n &= -\bar z^*,
\end{aligned}\right\} \tag{2.51}$$
where the superscript * marks a revised coefficient, and the revised coefficients for j = r, m+1, m+2, ..., n are calculated as follows:
$$\bar a^*_{rj} = \frac{\bar a_{rj}}{\bar a_{rs}}, \qquad \bar b^*_r = \frac{\bar b_r}{\bar a_{rs}}, \tag{2.52}$$
$$\bar a^*_{ij} = \bar a_{ij} - \bar a_{is}\frac{\bar a_{rj}}{\bar a_{rs}}, \qquad \bar b^*_i = \bar b_i - \bar a_{is}\frac{\bar b_r}{\bar a_{rs}}, \quad i = 1, 2, \ldots, m; \; i \ne r, \tag{2.53}$$
$$\bar c^*_j = \bar c_j - \bar c_s\frac{\bar a_{rj}}{\bar a_{rs}}, \qquad -\bar z^* = -\bar z - \bar c_s\frac{\bar b_r}{\bar a_{rs}}. \tag{2.54}$$
Table 2.3 summarizes this pivot operation in tableau form: at cycle ℓ+1 the basic variables are x1, ..., x(r-1), xs, x(r+1), ..., xm, and the entries are updated by
$$\bar a^*_{rj} = \frac{\bar a_{rj}}{\bar a_{rs}}, \qquad \bar b^*_r = \frac{\bar b_r}{\bar a_{rs}},$$
$$\bar a^*_{ij} = \bar a_{ij} - \bar a_{is}\frac{\bar a_{rj}}{\bar a_{rs}} = \bar a_{ij} - \bar a_{is}\bar a^*_{rj}, \qquad \bar b^*_i = \bar b_i - \bar a_{is}\frac{\bar b_r}{\bar a_{rs}} = \bar b_i - \bar a_{is}\bar b^*_r \quad (i \ne r),$$
$$\bar c^*_j = \bar c_j - \bar c_s\frac{\bar a_{rj}}{\bar a_{rs}} = \bar c_j - \bar c_s\bar a^*_{rj}, \qquad -\bar z^* = -\bar z - \bar c_s\frac{\bar b_r}{\bar a_{rs}} = -\bar z - \bar c_s\bar b^*_r.$$
Since the pivot element $\bar a_{rs}$ is determined by (2.43) and (2.49), it is expected that the new canonical form (2.51) with basic variables x1, x2, ..., x(r-1), xs, x(r+1), ..., xm also remains feasible. This fact can be formally verified as follows.
It is obvious that $\bar b^*_r = \bar b_r/\bar a_{rs} \ge 0$. For i (i ≠ r) such that $\bar a_{is} > 0$, from (2.49) it follows that
$$\bar b^*_i = \bar b_i - \frac{\bar a_{is}}{\bar a_{rs}}\bar b_r = \bar a_{is}\left(\frac{\bar b_i}{\bar a_{is}} - \frac{\bar b_r}{\bar a_{rs}}\right) \ge 0,$$
while for i (i ≠ r) such that $\bar a_{is} \le 0$,
$$\bar b^*_i = \bar b_i - \frac{\bar a_{is}}{\bar a_{rs}}\bar b_r \ge 0.$$
Hence, it holds that $\bar b^*_i \ge 0$ for all i, and then (2.51) is a feasible canonical form.
The pivot operation on $\bar a_{rs}$ replacing xr with xs as a new basic variable can be summarized in Table 2.3.
As described so far, starting with a feasible canonical form and updating it through a series of pivot operations, the simplex method seeks an optimal solution satisfying the optimality criterion or detects that the optimal value is unbounded. The procedure of the simplex method, starting with a feasible canonical form, can be summarized as follows.
Step 1 If all of the relative cost coefficients are nonnegative, i.e., $\bar c_j \ge 0$ for all indices j of the nonbasic variables, then the current solution is optimal, and stop. Otherwise, find the index s such that
$$\bar c_s = \min_{\bar c_j<0} \bar c_j.$$
Step 2 If all of the coefficients in column s are nonpositive, i.e., $\bar a_{is} \le 0$ for all indices i of the basic variables, then the optimal value is unbounded, and stop.
Step 3 If some of the $\bar a_{is}$ are positive, find the index r such that
$$\min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}} = \frac{\bar b_r}{\bar a_{rs}} = \theta.$$
Step 4 Perform the pivot operation on $\bar a_{rs}$ to obtain a new feasible canonical form (simplex tableau) with xs replacing xr as a new basic variable. The coefficients of the new feasible canonical form after pivoting on $\bar a_{rs} \ne 0$ are calculated as follows:
(i) Replace row r (the rth equation) with row r multiplied by $1/\bar a_{rs}$ (divide row r by $\bar a_{rs}$), i.e.,
$$\bar a^*_{rj} = \frac{\bar a_{rj}}{\bar a_{rs}}, \qquad \bar b^*_r = \frac{\bar b_r}{\bar a_{rs}}.$$
(ii) For each i ≠ r, replace row i with the sum of row i and the revised row r multiplied by $-\bar a_{is}$, i.e.,
$$\bar a^*_{ij} = \bar a_{ij} - \bar a_{is}\bar a^*_{rj}, \qquad \bar b^*_i = \bar b_i - \bar a_{is}\bar b^*_r.$$
(iii) Replace row m+1 (the (m+1)th equation for the objective function) with the sum of row m+1 and the revised row r multiplied by $-\bar c_s$, i.e.,
$$\bar c^*_j = \bar c_j - \bar c_s\bar a^*_{rj}, \qquad -\bar z^* = -\bar z - \bar c_s\bar b^*_r.$$
Return to step 1.
It should be noted here that when multiple candidates exist for the index s of the
variable entering the basis in step 1 or the index r of the variable leaving the basis
in step 3, for the sake of convenience, we choose the smallest index.
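To make the procedure concrete, here is a minimal sketch of the tableau method in Python (our own illustration under the stated assumptions: NumPy available, a feasible canonical form given, and smallest-index tie breaking left implicit in argmin):

```python
import numpy as np

def simplex(T, basis, m):
    """Tableau simplex sketch. T holds m constraint rows [A | b] followed by
    cost row(s); the entering column is chosen from the last row (step 1),
    the ratio test uses the first m rows (step 3), and the pivot operation
    (step 4) is applied to every row."""
    while True:
        cost = T[-1, :-1]
        if np.all(cost >= -1e-10):                 # optimality criterion (2.42)
            return
        s = int(np.argmin(cost))                   # entering variable (2.43)
        col = T[:m, s]
        if np.all(col <= 1e-10):                   # unbounded case (2.46)
            raise ValueError("optimal value is unbounded")
        ratios = [T[i, -1] / col[i] if col[i] > 1e-10 else np.inf
                  for i in range(m)]
        r = int(np.argmin(ratios))                 # minimum ratio test (2.49)
        T[r] /= T[r, s]                            # step 4 (i)
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, s] * T[r]             # steps 4 (ii) and (iii)
        basis[r] = s

# Example 1.1 in canonical form: three constraint rows and the z-row (c | 0).
T = np.array([[2.0, 6, 1, 0, 0, 27],
              [3, 2, 0, 1, 0, 16],
              [4, 1, 0, 0, 1, 18],
              [-3, -8, 0, 0, 0, 0]])
basis = [2, 3, 4]                                  # slacks x3, x4, x5
simplex(T, basis, 3)
print(basis, T[:3, -1], T[-1, -1])  # x2 = 3.5, x1 = 3, x5 = 2.5; 37, i.e., z = -37
```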
Example 2.5 (Simplex method for the production planning problem of Example 1.1). Using the simplex method, solve the production planning problem in the standard form given in Example 1.1.
Introducing the slack variables x3, x4, and x5 and using them as the basic variables, we have the initial basic feasible solution (x1, x2, x3, x4, x5) = (0, 0, 27, 16, 18). Since
$$\min(-3, -8) = -8 < 0,$$
x2 becomes a new basic variable. The minimum ratio, $\min_{\bar a_{i2}>0} \bar b_i/\bar a_{i2}$, is calculated as
$$\min\left(\frac{27}{6}, \frac{16}{2}, \frac{18}{1}\right) = \frac{27}{6} = 4.5,$$
and then x3 becomes a nonbasic variable. From s = 2 and r = 1, the pivot element is the 6 bracketed by [ ] in Table 2.4. After the pivot operation on 6, the result at cycle 1 is obtained.
At cycle 1, since the only negative relative cost coefficient is -1/3, x1 becomes a basic variable. Since
$$\min\left(\frac{4.5}{1/3}, \frac{7}{7/3}, \frac{13.5}{11/3}\right) = \frac{7}{7/3} = 3,$$
the 7/3 bracketed by [ ] becomes the pivot element. After the pivot operation on 7/3, the result at cycle 2 is obtained. At cycle 2, all of the relative cost coefficients become positive, and then the following optimal solution is obtained:
$$x_1 = 3, \quad x_2 = 3.5, \quad x_3 = 0, \quad x_4 = 0, \quad x_5 = 2.5, \quad z = -37.$$
The above optimal solution corresponds to the extreme point D in Fig. 1.1. ♦
Example 2.6 (Example with multiple optima). To show a simple linear programming problem having multiple optima, consider the following modified production planning problem in which the coefficients of x1 and x2 in the original objective function given in Example 1.1 are changed to -1 and -3, respectively:
$$\left.\begin{aligned}
\text{minimize} \quad & z = -x_1 - 3x_2\\
\text{subject to} \quad & 2x_1 + 6x_2 + x_3 = 27\\
& 3x_1 + 2x_2 + x_4 = 16\\
& 4x_1 + x_2 + x_5 = 18\\
& x_j \ge 0, \quad j = 1, 2, 3, 4, 5.
\end{aligned}\right\}$$
After one pivot operation, an optimal solution (x1, x2, x3, x4, x5) = (0, 4.5, 0, 7, 13.5) with z = -13.5 is obtained, where the relative cost coefficient of x1 is zero; this means that the value of the objective function is unchanged even if x1 becomes positive, provided that the constraints are not violated. Bringing x1 into the basis in place of x4 yields an alternative optimal solution (x1, x2, x3, x4, x5) = (3, 3.5, 0, 0, 2.5) giving the same value of the objective function. It should be noted here that the optimal solutions obtained in cycles 1 and 2, respectively, correspond to the extreme points E and D in Fig. 1.1, and all of the points on the line segment ED are also optimal. ♦
The simplex method requires a basic feasible solution as a starting point. Such a
starting point is not always easy to find, and in fact none will exist if the constraints
are inconsistent. Phase I of the simplex method finds an initial basic feasible solution
or derives the information that no feasible solution exists. Phase II then proceeds
from this starting point to an optimal solution or derives the information that the
optimal value is unbounded. Both phases use the procedure of the simplex method
given in the previous section.
Phase I starts with a linear programming problem in the standard form (2.24), where all the constants bi are nonnegative. For this purpose, if some bi is negative, multiply the corresponding equation by -1. In order to set up an initial feasible solution for phase I, the linear programming problem in the standard form is augmented with a set of nonnegative variables $x_{n+1} \ge 0, x_{n+2} \ge 0, \ldots, x_{n+m} \ge 0$, so that the problem becomes as follows:
Find values of $x_j \ge 0$, j = 1, 2, ..., n, n+1, ..., n+m, so as to minimize z, satisfying the augmented system of linear equations
$$\left.\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + x_{n+1} &= b_1 \; (\ge 0)\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + x_{n+2} &= b_2 \; (\ge 0)\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + x_{n+m} &= b_m \; (\ge 0)\\
c_1x_1 + c_2x_2 + \cdots + c_nx_n - z &= 0.
\end{aligned}\right\} \tag{2.55}$$
The newly introduced nonnegative variables $x_{n+1} \ge 0, x_{n+2} \ge 0, \ldots, x_{n+m} \ge 0$ are called artificial variables.
In the canonical form (2.55), using the artificial variables x(n+1), x(n+2), ..., x(n+m) as basic variables, the following initial basic feasible solution is directly obtained:
$$(x_1, \ldots, x_n, x_{n+1}, \ldots, x_{n+m}) = (0, \ldots, 0, b_1, \ldots, b_m). \tag{2.56}$$
Although a basic feasible solution to (2.55) such as (2.56) is not always feasible for the original system, a basic feasible solution to (2.55) in which all the artificial variables x(n+1), x(n+2), ..., x(n+m) are equal to zero is also feasible for the original system. Thus, one way to find a basic feasible solution to the original system is to start from the initial basic solution (2.56) and use the simplex method to derive a basic feasible solution such that all the artificial variables are equal to zero. This can be done by minimizing the function of the artificial variables
$$w = x_{n+1} + x_{n+2} + \cdots + x_{n+m} \tag{2.57}$$
subject to the equality constraints (2.55) and the nonnegativity conditions for all variables. By its very nature, the function (2.57) is sometimes called the infeasibility form.
That is, the phase I problem is to find values of $x_1 \ge 0, x_2 \ge 0, \ldots, x_n \ge 0$, $x_{n+1} \ge 0, \ldots, x_{n+m} \ge 0$ so as to minimize w, satisfying the augmented system of linear equations
$$\left.\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + x_{n+1} &= b_1 \; (\ge 0)\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + x_{n+2} &= b_2 \; (\ge 0)\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + x_{n+m} &= b_m \; (\ge 0)\\
c_1x_1 + c_2x_2 + \cdots + c_nx_n - z &= 0\\
x_{n+1} + x_{n+2} + \cdots + x_{n+m} - w &= 0.
\end{aligned}\right\} \tag{2.58}$$
Since the artificial variables are nonnegative, the function w, which is the sum of the artificial variables, is obviously greater than or equal to zero. In particular, if the optimal value of w is zero, i.e., w = 0, then all the artificial variables are zero, i.e., x(n+i) = 0 for all i = 1, 2, ..., m. In contrast, if it is positive, i.e., w > 0, then no feasible solution to the original system exists, because some artificial variables are not zero and the corresponding original constraints are not satisfied. Given an initial basic feasible solution, the simplex method generates other basic feasible solutions in turn, and then the end product of phase I must be a basic feasible solution to the original system if such a solution exists.
It should be mentioned that a full set of m artificial variables may not be
necessary. If the original system has some variables that can be used as initial basic
variables, then they should be chosen in preference to artificial variables. The result
is less work in phase I.
For obtaining an initial basic feasible solution through the minimization of w with the simplex method, it is necessary to convert the augmented system (2.58) into the canonical form with the row of w, where w must be expressed in terms of the current nonbasic variables x1, x2, ..., xn. Since from (2.58) the artificial variables are represented by the nonbasic variables, i.e.,
$$x_{n+i} = b_i - (a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n), \quad i = 1, \ldots, m,$$
substituting them into (2.57) yields
$$-w + d_1x_1 + d_2x_2 + \cdots + d_nx_n = -w_0, \tag{2.61}$$
where $d_j = -(a_{1j} + a_{2j} + \cdots + a_{mj})$ and $w_0 = b_1 + b_2 + \cdots + b_m$.
In this way, the augmented system (2.58) is converted into the following initial feasible canonical form for phase I with the row of w, in which the artificial variables x(n+1), x(n+2), ..., x(n+m) are selected as basic variables:
$$\left.\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + x_{n+1} &= b_1 \; (\ge 0)\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + x_{n+2} &= b_2 \; (\ge 0)\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + x_{n+m} &= b_m \; (\ge 0)\\
c_1x_1 + c_2x_2 + \cdots + c_nx_n - z &= 0\\
d_1x_1 + d_2x_2 + \cdots + d_nx_n - w &= -w_0.
\end{aligned}\right\} \tag{2.62}$$
Now it becomes possible to solve the phase I problem as given by (2.62) using the simplex method. Finding the pivot element $\bar a_{rs}$ by the rules
$$\bar d_s = \min_{\bar d_j<0} \bar d_j \tag{2.63}$$
and
$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}}, \tag{2.64}$$
and performing the pivot operation on it, we minimize the objective function w in phase I. When
$$\bar d_j \ge 0, \quad j = 1, \ldots, n, n+1, \ldots, n+m; \qquad w = 0, \tag{2.65}$$
all the artificial variables become zero. In this case, if all the artificial variables have also become nonbasic, an initial basic feasible solution to the original problem is obtained. Hence, after eliminating all the artificial variables together with the row of w, initiate phase II of the simplex method for minimizing the original objective function z.
In the two-phase method, phase I finds an initial basic feasible solution or derives
the information that no feasible solution exists, and phase II then proceeds from this
starting point to an optimal solution or derives the information that the optimal value
is unbounded.
Whenever the original system contains redundancies and often when degenerate
solutions occur, artificial variables will remain in the basis at the end of phase I.
Thus, it is necessary to prevent their values from becoming positive in phase II. One
possible way is to drop all nonartificial variables whose relative cost coefficients for
w are positive and all nonbasic artificial variables before starting phase II. To see
this, we note that the equation for w at the end of phase I satisfies
$$\sum_{j=1}^{n+m} \bar d_jx_j = w - \bar w_0. \tag{2.66}$$
Before summarizing the procedure of the two-phase method, the following useful
remarks are given. In the simplex tableau, it is customary to omit the artificial
variable columns because these, once dropped from the basis, can be eliminated
from further consideration. Moreover, if the pivot operations for minimizing w in
phase I are also simultaneously performed on the row of z, the original objective
function z will be expressed in terms of nonbasic variables at each cycle. Thus, if an
initial basic feasible solution is found for the original problem, the simplex method
can be initiated immediately on z. Therefore, the row of z is incorporated into the
pivot operations in phase I.
Following the above discussions, the procedure of the two-phase method can be
summarized as follows.
Phase I Starting with the simplex tableau in Table 2.6, perform the simplex method with the row of w as the objective function in phase I, where the pivot element is not selected from the row of z, but the pivot operation is also performed on the row of z. When an optimal tableau is obtained, if w > 0, no feasible solution exists to the original problem. Otherwise, i.e., if w = 0, proceed to phase II.
Phase II After dropping all columns of xj such that $\bar d_j > 0$ and the row of w, perform the simplex method with the row of z as the objective function in phase II.
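A minimal sketch of this procedure for the diet problem of Example 2.3, reusing the simplex tableau routine sketched in Sect. 2.4 (our own code; in a careful implementation, the columns of artificial variables that leave the basis would also be dropped during phase I):

```python
import numpy as np
# (uses the `simplex` tableau sketch from Sect. 2.4)

# Diet problem: columns x1, x2, surplus x3-x5, artificial x6-x8.
A = np.array([[1.0, 3, -1, 0, 0, 1, 0, 0],
              [1, 2, 0, -1, 0, 0, 1, 0],
              [2, 1, 0, 0, -1, 0, 0, 1]])
b = np.array([12.0, 10, 15])
z_row = np.array([4.0, 3, 0, 0, 0, 0, 0, 0, 0])           # c1 x1 + ... - z = 0
d = np.concatenate([-A[:, :5].sum(axis=0), np.zeros(3)])  # d_j = -(a1j+...+amj)
w_row = np.concatenate([d, [-b.sum()]])                   # the -w row of (2.61)

T = np.vstack([np.column_stack([A, b]), z_row, w_row])
basis = [5, 6, 7]                        # artificial variables are basic
simplex(T, basis, 3)                     # phase I: minimize w (last row)
assert abs(T[-1, -1]) < 1e-9, "w > 0: the original problem is infeasible"

T2 = np.delete(T[:-1], np.s_[5:8], axis=1)   # drop the w-row and artificials
simplex(T2, basis, 3)                    # phase II: minimize z
print(basis, T2[:3, -1], T2[-1, -1])     # x2 = 1.8, x1 = 6.6, x4 = 0.2; -31.8, i.e., z = 31.8
```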
Example 2.7 (Two-phase method for the diet problem with two decision variables). Using the two-phase method, solve the diet problem in the standard form given in Example 2.3.
Performing phase I with the artificial variables x6, x7, and x8 as initial basic variables and then phase II, the optimal solution
$$x_1 = 6.6, \quad x_2 = 1.8, \quad z = 31.8$$
is obtained. ♦
Example 2.8 (Example of an infeasible problem with two decision variables and four constraints). As an example of a simple infeasible problem, consider the linear programming problem in the standard form for the diet problem of Example 2.7 with the additional inequality constraint
$$4x_1 + 5x_2 \le 8.$$
Using the slack variable x6 and the artificial variables x7, x8, and x9 as initial basic variables, phase I of the simplex method is performed. As shown in Table 2.8, phase I is terminated at cycle 1 because $\bar d_1 > 0$, $\bar d_3 > 0$, $\bar d_4 > 0$, $\bar d_5 > 0$, and $\bar d_6 > 0$. However, from w = 27.4 > 0, no feasible solution exists for this problem.
It should be noted here that since the slack variable x6 is used as a basic variable, the row of w is calculated only from the rows of x7, x8, and x9 in cycle 0. For example, $d_1 = -(1 + 1 + 2) = -4$. ♦
The procedure of the simplex method considered thus far provides a means of going from one basic feasible solution to another such that the objective function value is lower than the previous value of z if there is no degeneracy, or at least equal to it (as can occur in the degenerate case). It continues until (i) the optimality criterion (2.42) is satisfied, or (ii) the information that the optimal value is unbounded is provided. Therefore, in the case of no degeneracy, the following convergence theorem can be easily understood.
Theorem 2.4 (Finite convergence of simplex method (nondegenerate case)).
Assuming nondegeneracy at each iteration, the simplex method will terminate in
a finite number of iterations.
Proof. Since the number of basic feasible solutions is at most ${}_nC_m$ and hence finite, the algorithm of the simplex method can fail to terminate finitely only if the same basic feasible solution appears repeatedly. Such repetition implies that the value of the objective function z stays the same. Under nondegeneracy, however, since each value of z is strictly lower than the previous one, no repetition can occur, and therefore the algorithm terminates finitely.
Recall that there is at least one basic variable whose value is zero in a degenerate basic feasible solution. Such degeneracy may occur in an initial feasible canonical form, and it is also possible that, after some pivot operations in the procedure of the simplex method, a degenerate basic feasible solution may occur.
For example, in step 3 of the procedure of the simplex method, suppose the minimum of $\{\bar b_i/\bar a_{is} \text{ for all } i \mid \bar a_{is} > 0\}$ is attained by two or more basic variables, say x(r1) and x(r2). Then either x(r1) or x(r2) can be removed from the basis while the other remains in it. In either case, both x(r1) and x(r2) become zero, i.e.,
$$x_{r_1} = \bar b_{r_1} - \bar a_{r_1s}\theta = 0, \qquad x_{r_2} = \bar b_{r_2} - \bar a_{r_2s}\theta = 0. \tag{2.68}$$
Thus, since there is at least one basic variable whose value is zero, the new basic feasible solution is degenerate.
This in itself does not undermine the feasibility of the solution. However, if at some iteration a basic feasible solution is degenerate, the value of the objective function z could remain the same for some number of subsequent iterations. Moreover, there is a possibility that, after a series of pivot operations without decrease of z, the same basis appears, and then the simplex method may be trapped into an endless loop without termination. This phenomenon is called cycling or circling.³
The following example, given by H.W. Kuhn, shows that the simplex method can be trapped into cycling if the smallest index is used as the tie breaker.
Example 2.10 (Kuhn’s example of cycling). As an example of cycling, consider the
following problem given by H.W. Kuhn:
Using x1 , x2 , and x3 as the initial basic variables and performing the simplex
method, we have the result shown in Table 2.10. Observing that the tableau of cycle
6 is completely identical to that of cycle 0 in Table 2.10, one finds that cycling
occurs. ♦
To avoid the trap of cycling, some means to prevent the procedure from cycling
is required. Observe that in the absence of degeneracy the objective function
values in a series of iterations of the simplex method form a strictly decreasing
monotone sequence that guarantees the same basis does not repeatedly appear. With
a degenerate basic solution, the sequence is no longer strictly decreasing. To prevent
the procedure from revisiting the same basis, we need to incorporate another rule to
keep a strictly monotone decreasing sequence.
³ In his famous 1963 book, G.B. Dantzig adopted the term "circling" to avoid possible confusion with the term "cycle," which was used synonymously with iteration.
Several methods besides the random choice rule exist for avoiding cycling in
the simplex method. Among them, a very simple and elegant (but not necessarily
efficient) rule due to Bland (1977) is theoretically interesting. Bland’s rule is
summarized as follows:
(i) Among all candidates to enter the basis, choose the one with the smallest index.
(ii) Among all candidates to leave the basis, choose the one with the smallest index.
The procedure of the simplex method incorporating Bland’s anticycling rule, just
specifying the choice of both the entering and leaving variables, can now be given
in the following.
Step 1B If all of the relative cost coefficients are nonnegative, i.e., $\bar c_j \ge 0$ for all indices j of the nonbasic variables, then the current solution is optimal, and stop. Otherwise, by using the relative cost coefficients $\bar c_j$, find the index s such that
$$s = \min\{\,j \mid \bar c_j < 0,\ j\ \text{nonbasic}\,\}.$$
That is, if there are two or more indices j such that $\bar c_j < 0$ among the indices of the nonbasic variables, choose the smallest index s as the index of the nonbasic variable entering the basis.
Step 2 If all of the coefficients in column s are nonpositive, i.e., $\bar a_{is} \le 0$ for all indices i of the basic variables, then the optimal value is unbounded, and stop.
Step 3B If some of the $\bar a_{is}$ are positive, find the index r such that
$$\min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}} = \frac{\bar b_r}{\bar a_{rs}} = \theta.$$
If there is a tie in the minimum ratio test, choose the smallest index r as the index
of a basic variable leaving the basis.
Step 4 Perform the pivot operation on $\bar a_{rs}$ to obtain a new feasible canonical form with xs replacing xr as a basic variable. Return to step 1B.
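A sketch of how steps 1B and 3B change the pivot selection relative to the earlier simplex sketch (our own code; only the selection rules differ):

```python
import numpy as np

def bland_pivot(T, basis, m):
    """Choose the pivot (r, s) by Bland's rule: the smallest eligible index
    enters (step 1B); among tied rows of the ratio test, the basic variable
    with the smallest index leaves (step 3B)."""
    cost = T[-1, :-1]
    candidates = np.where(cost < -1e-10)[0]
    if candidates.size == 0:
        return None                                     # current solution optimal
    s = int(candidates[0])
    col = T[:m, s]
    if np.all(col <= 1e-10):
        raise ValueError("optimal value is unbounded")  # step 2
    ratios = [T[i, -1] / col[i] if col[i] > 1e-10 else np.inf for i in range(m)]
    theta = min(ratios)
    ties = [i for i in range(m) if ratios[i] <= theta + 1e-10]
    r = min(ties, key=lambda i: basis[i])
    return r, s
```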
It is interesting to note here that the fact that Bland's rule prevents cycling can be proven by contradiction on the basis of the following observation.
In a degenerate pivot operation, if some variable xq enters the basis, then xq cannot leave the basis until some other variable with a higher index than q, which was nonbasic when xq entered, also enters the basis. If this holds, then cycling cannot occur, because in a cycle any variable that enters must also leave the basis, which means that there exists some highest-indexed variable that enters and leaves the basis. This contradicts the foregoing monotone feature.⁴
In practice, such a procedure is usually found to be unnecessary because the simplex procedure rarely enters a cycle even if degenerate solutions are encountered. Nevertheless, an anticycling procedure is simple, and therefore many codes incorporate one for the sake of safety.
Example 2.11 (Simplex method incorporating Bland's rule for Kuhn's example). Apply the simplex method incorporating Bland's rule to the example given by Kuhn. After only two pivot operations, the algorithm stops, and the result is shown in Table 2.11. At cycle 2 in Table 2.11, degeneracy is ended, and an optimal solution
$$x_1 = 2, \; x_2 = 0, \; x_3 = 0, \; x_4 = 2, \; x_5 = 0, \; x_6 = 2, \; x_7 = 0, \quad z = -2$$
is obtained. ♦
⁴ The interested reader should refer to the solution of Problem 2.8 for a full discussion of the proof.
In performing the simplex method, not all of the information contained in the tableau is actually used. Only the following items are needed:
(i) Using the relative cost coefficients $\bar c_j$, find the index s such that
$$\bar c_s = \min_{\bar c_j<0} \bar c_j.$$
(ii) Assuming $\bar c_s < 0$, the elements of the sth column (pivot column)
$$\bar p_s = (\bar a_{1s}, \bar a_{2s}, \ldots, \bar a_{ms})^T$$
and the ratios
$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}}$$
are needed for finding the index r. Then, a pivot operation is performed on $\bar a_{rs}$ for updating the tableau.
From the above discussion, note that only one nonbasic column $\bar p_s$ of the current tableau is required. Since there are many more columns than rows in a linear programming problem, dealing with all the columns $\bar p_j$ wastes much computation time and computer storage. A more efficient procedure is to calculate first the relative cost coefficients $\bar c_j$ and then the pivot column $\bar p_s$ from the data of the original problem. The revised simplex method does precisely this, and the inverse of the current basis matrix is what is needed to calculate them.
We assume again that the m × n rectangular matrix A = [p1 p2 ... pn] for the constraints has rank m and that n > m. Moreover, we assume that the linear programming problem in the standard form is feasible. A basis matrix B is defined as an m × m nonsingular submatrix formed by selecting m linearly independent columns from the n columns of the matrix A. Note that the matrix A contains at least one basis matrix B because rank(A) = m and n > m.
For notational simplicity, without loss of generality, assume that the basis matrix B is formed by selecting the first m columns of the matrix A, i.e.,
$$B = [\,p_1 \; p_2 \; \cdots \; p_m\,]. \tag{2.69}$$
Let $x_B = (x_1, x_2, \ldots, x_m)^T$ denote the vector of basic variables. Then the system of equations
$$Bx_B = b \tag{2.71}$$
has the unique solution
$$x_B = B^{-1}b = \bar b. \tag{2.72}$$
The basic solution is feasible if
$$x_B \ge 0. \tag{2.73}$$
" # ! " # !
p1 p2 pm 0 xB pmC1 pn b
C xN D : (2.75)
c 1 c 2 cm 1 z cmC1 cn 0
The enlarged matrix
$$\hat B = \begin{bmatrix} B & 0\\ c_B & 1 \end{bmatrix} \tag{2.76}$$
is also a feasible basis matrix for the enlarged system (2.74). It is easily verified by direct matrix multiplication that the inverse of $\hat B$ is
$$\hat B^{-1} = \begin{bmatrix} B^{-1} & 0\\ -c_BB^{-1} & 1 \end{bmatrix}. \tag{2.77}$$
Such an (m+1) × (m+1) matrix $\hat B$ is called an enlarged basis matrix, and its inverse $\hat B^{-1}$ is called an enlarged basis inverse matrix.
Introducing the simplex multiplier vector
$$\pi = (\pi_1, \pi_2, \ldots, \pi_m) = c_BB^{-1} \tag{2.78}$$
associated with the basis matrix B, the enlarged basis inverse matrix $\hat B^{-1}$ is written more compactly as
$$\hat B^{-1} = \begin{bmatrix} B^{-1} & 0\\ -\pi & 1 \end{bmatrix}. \tag{2.79}$$
! ! !
xB X pN j bN
C xj D ; (2.81)
z j DmC1 cNj zN
or equivalently
42 2 Linear Programming
bN1 >
2 3 19 0
x1
n B bN2 C >
>
6 x2 7 X B C>
>
7C pN j xj D B : C >
6 7
6 :: @ :: A =
>
>
4 : 5
j DmC1
(2.82)
xm bNm >
n
>
>
X >
>
zC cNj xj D zN; >
>
>
;
j DmC1
where
$$\begin{pmatrix} \bar p_j\\ \bar c_j \end{pmatrix} = \hat B^{-1}\begin{pmatrix} p_j\\ c_j \end{pmatrix} = \begin{bmatrix} B^{-1} & 0\\ -\pi & 1 \end{bmatrix}\begin{pmatrix} p_j\\ c_j \end{pmatrix} = \begin{pmatrix} B^{-1}p_j\\ c_j - \pi p_j \end{pmatrix}, \tag{2.83}$$
$$\begin{pmatrix} \bar b\\ -\bar z \end{pmatrix} = \hat B^{-1}\begin{pmatrix} b\\ 0 \end{pmatrix} = \begin{bmatrix} B^{-1} & 0\\ -\pi & 1 \end{bmatrix}\begin{pmatrix} b\\ 0 \end{pmatrix} = \begin{pmatrix} B^{-1}b\\ -\pi b \end{pmatrix}. \tag{2.84}$$
These two formulas are fundamental to calculation in the revised simplex method. As seen in (2.85) and (2.86), $\bar c_j$ and $\bar p_j$, which are used in each iteration of the simplex method, can be calculated from the original coefficients cj and pj given in the initial simplex tableau, provided that the enlarged basis inverse matrix $\hat B^{-1}$, or equivalently the basis inverse matrix $B^{-1}$ and the simplex multiplier vector $\pi$, is given.
Now, assume that the smallest $\bar c_s$ is found among the relative cost coefficients $\bar c_j$ calculated by (2.86) for all nonbasic variables and that the corresponding column vector $\bar p_s = (\bar a_{1s}, \ldots, \bar a_{ms})^T$ is obtained by (2.85). If the vector $\bar b$ of the values of the basic variables is known at the beginning of each cycle, the pivot element $\bar a_{rs}$ is immediately determined. We then need the inverse matrix $(\hat B^*)^{-1}$ of the revised enlarged basis matrix $\hat B^*$ corresponding to the new basis made by replacing xr in the current basis with the new basic variable xs. From the current
(m+1) × (m+1) enlarged basis matrix
$$\hat B = \begin{bmatrix} p_1 & \cdots & p_{r-1} & p_r & p_{r+1} & \cdots & p_m & 0\\ c_1 & \cdots & c_{r-1} & c_r & c_{r+1} & \cdots & c_m & 1 \end{bmatrix}, \tag{2.87}$$
by removing pr and cr and entering ps and cs instead, the new enlarged basis matrix
$$\hat B^* = \begin{bmatrix} p_1 & \cdots & p_{r-1} & p_s & p_{r+1} & \cdots & p_m & 0\\ c_1 & \cdots & c_{r-1} & c_s & c_{r+1} & \cdots & c_m & 1 \end{bmatrix} \tag{2.88}$$
is obtained.
It can be shown that, without direct calculation of inverse matrices,⁵ the new enlarged basis inverse matrix $(\hat B^*)^{-1}$ can be obtained from $\hat B^{-1}$ by performing the pivot operation on $\bar a_{rs}$ in $\hat B^{-1}$. Since $\hat B^*$ is the same as $\hat B$ except for the rth column, the product of $\hat B^{-1}$ and $\hat B^*$ can be represented as
$$\hat B^{-1}\hat B^* = \begin{bmatrix}
1 & & \bar a_{1s} & & &\\
& \ddots & \vdots & & &\\
& & \bar a_{rs} & & &\\
& & \vdots & \ddots & &\\
& & \bar a_{ms} & & 1 &\\
& & \bar c_s & & & 1
\end{bmatrix}, \tag{2.89}$$
where the rth column
$$(\bar a_{1s}, \ldots, \bar a_{rs}, \ldots, \bar a_{ms}, \bar c_s)^T = \hat B^{-1}\begin{pmatrix} p_s\\ c_s \end{pmatrix},$$
and the ith column (i ≠ r) is the (m+1)-dimensional unit vector whose ith element is one.
After introducing the (m+1) × (m+1) nonsingular square matrix
$$\hat E = \begin{bmatrix}
1 & & -\bar a_{1s}/\bar a_{rs} & & &\\
& \ddots & \vdots & & &\\
& & 1/\bar a_{rs} & & &\\
& & \vdots & \ddots & &\\
& & -\bar a_{ms}/\bar a_{rs} & & 1 &\\
& & -\bar c_s/\bar a_{rs} & & & 1
\end{bmatrix} \tag{2.90}$$
that differs from the (m+1) × (m+1) unit matrix $\hat I$ only in the rth column, premultiplying (2.89) by $\hat E$ yields
$$\hat E\hat B^{-1}\hat B^* = \hat I. \tag{2.91}$$
⁵ Explicitly recomputing the inverse matrix would defeat the purpose of the revised method.
Hence, it follows that
$$(\hat B^*)^{-1} = \hat E\hat B^{-1}. \tag{2.92}$$
Setting
$$\hat B^{-1} = \begin{bmatrix} \beta_{ij} & 0\\ -\pi_j & 1 \end{bmatrix}, \qquad (\hat B^*)^{-1} = \begin{bmatrix} \beta^*_{ij} & 0\\ -\pi^*_j & 1 \end{bmatrix}, \tag{2.93}$$
a direct calculation of (2.92) shows that the entries $\beta^*_{ij}$ and $\pi^*_j$ are obtained from $\beta_{ij}$ and $\pi_j$ by exactly the same update formulas as in the pivot operation (2.52)-(2.54). This means that performing the pivot operation on $\bar a_{rs}$ on the current enlarged basis inverse matrix $\hat B^{-1}$ gives the new enlarged basis inverse matrix $(\hat B^*)^{-1}$.
By adding the superscript * to the values of $\bar b$ and $\bar z$ for the new enlarged basis matrix $\hat B^*$ and premultiplying the enlarged system (2.75) corresponding to $\hat B^*$ by $(\hat B^*)^{-1}$, the right-hand side becomes
$$\begin{pmatrix} \bar b^*\\ -\bar z^* \end{pmatrix} = (\hat B^*)^{-1}\begin{pmatrix} b\\ 0 \end{pmatrix} = \hat E\hat B^{-1}\begin{pmatrix} b\\ 0 \end{pmatrix} = \hat E\begin{pmatrix} \bar b\\ -\bar z \end{pmatrix}. \tag{2.95}$$
This also means that performing the pivot operation on $\bar a_{rs}$ on the current $\bar b$ and $\bar z$ gives the new constants $\bar b^*$ and $\bar z^*$. As just described, since premultiplying the current enlarged basis inverse matrix $\hat B^{-1}$ and the right-hand side of (2.81) by $\hat E$ corresponds to a pivot operation, $\hat E$ is called a pivot matrix or an elementary matrix.
Now, the procedure of the revised simplex method, starting with an initial basic
feasible solution, can be summarized as follows.
Assume that the coefficients A, b, and c of the initial feasible canonical form and the inverse matrix $B^{-1}$ of the initial feasible basis are available.
Step 0 By using $B^{-1}$, calculate
$$\pi = c_BB^{-1}, \qquad x_B = \bar b = B^{-1}b, \qquad \bar z = \pi b,$$
and put them in the revised simplex tableau shown in Table 2.12.
Step 1 Calculate the relative cost coefficients $\bar c_j$ for all indices j of the nonbasic variables by
$$\bar c_j = c_j - \pi p_j.$$
If all of the relative cost coefficients are nonnegative, i.e., $\bar c_j \ge 0$, then the current solution is optimal, and stop. Otherwise, find the index s such that
$$\bar c_s = \min_{\bar c_j<0} \bar c_j.$$
Step 2 Calculate
$$\bar p_s = B^{-1}p_s.$$
If all elements of $\bar p_s$ are nonpositive, the optimal value is unbounded, and stop.
Step 3 If some of the $\bar a_{is}$ are positive, find the index r such that
$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is}>0} \frac{\bar b_i}{\bar a_{is}}.$$
Step 4 Perform the pivot operation on $\bar a_{rs}$ to update $B^{-1}$, $\pi$, $\bar b$, and $\bar z$, and return to step 1.
It should be noted here that in the procedure of the revised simplex method, since the (m+1)th column of the enlarged basis inverse matrix $\hat B^{-1}$ is always
$$\begin{pmatrix} 0\\ 1 \end{pmatrix},$$
it is recommended to neglect it and to use the revised simplex tableau without the (m+1)th column of $\hat B^{-1}$, as given in Table 2.12.
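The bookkeeping described above can be sketched compactly in Python (our own illustration under the stated assumptions; only the basis inverse is maintained, and the pivot is applied through the elementary matrix of (2.90)):

```python
import numpy as np

def revised_simplex(A, b, c, basis):
    """Revised simplex sketch: prices out columns from the original data,
    as in (2.83) and (2.84), instead of updating a full tableau."""
    m, n = A.shape
    Binv = np.linalg.inv(A[:, basis])
    while True:
        pi = c[basis] @ Binv                 # simplex multipliers (2.78)
        cbar = c - pi @ A                    # relative cost coefficients
        cbar[basis] = 0.0
        if np.all(cbar >= -1e-10):           # optimality: stop
            x = np.zeros(n)
            x[basis] = Binv @ b
            return x, c @ x
        s = int(np.argmin(cbar))             # entering column
        ps = Binv @ A[:, s]                  # pivot column
        if np.all(ps <= 1e-10):
            raise ValueError("optimal value is unbounded")
        bbar = Binv @ b
        ratios = [bbar[i] / ps[i] if ps[i] > 1e-10 else np.inf for i in range(m)]
        r = int(np.argmin(ratios))
        E = np.eye(m)                        # elementary (pivot) matrix (2.90)
        E[:, r] = -ps / ps[r]
        E[r, r] = 1.0 / ps[r]
        Binv = E @ Binv                      # update the basis inverse (2.92)
        basis[r] = s

A = np.array([[2.0, 6, 1, 0, 0], [3, 2, 0, 1, 0], [4, 1, 0, 0, 1]])
b = np.array([27.0, 16, 18])
c = np.array([-3.0, -8, 0, 0, 0])
print(revised_simplex(A, b, c, [2, 3, 4]))  # x = (3, 3.5, 0, 0, 2.5), z = -37
```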
When starting the revised simplex method with phase I of the two-phase method, the enlarged basis inverse matrix $\hat B^{-1}$ can be considered as
$$\hat B^{-1} = \begin{bmatrix} B^{-1} & 0 & 0\\ -\pi & 1 & 0\\ -\sigma & 0 & 1 \end{bmatrix}, \tag{2.97}$$
and the pivot column is determined by $\min_{\bar d_j<0} \bar d_j$. Including the row of w, the pivot operations are performed according to step 4 in the procedure of the revised simplex method. After eliminating the row of w, i.e., the (m+2)th row, at the beginning of phase II, the above-mentioned revised simplex method is continued.
Example 2.12 (Revised simplex method for the production planning problem of Example 1.1). Using the revised simplex method, solve the standard form of the production planning problem of Example 1.1.
Employing the slack variables x3, x4, and x5 as basic variables, one finds that the basis matrix B is a 3 × 3 unit matrix and that its inverse $B^{-1}$ is the same unit matrix. Thus, from (2.78) and (2.84), it follows that
$$\pi = c_BB^{-1} = (0, 0, 0), \qquad \bar b = B^{-1}b = b = (27, 16, 18)^T, \qquad \bar z = \pi b = 0.$$
Putting these values in the revised simplex tableau gives cycle 0 of Table 2.13. After calculating $\bar c_j$ for the nonbasic variables, the minimum of them is calculated in order to select a new basic variable as follows:
$$\bar c_1 = c_1 - \pi p_1 = -3 - (0, 0, 0)\begin{pmatrix} 2\\ 3\\ 4 \end{pmatrix} = -3,$$
$$\bar c_2 = c_2 - \pi p_2 = -8 - (0, 0, 0)\begin{pmatrix} 6\\ 2\\ 1 \end{pmatrix} = -8,$$
$$\min_{\bar c_j<0} \bar c_j = \min(-3, -8) = \bar c_2 = -8 < 0.$$
Since $\bar c_2$ is the minimum, x2 becomes a new basic variable. The corresponding coefficient column vector $\bar p_2$ is calculated as
$$\bar p_2 = B^{-1}p_2 = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}\begin{pmatrix} 6\\ 2\\ 1 \end{pmatrix} = \begin{pmatrix} 6\\ 2\\ 1 \end{pmatrix},$$
and then it is filled in on the rightmost column of the revised simplex tableau. Since
$$\min\left(\frac{\bar b_3}{\bar a_{32}}, \frac{\bar b_4}{\bar a_{42}}, \frac{\bar b_5}{\bar a_{52}}\right) = \min\left(\frac{27}{6}, \frac{16}{2}, \frac{18}{1}\right) = \frac{27}{6} = 4.5,$$
x3 leaves the basis, and the pivot operation on $\bar a_{32} = 6$ is performed on $B^{-1}$, $\pi$, and
the constants at cycle 0 of Table 2.13, and the result of cycle 1 is obtained. These values become $B^{-1}$, $\pi$, $\bar b$, and $\bar z$ for the new basis matrix B = [p2 p4 p5], and the procedure returns to step 1.
From
$$\bar c_1 = c_1 - \pi p_1 = -3 - \left(-\tfrac{4}{3}, 0, 0\right)\begin{pmatrix} 2\\ 3\\ 4 \end{pmatrix} = -\frac{1}{3},$$
$$\bar c_3 = c_3 - \pi p_3 = 0 - \left(-\tfrac{4}{3}, 0, 0\right)\begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix} = \frac{4}{3},$$
$$\min_{\bar c_j<0} \bar c_j = \bar c_1 = -\frac{1}{3} < 0,$$
x1 becomes a new basic variable. By using $B^{-1}$ at cycle 1, $\bar p_1$ is calculated as
$$\bar p_1 = B^{-1}p_1 = \begin{bmatrix} 1/6 & 0 & 0\\ -1/3 & 1 & 0\\ -1/6 & 0 & 1 \end{bmatrix}\begin{pmatrix} 2\\ 3\\ 4 \end{pmatrix} = \begin{pmatrix} 1/3\\ 7/3\\ 11/3 \end{pmatrix},$$
and it is filled in on the rightmost column of the revised simplex tableau. Since
$$\min\left(\frac{\bar b_2}{\bar a_{21}}, \frac{\bar b_4}{\bar a_{41}}, \frac{\bar b_5}{\bar a_{51}}\right) = \min\left(\frac{4.5}{1/3}, \frac{7}{7/3}, \frac{13.5}{11/3}\right) = \frac{7}{7/3} = 3,$$
x4 leaves the basis. Performing the pivot operation on $\bar a_{41} = 7/3$ yields cycle 2 of Table 2.13, where all the relative cost coefficients are nonnegative, and
an optimal solution
$$x_1 = 3, \quad x_2 = 3.5, \quad x_3 = 0, \quad x_4 = 0, \quad x_5 = 2.5, \quad z = -37$$
is obtained. ♦
2.7 Duality
The notion of duality is one of the most important concepts in linear programming.
Basically, associated with a linear programming problem (we may call it the primal
problem), defined by the constraint matrix A, the right-hand side constant vector
b, and the cost coefficient vector c, there is the corresponding linear programming
problem (called the dual problem) which is specified by the same set of coefficients
A, b, and c. These two problems bear interesting and useful relationships to one
another.
Consider the standard form of linear programming
$$\left.\begin{aligned}
\text{minimize} \quad & z = cx\\
\text{subject to} \quad & Ax = b\\
& x \ge 0.
\end{aligned}\right\} \tag{2.99}$$
Corresponding to this primal problem, the dual problem is defined as
$$\left.\begin{aligned}
\text{maximize} \quad & v = \pi b\\
\text{subject to} \quad & \pi A \le c,
\end{aligned}\right\} \tag{2.100}$$
where $\pi = (\pi_1, \pi_2, \ldots, \pi_m)$ is a row vector of dual variables. Written componentwise, the dual constraints become
$$\pi_1a_{1j} + \pi_2a_{2j} + \cdots + \pi_ma_{mj} \le c_j, \quad j = 1, \ldots, n, \tag{2.101}$$
which implies that the coefficients of the system of inequalities (2.101) are given by the transposed matrix $A^T$ of A.
Any primal problem can be changed into a linear programming problem in a
different format by using the following devices: (i) replace an unconstrained variable
with the difference of two nonnegative variables; (ii) replace an equality constraint
with two opposing inequalities; and (iii) replace an inequality constraint with an
equality by adding a slack or surplus variable.
For example, consider the following linear programming problem involving not
only equality constraints but also inequality constraints and free variables:
$$\left.\begin{aligned}
\text{minimize} \quad & z = c^1x^1 + c^2x^2 + c^3x^3\\
\text{subject to} \quad & A_{11}x^1 + A_{12}x^2 + A_{13}x^3 \ge b^1\\
& A_{21}x^1 + A_{22}x^2 + A_{23}x^3 \le b^2\\
& A_{31}x^1 + A_{32}x^2 + A_{33}x^3 = b^3\\
& x^1 \ge 0, \quad x^2 \le 0.
\end{aligned}\right\} \tag{2.102}$$
By converting this problem to its standard form through the introduction of slack and surplus variables, and substituting $x^2 = -x^{2+}$ ($x^{2+} \ge 0$) and $x^3 = x^{3+} - x^{3-}$ ($x^{3+} \ge 0$, $x^{3-} \ge 0$), it can be easily verified that its dual becomes
$$\left.\begin{aligned}
\text{maximize} \quad & v = \pi^1b^1 + \pi^2b^2 + \pi^3b^3\\
\text{subject to} \quad & \pi^1A_{11} + \pi^2A_{21} + \pi^3A_{31} \le c^1\\
& \pi^1A_{12} + \pi^2A_{22} + \pi^3A_{32} \ge c^2\\
& \pi^1A_{13} + \pi^2A_{23} + \pi^3A_{33} = c^3\\
& \pi^1 \ge 0, \quad \pi^2 \le 0.
\end{aligned}\right\} \tag{2.103}$$
Carefully comparing this dual problem (2.103) with the primal problem (2.102)
gives the relationships between the primal and dual pair summarized in Table 2.14.
For example, an unrestricted variable corresponds to an equality constraint.
By utilizing the relationships in Table 2.14, it is possible to write the dual problem
for a given linear programming problem without going through the intermediate step
of converting the problem to the standard form. From Table 2.14, the symmetric
primal-dual pair given in Table 2.15 is immediately obtained. In a symmetric form,
it is especially easy to see that the dual of the dual is the primal.
The relationship between the primal and dual problems is called duality. The
following theorem, sometimes called the weak duality theorem, is easily proven
and gives us an important relationship between the two problems. In the following,
it is convenient to deal with a primal problem in the standard form.
Theorem 2.5 (Weak duality theorem). If $\bar x$ and $\bar\pi$ are feasible solutions to the primal and dual problems, respectively, then
$$\bar z = c\bar x \ge \bar\pi b = \bar v. \tag{2.104}$$
Proof. Since $\bar x$ and $\bar\pi$ are feasible, it holds that
$$\bar\pi A \le c, \qquad A\bar x = b, \; \bar x \ge 0,$$
which implies
$$c\bar x \ge \bar\pi A\bar x = \bar\pi b.$$
This theorem shows that the primal (minimization) problem is always bounded
below by the dual (maximization) problem and the dual (maximization) problem is
always bounded above by the primal (minimization) problem if they are feasible.
From the weak duality theorem, several corollaries can be immediately obtained.
Corollary 2.1. If $\bar x^o$ and $\bar\pi^o$ are feasible primal and dual solutions and $c\bar x^o = \bar\pi^ob$ holds, then $\bar x^o$ and $\bar\pi^o$ are optimal solutions to their respective problems.
This corollary implies that if a pair of feasible solutions can be found to the
primal and dual problems with the same objective value, then they are both optimal.
Corollary 2.2. If the primal problem is unbounded below, then the dual problem is
infeasible.
Corollary 2.3. If the dual problem is unbounded above, then the primal problem is
infeasible.
With these results, the following duality theorem, sometimes called the strong
duality theorem, can be established as a stronger result.
Theorem 2.6 (Strong duality theorem).
(i) If either the primal or the dual problem has a finite optimal solution, then so
does the other, and the corresponding values of the objective functions are the
same.
(ii) If one problem has an unbounded objective value, then the other problem has
no feasible solution.
Proof. (i) It is sufficient, in proving the first statement, to assume that the primal
has a finite optimal solution, and then we show that the dual has a solution with
the same value of the objective function.
To show that the optimal values are the same, let $x^o$ solve the primal. Since the primal must have a basic optimal solution, we may as well assume that $x^o$ is basic, with the optimal basis matrix $B^o$ and the vector of basic variables $x^o_{B^o}$. Thus,
$$B^ox^o_{B^o} = b, \qquad x^o_{B^o} \ge 0.$$
Define the simplex multiplier vector
$$\pi^o = c_{B^o}(B^o)^{-1}.$$
Since the optimality criterion is satisfied at $x^o$, the relative cost coefficients satisfy
$$\bar c_j = c_j - \pi^op_j \ge 0, \quad j = 1, \ldots, n,$$
which is equivalent to
$$\pi^oA \le c.$$
Thus, $\pi^o$ satisfies the dual constraints, and the corresponding objective value is
$$v^o = \pi^ob = c_{B^o}(B^o)^{-1}b = c_{B^o}x^o_{B^o} = z^o.$$
Hence, from Corollary 2.1, it directly follows that $\pi^o$ is an optimal solution to the dual problem.
(ii) The second statement is an immediate consequence of Corollaries 2.2 and 2.3.
The preceding proof illustrates some important points.
(i) The constraints of the dual problem exactly represent the optimality conditions of the primal problem, and the relative cost coefficients $\bar c_j$ can be interpreted as slack variables in them.
(ii) The simplex multiplier vector $\pi^o$ associated with a primal optimal basis solves the corresponding dual problem. Since, as shown in the previous section, the vector $\pi$ is contained in the bottom row of the revised simplex tableau for the primal problem, the optimal revised simplex tableau inherently provides a dual optimal solution.
For interpreting the relationships between the primal and dual problems, recall the two-variable diet problem of Example 2.3. Associated with this problem, we examine the following problem, though it is somewhat contrived.
Example 2.13 (Dual problem for the diet problem of Example 2.3). A drug company wants to maximize the total profit by producing three pure tablets V1, V2, and V3 which contain exactly one mg (milligram) of the nutrients N1, N2, and N3, respectively. To do so, the company attempts to determine the prices of the three tablets which compare favorably with those of the two foods F1 and F2. Let $\pi_1$, $\pi_2$, and $\pi_3$ denote the prices in yen of one tablet of V1, V2, and V3, respectively.
One gram of the food F1 provides 1, 1, and 2 mg of N1, N2, and N3 and costs 4 yen. If the housewife replaces one gram of the food F1 with tablets, one tablet of V1, one tablet of V2, and two tablets of V3 are needed. This would cost $\pi_1 + \pi_2 + 2\pi_3$, which should be less than or equal to the price of one gram of the food F1, i.e., $\pi_1 + \pi_2 + 2\pi_3 \le 4$. Similarly, one gram of the food F2 provides 3, 2, and 1 mg of N1, N2, and N3 and costs 3 yen. Thus, the inequality $3\pi_1 + 2\pi_2 + \pi_3 \le 3$ is imposed. Since the housewife understands that the daily requirements of the nutrients N1, N2, and N3 are 12, 10, and 15 mg, respectively, the cost of meeting these requirements by using the tablets would be $v = 12\pi_1 + 10\pi_2 + 15\pi_3$. Thus, the company should determine the prices of the tablets V1, V2, and V3 so as to maximize this function subject to the above two inequalities. That is, the company solves the linear programming problem

maximize $v = 12\pi_1 + 10\pi_2 + 15\pi_3$
subject to $\pi_1 + \pi_2 + 2\pi_3 \le 4$
$3\pi_1 + 2\pi_2 + \pi_3 \le 3$
$\pi_1 \ge 0, \ \pi_2 \ge 0, \ \pi_3 \ge 0.$
It should be noted here that this linear programming problem is precisely the dual
of the original diet problem of Example 2.3. ˙
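The duality relationship between these two problems can be checked numerically. The following is a minimal sketch assuming SciPy's linprog with the HiGHS solver is available; the variable names are ours, not the book's.

```python
from scipy.optimize import linprog

# Primal (diet problem of Example 2.3): minimize 4x1 + 3x2 subject to
# x1 + 3x2 >= 12, x1 + 2x2 >= 10, 2x1 + x2 >= 15, x >= 0.
# linprog expects <= constraints, so each inequality is negated.
primal = linprog(c=[4, 3],
                 A_ub=[[-1, -3], [-1, -2], [-2, -1]],
                 b_ub=[-12, -10, -15],
                 method="highs")

# Dual (pricing problem of Example 2.13): maximize 12p1 + 10p2 + 15p3
# subject to p1 + p2 + 2p3 <= 4, 3p1 + 2p2 + p3 <= 3, p >= 0.
# linprog minimizes, so the objective is negated.
dual = linprog(c=[-12, -10, -15],
               A_ub=[[1, 1, 2], [3, 2, 1]],
               b_ub=[4, 3],
               method="highs")

print(primal.x, primal.fun)   # x = (6.6, 1.8), z = 31.8
print(dual.x, -dual.fun)      # pi = (0.4, 0.0, 1.8), v = 31.8 = z
```

As the strong duality theorem asserts, both optimal values coincide ($z = v = 31.8 = 159/5$).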
As thus far discussed, the dual variables, corresponding to the constraints of
the primal problem, coincide with the simplex multipliers for the optimal basic
solution of the primal problem. Consider the economic interpretation of the simplex
multiplier. Let $x^o$ and $\pi^o$ be the optimal solutions of the primal and dual problems, respectively. From the strong duality theorem, it follows that

$$z^o = cx^o = \pi^o b = v^o,$$ (2.105)

and then it can be intuitively understood that when the right-hand side constant $b_i$ of the $i$th constraint $a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i$ of the primal problem is changed from $b_i$ to $b_i + 1$, the value of the objective function will increase by $\pi^o_i$ as long as the basis does not change.
To be more precise, from (2.105), the amount of change in the objective function $z$ for a small change in $b_i$ is obtained by partially differentiating $z^o$ with respect to the right-hand side $b_i$, i.e.,

$$\pi^o_i = \frac{\partial z^o}{\partial b_i}, \quad i = 1, \ldots, m.$$ (2.106)
Thus, the simplex multiplier $\pi_i$ indicates how much the value of the objective function varies for a small change in the right-hand side of the $i$th constraint, and therefore it is referred to as the shadow price or the marginal price.
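This interpretation can also be verified numerically. The sketch below assumes SciPy 1.7 or later, where linprog with method="highs" reports the constraint duals in the marginals attribute; since the original ≥ constraints were negated into ≤ form, the sign of the marginals is flipped to recover the shadow prices (sign conventions may differ across solver versions).

```python
from scipy.optimize import linprog

A_ub = [[-1, -3], [-1, -2], [-2, -1]]   # diet problem, negated to <= form
base = linprog([4, 3], A_ub=A_ub, b_ub=[-12, -10, -15], method="highs")
pi = -base.ineqlin.marginals            # shadow prices (sign flipped)
print(pi)                               # approximately (0.4, 0.0, 1.8)

# Raising the third requirement from 15 to 16 mg should raise the optimal
# cost by about pi_3 = 1.8, as long as the optimal basis does not change.
bumped = linprog([4, 3], A_ub=A_ub, b_ub=[-12, -10, -16], method="highs")
print(bumped.fun - base.fun)            # approximately 1.8
```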
Using the duality theorem, the following result, known as Farkas’s theorem
concerning systems of linear equalities and inequalities, can be easily proven.
Theorem 2.7 (Farkas’s theorem). One and only one of the following two
alternatives holds.
(i) There exists a solution $x \ge 0$ such that $Ax = b$.
(ii) There exists a solution $\pi$ such that $\pi A \le 0^T$ and $\pi b > 0$.
Proof. Consider the (primal) linear programming problem

minimize $z = 0^T x$
subject to $Ax = b$ (2.107)
$x \ge 0$,

whose dual problem is

maximize $v = \pi b$
subject to $\pi A \le 0^T.$ (2.108)
If the statement (i) holds, the primal problem is feasible. Since the value of the
objective function z is always zero, any feasible solution is optimal. From the strong
duality theorem, the value of the objective function v of the dual is zero. Thus, the
statement (ii) does not hold.
Conversely, if the statement (ii) holds, the dual problem has a feasible solution $\pi$ whose objective value $v = \pi b$ is positive. By the weak duality theorem, any primal feasible solution would have to satisfy $z \ge v > 0$; since $z = 0^T x = 0$ for every feasible $x$, the primal problem has no feasible solution, i.e., the statement (i) does not hold. □
Associated with Farkas’s theorem, Gordon’s theorem also plays an important role
for deriving the optimality conditions of nonlinear programming.
Theorem 2.8 (Gordon’s theorem). One and only one of the following two
alternatives holds.
(i) There exists a solution $x \ge 0$, $x \ne 0$, such that $Ax = 0$.
(ii) There exists a solution $\pi$ such that $\pi A < 0^T$.
The following theorem, relating the primal and dual problems, is often useful.
Theorem 2.9 (Complementary slackness theorem). Let $x$ be a feasible solution to the primal problem (2.99) and $\pi$ be a feasible solution to the dual problem (2.100). Then they are respectively optimal if and only if the complementary slackness condition

$$(c - \pi A)x = 0$$ (2.109)

is satisfied.
There are a number of algorithms for linear programming which start with an
infeasible solution to the primal and iteratively force a sequence of solutions to
become feasible as well as optimal. The most prominent among such methods is
the dual simplex method (Lemke 1954). Operationally, its procedure still involves a
sequence of pivot operations, but with different rules for choosing the pivot element.
Consider a primal problem in the standard form:

minimize $z = cx$
subject to $Ax = b$
$x \ge 0.$
Consider the canonical form of the primal problem starting with the basis
.x1 ; x2 ; : : : ; xm / expressed as
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix} + \sum_{j=m+1}^{n} \bar p_j x_j = \begin{pmatrix} \bar b_1 \\ \bar b_2 \\ \vdots \\ \bar b_m \end{pmatrix}, \qquad -z + \sum_{j=m+1}^{n} \bar c_j x_j = -\bar z,$$ (2.110)
where not all right-hand side constants $\bar b_i$ are necessarily nonnegative, i.e., $\bar b_i \ge 0$ may fail for some $i$.

In this canonical form, if $\bar c_j = c_j - \pi p_j \ge 0$ for all $j = m+1, \ldots, n$, which can be alternatively expressed in vector-matrix form as $\pi A \le c$, then the simplex multiplier vector $\pi = c_B B^{-1}$ is a feasible solution to the dual problem. Thus, the canonical form of the primal problem (2.110) satisfying $\bar c_j \ge 0$, $j = m+1, \ldots, n$, is called the dual feasible canonical form. Obviously, if the dual feasible canonical form is also feasible to the primal problem, i.e., $\bar b_i \ge 0$ holds for all $i$, then it is an optimal canonical form.
Now, in a quite similar way to the selection rule of $\bar c_s$ in the simplex method, find the pivot row $r$ by

$$\bar b_r = \min_i \bar b_i < 0.$$ (2.111)

It should be noted that if $\bar b_r \ge 0$, i.e., $\bar b_i \ge 0$ for all $i$, an optimal solution has already been obtained.

If $\bar a_{rj} \ge 0$ for all $j$, then from $\bar b_r < 0$, in the $r$th equation

$$x_r = \bar b_r - \sum_{j=m+1}^{n} \bar a_{rj} x_j,$$

the basic variable $x_r$ remains negative for every $x \ge 0$, and hence the primal problem is infeasible. Otherwise, find the pivot column $s$ by

$$\min_{\bar a_{rj} < 0} \left\{ \frac{\bar c_j}{-\bar a_{rj}} \right\} = \frac{\bar c_s}{-\bar a_{rs}}.$$ (2.112)

After the pivot operation on $\bar a_{rs}$, the new relative cost coefficients become

$$\bar c'_j = \bar c_j - \frac{\bar a_{rj}}{\bar a_{rs}} \bar c_s.$$

For any column index $j$ of a nonbasic variable such that $\bar a_{rj} \ge 0$, noting $\bar a_{rs} < 0$ and $\bar c_s \ge 0$, it follows that

$$\bar c'_j = \bar c_j - \frac{\bar a_{rj}}{\bar a_{rs}} \bar c_s \ge \bar c_j \ge 0.$$

For any column index $j$ of a nonbasic variable such that $\bar a_{rj} < 0$ holds, from (2.112), it follows that its relative cost coefficient is also nonnegative, i.e.,

$$\bar c'_j = -\bar a_{rj} \left( \frac{\bar c_j}{-\bar a_{rj}} - \frac{\bar c_s}{-\bar a_{rs}} \right) \ge 0.$$

Hence, $\bar c'_j \ge 0$ holds for all $j$, and the resulting new canonical form (tableau) is again a dual feasible canonical form.

Moreover, by the pivot operation on $\bar a_{rs}$, we also have the updated value of the objective function

$$\bar z' = \bar z + \frac{\bar b_r}{\bar a_{rs}} \bar c_s = \bar z - \sigma \bar b_r, \qquad \sigma = \frac{\bar c_s}{-\bar a_{rs}} \ge 0,$$

and from $\bar b_r < 0$ and $\sigma \ge 0$, the value is increased by $|\bar b_r|\sigma$ compared to the previous value of $\bar z$.$^6$
After starting with the dual feasible canonical form, the dual simplex method improves feasible solutions of the dual problem through a series of pivot operations in order to seek an optimal solution. Although the dual simplex method uses pivot operations in a similar way to the simplex method, it employs a different rule for choosing the pivot element, and the value of the objective function increases with the number of iterations. The procedure of the dual simplex method, starting with the dual feasible canonical form, can be summarized as follows.

Start with the dual feasible canonical form; that is, assume that $\bar c_j \ge 0$ for all $j$.

Step 1 If $\bar b_i \ge 0$ for all indices $i$ of the basic variables, then the current solution is optimal; stop. Otherwise, choose the index $r$ for the pivot row such that

$$\bar b_r = \min_i \bar b_i < 0.$$

$^6$ If $\bar c_s = 0$ and dual degeneracy occurs, cycling can be avoided by utilizing an anticycling rule similar to the one in the simplex method.
Step 2 If $\bar a_{rj} \ge 0$ for all indices $j$ of the nonbasic variables, then the primal problem is infeasible; stop.

Step 3 If some of the $\bar a_{rj}$ are negative, find the index $s$ for the pivot column such that

$$\min_{\bar a_{rj} < 0} \left\{ \frac{\bar c_j}{-\bar a_{rj}} \right\} = \frac{\bar c_s}{-\bar a_{rs}}.$$

Step 4 Perform the pivot operation on $\bar a_{rs}$ to obtain a new dual feasible canonical form with $x_s$ replacing $x_r$ as a basic variable. Return to step 1.
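To make the pivot rules concrete, here is a compact sketch of the dual simplex method in Python with NumPy. This is not the book's code: the tableau stacks the constraint rows $[A \mid \bar b]$ over the cost row $[\bar c \mid -\bar z]$ of the canonical form (2.110), and the function and variable names are ours.

```python
import numpy as np

def dual_simplex(T, basis):
    """Dual simplex method on a dual feasible tableau.

    T: (m+1) x (n+1) array; rows 0..m-1 hold [A | b_bar], row m holds
    [c_bar | -z_bar] with c_bar >= 0 (dual feasibility). basis: list of
    the m column indices of the current basic variables."""
    m = T.shape[0] - 1
    while True:
        b = T[:m, -1]
        r = int(np.argmin(b))                 # step 1: pivot row
        if b[r] >= 0:
            return T, basis                   # primal feasible -> optimal
        neg = np.where(T[r, :-1] < 0)[0]
        if neg.size == 0:                     # step 2: row all nonnegative
            raise ValueError("primal problem is infeasible")
        ratios = T[m, neg] / -T[r, neg]       # step 3: min c_bar_j / (-a_rj)
        s = int(neg[np.argmin(ratios)])
        T[r] = T[r] / T[r, s]                 # step 4: pivot on a_rs
        for i in range(m + 1):
            if i != r:
                T[i] = T[i] - T[i, s] * T[r]
        basis[r] = s

# Diet problem of Example 2.3 in the dual feasible canonical form
# (cf. Example 2.14 below):
T = np.array([[-1., -3., 1., 0., 0., -12.],
              [-1., -2., 0., 1., 0., -10.],
              [-2., -1., 0., 0., 1., -15.],
              [ 4.,  3., 0., 0., 0.,   0.]])
T_opt, basis = dual_simplex(T, [2, 3, 4])     # initial basis x3, x4, x5
print(basis)            # [1, 3, 0]: the basic variables are x2, x4, x1
print(T_opt[:3, -1])    # [1.8, 0.2, 6.6]: x2 = 9/5, x4 = 1/5, x1 = 33/5
print(-T_opt[3, -1])    # 31.8 = 159/5, the optimal value
```

The run reproduces the two pivot operations of Example 2.14 below and its optimal solution.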
Example 2.14 (Dual simplex method for the diet problem of Example 2.3). Using
the dual simplex method, solve the diet problem in the standard form given in
Example 2.3:
Multiplying both sides of the three equations of the constraints by $-1$ yields the dual feasible canonical form

$-x_1 - 3x_2 + x_3 = -12$
$-x_1 - 2x_2 + x_4 = -10$
$-2x_1 - x_2 + x_5 = -15$
$-z + 4x_1 + 3x_2 = 0$
$x_j \ge 0, \quad j = 1, 2, \ldots, 5.$

Since $\bar c_1 = 4 > 0$ and $\bar c_2 = 3 > 0$, this canonical form with basic variables $x_3$, $x_4$, and $x_5$ is dual feasible. However, it is not primal feasible because $\bar b_1 = -12 < 0$, $\bar b_2 = -10 < 0$, and $\bar b_3 = -15 < 0$.

At cycle 0 in Table 2.16, from

$$\min\{\bar b_3, \bar b_4, \bar b_5\} = \min\{-12, -10, -15\} = \bar b_5 = -15 < 0,$$

$x_1$ becomes a basic variable in the next cycle, and the pivot element is determined as $\bar a_{51} = -2$, bracketed by $[\ ]$ in Table 2.16. After performing the pivot operation on $\bar a_{51} = -2$, the tableau at cycle 1 is obtained. At cycle 1, from $\bar b_1 = 15/2 > 0$ and

$$\min\{\bar b_3, \bar b_4\} = \min\{-9/2, -5/2\} = \bar b_3 = -\frac{9}{2} < 0,$$

$x_2$ becomes a basic variable in the next cycle, and the pivot element is determined as $\bar a_{32} = -5/2$, bracketed by $[\ ]$. After performing the pivot operation on $\bar a_{32} = -5/2$, the tableau at cycle 2 is obtained. At cycle 2, all of the constants $\bar b_i$ become positive, and an optimal solution

$$x_1 = \frac{33}{5}, \quad x_2 = \frac{9}{5}, \quad x_3 = 0, \quad x_4 = \frac{1}{5}, \quad x_5 = 0, \quad z = \frac{159}{5}$$

is obtained. Observe that the tableau of cycle 2 in Table 2.16 coincides with that of cycle 3 in Table 2.7 when the row of $w$ is dropped. ˙
It should be noted here that the idea of the revised simplex method can also be employed in the dual simplex method. In the dual simplex method, in addition to the data $A$, $b$, and $c$ of the initial canonical form, the coefficients $\bar a_{rj}$ for all indices $j$ of the nonbasic variables with respect to the variable $x_r$ leaving the basis and the relative cost coefficients $\bar c_j$ for all indices $j$ of the nonbasic variables are required, where $\bar c_j$ can be computed by the formula $\bar c_j = c_j - \pi p_j$ of the revised simplex method. Hence, if a formula for calculating $\bar a_{rj}$ for all indices $j$ of the nonbasic variables through the basis inverse matrix $B^{-1}$ is given, the dual simplex method can be expressed in the style of the revised simplex method. Since the coefficient $\bar a_{rj}$ is the $r$th element of $\bar p_j$, by using the $r$th row vector of $B^{-1}$, denoted by $[B^{-1}]_r$, it can be calculated as

$$\bar a_{rj} = [B^{-1}]_r p_j, \quad j:\ \text{nonbasic}.$$ (2.113)

With the above discussion, the procedure of the revised dual simplex method can be summarized as follows.
Assume that the coefficients $A$, $b$, and $c$ of the initial dual feasible canonical form and the inverse matrix $B^{-1}$ of the initial dual feasible basis are available.

Step 0 Using $B^{-1}$, calculate

$$\pi = c_B B^{-1}, \qquad x_B = \bar b = B^{-1} b, \qquad \bar z = \pi b$$

and put them in the revised simplex tableau shown in Table 2.12.

Step 1 If $\bar b_i \ge 0$ for all indices $i$ of the basic variables, then the current solution is optimal; stop. Otherwise, choose the index $r$ for the pivot row such that $\bar b_r = \min_i \bar b_i < 0$.

Step 2 If $\bar a_{rj} = [B^{-1}]_r p_j \ge 0$ for all indices $j$ of the nonbasic variables, then the primal problem is infeasible; stop.

Step 3 If some of the $\bar a_{rj}$ are negative, calculate

$$\bar c_j = c_j - \pi p_j$$

and find the index $s$ for the pivot column such that

$$\min_{\bar a_{rj} < 0} \left\{ \frac{\bar c_j}{-\bar a_{rj}} \right\} = \frac{\bar c_s}{-\bar a_{rs}},$$

and put the values of $\hat{\bar p}_s = (\bar p_s^T, \bar c_s)^T$ in the column $\hat{\bar p}_s$ of Table 2.12.

Step 4 Perform the pivot operation on $\bar a_{rs}$ applied to $B^{-1}$, $\pi$, $\bar b$, and $\bar z$ of Table 2.12, and return to step 1.
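As a quick numerical check of (2.113), the row coefficients used in step 2 can be computed directly from a single row of the basis inverse. The following sketch (our own names) uses the cycle-1 data of Example 2.15 below, where the first row of $B^{-1}$ is $(1, 0, -1/2)$.

```python
import numpy as np

Binv_r = np.array([1.0, 0.0, -0.5])   # r-th row of the basis inverse, r = 1
p2 = np.array([-3.0, -2.0, -1.0])     # original column of x2
p5 = np.array([ 0.0,  0.0,  1.0])     # original column of x5
print(Binv_r @ p2, Binv_r @ p5)       # -2.5 and -0.5, matching a_12 and a_15
```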
Example 2.15 (Revised dual simplex method for the diet problem of Example 2.3).
The canonical form
$-x_1 - 3x_2 + x_3 = -12$
$-x_1 - 2x_2 + x_4 = -10$
$-2x_1 - x_2 + x_5 = -15$
$-z + 4x_1 + 3x_2 = 0$
$x_j \ge 0, \quad j = 1, 2, \ldots, 5,$

for the diet problem discussed in Examples 2.3 and 2.14, where $x_3$, $x_4$, and $x_5$ are basic variables, is dual feasible because $\bar c_1 = 4 > 0$ and $\bar c_2 = 3 > 0$. However, since $\bar b_1 = -12 < 0$, $\bar b_2 = -10 < 0$, and $\bar b_3 = -15 < 0$, the primal problem is not feasible. The initial basis matrix $B$ is the $3 \times 3$ unit matrix and its inverse $B^{-1}$ is also the same unit matrix. Hence, from (2.78) and (2.84), it follows that
$$\pi = c_B B^{-1} = (0, 0, 0), \quad \bar b = B^{-1}b = b = (-12, -10, -15)^T, \quad \bar z = \pi b = 0.$$

These values are put in the revised dual simplex tableau at cycle 0 of Table 2.17. At cycle 0, since

$$\min\{\bar b_1, \bar b_2, \bar b_3\} = \min\{-12, -10, -15\} = \bar b_3 = -15 < 0,$$

$x_5$ becomes a nonbasic variable in the next cycle, and the index $r$ of the variable leaving the basis is determined as $r = 3$.
According to (2.113), we calculate the coefficients $\bar a_{rj}$, $r = 3$, $j = 1, 2$ for the nonbasic variables. That is, using the third row $[B^{-1}]_3$ of $B^{-1}$, $p_1$, and $p_2$, we have

$$\bar a_{31} = [B^{-1}]_3 p_1 = (0, 0, 1)(-1, -1, -2)^T = -2,$$
$$\bar a_{32} = [B^{-1}]_3 p_2 = (0, 0, 1)(-3, -2, -1)^T = -1.$$
From

$$\min_{\bar a_{rj} < 0} \left\{ \frac{\bar c_j}{-\bar a_{rj}} \right\} = \min\left\{ \frac{\bar c_1}{-\bar a_{31}}, \frac{\bar c_2}{-\bar a_{32}} \right\} = \min\left\{ \frac{4}{2}, \frac{3}{1} \right\} = \frac{4}{2},$$

the index of the entering variable is determined as $s = 1$, and we put $\bar p_1$ and $\bar c_1 = 4$ in the column $\hat{\bar p}_s$ at cycle 0 in Table 2.17. Since $r = 3$, the pivot element is $-2$, bracketed by $[\ ]$. By performing the pivot operation on $-2$ at cycle 0, the tableau at cycle 1 is obtained.
At cycle 1, the variables $x_3$, $x_4$, and $x_1$ are basic variables with $\bar b = (-9/2, -5/2, 15/2)^T$ in Table 2.17. Since

$$\min\{\bar b_1, \bar b_2\} = \bar b_1 = -\frac{9}{2} < 0,$$

$x_3$ becomes a nonbasic variable in the next cycle, and the index $r$ of the variable leaving the basis is determined as $r = 1$.
From (2.113), we calculate the coefficients $\bar a_{rj}$, $r = 1$, $j = 2, 5$ for the nonbasic variables. Using the first row $[B^{-1}]_1 = (1, 0, -1/2)$ of $B^{-1}$, $p_2$, and $p_5$, we have

$$\bar a_{12} = [B^{-1}]_1 p_2 = (1, 0, -1/2)(-3, -2, -1)^T = -5/2,$$
$$\bar a_{15} = [B^{-1}]_1 p_5 = (1, 0, -1/2)(0, 0, 1)^T = -1/2.$$
From

$$\min_{\bar a_{rj} < 0} \left\{ \frac{\bar c_j}{-\bar a_{rj}} \right\} = \min\left\{ \frac{\bar c_2}{-\bar a_{12}}, \frac{\bar c_5}{-\bar a_{15}} \right\} = \min\left\{ \frac{1}{5/2}, \frac{2}{1/2} \right\} = \frac{1}{5/2},$$

the index of the entering variable is determined as $s = 2$, and we put $\bar p_2$ and $\bar c_2 = 1$ in the column $\hat{\bar p}_s$ at cycle 1 in Table 2.17. Since $r = 1$, the pivot element is $-5/2$, bracketed by $[\ ]$. By performing the pivot operation on $-5/2$ at cycle 1, the tableau at cycle 2 is obtained.
At cycle 2, the variables $x_2$, $x_4$, and $x_1$ are basic variables. Since all of the constants $\bar b_i$ are positive, an optimal solution

$$x_1 = \frac{33}{5}, \quad x_2 = \frac{9}{5}, \quad x_3 = 0, \quad x_4 = \frac{1}{5}, \quad x_5 = 0, \quad z = \frac{159}{5}$$
is obtained. ˙
Finally, consider the sensitivity analysis, which examines the effects of small
changes in the parameters of a linear programming problem on its optimal solution.
In particular, we deal with a case where the right-hand side vector is changed, which
is closely related to the dual simplex method.
Assume that in the standard form of linear programming

minimize $z = cx$
subject to $Ax = b$ (2.114)
$x \ge 0$,

an optimal basis $B$ is known. The corresponding optimal basic solution $x_B$ is

$$x_B = \bar b = B^{-1} b,$$ (2.115)

the simplex multiplier vector is

$$\pi = c_B B^{-1},$$ (2.116)

and the optimal value of the objective function is

$$\bar z = c_B x_B = c_B \bar b = \pi b.$$ (2.117)

Moreover, the optimality condition

$$\bar c_j = c_j - \pi p_j \ge 0 \quad \text{for all indices } j \text{ of the nonbasic variables}$$ (2.118)

is satisfied.
In discussing changes in the right-hand side vector, assume that $b$ is changed to $b + \Delta b$, and consider the following linear programming problem:

minimize $z = cx$
subject to $Ax = b + \Delta b$ (2.119)
$x \ge 0.$

Since the simplex multiplier vector $\pi$ and the relative cost coefficients $\bar c_j$ for all indices $j$ of the nonbasic variables do not depend on $b$, as shown in (2.116) and (2.118), they remain the same even if $b$ is changed to $b + \Delta b$. However, the basic solution $x_B$ itself may no longer be feasible. The new basic solution and the value of the objective function are calculated as

$$x_B = B^{-1}(b + \Delta b) = \bar b + B^{-1}\Delta b$$ (2.120)

and

$$\bar z = \pi(b + \Delta b) = \pi b + \pi \Delta b,$$ (2.121)

respectively.
Therefore, the following statements hold:

(i) If $x_B \ge 0$ holds, then $x_B$ is an optimal solution, and the variation in the objective function is $\pi\Delta b$.
(ii) If $x_B \ge 0$ does not hold, since the optimality condition $\bar c_j \ge 0$ for all indices $j$ of the nonbasic variables is still satisfied, the dual simplex method can be used to find a new optimal solution.
Example 2.16 (Sensitivity analysis for the production planning problem of Exam-
ple 1.1). In the production planning problem of Example 1.1, we calculate optimal
solutions when the total amounts of available materials are changed as follows:
(i) The available amount of material M1 is changed from 27 tons to 32 tons.
(ii) The available amount of material M2 is changed from 16 tons to 23 tons.
Although the optimal solution to the original problem is given at cycle 2 in the
revised simplex method of Table 2.13, for the sake of convenience, we rewrite the
initial tableau (cycle 0) and the optimal tableau (cycle 2) in Table 2.18.
From the optimal tableau, one finds that the basic variables are $x_B = (x_2, x_1, x_5)^T$, the basis inverse matrix is

$$B^{-1} = \begin{pmatrix} 3/14 & -1/7 & 0 \\ -1/7 & 3/7 & 0 \\ 5/14 & -11/7 & 1 \end{pmatrix},$$

and the simplex multiplier vector is $\pi = c_B B^{-1} = (-9/7, -1/7, 0)$.

(i) Let the amounts of changes be $\Delta b = (5, 0, 0)^T$. From $b + \Delta b = (32, 16, 18)^T$, it follows that
$$x_B = B^{-1}(b + \Delta b) = \begin{pmatrix} 3/14 & -1/7 & 0 \\ -1/7 & 3/7 & 0 \\ 5/14 & -11/7 & 1 \end{pmatrix} \begin{pmatrix} 32 \\ 16 \\ 18 \end{pmatrix} = \begin{pmatrix} 32/7 \\ 16/7 \\ 30/7 \end{pmatrix},$$

$$\bar z = \pi(b + \Delta b) = (-9/7, -1/7, 0) \begin{pmatrix} 32 \\ 16 \\ 18 \end{pmatrix} = -\frac{304}{7}.$$
Since $x_B \ge 0$ holds, $x_B$ is an optimal basic solution, and then an optimal solution

$$x_1 = \frac{16}{7}, \quad x_2 = \frac{32}{7}, \quad x_3 = x_4 = 0, \quad x_5 = \frac{30}{7}, \quad z = -\frac{304}{7}$$

is obtained.
(ii) Let the amounts of changes be $\Delta b = (0, 7, 0)^T$. From $b = (27, 16, 18)^T$, it follows that

$$x_B = B^{-1}(b + \Delta b) = B^{-1}\begin{pmatrix} 27 \\ 23 \\ 18 \end{pmatrix} = \begin{pmatrix} 5/2 \\ 6 \\ -17/2 \end{pmatrix},$$

$$\bar z = \pi(b + \Delta b) = (-9/7, -1/7, 0)\begin{pmatrix} 27 \\ 23 \\ 18 \end{pmatrix} = -38.$$
Since the negative component $-17/2$ appears in $x_B$, we use the revised dual simplex method to obtain the optimal tableau shown in Table 2.19. That is, using the third row $[B^{-1}]_3$ of $B^{-1}$, $p_3$, and $p_4$, we have

$$\bar a_{33} = [B^{-1}]_3 p_3 = (5/14, -11/7, 1)(1, 0, 0)^T = 5/14,$$
$$\bar a_{34} = [B^{-1}]_3 p_4 = (5/14, -11/7, 1)(0, 1, 0)^T = -11/7.$$

Thus, $x_4$ becomes a basic variable in the next cycle. The relative cost coefficient $\bar c_4$ is calculated as

$$\bar c_4 = c_4 - \pi p_4 = 0 - (-9/7, -1/7, 0)(0, 1, 0)^T = \frac{1}{7}.$$

We calculate $\bar p_4$ as

$$\bar p_4 = B^{-1} p_4 = \begin{pmatrix} 3/14 & -1/7 & 0 \\ -1/7 & 3/7 & 0 \\ 5/14 & -11/7 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1/7 \\ 3/7 \\ -11/7 \end{pmatrix}.$$
These values are put in the column $\hat{\bar p}_s$ in the tableau. By performing the pivot operation on $[-11/7]$, a new tableau is obtained. In this example, after only one pivot operation, an optimal solution

$$x_1 = \frac{81}{22}, \quad x_2 = \frac{36}{11}, \quad x_3 = 0, \quad x_4 = \frac{119}{22}, \quad x_5 = 0, \quad z = -\frac{819}{22}$$

is obtained. ˙
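The computations of Example 2.16 can be replayed with a few lines of NumPy; this is a sketch in which the basis inverse $B^{-1}$ and the multiplier vector $\pi$ are taken from the optimal tableau as above.

```python
import numpy as np

Binv = np.array([[ 3/14, -1/7,  0.0],
                 [-1/7,   3/7,  0.0],
                 [ 5/14, -11/7, 1.0]])
pi = np.array([-9/7, -1/7, 0.0])

for b_new in ([32.0, 16.0, 18.0],      # case (i):  b1 changed 27 -> 32
              [27.0, 23.0, 18.0]):     # case (ii): b2 changed 16 -> 23
    xB = Binv @ b_new
    z = pi @ b_new
    if (xB >= 0).all():
        print("still optimal:", xB, z)        # (i): (32/7, 16/7, 30/7), -304/7
    else:
        print("dual simplex needed:", xB, z)  # (ii): (5/2, 6, -17/2), -38
```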
When the coefficients of the objective function are changed, only the cost coefficients $c$ affect the optimality criterion and the value of the objective function. Hence, the (revised) simplex method is used to find a new optimal solution only when some relative cost coefficients become negative, i.e., $\bar c_j < 0$ for some $j$.
Problems
2.1 Convert the following problems to the standard form of linear program-
ming:
(i) (Absolute value problem)

minimize $z = \sum_{j=1}^{n} c_j |x_j|$
subject to $\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m$

(ii) (Linear fractional programming problem)

minimize $z = \left( \sum_{j=1}^{n} c_j x_j + c_0 \right) \bigg/ \left( \sum_{j=1}^{n} d_j x_j + d_0 \right)$
subject to $\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m; \quad x_j \ge 0, \ j = 1, \ldots, n,$

where $\sum_{j=1}^{n} d_j x_j + d_0 > 0$ holds for all feasible solutions.
where $\lambda$ and $\mu$ are positive real numbers. Explain the relationship between these two problems. What happens if either $\lambda$ or $\mu$ is negative?
Furthermore, let $I'$ be the index set of basic variables when $x_q$ leaves the basis, and let $J' = \{1, 2, \ldots, n\} \setminus I'$ be the index set of nonbasic variables. The corresponding canonical form is represented by

$$x_i + \sum_{j \in J'} \bar a'_{ij} x_j = \bar b_i, \quad i \in I', \qquad -z + \sum_{j \in J'} \bar c'_j x_j = -\bar z.$$
Solve the problem using the simplex method incorporating Bland’s rule.
2.12 Show that the linear programming problem

minimize $cx$
subject to $Ax \ge b$
$x \ge 0$,

where $A^T = -A$ and $b = -c^T$, is self-dual.
2.13 Prove the complementary slackness theorem.
2.14 Prove Gordon’s theorem.
2.15 Solve the following problems using the dual simplex method:
(i) Minimize $4x_1 + 3x_2$
subject to $x_1 + 3x_2 \ge 12$
$x_1 + 2x_2 \ge 10$
$2x_1 + x_2 \ge 9$
$x_j \ge 0, \quad j = 1, 2$

(ii) Minimize $3x_1 + 5x_2$
subject to $2x_1 + 3x_2 \ge 20$
$2x_1 + 5x_2 \ge 22$
$5x_1 + 3x_2 \ge 25$
$x_j \ge 0, \quad j = 1, 2$

(iii) Minimize $4x_1 + 2x_2 + 3x_3$
subject to $5x_1 + 3x_2 - 2x_3 \ge 10$
$3x_1 - 2x_2 + 4x_3 \ge 8$
$x_j \ge 0, \quad j = 1, 2, 3$
minimize $z_1 = -3x_1 - 8x_2$
subject to $2x_1 + 6x_2 \le 27$
$3x_1 + 2x_2 \le 16$ (3.1)
$4x_1 + x_2 \le 18$
$x_1 \ge 0, \ x_2 \ge 0.$
˙
Example 3.2 (Production planning with environmental consideration). Unfortunately, however, in the production process, product P1 yields 5 units of pollution per ton and product P2 yields 4 units of pollution per ton. Thus, the manager should not only maximize the total profit but also minimize the amount of pollution. For simplicity, assume that the amount of pollution is a linear function of the two decision variables $x_1$ and $x_2$ such as

$$5x_1 + 4x_2.$$
˙
The problem to optimize such multiple conflicting linear objective functions simultaneously under the given linear constraints is called the multiobjective linear programming problem, and it can be generalized as follows:

minimize $z_1(x) = c_1 x$
minimize $z_2(x) = c_2 x$
$\vdots$ (3.3)
minimize $z_k(x) = c_k x$
subject to $Ax \le b$
$x \ge 0$,

where

$$c_i = (c_{i1}, \ldots, c_{in}), \quad i = 1, \ldots, k,$$

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.$$

Letting $C$ denote the $k \times n$ matrix whose $i$th row is $c_i$, the problem (3.3) is compactly written as

minimize $z(x) = Cx$
subject to $Ax \le b$ (3.4)
$x \ge 0.$
[Figure: the feasible region X in the $x_1$-$x_2$ plane with the objective function lines $z_1 = -3x_1 - 8x_2$ and $z_2 = 5x_1 + 4x_2$ and the extreme points A(0, 0), B(4.5, 0), and C(4, 2)]
Among the sets of completely optimal solutions, Pareto optimal solutions, and weakly Pareto optimal solutions, the inclusion relation

$$X_{CO} \subseteq X_P \subseteq X_{WP}$$ (3.7)

holds.
[Figure: the feasible region $Z = \{z(x) \mid x \in X\}$ in the $z_1$-$z_2$ plane with the extreme points A(0, 0) and E(-36, 18)]
In addition to the extreme points A and D, the extreme point E and all of the points of the segments AE and ED are Pareto optimal solutions since they can be improved only at the expense of either $z_1$ or $z_2$. However, all of the remaining feasible points are not Pareto optimal since there always exist other feasible points which improve both objective functions or at least one of them without sacrificing the other.
This situation can be more easily understood by observing the feasible region $Z = \{z(x) \mid x \in X\}$ in the $z_1$-$z_2$ plane, where $z_1 = -3x_1 - 8x_2$ and $z_2 = 5x_1 + 4x_2$.
The weighting method for obtaining a Pareto optimal solution is to solve the
weighting problem formulated by taking the weighted sum of all of the objective
functions in the original multiobjective linear programming problem (Kuhn and
Tucker 1951; Zadeh 1963). Thus, the weighting problem is defined by

$$\underset{x \in X}{\text{minimize}}\ wz(x) = \sum_{i=1}^{k} w_i z_i(x),$$ (3.9)

where $w = (w_1, \ldots, w_k) \ge 0$, $w \ne 0$.
The relationships between the optimal solution x of the weighting problem and the
Pareto optimality concept of the multiobjective linear programming problems can
be characterized by the following theorems.
Theorem 3.1. If $x^* \in X$ is an optimal solution of the weighting problem for some $w > 0$, then $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem.

Proof. If an optimal solution $x^*$ of the weighting problem were not a Pareto optimal solution of the multiobjective linear programming problem, there would exist $x \in X$ such that $z_j(x) < z_j(x^*)$ for some $j$ and $z_i(x) \le z_i(x^*)$, $i = 1, \ldots, k$; $i \ne j$. From $w = (w_1, \ldots, w_k) > 0$, this implies $\sum_{i=1}^{k} w_i z_i(x) < \sum_{i=1}^{k} w_i z_i(x^*)$, which contradicts the assumption that $x^*$ is an optimal solution of the weighting problem. □

It should be noted here that the condition of Theorem 3.1 can be replaced with the condition that $x^*$ is a unique optimal solution of the weighting problem for $w \ge 0$, $w \ne 0$.
Theorem 3.2. If $x^* \in X$ is a Pareto optimal solution to a multiobjective linear programming problem, then $x^*$ is an optimal solution to the weighting problem for some $w = (w_1, \ldots, w_k) \ge 0$, $w \ne 0$.

Proof. First we prove that $x^*$ is an optimal solution of the linear programming problem

minimize $\mathbf{1}^T Cx$
subject to $Cx \le Cx^*$ (3.10)
$Ax \le b, \ x \ge 0.$

If $x^*$ is not an optimal solution of this problem, then there exists $x \in X$ such that $Cx \le Cx^*$ and $\mathbf{1}^T Cx < \mathbf{1}^T Cx^*$. This means that there exists $x \in X$ such that

$$\mathbf{1}^T Cx = \sum_{i=1}^{k} c_i x < \sum_{i=1}^{k} c_i x^* = \mathbf{1}^T Cx^* \quad \text{and} \quad c_i x \le c_i x^*, \quad i = 1, \ldots, k,$$

which contradicts the Pareto optimality of $x^*$. Hence, $x^*$ is an optimal solution of (3.10).

Next consider the dual problem of (3.10):

maximize $(-b^T, -x^{*T}C^T)\,y$
subject to $(-A^T, -C^T)\,y \le C^T\mathbf{1}$ (3.11)
$y \ge 0.$

By the strong duality theorem, the dual problem (3.11) has an optimal solution $y^T = (y_1^T, y_2^T)$ such that

$$(-b^T, -x^{*T}C^T)\,y = \mathbf{1}^T Cx^*,$$

and thus

$$(\mathbf{1}^T + y_2^T)\,Cx^* = -b^T y_1.$$

Letting

$$w^T = \mathbf{1}^T + y_2^T,$$

we have

$$\sum_{i=1}^{k} w_i c_i x = w^T Cx = (\mathbf{1} + y_2)^T Cx \quad \text{for all } x \in X.$$

Next observe that the dual problem of the linear programming problem

minimize $(\mathbf{1} + y_2)^T Cx$
subject to $Ax \le b, \ x \ge 0$ (3.12)

becomes

maximize $-b^T u$
subject to $-A^T u \le C^T(\mathbf{1} + y_2)$ (3.13)
$u \ge 0.$
[Figure: the feasible region X in the $x_1$-$x_2$ plane with the objective function lines $z_1 = -3x_1 - 8x_2$ and $z_2 = 5x_1 + 4x_2$ and the extreme points A(0, 0), B(4.5, 0), C(4, 2), D(3, 3.5), E(0, 4.5)]
Then, $x = x^*$ and $u = y_1$ are feasible solutions to the corresponding linear programming problems (3.12) and (3.13), and the values of the two objective functions are equal, so that both are optimal. Hence, for any $x \in X$, it follows that

$$\sum_{i=1}^{k} w_i c_i x^* = (\mathbf{1} + y_2)^T Cx^* = \min_{x \in X} (\mathbf{1} + y_2)^T Cx \le (\mathbf{1} + y_2)^T Cx = \sum_{i=1}^{k} w_i c_i x,$$

i.e., $x^*$ is an optimal solution to the weighting problem with $w = \mathbf{1} + y_2$. □
The equation

$$\sum_{i=1}^{k} w_i z_i = c$$

represents a hyperplane (note that in the case of two objectives it is a line, and in the case of three objectives, a plane) with the normal vector $w = (w_1, \ldots, w_k)$. Solving the weighting problem for the given weighting coefficients $w > 0$ yields the minimum $c$ such that this hyperplane has at least one common point with the feasible region $Z$ in the $z = (z_1, \ldots, z_k)$ space, and the corresponding Pareto optimal solution $x^*$ is obtained as in Fig. 3.3.

The hyperplane for this minimum $c$ is the supporting hyperplane of the feasible region $Z$ at the point $z(x^*)$ on the Pareto optimal surface. The condition for a small displacement from the point $z(x^*)$ to remain on this supporting hyperplane is $\Delta W = \sum_{i=1}^{k} w_i \Delta z_i = 0$; considering a displacement in which only $z_1$ and $z_i$ vary, we obtain

$$\frac{\partial z_1}{\partial z_i} = -\frac{w_i}{w_1}.$$ (3.17)
Therefore, the ratio of the weighting coefficients $w_i/w_1$ gives a trade-off rate between the two objective functions $z_1$ and $z_i$ at $z(x^*)$.
Example 3.4 (Weighting method for production planning of Example 3.2). To illustrate the weighting method, consider the problem of Example 3.2. The corresponding weighting problem becomes as follows:

minimize $w_1(-3x_1 - 8x_2) + w_2(5x_1 + 4x_2)$
subject to $2x_1 + 6x_2 \le 27$
$3x_1 + 2x_2 \le 16$
$4x_1 + x_2 \le 18$
$x_1 \ge 0, \ x_2 \ge 0.$

For this problem, if we choose, for example, $w_1 = 0.4$ and $w_2 = 0.6$, the objective function becomes $1.8x_1 - 0.8x_2$, and we obtain the optimal solution $(x_1, x_2) = (0, 4.5)$. As depicted in Fig. 3.3, it can be easily understood that solving the corresponding weighting problem yields the extreme point E(0, 4.5) as a Pareto optimal solution. Also, as two extreme cases, if we set $w_1 = 1$, $w_2 = 0$ and $w_1 = 0$, $w_2 = 1$, from Fig. 3.3, the optimal solutions of the corresponding weighting problems become the extreme points D(3, 3.5) and A(0, 0), respectively. In these cases, although the condition $w > 0$ of Theorem 3.1 is not satisfied, from Fig. 3.3, it can be seen that these two extreme points are Pareto optimal solutions. ˙
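Example 3.4 is easily reproduced with an LP solver. A minimal sketch assuming SciPy's linprog with the HiGHS method; the names are ours.

```python
from scipy.optimize import linprog

# Weighting problem for Example 3.4: minimize w1*(-3x1 - 8x2) + w2*(5x1 + 4x2)
# over X = {2x1 + 6x2 <= 27, 3x1 + 2x2 <= 16, 4x1 + x2 <= 18, x >= 0}.
w1, w2 = 0.4, 0.6
c = [w1 * (-3) + w2 * 5, w1 * (-8) + w2 * 4]   # = (1.8, -0.8)
res = linprog(c, A_ub=[[2, 6], [3, 2], [4, 1]], b_ub=[27, 16, 18],
              method="highs")
print(res.x)   # (0.0, 4.5): the extreme point E, a Pareto optimal solution
```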
The constraint method for characterizing Pareto optimal solutions is to solve the
constraint problem formulated by taking one objective function of a multiobjective
linear programming problem as the objective function of the constraint problem and
letting all the other objective functions be inequality constraints (Haimes and Hall
1974; Haimes et al. 1971). The constraint problem is defined by

minimize $z_j(x)$
subject to $z_i(x) \le \varepsilon_i, \quad i = 1, 2, \ldots, k; \ i \ne j$ (3.18)
$x \in X.$
which contradicts the fact that $x^*$ is a Pareto optimal solution to the multiobjective linear programming problem. □
Example 3.5 (Constraint method for production planning of Example 3.2). To illustrate the constraint method, consider Example 3.2. The constraint problem for $j = 1$ becomes

minimize $z_1(x) = -3x_1 - 8x_2$
subject to $z_2(x) = 5x_1 + 4x_2 \le \varepsilon_2$
$x \in X.$

For example, solving this problem for $\varepsilon_2 = 14$ yields the Pareto optimal solution $(x_1, x_2) = (0, 3.5)$, whose image in the $z_1$-$z_2$ plane is the point $(-28, 14)$ on the Pareto frontier between A(0, 0) and E(-36, 18).
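The constraint problem is likewise a standard LP: the ε-constraint simply joins the rows describing X. A sketch under the same SciPy assumption:

```python
from scipy.optimize import linprog

eps2 = 14   # upper bound imposed on z2(x) = 5x1 + 4x2
res = linprog([-3, -8],                       # minimize z1(x) = -3x1 - 8x2
              A_ub=[[2, 6], [3, 2], [4, 1], [5, 4]],
              b_ub=[27, 16, 18, eps2],
              method="highs")
print(res.x, res.fun)   # (0.0, 3.5), z1 = -28: the point (-28, 14)
```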
The weighted minimax problem is defined by

$$\underset{x \in X}{\text{minimize}}\ \max_{i=1,\ldots,k} w_i z_i(x),$$ (3.19)

or equivalently

minimize $v$
subject to $w_i z_i(x) \le v, \quad i = 1, 2, \ldots, k$ (3.20)
$x \in X.$
[Figure: the feasible region $Z = \{z(x) \mid x \in X\}$ in the $z_1$-$z_2$ plane with a level line $w_1 z_1 + w_2 z_2 = c$ and its intercepts $c/w_1$ and $c/w_2$]
$$w_i z_i(x) \le w_i z_i(x^*), \quad i = 1, \ldots, k.$$

Hence,

$$\max_{i=1,\ldots,k} w_i z_i(x) \le \max_{i=1,\ldots,k} w_i z_i(x^*).$$

This contradicts the fact that $x^*$ is a unique optimal solution to the weighted minimax problem for $w = (w_1, \ldots, w_k) \ge 0$. □
From the proof of this theorem, in the absence of the uniqueness of a solution in
Theorem 3.5, only weak Pareto optimality is guaranteed.
Theorem 3.6. If $x^* \in X$ is a Pareto optimal solution to a multiobjective linear programming problem, then $x^*$ is an optimal solution of the weighted minimax problem for some $w = (w_1, \ldots, w_k) > 0$.
Proof. For a Pareto optimal solution $x^* \in X$ of the multiobjective linear programming problem, choose $w = (w_1, \ldots, w_k) > 0$ such that $w_i z_i(x^*) = v^*$, $i = 1, \ldots, k$. Now assume that $x^*$ is not an optimal solution of the weighted minimax problem; then there exists $x \in X$ such that

$$\max_{i=1,\ldots,k} w_i z_i(x) < \max_{i=1,\ldots,k} w_i z_i(x^*) = v^*.$$

Noting $w = (w_1, \ldots, w_k) > 0$, this implies the existence of $x \in X$ such that $z_i(x) < z_i(x^*)$, $i = 1, \ldots, k$, which contradicts the Pareto optimality of $x^*$. □

Example 3.6 (Minimax method for production planning of Example 3.2). To illustrate the weighted minimax method, consider again the problem of Example 3.2. Shifting the first objective by its individual minimum, i.e., setting $\hat z_1 = z_1 + 37$, and choosing the weights $w_1 = 0.8$ and $w_2 = 0.4$, the weighted minimax problem in the form (3.20) becomes

minimize $v$
subject to $2x_1 + 6x_2 \le 27$
$3x_1 + 2x_2 \le 16$
$4x_1 + x_2 \le 18$
$-2.4x_1 - 6.4x_2 + 29.6 \le v$
$2x_1 + 1.6x_2 \le v$
$x_1 \ge 0, \ x_2 \ge 0.$
[Fig. 3.6: the feasible regions in the $\hat z_1$-$z_2$ and $z_1$-$z_2$ planes, with the extreme points A(0, 0) and E(-36, 18)]
Solving this problem yields the optimal solution $(x_1, x_2, v) = (0, 3.7, 5.92)$. Noting that $\hat z_1 = z_1 + 37$, as illustrated in Fig. 3.6, it can be understood that the vector $(7.4, 14.8)$ of the objective function values of this problem is the point obtained by moving the point $(-29.6, 14.8)$ of the original problem by 37 along the $z_1$ axis. Hence, the point $(-29.6, 14.8)$ is the vector of the original objective function values which corresponds to the Pareto optimal solution $(x_1, x_2) = (0, 3.7)$. ˙
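The minimax problem of Example 3.6 is linear once the auxiliary variable $v$ is added, so it can be handed to the same solver. A sketch (our names; $v$ is unrestricted in sign):

```python
from scipy.optimize import linprog

# Variables (x1, x2, v): minimize v subject to x in X and the two
# weighted deviation constraints of Example 3.6.
res = linprog([0, 0, 1],
              A_ub=[[2, 6, 0], [3, 2, 0], [4, 1, 0],
                    [-2.4, -6.4, -1],     # -2.4x1 - 6.4x2 + 29.6 <= v
                    [ 2.0,  1.6, -1]],    #  2x1 + 1.6x2 <= v
              b_ub=[27, 16, 18, -29.6, 0],
              bounds=[(0, None), (0, None), (None, None)],
              method="highs")
print(res.x)   # (0.0, 3.7, 5.92), matching the solution above
```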
From Theorems 3.3 and 3.5, if the uniqueness of the optimal solution $x^*$ of the scalarizing problem is not guaranteed, it is necessary to perform the Pareto optimality test of $x^*$. The Pareto optimality test for $x^*$ can be performed by solving the following linear programming problem with the decision variables $x = (x_1, \ldots, x_n)^T$ and $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_k)^T$:

maximize $\sum_{i=1}^{k} \varepsilon_i$
subject to $z_i(x) + \varepsilon_i = z_i(x^*), \quad i = 1, \ldots, k$ (3.21)
$x \in X, \ \varepsilon = (\varepsilon_1, \ldots, \varepsilon_k)^T \ge 0.$
For an optimal solution $(\bar x, \bar\varepsilon)$ of this linear programming problem, the following theorem holds.

Theorem 3.7. For an optimal solution $(\bar x, \bar\varepsilon)$ of the Pareto optimality test problem, the following statements hold.

(i) If $\bar\varepsilon_i = 0$ for all $i = 1, \ldots, k$, then $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem.
(ii) If $\bar\varepsilon_i > 0$ for at least one $i$, then $x^*$ is not a Pareto optimal solution of the multiobjective linear programming problem. Instead of $x^*$, $\bar x$ is a Pareto optimal solution corresponding to the scalarization problem.

Proof. (ii) Suppose that $\bar\varepsilon_i > 0$ for at least one $i$ and that $\bar x$ is not a Pareto optimal solution of the multiobjective linear programming problem. Then there exists $x \in X$ such that $z_j(x) < z_j(\bar x)$ for some $j$ and $z_i(x) \le z_i(\bar x)$, $i = 1, \ldots, k$; $i \ne j$. Hence, there exists $x \in X$ such that $z(x) + \varepsilon' = z(\bar x)$ for some $\varepsilon' \ge 0$, $\varepsilon' \ne 0$, and then $z(x) + \varepsilon' + \bar\varepsilon = z(x^*)$. This contradicts the optimality of $\bar\varepsilon$. □
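The Pareto optimality test (3.21) is itself a linear program in $(x, \varepsilon)$. A sketch for the production planning problem, testing the point $x^* = (0, 3.7)$ obtained in Example 3.6 (SciPy assumed; linprog's default bounds already enforce $x \ge 0$ and $\varepsilon \ge 0$):

```python
from scipy.optimize import linprog

# Variables (x1, x2, e1, e2): maximize e1 + e2 subject to
# z_i(x) + e_i = z_i(x*) and x in X, where z(x*) = (-29.6, 14.8).
res = linprog([0, 0, -1, -1],                  # maximize e1 + e2
              A_eq=[[-3, -8, 1, 0], [5, 4, 0, 1]],
              b_eq=[-29.6, 14.8],
              A_ub=[[2, 6, 0, 0], [3, 2, 0, 0], [4, 1, 0, 0]],
              b_ub=[27, 16, 18],
              method="highs")
print(res.x[2:])   # (0.0, 0.0): both epsilons vanish, so x* is Pareto optimal
```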
As discussed above, when an optimal solution $x^*$ is not unique, $x^*$ is not always Pareto optimal, and then for the Pareto optimality test, (3.21) is solved. It should be noted here, however, that although the formulation is somewhat more complex, a Pareto optimal solution can be obtained directly by solving the following augmented minimax problem:

minimize $v$
subject to $w_i z_i(x) + \rho \sum_{i=1}^{k} w_i z_i(x) \le v, \quad i = 1, 2, \ldots, k$ (3.22)
$x \in X,$

where $\rho$ is a sufficiently small positive number.
The term “goal programming” first appeared in the 1961 text by Charnes and Cooper
to deal with multiobjective linear programming problems that assumed the decision
maker (DM) could specify goals or aspiration levels for the objective functions.
Subsequent works on goal programming have been numerous, including texts on goal programming by Ijiri (1965), Lee (1972), and Ignizio (1976, 1982) and survey papers by Charnes and Cooper (1977) and Ignizio (1983).
The key idea behind goal programming is to minimize the deviations from goals
or aspiration levels set by the DM. Goal programming therefore, in most cases,
seems to yield a satisficing solution in the same spirit as March and Simon (1958)
rather than an optimal solution.
As discussed in the previous subsection, in general, the multiobjective linear programming problem can be formulated over the feasible region

$$X = \{x \in R^n \mid Ax \le b, \ x \ge 0\},$$ (3.24)

and the goal programming problem is then written as

$$\underset{x \in X}{\text{minimize}}\ d(z(x), \hat z),$$ (3.25)

where $\hat z = (\hat z_1, \ldots, \hat z_k)$ is the goal vector specified by the DM and $d(z(x), \hat z)$ represents the distance between $z(x)$ and $\hat z$ in some selected norm.

The simplest version of (3.25), where the absolute value or the $\ell_1$ norm is used, is

$$\underset{x \in X}{\text{minimize}}\ d_1(z(x), \hat z) = \sum_{i=1}^{k} |c_i x - \hat z_i|.$$ (3.26)
More generally, using the $\ell_1$ norm with weights (the weighted $\ell_1$ norm), it becomes

$$\underset{x \in X}{\text{minimize}}\ d_1^w(z(x), \hat z) = \sum_{i=1}^{k} w_i |c_i x - \hat z_i|.$$ (3.27)

To convert this problem into ordinary linear programming form, define the positive and negative deviations

$$d_i^+ = \frac{1}{2}\{|z_i(x) - \hat z_i| + (z_i(x) - \hat z_i)\}$$ (3.28)

and

$$d_i^- = \frac{1}{2}\{|z_i(x) - \hat z_i| - (z_i(x) - \hat z_i)\}.$$ (3.29)
It is appropriate to consider here the practical significance of $d_i^+$ and $d_i^-$. From the definitions of $d_i^+$ and $d_i^-$, it can be easily understood that

$$d_i^+ = \begin{cases} z_i(x) - \hat z_i & \text{if } z_i(x) \ge \hat z_i \\ 0 & \text{if } z_i(x) < \hat z_i \end{cases}$$ (3.31)

and

$$d_i^- = \begin{cases} \hat z_i - z_i(x) & \text{if } \hat z_i \ge z_i(x) \\ 0 & \text{if } \hat z_i < z_i(x). \end{cases}$$ (3.32)
To handle objectives ranked in order of importance, preemptive priorities $P_l$ are introduced, where

$$P_l \gg P_{l+1}, \quad l = 1, \ldots, L-1,$$ (3.34)

meaning that no positive scalar $t$, however large, can make $tP_{l+1}$ comparable with $P_l$, i.e.,

$$tP_{l+1} < P_l \quad \text{for any } t > 0, \quad l = 1, \ldots, L-1,$$ (3.35)

where $1 \le L \le k$.
By incorporating such preemptive priorities $P_l$ together with the over- and underachievement weights $w_i^+$ and $w_i^-$, the general linear goal programming formulation takes on the following form:

minimize $\sum_{l=1}^{L} P_l \left( \sum_{i \in I_l} (w_i^+ d_i^+ + w_i^- d_i^-) \right)$
subject to $z_i(x) - d_i^+ + d_i^- = \hat z_i, \quad i = 1, \ldots, k$ (3.36)
$Ax \le b, \ x \ge 0$
$d_i^+ d_i^- = 0, \quad i = 1, \ldots, k$
$d_i^+ \ge 0, \ d_i^- \ge 0, \quad i = 1, \ldots, k,$
where $I_l\ (\ne \emptyset)$ is the index set of objective functions in the $l$th priority class. Observe that when there are $k$ distinct ordinal ranking classes with the $i$th objective function $c_i x$ belonging to the $i$th priority class, i.e., $L = k$, the objective function of (3.36) becomes simply
$$\sum_{i=1}^{k} P_i (w_i^+ d_i^+ + w_i^- d_i^-).$$ (3.37)
To graphically obtain an optimal solution for this simple example in the $x_1$-$x_2$ plane, the two priority goals are depicted as straight lines together with the original feasible region in Fig. 3.7. Although only the decision variables $x_1$ and $x_2$ are used in this graph, the effect of increasing either $d_i^+$ or $d_i^-$ is indicated by the arrows. The region which satisfies both the original constraints and the first priority goal, i.e., $d_1^+ \ge 0$ and $d_1^- = 0$, is shown as the cross-hatched region. To achieve the second priority goal without degrading the achievement of the first priority goal, the feasible area should be limited to the crisscross-hatched area in Fig. 3.7. However, as can be seen, concerning the third priority goal, $d_3^+$ cannot avoid becoming positive. As just described, the final solution of this problem occurs at the point $(x_1, x_2) = (1.2, 3)$, at which only the first and second priority goals are satisfied. ˙
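A non-preemptive (single priority class) instance of (3.36) is an ordinary LP once the deviational variables are introduced. The sketch below uses hypothetical goals $\hat z_1 = -30$ and $\hat z_2 = 10$ for the production planning problem and equal unit weights; the goals and weights are ours, chosen only for illustration.

```python
from scipy.optimize import linprog

# Variables (x1, x2, d1p, d1m, d2p, d2m): minimize the total deviation
# subject to z_i(x) - d_i_plus + d_i_minus = z_i_hat and x in X.
res = linprog([0, 0, 1, 1, 1, 1],
              A_eq=[[-3, -8, -1, 1, 0, 0],    # z1(x) - d1p + d1m = -30
                    [ 5,  4, 0, 0, -1, 1]],   # z2(x) - d2p + d2m =  10
              b_eq=[-30, 10],
              A_ub=[[2, 6, 0, 0, 0, 0], [3, 2, 0, 0, 0, 0],
                    [4, 1, 0, 0, 0, 0]],
              b_ub=[27, 16, 18],
              method="highs")
print(res.x[:2], res.fun)   # x = (0, 3.75): z1 = -30 exactly, z2 overshoots by 5
```

At an optimal solution with positive weights, at most one of each pair $(d_i^+, d_i^-)$ is positive, so the complementarity condition $d_i^+ d_i^- = 0$ of (3.36) holds automatically here.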
In compromise programming, instead of goals specified by the DM, the ideal point $z^{\min} = (z_1^{\min}, \ldots, z_k^{\min})$, where $z_i^{\min} = \min_{x \in X} z_i(x)$, is used, and the distance to it is minimized:

$$\underset{x \in X}{\text{minimize}}\ d_p^w(z(x), z^{\min}) = \left( \sum_{i=1}^{k} w_i |z_i(x) - z_i^{\min}|^p \right)^{1/p}.$$ (3.38)

Since $z_i(x) - z_i^{\min} \ge 0$ for every $x \in X$, and since minimizing (3.38) is equivalent to minimizing its $p$th power for $1 \le p < \infty$, it suffices to consider

$$\underset{x \in X}{\text{minimize}}\ \tilde d_p^w(z(x), z^{\min}) = \sum_{i=1}^{k} w_i (z_i(x) - z_i^{\min})^p,$$ (3.39)

and for $p = \infty$,

$$\underset{x \in X}{\text{minimize}}\ \tilde d_\infty^w(z(x), z^{\min}) = \max_{i=1,\ldots,k} w_i (z_i(x) - z_i^{\min}).$$ (3.40)

Observe that for $p = 1$, all deviations from $z_i^{\min}$ are taken into account in direct proportion to their magnitudes, while for $2 \le p < \infty$, the larger deviations have the greater influence. Ultimately, for $p = \infty$, only the largest deviation is taken into account.
It should be noted here that any optimal solution of (3.39) for any $1 \le p < \infty$, or a unique optimal solution of (3.40), with $w_i > 0$ for all $i = 1, \ldots, k$, is a Pareto optimal solution of the multiobjective linear programming problem.

The compromise set $C_w$, given the weighting vector $w$, is defined as the set of all compromise solutions $x_w^p$, $1 \le p \le \infty$.
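The compromise solutions for $p = 1$ and $p = \infty$ can be computed as LPs, since $z_i(x) - z_i^{\min} \ge 0$ on $X$. A sketch for the production planning problem with $w_1 = w_2 = 1$ and $z^{\min} = (-37, 0)$, anticipating Example 3.8:

```python
from scipy.optimize import linprog

A_X, b_X = [[2, 6], [3, 2], [4, 1]], [27, 16, 18]

# p = 1: minimize (z1 + 37) + (z2 - 0) = 2x1 - 4x2 + constant
p1 = linprog([2, -4], A_ub=A_X, b_ub=b_X, method="highs")
print(p1.x)     # (0.0, 4.5): the point A(-36, 18) of Example 3.8

# p = infinity: minimize v subject to z1 + 37 <= v and z2 <= v
pinf = linprog([0, 0, 1],
               A_ub=[row + [0] for row in A_X] + [[-3, -8, -1], [5, 4, -1]],
               b_ub=b_X + [-37, 0],
               bounds=[(0, None), (0, None), (None, None)],
               method="highs")
print(pinf.x)   # (0.0, 3.083): the point B(-24.67, 12.33) of Example 3.8
```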
In the context of linear programming problems, Zeleny (1973) suggested that the compromise set $C_w$ can be approximated by the set of Pareto optimal solutions of the two-objective problem

$$\underset{x \in X}{\text{minimize}}\ \{\tilde d_1^w(z(x), z^{\min}),\ \tilde d_\infty^w(z(x), z^{\min})\},$$ (3.42)

since the compromise solutions for $p = 1$ and $p = \infty$ bound the compromise set. Although it can be seen that the compromise solution set $C_w$ is a subset of the set of Pareto optimal solutions, $C_w$ may still be too large for selecting the final solution and, hence, should be reduced further.
Zeleny (1973, 1976) suggests several methods to reduce the compromise solution set $C_w$. One possible reduction method without the DM's aid is to generate another compromise solution set $\bar C_w$, similar to $C_w$, by maximizing the distance from the so-called anti-ideal point $z^{\max} = (z_1^{\max}, \ldots, z_k^{\max})$, where $z_i^{\max} = \max_{x \in X} z_i(x)$. The problem to be solved thus becomes

$$\underset{x \in X}{\text{maximize}}\ \left( \sum_{i=1}^{k} w_i |z_i^{\max} - z_i(x)|^p \right)^{1/p},$$ (3.43)

or equivalently, for $1 \le p < \infty$,

$$\underset{x \in X}{\text{maximize}}\ \sum_{i=1}^{k} w_i (z_i^{\max} - z_i(x))^p,$$ (3.44)

and for $p = \infty$,

$$\underset{x \in X}{\text{maximize}}\ \min_{i=1,\ldots,k} w_i (z_i^{\max} - z_i(x)).$$ (3.45)
The compromise solution set $C_w$ based on the ideal point is not identical with the compromise solution set $\bar C_w$ based on the anti-ideal point. Zeleny (1976) suggests using this fact to further reduce the compromise solution set by considering the intersection $C_w \cap \bar C_w$.
An interactive strategy for reducing the compromise solution set proposed by
Zeleny (1976) is based on the concept of the so-called displaced ideal and thus
is called the method of the displaced ideal. In this approach, the ideal point
with respect to the new Cw displaces the previous ideal point, and the (reduced)
compromise solution set eventually encloses the new ideal point, terminating the
process.
Step 2 Compute the displaced ideal point $z^{\min(r)} = (z_1^{\min(r)}, \ldots, z_k^{\min(r)})$, where

$$z_i^{\min(r)} = \min_{x \in C_w^{(r-1)}} z_i(x), \quad i = 1, \ldots, k.$$

Step 3 Construct the compromise solution set $C_w^{(r)}$ by finding the Pareto optimal solution set of the two-objective problem

$$\underset{x \in C_w^{(r-1)}}{\text{minimize}}\ \{\tilde d_1^w(z(x), z^{\min(r)}),\ \tilde d_\infty^w(z(x), z^{\min(r)})\}.$$

Step 4 If the DM can select the final solution from $C_w^{(r)}$, or if $C_w^{(r)}$ contains $z^{\min(r)}$, stop. Otherwise, set $r = r + 1$ and return to step 2.
It should be noted here that the method of displaced ideal can be viewed as the
best ideal-seeking process, not the ideal itself. Further refinements and details can
be found in Zeleny (1976, 1982).
Example 3.8 (Displaced ideal method for production planning of Example 3.2). To
illustrate the method of displaced ideal, consider the problem of Example 3.2. Let $w_1 = w_2 = 1$ and $C_w^{(0)} = X = \{(x_1, x_2) \in R^2 \mid 2x_1 + 6x_2 \le 27,\ 3x_1 + 2x_2 \le 16,\ 4x_1 + x_2 \le 18,\ x_1 \ge 0,\ x_2 \ge 0\}$. From the definition, we have

$$z_1^{\min(1)} = \min_{x \in C_w^{(0)}} z_1(x) = -37, \qquad z_2^{\min(1)} = \min_{x \in C_w^{(0)}} z_2(x) = 0.$$
In step 3, we find the Pareto optimal solution set of the two-objective problem with the distances $\tilde d_1^w(z(x), z^{\min(1)})$ and

$$\tilde d_\infty^w(z(x), z^{\min(1)}) = \max\{(-3x_1 - 8x_2 - (-37)),\ (5x_1 + 4x_2 - 0)\}.$$

[Fig. 3.8: compromise solution sets $C_w^{(1)}$ and $C_w^{(2)}$ in the $z_1$-$z_2$ plane, with the points A(-36, 18), B(-24.7, 12.3), C(-32.2, 16.1) and the displaced ideal $(z_1^{\min(2)}, z_2^{\min(2)}) = (-36, 12.3)$]
From this problem, we obtain the compromise solution set $C_w^{(1)}$, which is the straight-line segment between the points A$(-36, 18)$ and B$(-24.667, 12.333)$ shown in Fig. 3.8, where point A, corresponding to the solution $x = (0, 4.5)$, minimizes $\tilde d_1^w(z(x), z^{\min(1)})$ and point B, corresponding to the solution $x = (0, 3.0833)$, minimizes $\tilde d_\infty^w(z(x), z^{\min(1)})$.
Suppose that the DM cannot select the final solution. In step 2, we have

$$z_1^{\min(2)} = \min_{x \in C_w^{(1)}} z_1(x) = -36, \qquad z_2^{\min(2)} = \min_{x \in C_w^{(1)}} z_2(x) = 12.333,$$

and in step 3, the distance for $p = \infty$ becomes

$$\tilde d_\infty^w(z(x), z^{\min(2)}) = \max\{(-3x_1 - 8x_2 - (-36)),\ (5x_1 + 4x_2 - 12.333)\}.$$
From this problem, we obtain the revised compromise solution set $C_w^{(2)}$, which is the straight-line segment between the points A$(-36, 18)$ and C$(-32.222, 16.111)$ shown in Fig. 3.8, where point A minimizes $\tilde d_1^w(z(x), z^{\min(2)})$ and point C, corresponding to the solution $x = (0, 4.028)$, minimizes $\tilde d_\infty^w(z(x), z^{\min(2)})$.
One finds that the compromise solution set diminishes from $C_w^{(1)}$ to $C_w^{(2)}$. If the DM still cannot select the final solution in $C_w^{(2)}$, the procedure continues in the same manner. ˙
In the reference point method, the DM specifies a reference point $\hat z = (\hat z_1, \ldots, \hat z_k)$ of aspiration levels for the objective functions, and the solution nearest to it in the minimax sense is obtained by solving

$$\underset{x \in X}{\text{minimize}}\ \max_{i=1,\ldots,k} (z_i(x) - \hat z_i),$$

or equivalently
minimize $v$
subject to $z_i(x) - \hat z_i \le v, \quad i = 1, \ldots, k$ (3.49)
$x \in X.$
This contradicts the assumption that $x^*$ is a unique optimal solution of the minimax problem. □
From the proof of this theorem, in the absence of the uniqueness of a solution in
the theorem, only weak Pareto optimality is guaranteed.
Theorem 3.9. If $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem, then $x^*$ is an optimal solution of the minimax problem for some reference point $\hat z$.

Proof. For a Pareto optimal solution $x^* \in X$ of the multiobjective linear programming problem, choose a reference point $\hat z = (\hat z_1, \ldots, \hat z_k)^T$ such that $z_i(x^*) - \hat z_i = v^*$, $i = 1, \ldots, k$. For this reference point, if $x^*$ is not an optimal solution of the minimax problem, then there exists $x \in X$ such that

$$\max_{i=1,\ldots,k} (z_i(x) - \hat z_i) < \max_{i=1,\ldots,k} (z_i(x^*) - \hat z_i) = v^*,$$

which implies $z_i(x) < z_i(x^*)$, $i = 1, \ldots, k$, contradicting the Pareto optimality of $x^*$. □

As in the discussion of the scalarization methods, if the optimal solution $x^*$ of the minimax problem is not unique, only weak Pareto optimality of $x^*$ is guaranteed, and the Pareto optimality test is performed by solving the linear programming problem

maximize $\sum_{i=1}^{k} \varepsilon_i$
subject to $z_i(x) + \varepsilon_i = z_i(x^*), \quad i = 1, \ldots, k$ (3.50)
$x \in X, \ \varepsilon = (\varepsilon_1, \ldots, \varepsilon_k)^T \ge 0.$

For an optimal solution $(\bar x, \bar\varepsilon)$ of this linear programming problem, as was shown in Theorem 3.7, (i) if $\bar\varepsilon_i = 0$ for all $i = 1, \ldots, k$, then $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem, and (ii) if $\bar\varepsilon_i > 0$ for at least one $i$, then not $x^*$ but $\bar x$ is a Pareto optimal solution of the multiobjective linear programming problem.
Now, given a Pareto optimal solution for the reference point specified by the DM
by solving the corresponding minimax problem, the DM must either be satisfied
with the current Pareto optimal solution or modify the reference point. To help
the DM express a degree of preference, trade-off information between a standing
objective function z1 .x/ and each of the other objective functions is very useful.
Such a trade-off between $z_1(x)$ and $z_i(x)$ for each $i = 2, \ldots, k$ is easily obtainable since it is closely related to the strictly positive simplex multipliers of the minimax problem (3.49). Let the simplex multipliers associated with the constraints of (3.49) be denoted by $\pi_i$, $i = 1, \ldots, k$. If $\pi_i > 0$ for each $i$, it can be proved that the following expression holds:

$$\frac{\partial z_i(x)}{\partial z_1(x)} = -\frac{\pi_1}{\pi_i}.$$ (3.51)
To understand this relation, consider the tangent hyperplane of the feasible region of the minimax problem at the optimal solution in the $(z_1, \ldots, z_k, w)$ space, written as

$$H(z_1, \ldots, z_k, w) = a_1 z_1 + \cdots + a_k z_k + bw = c.$$

The necessary and sufficient condition for a small displacement from this point to remain on this tangent hyperplane is $\Delta H = 0$. Considering a displacement in which only $z_1$ and $z_i$ vary, we have

$$a_1 \Delta z_1 + a_i \Delta z_i = 0,$$

and hence

$$\frac{\Delta z_i}{\Delta z_1} = -\frac{a_1}{a_i} = -\frac{a_1/b}{a_i/b} = -\frac{\partial w/\partial z_1}{\partial w/\partial z_i},$$

i.e.,

$$\frac{\partial z_i}{\partial z_1} = -\frac{\partial w/\partial z_1}{\partial w/\partial z_i}.$$
To illustrate the interactive procedure, consider again the production planning problem of Example 3.2. Recall that the two objectives in this problem are to minimize both the opposite of the total profit ($z_1$) and the amount of pollution ($z_2$).
[Fig. 3.10: the feasible region in the $z_1$-$z_2$ plane, with the reference point $(\hat z_1, \hat z_2) = (-35, 7)$ and the extreme point A(0, 0)]
First, observe that the individual minima and maxima for the objective func-
tions are
$$z_1^{\min} = -37, \quad z_1^{\max} = 0, \quad z_2^{\min} = 0, \quad z_2^{\max} = 29.$$

Considering these values, suppose that the DM specifies the reference point as $(\hat z_1, \hat z_2) = (-35, 7)$. For this reference point, as can be easily seen from Fig. 3.10, solving the corresponding minimax problem yields the Pareto optimal solution $(x_1, x_2) = (0, 3.5)$ with $(z_1, z_2) = (-28, 14)$, and the trade-off rate at this solution is

$$\frac{\partial z_2}{\partial z_1} = -0.5.$$
On the basis of such information, suppose that the DM updates the reference point in order to improve the satisfaction level of the profit at the expense of that of the pollution amount. For the updated reference point, solving the corresponding minimax problem yields a new Pareto optimal solution on the segment AE, along which the trade-off rate remains

$$\frac{\partial z_2}{\partial z_1} = -0.5.$$
If the DM is satisfied with the current values of the objective functions, the
procedure stops. Otherwise, a similar procedure continues in this fashion until the
satisficing solution of the DM is derived. ˙
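Each interaction step of this procedure amounts to solving one minimax LP. A sketch for the first iteration above, with the reference point $\hat z = (-35, 7)$ (SciPy assumed; $v$ unrestricted in sign):

```python
from scipy.optimize import linprog

z_hat = (-35, 7)
res = linprog([0, 0, 1],                     # variables (x1, x2, v)
              A_ub=[[2, 6, 0], [3, 2, 0], [4, 1, 0],
                    [-3, -8, -1],            # z1(x) - z1_hat <= v
                    [ 5,  4, -1]],           # z2(x) - z2_hat <= v
              b_ub=[27, 16, 18, z_hat[0], z_hat[1]],
              bounds=[(0, None), (0, None), (None, None)],
              method="highs")
print(res.x)   # (0.0, 3.5, 7.0): z = (-28, 14), a Pareto optimal point on AE
```

Updating z_hat and re-solving implements one cycle of the interactive procedure.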
It may be appropriate to point out here that several interactive multiobjective programming methods, including the methods presented in this chapter, were developed from the 1970s to the 1980s (Chankong and Haimes 1983; Miettinen 1999; Miettinen et al. 2008; Sakawa 1993; Steuer 1986; Vanderpooten and Vincke 1989), and a basic distinction has been made concerning the underlying approach. In particular, Vanderpooten and Vincke (1989) highlighted an evolution from search-oriented methods to learning-oriented procedures.
The recently published book entitled "Multiple Criteria Decision Making—From Early History to the 21st Century" (Köksalan et al. 2011), which begins with the early history of multiple criteria decision making and proceeds to give a decade-by-decade account of major developments in the field from the 1970s until now, would be very useful for interested readers.
Problems
3.1 Graph the following two-objective linear programming problem in the $x_1$-$x_2$ plane and the $z_1$-$z_2$ plane, and find all Pareto optimal solutions:

minimize $z_1 = -x_1 - 3x_2$
minimize $z_2 = x_1 - x_2$
subject to $x_1 + 8x_2 \le 112$
$x_1 + 2x_2 \le 34$
$9x_1 + 2x_2 \le 162$
$x_1 \ge 0, \ x_2 \ge 0.$
3.4 For the two-objective linear programming problem discussed in Problem 3.3,
solve the following problems.
(i) Obtain a Pareto optimal solution by the weighting method with $w_1 = 0.5$ and $w_2 = 0.5$.
(ii) Obtain a Pareto optimal solution by the constraint method with $\varepsilon_2 = 8$.
(iii) Setting $\hat z_1 = -23.5$, obtain a Pareto optimal solution by the minimax method with $w_1 = 0.5$ and $w_2 = 0.5$.
3.5 Find an optimal solution to the following linear goal programming problem
graphically.
3.7 Using the Excel solver, solve Examples 3.2–3.4 and confirm the Pareto
optimal solutions.
3.8 Apply the interactive multiobjective linear programming method to the two-objective linear programming problem of Problem 3.3.