
Chapter 1

Introduction

In this chapter, as an introductory numerical example, a simple production planning problem is considered. A production planning problem having two decision variables is formulated as a linear programming problem, and a graphical method for obtaining an optimal solution is illustrated. Moreover, by considering environmental quality, a two-objective linear programming problem is formulated, and the notion of Pareto optimality is outlined.

1.1 Linear Programming in Two Dimensions

First, consider the following simple production planning problem as an example of a problem that can be solved using linear programming.
Example 1.1 (Production planning problem). A manufacturing company desires to maximize the total profit from producing two products P1 and P2 utilizing three different materials M1, M2, and M3. The company knows that to produce 1 ton of product P1 requires 2 tons of material M1, 3 tons of material M2, and 4 tons of material M3, while to produce 1 ton of product P2 requires 6 tons of material M1, 2 tons of material M2, and 1 ton of material M3. The total amounts of available materials are limited to 27, 16, and 18 tons for M1, M2, and M3, respectively. It also knows that product P1 yields a profit of 3 million yen per ton, while P2 yields 8 million yen (see Table 1.1). Given these limited materials, the company is trying to figure out how many units of products P1 and P2 should be produced to maximize the total profit. ♦
Let x1 and x2 denote decision variables representing the numbers of tons produced of products P1 and P2, respectively. Using these decision variables, this production planning problem can be formulated as the following linear programming problem:


Table 1.1 Production conditions and profit

                       Product P1   Product P2   Amounts available
Material M1 (ton)           2            6              27
Material M2 (ton)           3            2              16
Material M3 (ton)           4            1              18
Profit (million yen)        3            8

Maximize the linear profit function

    3x1 + 8x2

subject to the linear constraints

    2x1 + 6x2 ≤ 27
    3x1 + 2x2 ≤ 16
    4x1 + x2 ≤ 18

and nonnegativity conditions for these decision variables

    x1 ≥ 0,  x2 ≥ 0.

For convenience in our subsequent discussion, let the opposite of the total profit be

    z = −3x1 − 8x2,

and convert the profit maximization problem to the problem of minimizing z under the above constraints, i.e.,

    minimize    z = −3x1 − 8x2
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0,  x2 ≥ 0.

It is easy to see that, in the x1-x2 plane, the linearly constrained set of points (x1, x2) satisfying the above constraints consists of the boundary lines and interior points of the convex pentagon ABCDE shown in Fig. 1.1.

The set of points satisfying −3x1 − 8x2 = z for a fixed value of z is a line in the x1-x2 plane. As z is varied, the line is moved parallel to itself. The optimal value of this problem is the smallest value of z for which the corresponding line has at least one point in common with the linearly constrained set ABCDE. As can be seen from Fig. 1.1, this occurs at point D. Hence, the optimal solution to this problem is

    x1 = 3,  x2 = 3.5,  z = −37.
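This two-variable instance is also handy as a sanity check against an off-the-shelf solver. The following minimal sketch uses SciPy's linprog (an illustrative tool choice, not part of the original text) and should reproduce the solution above:

```python
# Sketch: check Example 1.1 numerically with SciPy's LP solver.
from scipy.optimize import linprog

c = [-3, -8]                      # minimize z = -3*x1 - 8*x2
A_ub = [[2, 6], [3, 2], [4, 1]]   # material constraints, <= form
b_ub = [27, 16, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # expected: [3.0, 3.5] -37.0
```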
Fig. 1.1 Feasible region and optimal solution for the production planning problem. [Figure: the feasible region X is the convex pentagon with vertices A(0, 0), B(4.5, 0), C(4, 2), D(3, 3.5), and E(0, 4.5), bounded by the lines 2x1 + 6x2 = 27, 3x1 + 2x2 = 16, and 4x1 + x2 = 18; the contour z = −3x1 − 8x2 sweeps parallel to itself and last touches X at the optimal vertex D.]

It is significant to realize here that the optimal solution is located at a vertex of the linearly constrained set, since the constrained set has a finite number of vertices and the contours of constant value of the objective function are linear. Note that vertices are usually called extreme points in linear programming.

The values of the objective function corresponding to the extreme points A, B, C, D, and E are 0, −13.5, −28, −37, and −36, respectively. Therefore, for example, starting from the extreme point A, if we move A → B → C → D or A → E → D such that the values of the objective function are decreasing, it seems to be possible to reach the extreme point which gives the minimum value of z.
Obviously, for more than two or three variables, such a graphical method cannot be applied, and it becomes necessary to characterize extreme points algebraically. The simplex method for linear programming, originated by Dantzig (1963), is well known and widely used as a powerful computational procedure for solving linear programming problems. The simplex method consists of two phases. Phase I finds an initial extreme point of the feasible region or gives the information that none exists due to the inconsistency of the constraints. In Phase II, starting from an initial extreme point, it determines whether it is optimal or not. If not, it finds an adjacent extreme point at which the value of z is less than or equal to the previous value. The process is repeated until it finds an optimal solution or gives the information that the optimal value is unbounded. The details of linear programming can be found in standard texts including Dantzig (1963), Dantzig and Thapa (1997), Gass (1958), Hadley (1962), Hillier and Lieberman (1990), Ignizio and Cavalier (1994), Luenberger (1973, 1984, 2008), Nering and Tucker (1993), and Thie (1988).

1.2 Extensions of Linear Programming

Recall the production planning problem discussed in Example 1.1.

Example 1.2 (Production planning with environmental considerations). Unfortunately, however, in the production process, it is pointed out that producing 1 ton of P1 and P2 yields 5 and 4 units of pollution, respectively. Thus, the manager should not only maximize the total profit but also minimize the amount of pollution. For simplicity, assume that the amount of pollution is a linear function of the two decision variables x1 and x2 such as

    5x1 + 4x2,

where x1 and x2 denote the numbers of tons produced of products P1 and P2, respectively.

Fig. 1.2 Feasible region and solutions maximizing the total profit and minimizing the pollution. [Figure: the feasible region X (pentagon ABCDE) together with contour lines of z1 = −3x1 − 8x2 and z2 = 5x1 + 4x2; z1 is minimized at the extreme point D(3, 3.5), and z2 is minimized at A(0, 0).]
Considering environmental quality, the production planning problem can be reformulated as the following two-objective linear programming problem:

    minimize    z1 = −3x1 − 8x2
    minimize    z2 = 5x1 + 4x2
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0,  x2 ≥ 0.   ♦
Before discussing the solution concepts of multiobjective linear programming problems, it is instructive to consider the geometric interpretation of the two-objective linear programming problem in Example 1.2. The feasible region X for this problem in the x1-x2 plane is composed of the boundary lines and interior points of the convex pentagon ABCDE in Fig. 1.2. Among the five extreme points A, B, C, D, and E, observe that z1 is minimized at the extreme point D(3, 3.5) while z2 is minimized at the extreme point A(0, 0).

As will be discussed in Chap. 3, these two extreme points A and D are obviously Pareto optimal solutions since their respective objective functions z1 and z2 cannot be improved any further. In addition to the extreme points A and D, the extreme point E and all of the points of the segments AE and ED are Pareto optimal solutions since they can be improved only at the expense of either z1 or z2. However, none of the remaining feasible points is Pareto optimal since there always exist other feasible points which improve at least one of the objective functions.
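To make the geometry concrete, one common way to trace Pareto optimal solutions of a two-objective linear program is to minimize weighted sums w·z1 + (1 − w)·z2 for weights 0 < w < 1; each weight picks out a Pareto optimal extreme point (or a whole optimal segment). A minimal sketch with SciPy, as an illustrative aside not taken from the original text:

```python
# Sketch: scan weighted sums of z1 = -3*x1 - 8*x2 and z2 = 5*x1 + 4*x2
# over the feasible region of Example 1.2.
import numpy as np
from scipy.optimize import linprog

A_ub = [[2, 6], [3, 2], [4, 1]]
b_ub = [27, 16, 18]
c1, c2 = np.array([-3.0, -8.0]), np.array([5.0, 4.0])

for w in (0.1, 0.5, 0.9, 0.95):
    res = linprog(w * c1 + (1 - w) * c2, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    x = res.x
    print(f"w={w:.2f}  x={np.round(x, 2)}  z1={c1 @ x:.1f}  z2={c2 @ x:.1f}")
# The minimizers land on the Pareto optimal extreme points A, E, and D of Fig. 1.2.
```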
From another perspective, recalling the imprecision or fuzziness inherent in human judgments, two types of inaccuracies of human judgments should be incorporated in multiobjective optimization problems. One is the fuzzy goals of the decision maker for each of the objective functions, and the other is the experts' ambiguous understanding of the nature of the parameters in the problem-formulation process. The motivation for multiobjective optimization under imprecision or fuzziness comes from this observation. In Chap. 4, we will deal with fuzzy linear programming and fuzzy multiobjective linear programming. Multiobjective linear programming problems involving fuzzy parameters will also be discussed.

From a viewpoint of uncertainty different from fuzziness, linear programming problems with random variable coefficients, called stochastic programming problems, have been developed. Typical formulations are two-stage models and chance constrained programming. In two-stage models, a shortage or an excess arising from the violation of the constraints is penalized, and then the expectation of the amount of the penalties for the constraint violation is minimized. In chance constrained programming, from the observation that the stochastic constraints are not always satisfied, the problem is formulated so as to permit constraint violations up to specified probability levels. Chapter 5 will discuss such stochastic programming techniques.

Problems

1.1 A manufacturing company desires to maximize the total profit from producing two products P1 and P2 utilizing three different materials M1, M2, and M3. The company knows that to produce 1 ton of product P1 requires 2 tons of material M1, 8 tons of material M2, and 3 tons of material M3, while to produce 1 ton of product P2 requires 6 tons of material M1, 6 tons of material M2, and 1 ton of material M3. The total amounts of available materials are limited to 27, 45, and 15 tons for M1, M2, and M3, respectively. It also knows that product P1 yields a profit of 2 million yen per ton, while P2 yields 5 million yen. Given these limited materials, the company is trying to figure out how many units of products P1 and P2 should be produced to maximize the total profit.
(1) Let x1 and x2 denote decision variables for the numbers of tons produced of products P1 and P2, respectively. Formulate the problem as a linear programming problem.
(2) Graph the problem in the x1-x2 plane and find an optimal solution.
1.2 (Transportation problem)
Consider a planning problem of transporting goods from m warehouses to n retail stores. Assume that a quantity ai is available at warehouse i, a quantity bj is required at store j, and the cost of transportation of one unit of goods from warehouse i to store j is cij. Also it is assumed that the total amount available is equal to the total required, i.e., Σi ai = Σj bj. Let xij be a decision variable for the amount shipped from warehouse i to store j. Formulate a linear programming problem so as to satisfy the shipping requirements and minimize the total transportation cost.
1.3 (Assignment problem)
Suppose that each of n candidates is to be assigned to one of n jobs and the number cij which measures the effectiveness of candidate i in job j is known. Introducing n² decision variables xij, i = 1, …, n; j = 1, …, n, with the interpretation that xij = 1 if candidate i is assigned to job j, and xij = 0 otherwise, formulate a linear programming problem so as to maximize the overall effectiveness.
Chapter 2
Linear Programming

Since G.B. Dantzig first proposed the simplex method around 1947, linear
programming, as an optimization method of maximizing or minimizing a linear
objective function subject to linear constraints, has been extensively studied and,
with the significant advances in computer technology, widely used in the fields of
operations research, industrial engineering, systems science, management science,
and computer science.
In this chapter, after an overview of the basic concepts of linear programming via a simple numerical example, the standard form of linear programming and fundamental concepts and definitions are introduced. The simplex method and the two-phase method are presented with the details of the computational procedures. By reviewing the procedure of the simplex method, the revised simplex method, which provides a computationally efficient implementation, is also discussed. Associated with linear programming problems, dual problems are formulated, and duality theory is discussed, which also leads to the dual simplex method.

2.1 Algebraic Approach to Two-Dimensional Linear Programming

In Sect. 1.1, we have presented a graphical method for solving the two-dimensional production planning problem of Example 1.1.

Minimize the opposite of the linear total profit

    z = −3x1 − 8x2

subject to the linear inequality constraints

    2x1 + 6x2 ≤ 27
    3x1 + 2x2 ≤ 16
    4x1 + x2 ≤ 18

and nonnegativity conditions for all decision variables

    x1 ≥ 0,  x2 ≥ 0.

Since in more than two dimensions the graphical method used in Sect. 1.1 cannot be applied, it becomes necessary to develop an algebraic method. In this section, as a prelude to the development of the general theory, consider an algebraic approach to two-dimensional linear programming problems for understanding the basic ideas of linear programming. To do so, introducing the amounts x3 (≥ 0), x4 (≥ 0), and x5 (≥ 0) of unused (idle) materials for M1, M2, and M3, respectively, and converting the inequalities into equalities, the problem with the equation −3x1 − 8x2 − z = 0 for the objective function can then be stated as follows:

Find values of xj ≥ 0, j = 1, 2, 3, 4, 5 so as to minimize z, satisfying the augmented system of linear equations

    2x1 + 6x2 + x3           = 27
    3x1 + 2x2      + x4      = 16
    4x1 +  x2           + x5 = 18        (2.1)
   −3x1 − 8x2 − z            =  0.

In (2.1), setting x1 = x2 = 0 yields x3 = 27, x4 = 16, x5 = 18, and z = 0, which corresponds to the extreme point A in Fig. 1.1. Now, from the fourth equation of (2.1) for the objective function, we see that any increase in the values of x1 and x2 from 0 to positive values would decrease the value of the objective function z. Considering that the profit of P2 is larger than that of P1 (in the above formulation, the opposite of the profit is smaller), choose to increase x2 from 0 to a positive value, while keeping x1 = 0. In Fig. 1.1, this corresponds to the movement from the extreme point A to E along the edge AE. From (2.1), if x2 is made positive, the values of x3, x4, and x5 decrease. However, since x3, x4, and x5 cannot become negative, the amount by which x2 can increase is restricted by the first three equations of (2.1). In the first three equations of (2.1), keeping x1 = 0, the value to which x2 can be increased is restricted to at most 27/6 = 4.5, 16/2 = 8, and 18/1 = 18, respectively. Hence, the largest permissible value of x2 not yielding negative values of x3, x4, and x5 is the smallest of 4.5, 8, and 18, that is, 4.5. Increasing the value of x2 from 0 to 4.5 yields x3 = 0, which implies that the available amount of material M1 is used up.

Dividing the first equation of (2.1) by the coefficient 6 of x2 and eliminating x2 from the second, third, and fourth equations yields
     (1/3)x1 + x2 + (1/6)x3            = 4.5
     (7/3)x1      − (1/3)x3 + x4       = 7
    (11/3)x1      − (1/6)x3       + x5 = 13.5        (2.2)
    −(1/3)x1      + (4/3)x3 − z        = 36.

In (2.2), setting x1 = x3 = 0 yields x2 = 4.5, x4 = 7, x5 = 13.5, and z = −36. This implies that the resulting point (x1, x2) = (0, 4.5) corresponds to the extreme point E and the value of the objective function z is decreased from 0 to −36.

Next, from the fourth equation of (2.2), keeping x3 = 0, by increasing the value of x1 from 0 to a positive value, the value of z can be decreased. This corresponds to the movement from the extreme point E to D along the edge ED in Fig. 1.1. From the first three equations of (2.2), to keep the values of x2, x4, and x5 nonnegative, the value to which x1 can be increased is restricted to at most 4.5/(1/3) = 13.5, 7/(7/3) = 3, and 13.5/(11/3) ≈ 3.682, respectively. Hence, increasing the value of x1 from 0 to 3, the smallest among them, yields x4 = 0, which implies that the available amount of material M2 is used up.

Dividing the second equation of (2.2) by the coefficient 7/3 of x1 and eliminating x1 from the first, third, and fourth equations yields
    x2 + (3/14)x3 − (1/7)x4        = 3.5
    x1 − (1/7)x3  + (3/7)x4        = 3
         (5/14)x3 − (11/7)x4 + x5  = 2.5        (2.3)
         (9/7)x3  + (1/7)x4  − z   = 37.

In (2.3), setting x3 = x4 = 0 yields x1 = 3, x2 = 3.5, x5 = 2.5, and z = −37, which corresponds to the extreme point D in Fig. 1.1, and the value of z is decreased from −36 to −37.

From the fourth equation of (2.3), both coefficients of x3 and x4 are positive. This means that increasing the value of x3 or x4 increases the value of z. Therefore, the minimum of z is −37, that is, the maximum of the total profit is 37 million yen, and the production numbers of products P1 and P2 are 3 and 3.5 tons, respectively.
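The elimination steps above are ordinary pivot (Gaussian elimination) operations on the augmented system (2.1), so they can be replayed mechanically. A minimal sketch in NumPy (an illustrative aside; the tableau layout is an assumption of this sketch):

```python
# Sketch: replay the eliminations (2.1) -> (2.2) -> (2.3) with NumPy.
# Columns: x1, x2, x3, x4, x5, rhs; the last row is -3x1 - 8x2 - z = 0.
import numpy as np

T = np.array([[ 2.0,  6, 1, 0, 0, 27],
              [ 3.0,  2, 0, 1, 0, 16],
              [ 4.0,  1, 0, 0, 1, 18],
              [-3.0, -8, 0, 0, 0,  0]])

# First pivot: divide row 0 by 6, then eliminate x2 elsewhere -> system (2.2).
T[0] /= 6.0
for i in (1, 2, 3):
    T[i] -= T[i, 1] * T[0]
print(T.round(3))   # rhs column now reads 4.5, 7, 13.5, 36

# Second pivot: divide row 1 by 7/3, then eliminate x1 elsewhere -> system (2.3).
T[1] /= T[1, 0]
for i in (0, 2, 3):
    T[i] -= T[i, 0] * T[1]
print(T.round(3))   # rhs column now reads 3.5, 3, 2.5, 37, i.e., z = -37
```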
2.2 Typical Examples of Linear Programming Problems

Thus far, we have outlined linear programming through the two-dimensional production planning problem, which can be generalized as the following production planning problem with n decision variables.

Example 2.1 (Production planning problem). A manufacturing company has fixed amounts of m different resources at its disposal. These resources are used to produce n different commodities. The company knows that to produce one unit of commodity j, aij units of resource i are required. The total number of units of resource i available is bi. It also knows that the profit per unit of commodity j is cj. It desires to produce a combination of commodities which will maximize the total profit.

Let xj denote a decision variable for the production amount of commodity j. Since the amount of resource i that is used must be less than or equal to the available number bi of units of resource i, we have, for each i = 1, 2, …, m, a linear inequality

    ai1 x1 + ai2 x2 + ⋯ + ain xn ≤ bi.

As a negative xj has no appropriate interpretation, it is required that xj ≥ 0, j = 1, 2, …, n. The profit arising from producing xj units of commodity j is calculated as cj xj. Our formulation is represented as a linear programming problem where the linear profit function

    c1 x1 + c2 x2 + ⋯ + cn xn                                 (2.4)

is maximized subject to the linear inequality constraints

    a11 x1 + a12 x2 + ⋯ + a1n xn ≤ b1
    a21 x1 + a22 x2 + ⋯ + a2n xn ≤ b2
      ⋯                                                       (2.5)
    am1 x1 + am2 x2 + ⋯ + amn xn ≤ bm

and nonnegativity conditions for all decision variables

    xj ≥ 0,   j = 1, 2, …, n.                                 (2.6)
♦
Compared with such a production planning problem maximizing the linear objective function of the total profit subject to linear inequality constraints in the direction of the less-than-or-equal-to symbol ≤, the following diet problem minimizing the linear objective function of the total cost subject to linear inequality constraints in the direction of the greater-than-or-equal-to symbol ≥ is well known as a nearly symmetric one. It should be noted here that both of the problems have the nonnegativity conditions for all decision variables xj ≥ 0, j = 1, …, n.
Example 2.2 (Diet problem). How can we determine the most economical diet that satisfies the basic minimum nutritional requirements for good health? Assume n different foods are available at the market and the selling price of food j is cj per unit. Moreover, there are m basic nutritional ingredients for the human body, and at least bi units of nutrient i are required every day to achieve a balanced diet for good health. In addition, assume that each unit of food j contains aij units of nutrient i. The problem is to determine the most economical diet that satisfies the basic minimum nutritional requirements.

For this problem, let xj, j = 1, …, n denote a decision variable for the number of units of food j in the diet, and then it is required that xj ≥ 0, j = 1, …, n. The total amount of nutrient i

    ai1 x1 + ai2 x2 + ⋯ + ain xn

contained in the purchased foods must be greater than or equal to the daily requirement bi of nutrient i. Thus, the economical diet can be represented as a linear programming problem where the linear cost function

    c1 x1 + c2 x2 + ⋯ + cn xn                                 (2.7)

is minimized subject to the linear constraints

    a11 x1 + a12 x2 + ⋯ + a1n xn ≥ b1
    a21 x1 + a22 x2 + ⋯ + a2n xn ≥ b2
      ⋯                                                       (2.8)
    am1 x1 + am2 x2 + ⋯ + amn xn ≥ bm

and nonnegativity conditions for all decision variables

    xj ≥ 0,   j = 1, 2, …, n.                                 (2.9)
♦
To develop a better understanding, as a simple numerical example of the diet problem, we present the following diet problem with two decision variables and three constraints.

Example 2.3 (Diet problem with 2 decision variables and 3 constraints). A housewife is planning a menu by utilizing two foods F1 and F2 containing three nutrients N1, N2, and N3 in order to meet the nutritional requirements at a minimum cost. Each 1 g (gram) of the food F1 contains 1 mg (milligram) of N1, 1 mg of N2, and 2 mg of N3; and each 1 g of the food F2 contains 3 mg of N1, 2 mg of N2, and 1 mg of N3. The recommended amounts of the nutrients N1, N2, and N3 are known to be at least 12 mg, 10 mg, and 15 mg, respectively. Also, it is known that the costs per gram of the foods F1 and F2 are, respectively, 4 and 3 thousand yen. These data concerning the nutrients and foods are summarized in Table 2.1.

Table 2.1 Data for the two-food diet problem

                       Food F1 (g)   Food F2 (g)   Minimum requirement
Nutrient N1 (mg)            1             3                12
Nutrient N2 (mg)            1             2                10
Nutrient N3 (mg)            2             1                15
Price (thousand yen)        4             3

The housewife's problem is to determine the purchase volumes of foods F1 and F2 which minimize the total cost while satisfying the nutritional requirements for the nutrients N1, N2, and N3.
Let xj denote a decision variable for the number of units of food Fj to be purchased, and then we can formulate the corresponding linear programming problem minimizing the linear cost function

    4x1 + 3x2                                                 (2.10)

subject to the linear constraints

    x1 + 3x2 ≥ 12
    x1 + 2x2 ≥ 10                                             (2.11)
    2x1 + x2 ≥ 15

and nonnegativity conditions for all variables

    x1 ≥ 0,  x2 ≥ 0.                                          (2.12)
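Before moving to the standard form, the small instance (2.10)–(2.12) can be checked numerically as well. A minimal sketch with SciPy (an illustrative aside, not part of the original text); since linprog expects constraints in ≤ form, each ≥ constraint is negated:

```python
# Sketch: solve the diet problem (2.10)-(2.12); >= rows are negated to <= form.
from scipy.optimize import linprog

c = [4, 3]                                  # cost: 4*x1 + 3*x2
A_ub = [[-1, -3], [-1, -2], [-2, -1]]       # negated nutrient constraints
b_ub = [-12, -10, -15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                       # expected: [6.6, 1.8], cost 31.8
```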

2.3 Standard Form of Linear Programming

In order to deal with such nearly symmetrical production planning problems and diet problems in a unified way, the standard form of linear programming is defined as follows.

The standard form of linear programming is to minimize the linear objective function

    z = c1 x1 + c2 x2 + ⋯ + cn xn                             (2.13)
subject to the linear equality constraints

    a11 x1 + a12 x2 + ⋯ + a1n xn = b1
    a21 x1 + a22 x2 + ⋯ + a2n xn = b2
      ⋯                                                       (2.14)
    am1 x1 + am2 x2 + ⋯ + amn xn = bm

and nonnegativity conditions for all decision variables

    xj ≥ 0,   j = 1, 2, …, n,                                 (2.15)

where the aij, bi, and cj are fixed real constants. In particular, bi is called a right-hand side constant, and cj is sometimes called a cost coefficient in a minimization problem and a profit coefficient in a maximization one.
In this book, the standard form of linear programming is written in the following form:

    minimize    z = c1 x1 + c2 x2 + ⋯ + cn xn
    subject to  a11 x1 + a12 x2 + ⋯ + a1n xn = b1
                a21 x1 + a22 x2 + ⋯ + a2n xn = b2
                  ⋯                                           (2.16)
                am1 x1 + am2 x2 + ⋯ + amn xn = bm
                xj ≥ 0,   j = 1, 2, …, n,

or, using summation notation, it is compactly rewritten as

    minimize    z = Σ_{j=1}^{n} cj xj
    subject to  Σ_{j=1}^{n} aij xj = bi,   i = 1, …, m        (2.17)
                xj ≥ 0,   j = 1, …, n.

By introducing an n-dimensional row vector c, an m × n matrix A, an n-dimensional column vector x, and an m-dimensional column vector b, the standard form of linear programming can then be written in a more compact vector–matrix form as follows:

    minimize    z = cx
    subject to  Ax = b                                        (2.18)
                x ≥ 0,

where

    c = (c1, c2, …, cn),                                      (2.19)

        ⎡ a11 a12 ⋯ a1n ⎤       ⎛ x1 ⎞       ⎛ b1 ⎞
    A = ⎢ a21 a22 ⋯ a2n ⎥,  x = ⎜ x2 ⎟,  b = ⎜ b2 ⎟,          (2.20)
        ⎢  ⋮   ⋮  ⋱  ⋮  ⎥       ⎜ ⋮  ⎟       ⎜ ⋮  ⎟
        ⎣ am1 am2 ⋯ amn ⎦       ⎝ xn ⎠       ⎝ bm ⎠

and 0 is an n-dimensional column vector with zero components.
Moreover, by denoting the jth column of the m × n matrix A by

         ⎛ a1j ⎞
    pj = ⎜ a2j ⎟,   j = 1, 2, …, n                            (2.21)
         ⎜  ⋮  ⎟
         ⎝ amj ⎠

and writing A = [p1 p2 ⋯ pn], the standard form of linear programming (2.16) can also be represented in column form:

    minimize    z = c1 x1 + c2 x2 + ⋯ + cn xn
    subject to  p1 x1 + p2 x2 + ⋯ + pn xn = b                 (2.22)
                xj ≥ 0,   j = 1, 2, …, n.
In the standard form of linear programming (2.16), the objective function

    z = c1 x1 + c2 x2 + ⋯ + cn xn

can be treated as just another equation, i.e.,

    −z + c1 x1 + c2 x2 + ⋯ + cn xn = 0,                       (2.23)

and by including it in an augmented system of equations, the problem can then be stated as follows:

Find values of the nonnegative decision variables x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0 so as to minimize z, satisfying the augmented system of linear equations

    a11 x1 + a12 x2 + ⋯ + a1n xn = b1
    a21 x1 + a22 x2 + ⋯ + a2n xn = b2
      ⋯                                                       (2.24)
    am1 x1 + am2 x2 + ⋯ + amn xn = bm
    −z + c1 x1 + c2 x2 + ⋯ + cn xn = 0.

It should be noted here that the standard form of linear programming deals with a linear minimization problem with nonnegative decision variables and linear equality constraints. We introduce a mechanism to convert any general linear programming problem into the standard form. A linear inequality can be easily converted into an equality. When the ith constraint is represented as

    Σ_{j=1}^{n} aij xj ≤ bi,   i = 1, 2, …, m,                (2.25)

by adding a nonnegative slack variable xn+i ≥ 0 such that

    Σ_{j=1}^{n} aij xj + xn+i = bi,   i = 1, 2, …, m,         (2.26)

the inequality (2.25) becomes the equality (2.26).


Similarly, if the ith constraint is

    Σ_{j=1}^{n} aij xj ≥ bi,   i = 1, 2, …, m,                (2.27)

by subtracting a nonnegative surplus variable xn+i ≥ 0 such that

    Σ_{j=1}^{n} aij xj − xn+i = bi,   i = 1, 2, …, m,         (2.28)

we can also transform the inequality (2.27) into the equality (2.28). It should be noted here that both the slack variables and the surplus variables must be nonnegative in order that the inequalities (2.25) and (2.27) are satisfied for all i = 1, 2, …, m.

If, in the original formulation of the problem, some decision variable xk is not restricted to be nonnegative, it can be replaced with the difference of two nonnegative variables, i.e.,

    xk = xk⁺ − xk⁻,   xk⁺ ≥ 0,   xk⁻ ≥ 0.                     (2.29)

If an objective function is to be maximized, we simply multiply the objective function by −1 to convert the maximization problem into a minimization problem.

Recall that, in the algebraic method for the production planning problem of Example 1.1, multiplying the objective function by −1 and introducing the three nonnegative slack variables x3, x4, and x5 yields the following standard form of linear programming:

    minimize    z = −3x1 − 8x2
    subject to  2x1 + 6x2 + x3           = 27
                3x1 + 2x2      + x4      = 16                 (2.30)
                4x1 +  x2           + x5 = 18
                xj ≥ 0,   j = 1, 2, 3, 4, 5.
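Since these conversions are purely mechanical, they are easy to automate. The following minimal sketch (a hypothetical helper, not from the book) appends a slack or surplus column per constraint, exactly as in (2.26) and (2.28):

```python
# Sketch: convert "minimize c@x s.t. A x (<= or >=) b, x >= 0" to standard form.
import numpy as np

def to_standard_form(c, A, b, senses):
    """senses[i] is '<=' (add slack) or '>=' (subtract surplus)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    extra = np.zeros((m, m))
    for i, sense in enumerate(senses):
        extra[i, i] = 1.0 if sense == '<=' else -1.0
    c_std = np.concatenate([np.asarray(c, dtype=float), np.zeros(m)])
    return c_std, np.hstack([A, extra]), np.asarray(b, dtype=float)

# Example 1.1 becomes the standard form (2.30):
c_std, A_std, b_std = to_standard_form(
    [-3, -8], [[2, 6], [3, 2], [4, 1]], [27, 16, 18], ['<=', '<=', '<='])
print(A_std)   # columns for x1, x2 followed by the slacks x3, x4, x5
```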
For the general production planning problem with n decision variables, introducing the m nonnegative slack variables xn+i (≥ 0), i = 1, …, m converts it into the following standard form of linear programming:

    minimize    c1 x1 + c2 x2 + ⋯ + cn xn
    subject to  a11 x1 + a12 x2 + ⋯ + a1n xn + xn+1           = b1
                a21 x1 + a22 x2 + ⋯ + a2n xn      + xn+2      = b2
                  ⋯                                           (2.31)
                am1 x1 + am2 x2 + ⋯ + amn xn          + xn+m  = bm
                xj ≥ 0,   j = 1, 2, …, n, n+1, …, n+m.

Similarly, for the diet problem with n decision variables, introducing the m nonnegative surplus variables xn+i (≥ 0), i = 1, …, m yields the following standard form of linear programming:

    minimize    c1 x1 + c2 x2 + ⋯ + cn xn
    subject to  a11 x1 + a12 x2 + ⋯ + a1n xn − xn+1           = b1
                a21 x1 + a22 x2 + ⋯ + a2n xn      − xn+2      = b2
                  ⋯                                           (2.32)
                am1 x1 + am2 x2 + ⋯ + amn xn          − xn+m  = bm
                xj ≥ 0,   j = 1, 2, …, n, n+1, …, n+m.

The basic ideas of linear programming are to first detect whether solutions satisfying the equality constraints and nonnegativity conditions exist and, if so, to find a solution yielding the minimum value of z.

However, in the standard form of linear programming (2.16) or (2.18), if there is no solution satisfying the equality constraints, or if there exists only one, we do not need optimization. Also, if any of the equality constraints is redundant, i.e., a linear combination of the others, it could be deleted without changing any solutions of the system. Therefore, we are mostly interested in the case where the system of linear equations (2.16) is nonredundant and has an infinite number of solutions.

For that purpose, assume that the number of variables exceeds the number of equality constraints, i.e.,

    n > m                                                     (2.33)

and the system of linear equations is linearly independent, i.e.,

    rank(A) = m.                                              (2.34)

Under these assumptions, we introduce a number of definitions for the standard form of linear programming (2.16) or (2.18).¹

¹ These assumptions, introduced to establish the principal theoretical results, will be relaxed in Sect. 2.5 and are no longer necessary when solving general linear programming problems.
Definition 2.1 (Feasible solution). A feasible solution to the linear programming problem (2.16) is a vector x = (x1, x2, …, xn)ᵀ which satisfies the linear equalities and the nonnegativity conditions of (2.16).²

Definition 2.2 (Basis matrix). A basis matrix is an m × m nonsingular submatrix formed by choosing some m columns of the rectangular matrix A. Observe that A contains at least one basis matrix due to rank(A) = m.

Definition 2.3 (Basic solution). A basic solution to the linear programming problem (2.16) is a solution obtained by setting n − m variables (called nonbasic variables) equal to zero and solving for the remaining m variables (called basic variables). A basic solution is also a unique vector determined by choosing a basis matrix from the m × n matrix A and solving the resulting square, nonsingular system of equations for the m variables. The set of all basic variables is called the basis.

Definition 2.4 (Basic feasible solution). A basic feasible solution to the linear programming problem (2.16) is a basic solution which satisfies not only the linear equations but also the nonnegativity conditions of (2.16), that is, all basic variables are nonnegative. Observe that at most m variables can be positive by Definition 2.3.

Definition 2.5 (Nondegenerate basic feasible solution). A nondegenerate basic feasible solution to the linear programming problem (2.16) is a basic solution with exactly m positive xj, that is, all basic variables are positive.

Definition 2.6 (Optimal solution). An optimal solution to the linear programming problem (2.16) is a feasible solution which also minimizes z in (2.16). The corresponding value of z is called the optimal value.

The number of basic solutions is the number of ways that m variables are selected from a group of n variables, i.e.,

    nCm = n! / ((n − m)! m!).

² In this book, the superscript T denotes the transpose operation for a vector or a matrix.

Example 2.4 (Basic solutions). Consider the basic solutions of the standard form of linear programming (2.30) discussed in Example 1.1.

Choosing x3, x4, and x5 as basic variables, we have the corresponding basic solution (x1, x2, x3, x4, x5) = (0, 0, 27, 16, 18), which is a nondegenerate basic feasible solution and corresponds to the extreme point A in Fig. 1.1. After making another choice of x1, x2, and x4 as basic variables, solving

    2x1 + 6x2      = 27
    3x1 + 2x2 + x4 = 16
    4x1 +  x2      = 18

yields x1 = 81/22, x2 = 36/11, and x4 = −35/22. The resulting basic solution (x1, x2, x3, x4, x5) = (81/22, 36/11, 0, −35/22, 0) is not feasible.

Choosing x1, x2, and x5 as basic variables, we solve

    2x1 + 6x2      = 27
    3x1 + 2x2      = 16
    4x1 +  x2 + x5 = 18,

and then we have a basic feasible solution (x1, x2, x3, x4, x5) = (3, 3.5, 0, 0, 2.5). It corresponds to the extreme point D in Fig. 1.1, which is an optimal solution. ♦
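Since n = 5 and m = 3 here, there are at most 5C3 = 10 basic solutions, so they can simply be enumerated and classified, as a brute-force illustration of Definitions 2.3 and 2.4 (a sketch only; enumeration is hopeless for problems of realistic size):

```python
# Sketch: enumerate the basic solutions of (2.30) and mark the feasible ones.
from itertools import combinations
import numpy as np

A = np.array([[2.0, 6, 1, 0, 0],
              [3.0, 2, 0, 1, 0],
              [4.0, 1, 0, 0, 1]])
b = np.array([27.0, 16, 18])
c = np.array([-3.0, -8, 0, 0, 0])

for basis in combinations(range(5), 3):
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:        # these columns form no basis matrix
        continue
    x = np.zeros(5)
    x[list(basis)] = np.linalg.solve(B, b)  # basic variables; nonbasics stay 0
    if np.all(x >= -1e-9):
        print(basis, np.round(x, 3), "feasible, z =", round(c @ x, 2))
    else:
        print(basis, np.round(x, 3), "infeasible")
```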

2.4 Simplex Method

For generalizing the basic ideas of linear programming grasped in the algebraic approach to the two-dimensional production planning problem of Example 1.1, consider the following linear programming problem with basic variables x1, x2, …, xm:

Find values of x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0 so as to minimize z, satisfying the augmented system of linear equations

    x1           + ā1,m+1 xm+1 + ā1,m+2 xm+2 + ⋯ + ā1n xn = b̄1
         x2      + ā2,m+1 xm+1 + ā2,m+2 xm+2 + ⋯ + ā2n xn = b̄2
           ⋯                                                       (2.35)
              xm + ām,m+1 xm+1 + ām,m+2 xm+2 + ⋯ + āmn xn = b̄m
    −z           + c̄m+1 xm+1   + c̄m+2 xm+2   + ⋯ + c̄n xn  = −z̄.

As in the previous section, here it is assumed that n > m and the system of m equality constraints is nonredundant. As with the augmented system of equations (2.35), a system of linear equations in which each of the variables x1, x2, …, xm has a coefficient of unity in one equation and zeros elsewhere is called a canonical form or a basic form. In a canonical form, the variables x1, x2, …, xm and (−z) are called basic variables, and the remaining variables xm+1, xm+2, …, xn are called nonbasic variables. In such a canonical form, observing that (−z) always is a basic variable, with no further notice, only x1, x2, …, xm are called basic variables.

It is useful to set up such a canonical form (2.35) in tableau form as shown in Table 2.2. This table is called a simplex tableau, in which only the coefficients of the algebraic representation in (2.35) are given.

Table 2.2 Simplex tableau

Basis  x1  x2  ⋯  xm   xm+1     xm+2     ⋯  xn     Constants
x1     1               ā1,m+1   ā1,m+2   ⋯  ā1n    b̄1
x2         1           ā2,m+1   ā2,m+2   ⋯  ā2n    b̄2
⋮             ⋱        ⋮        ⋮           ⋮      ⋮
xm             1       ām,m+1   ām,m+2   ⋯  āmn    b̄m
−z                     c̄m+1     c̄m+2     ⋯  c̄n     −z̄

From the canonical form (2.35) or the simplex tableau given in Table 2.2, it follows directly that a basic solution with basic variables x1, x2, …, xm becomes

    x1 = b̄1, x2 = b̄2, …, xm = b̄m,   xm+1 = xm+2 = ⋯ = xn = 0        (2.36)

and the value of the objective function is

    z = z̄.                                                          (2.37)

If

    b̄1 ≥ 0, b̄2 ≥ 0, …, b̄m ≥ 0,                                      (2.38)

then the solution (x1, …, xm, xm+1, …, xn) = (b̄1, …, b̄m, 0, …, 0) is a basic feasible solution. In this case, the corresponding canonical form (tableau) is called a feasible canonical form (tableau). If, for one or more i, b̄i = 0 holds, then the basic feasible solution is said to be degenerate.
As an example for which we can directly formulate a feasible canonical form, consider the production planning problem of Example 2.1. For this problem, by introducing m slack variables xn+i ≥ 0, i = 1, 2, …, m and multiplying the objective function by −1 to convert the maximization problem into a minimization problem, the following canonical form is obtained:

    a11 x1 + a12 x2 + ⋯ + a1n xn + xn+1           = b1
    a21 x1 + a22 x2 + ⋯ + a2n xn      + xn+2      = b2
      ⋯                                                              (2.39)
    am1 x1 + am2 x2 + ⋯ + amn xn          + xn+m  = bm
    c1 x1 + c2 x2 + ⋯ + cn xn − z                 = 0.

In this formulation, by using the m slack variables xn+1, xn+2, …, xn+m as basic variables, it is evident that (2.39) is a canonical form, and then the corresponding basic solution is

    x1 = x2 = ⋯ = xn = 0,   xn+1 = b1, …, xn+m = bm.                 (2.40)

From the fact that the right-hand side constant bi means the available amount of resource i, it should be nonnegative, i.e., bi ≥ 0, i = 1, 2, …, m, and therefore this canonical form is feasible.
In contrast, for the diet problem of Example 2.2, introducing m surplus variables xn+i ≥ 0, i = 1, 2, …, m and then multiplying both sides of the resulting constraints by −1 yields a basic solution

    x1 = x2 = ⋯ = xn = 0,   xn+1 = −b1, …, xn+m = −bm.               (2.41)

Unfortunately, however, since bi ≥ 0, i = 1, 2, …, m, this operation cannot lead to a feasible canonical form.

In the following discussions of this section, assume that the canonical form (2.35) is feasible. That is, starting with the canonical form (2.35) with the basic solution

    x1 = b̄1, x2 = b̄2, …, xm = b̄m,   xm+1 = xm+2 = ⋯ = xn = 0,

we assume that this basic solution is feasible, i.e.,

    b̄1 ≥ 0, b̄2 ≥ 0, …, b̄m ≥ 0.

From the last equation in (2.35), we have

    z = z̄ + c̄m+1 xm+1 + c̄m+2 xm+2 + ⋯ + c̄n xn.

Since xm+1 = xm+2 = ⋯ = xn = 0, one finds z = z̄. This equation provides still more valuable information. By merely glancing at the numbers c̄j, j = m+1, m+2, …, n, one can tell if this basic feasible solution is optimal or not. Furthermore, one can find a better basic feasible solution if it is not optimal. Consider first the optimality of the canonical form, given by the following theorem.

Theorem 2.1 (Optimality test). In the feasible canonical form (2.35), if all coefficients c̄m+1, c̄m+2, …, c̄n of the last equation are nonnegative, i.e.,

    c̄j ≥ 0,   j = m+1, m+2, …, n,                                    (2.42)

then the basic feasible solution is optimal.

Proof. The last equation of (2.35) can be rewritten as

    z = z̄ + c̄m+1 xm+1 + c̄m+2 xm+2 + ⋯ + c̄n xn.

The nonbasic variables xm+1, xm+2, …, xn are presently zero, and they are restricted to be nonnegative. If c̄j ≥ 0 for j = m+1, m+2, …, n, then from c̄j xj ≥ 0, j = m+1, m+2, …, n, increasing any xj cannot decrease the objective function z. Thus, since any change in the nonbasic variables cannot decrease z, the present solution must be optimal. □

The coefficient c̄j of xj in (2.35) represents the rate of change of z with respect to the nonbasic variable xj. From this observation, the coefficient c̄j is called the relative cost coefficient or, alternatively, the reduced cost coefficient.
The optimality condition (2.42) is sometimes referred to as the optimality criterion or the simplex criterion. The feasible canonical form satisfying the optimality criterion is called the optimal canonical form or the optimal basic form, and the simplex tableau satisfying the optimality criterion is also called the optimal tableau.

Note that since c̄j = 0 for all basic variables, the optimality criterion (2.42) could also be stated simply as c̄j ≥ 0 for all j = 1, 2, …, n in place of c̄j ≥ 0 for all j = m+1, …, n.

In addition to the optimality, the relative cost coefficients can also tell if there are multiple optima. Assume that for all nonbasic variables xj, c̄j ≥ 0, and for some nonbasic variable xk, c̄k = 0. In that case, if the increase of xk does not violate the constraints, there are multiple optima because no change in z results. Hence, the following theorem can be derived.

Theorem 2.2 (Unique optimal solution). In the feasible canonical form (2.35), if c̄j > 0 for all nonbasic variables, then the basic feasible solution is the unique optimal solution.

Of course, if, for some nonbasic variable xj, c̄j < 0, then z can be decreased by increasing xj. Consider a method for finding better solutions than the current nonoptimal solution.
If there is at least one negative coefficient, say c̄j < 0, then, under the assumption of nondegeneracy, i.e., b̄i > 0 for all i, it is always possible to generate another basic feasible solution with an improved value of the objective function. If there are two or more negative coefficients, we choose the variable xs with the smallest relative cost coefficient

    c̄s = min_{c̄j<0} c̄j                                              (2.43)

and increase the value of xs.

Although this choice may not lead to the greatest possible decrease in z (since only a limited extent of increase of xs may be allowed), it is at least intuitively a good rule for choosing a variable to be made a basic one. It is the one used in practice today because (i) it is simple and (ii) it generally leads to an optimal solution in fewer iterations than just choosing any c̄s < 0.
After a nonbasic variable xs is selected to be a basic one, we increase the value of xs from zero, holding the other nonbasic variables at zero. Observe the effect of this operation on the current basic variables. From (2.35), each of the current basic variables can be represented as a function of xs:

    x1 = b̄1 − ā1s xs
    x2 = b̄2 − ā2s xs
      ⋯                                                              (2.44)
    xm = b̄m − āms xs
    z = z̄ + c̄s xs.

Since the coefficient c̄s of the last equation in (2.44) is negative, i.e., c̄s < 0, increasing the value of xs decreases the value of z. The only factor limiting the increase of xs is that all of the variables x1, x2, …, xm must remain nonnegative. In other words, keeping the feasibility of the solution requires

    xi = b̄i − āis xs ≥ 0,   i = 1, 2, …, m.                          (2.45)

However, if all the coefficients āis, i = 1, 2, …, m are nonpositive, i.e.,

    āis ≤ 0,   i = 1, 2, …, m,                                       (2.46)

then xs can be increased indefinitely. Hence, since c̄s < 0, from the last equation of (2.44), it follows that

    z = z̄ + c̄s xs → −∞.

Thus, we have the following theorem.

Theorem 2.3 (Unboundedness). If, in the feasible canonical form (2.35), for some nonbasic variable xs, the coefficients āis, i = 1, 2, …, m are nonpositive and the coefficient c̄s is negative, i.e.,

    āis ≤ 0,   i = 1, 2, …, m,   and   c̄s < 0,                       (2.47)

then the optimal value is unbounded.

If, however, at least one āis is positive, then xs cannot be increased indefinitely, since eventually some basic variable, say xi, will decrease beyond zero and become negative. From (2.44), xi becomes zero when the coefficient āis is positive and xs rises to b̄i/āis, i.e.,

    xs = b̄i/āis,   āis > 0.                                          (2.48)

The value of xs is maximized under the condition of the nonnegativity of the basic variables xi, i = 1, 2, …, m, and it is given by

    min_{āis>0} b̄i/āis = b̄r/ārs = θ.                                 (2.49)

The basic variable xr determined by (2.49) then becomes nonbasic, and instead, the nonbasic variable xs becomes basic. That is, xr becomes zero while xs increases from zero to b̄r/ārs = θ (≥ 0). Also, from the last equation of (2.44), the value of the objective function z decreases by |c̄s xs| = |c̄s θ|.

A new canonical form in which xs is selected as a basic variable in place of xr can be easily obtained by pivoting on ārs, which is called the pivot element determined by (2.43) and (2.49). That is, finding c̄s = min_{c̄j<0} c̄j tells us that the pivot element is in column s, and finding the minimum b̄r/ārs of all the ratios b̄i/āis such that āis > 0 tells us that it is in row r.

Fundamental to linear programming is a pivot operation defined as follows.
Definition 2.7 (Pivot operation). A pivot operation consists of m elementary steps for replacing a linear system with an equivalent system in which a specified variable has a coefficient of unity in one equation and zeros elsewhere. The detailed steps are as follows:
(i) Select the nonzero element ars in row (equation) r and column s, which is called the pivot element.
(ii) Replace the rth equation with the rth equation multiplied by 1/ars.
(iii) For each i = 1, 2, …, m except i = r, replace the ith equation with the sum of the ith equation and the replaced rth equation multiplied by −ais.
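Definition 2.7 translates directly into a few lines of array code. A minimal sketch (a hypothetical helper; it is reused informally in the sketches that follow):

```python
# Sketch of the pivot operation of Definition 2.7 on a NumPy tableau T.
import numpy as np

def pivot(T, r, s):
    """(i) take T[r, s] != 0 as the pivot element; (ii) scale row r by
    1/T[r, s]; (iii) subtract T[i, s] times the new row r from every
    other row i, so that column s becomes a unit vector."""
    assert T[r, s] != 0, "pivot element must be nonzero"
    T[r] = T[r] / T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i] = T[i] - T[i, s] * T[r]
    return T
```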
In linear programming, pivot operations are sometimes counted by the term "cycle." Now, a pivot operation on ārs (≠ 0) is performed on the feasible canonical form

    x1 + ā1,m+1 xm+1 + ⋯ + ā1s xs + ⋯ + ā1n xn = b̄1
    x2 + ā2,m+1 xm+1 + ⋯ + ā2s xs + ⋯ + ā2n xn = b̄2
      ⋯
    xr + ār,m+1 xm+1 + ⋯ + ārs xs + ⋯ + ārn xn = b̄r         (2.50)
      ⋯
    xm + ām,m+1 xm+1 + ⋯ + āms xs + ⋯ + āmn xn = b̄m
    −z + c̄m+1 xm+1 + ⋯ + c̄s xs + ⋯ + c̄n xn = −z̄,

where b̄i ≥ 0, i = 1, 2, …, m, and then we have the new canonical form

    x1 + ā*1r xr + ā*1,m+1 xm+1 + ⋯ + 0 + ⋯ + ā*1n xn = b̄*1
    x2 + ā*2r xr + ā*2,m+1 xm+1 + ⋯ + 0 + ⋯ + ā*2n xn = b̄*2
      ⋯
    ā*rr xr + ā*r,m+1 xm+1 + ⋯ + xs + ⋯ + ā*rn xn = b̄*r     (2.51)
      ⋯
    ā*mr xr + xm + ā*m,m+1 xm+1 + ⋯ + 0 + ⋯ + ā*mn xn = b̄*m
    c̄*r xr − z + c̄*m+1 xm+1 + ⋯ + 0 + ⋯ + c̄*n xn = −z̄*,

where the superscript * is added to a revised coefficient, and the revised coefficients for j = r, m+1, m+2, …, n are calculated as follows:

    ā*rj = ārj/ārs,   b̄*r = b̄r/ārs,                                           (2.52)
    ā*ij = āij − āis (ārj/ārs),   b̄*i = b̄i − āis (b̄r/ārs),   i = 1, 2, …, m; i ≠ r,   (2.53)
    c̄*j = c̄j − c̄s (ārj/ārs),   −z̄* = −z̄ − c̄s (b̄r/ārs).                       (2.54)
Table 2.3 Pivot operation on ārs

Cycle  Basis  x1 ⋯ xr ⋯ xm    xm+1     ⋯  xs      ⋯  xn      Constants
ℓ      x1     1               ā1,m+1   ⋯  ā1s     ⋯  ā1n     b̄1
       ⋮                      ⋮           ⋮          ⋮       ⋮
       xr           1         ār,m+1   ⋯  [ārs]   ⋯  ārn     b̄r
       ⋮                      ⋮           ⋮          ⋮       ⋮
       xm                1    ām,m+1   ⋯  āms     ⋯  āmn     b̄m
       −z                     c̄m+1     ⋯  c̄s      ⋯  c̄n      −z̄
ℓ+1    x1     1  ā*1r         ā*1,m+1  ⋯  0       ⋯  ā*1n    b̄*1
       ⋮                      ⋮           ⋮          ⋮       ⋮
       xs        ā*rr         ā*r,m+1  ⋯  1       ⋯  ā*rn    b̄*r
       ⋮                      ⋮           ⋮          ⋮       ⋮
       xm        ā*mr    1    ā*m,m+1  ⋯  0       ⋯  ā*mn    b̄*m
       −z        c̄*r          c̄*m+1    ⋯  0       ⋯  c̄*n     −z̄*

where ā*rj = ārj/ārs, b̄*r = b̄r/ārs;  ā*ij = āij − āis ā*rj, b̄*i = b̄i − āis b̄*r (i ≠ r);
c̄*j = c̄j − c̄s ā*rj, −z̄* = −z̄ − c̄s b̄*r.

Since the pivot element ārs is determined by (2.43) and (2.49), it is expected that the new canonical form (2.51) with basic variables x1, x2, …, xr−1, xs, xr+1, …, xm also becomes feasible. This fact can be formally verified as follows.

It is obvious that b̄*r = b̄r/ārs ≥ 0. For i (i ≠ r) such that āis > 0, from (2.49), it follows that

    b̄*i = b̄i − (āis/ārs) b̄r = āis (b̄i/āis − b̄r/ārs) ≥ 0,

and for i (i ≠ r) such that āis ≤ 0, one finds that

    b̄*i = b̄i − (āis/ārs) b̄r ≥ 0.

Hence, it holds that b̄*i ≥ 0 for all i, and then (2.51) is a feasible canonical form. The pivot operation on ārs replacing xr with xs as a new basic variable is summarized in Table 2.3.

As described so far, starting with a feasible canonical form and updating it through a series of pivot operations, the simplex method seeks an optimal solution satisfying the optimality criterion or the unboundedness information. The procedure of the simplex method, starting with a feasible canonical form, can be summarized as follows.
Procedure of the Simplex Method

Start with a feasible canonical form (simplex tableau).

Step 1 If all of the relative cost coefficients are nonnegative, i.e., c̄j ≥ 0 for all indices j of the nonbasic variables, then the current solution is optimal; stop. Otherwise, by using the relative cost coefficients c̄j, find the index s such that

    min_{c̄j<0} c̄j = c̄s.

Step 2 If all of the coefficients in column s are nonpositive, i.e., āis ≤ 0 for all indices i of the basic variables, then the optimal value is unbounded; stop.

Step 3 If some of the āis are positive, find the index r such that

    min_{āis>0} b̄i/āis = b̄r/ārs = θ.

Step 4 Perform the pivot operation on ārs to obtain a new feasible canonical form (simplex tableau) with xs replacing xr as a new basic variable. The coefficients of the new feasible canonical form after pivoting on ārs ≠ 0 are calculated as follows:

(i) Replace row r (the rth equation) with row r multiplied by 1/ārs (divide row r by ārs), i.e.,

    ā*rj = ārj/ārs,   b̄*r = b̄r/ārs.

(ii) For each i = 1, 2, …, m except i = r, replace row i (the ith equation) with the sum of row i and the revised row r multiplied by −āis, i.e.,

    ā*ij = āij − āis ā*rj,   b̄*i = b̄i − āis b̄*r.

(iii) Replace row m+1 (the (m+1)th equation for the objective function) with the sum of row m+1 and the revised row r multiplied by −c̄s, i.e.,

    c̄*j = c̄j − c̄s ā*rj,   −z̄* = −z̄ − c̄s b̄*r.

Return to step 1.

It should be noted here that when multiple candidates exist for the index s of the variable entering the basis in step 1 or the index r of the variable leaving the basis in step 3, for the sake of convenience, we choose the smallest index.
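The four steps above map almost line by line onto tableau code. Below is a minimal sketch under simplifying assumptions: the tableau is already a feasible canonical form, its last row is the objective row with constant −z̄, and no anti-cycling rule is included. It reuses the pivot helper sketched after Definition 2.7:

```python
# Sketch of the simplex procedure; T holds the constraint rows plus the
# objective row (last), with the constants in the last column.
import numpy as np

def simplex(T, basis):
    m = T.shape[0] - 1                       # number of constraint rows
    while True:
        cbar = T[-1, :-1]
        if np.all(cbar >= -1e-9):            # Step 1: optimality criterion
            return T, basis
        s = int(np.argmin(cbar))             # entering column (smallest c̄_j)
        col = T[:m, s]
        if np.all(col <= 1e-9):              # Step 2: unbounded objective
            raise ValueError("the optimal value is unbounded")
        ratios = np.full(m, np.inf)          # Step 3: minimum ratio test
        pos = col > 1e-9
        ratios[pos] = T[:m, -1][pos] / col[pos]
        r = int(np.argmin(ratios))           # ties: smallest index, as noted
        pivot(T, r, s)                       # Step 4: pivot on ā_rs
        basis[r] = s
```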
Table 2.4 Simplex tableau for Example 1.1

Cycle  Basis  x1     x2   x3     x4    x5  Constants
0      x3     2      [6]  1                27
       x4     3      2           1         16
       x5     4      1                 1   18
       −z     −3     −8                    0
1      x2     1/3    1    1/6              4.5
       x4     [7/3]       −1/3   1         7
       x5     11/3        −1/6         1   13.5
       −z     −1/3        4/3              36
2      x2            1    3/14   −1/7      3.5
       x1     1           −1/7   3/7       3
       x5                 5/14   −11/7 1   2.5
       −z                 9/7    1/7       37

Example 2.5 (Simplex method for the production planning problem of Example 1.1). Using the simplex method, solve the production planning problem in the standard form given in Example 1.1:

    minimize    z = −3x1 − 8x2
    subject to  2x1 + 6x2 + x3           = 27
                3x1 + 2x2      + x4      = 16
                4x1 +  x2           + x5 = 18
                xj ≥ 0,   j = 1, 2, 3, 4, 5.

Introducing the slack variables x3, x4, and x5 and using them as the basic variables, we have the initial basic feasible solution

    x1 = x2 = 0,  x3 = 27,  x4 = 16,  x5 = 18,

which is shown at cycle 0 of the simplex tableau given in Table 2.4.

At cycle 0, since the minimum of c̄1 and c̄2 is

    min(−3, −8) = −8 < 0,

x2 becomes a new basic variable. The minimum ratio, min_{āi2>0} b̄i/āi2, is calculated as

    min(27/6, 16/2, 18/1) = 27/6 = 4.5,

and then x3 becomes a nonbasic variable. From s = 2 and r = 1, the pivot element is the 6 bracketed by [ ] in Table 2.4. After the pivot operation on 6, the result at cycle 1 is obtained.
Table 2.5 Simplex tableau with multiple optima

Cycle  Basis  x1     x2   x3     x4    x5  Constants
0      x3     2      [6]  1                27
       x4     3      2           1         16
       x5     4      1                 1   18
       −z     −1     −3                    0
1      x2     1/3    1    1/6              4.5
       x4     [7/3]       −1/3   1         7
       x5     11/3        −1/6         1   13.5
       −z     0           1/2              13.5
2      x2            1    3/14   −1/7      3.5
       x1     1           −1/7   3/7       3
       x5                 5/14   −11/7 1   2.5
       −z                 1/2    0         13.5

At cycle 1, since the only negative relative cost coefficient is −1/3, x1 becomes a basic variable. Since

    min(4.5/(1/3), 7/(7/3), 13.5/(11/3)) = 7/(7/3) = 3,

the 7/3 bracketed by [ ] becomes the pivot element. After the pivot operation on 7/3, the result at cycle 2 is obtained. At cycle 2, all of the relative cost coefficients become positive, and then the following optimal solution is obtained:

    x1 = 3,  x2 = 3.5  (x3 = x4 = 0, x5 = 2.5),  z = −37.

The above optimal solution corresponds to the extreme point D in Fig. 1.1. ♦
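The same computation can be reproduced with the simplex sketch given earlier; starting from the feasible canonical form (2.30), it performs the pivots on 6 and 7/3 of Table 2.4 and stops at cycle 2:

```python
# Initial tableau for Example 2.5: slacks x3, x4, x5 (indices 2, 3, 4) basic.
import numpy as np

T = np.array([[ 2.0,  6, 1, 0, 0, 27],
              [ 3.0,  2, 0, 1, 0, 16],
              [ 4.0,  1, 0, 0, 1, 18],
              [-3.0, -8, 0, 0, 0,  0]])
T, basis = simplex(T, basis=[2, 3, 4])
print(basis, T[:3, -1], -T[-1, -1])   # basis x2, x1, x5; values 3.5, 3, 2.5; z = -37
```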
Example 2.6 (Example with multiple optima). To show a simple linear programming problem having multiple optima, consider the following modified production planning problem in which the coefficients of x1 and x2 in the original objective function given in Example 1.1 are changed to −1 and −3, respectively:

    minimize    z = −x1 − 3x2
    subject to  2x1 + 6x2 + x3           = 27
                3x1 + 2x2      + x4      = 16
                4x1 +  x2           + x5 = 18
                xj ≥ 0,   j = 1, 2, 3, 4, 5.

By using the simplex method, at cycle 1 in Table 2.5, an optimal solution

    x1 = 0,  x2 = 4.5  (x3 = 0, x4 = 7, x5 = 13.5),  z = −13.5

is obtained, observing that the relative cost coefficient of x1 is zero, which means that the value of the objective function is unchanged even if x1 becomes positive, provided that the constraints are not violated. Replacing x4 with x1 as a basic variable yields an alternative optimal solution

    x1 = 3,  x2 = 3.5  (x3 = x4 = 0, x5 = 2.5),  z = −13.5

giving the same value of the objective function. It should be noted here that the optimal solutions obtained in cycles 1 and 2, respectively, correspond to the extreme points E and D in Fig. 1.1, and all of the points on the line segment ED are also optimal. ♦

2.5 Two-Phase Method

The simplex method requires a basic feasible solution as a starting point. Such a starting point is not always easy to find, and in fact none will exist if the constraints are inconsistent. Phase I of the simplex method finds an initial basic feasible solution or derives the information that no feasible solution exists. Phase II then proceeds from this starting point to an optimal solution or derives the information that the optimal value is unbounded. Both phases use the procedure of the simplex method given in the previous section.

Phase I starts with a linear programming problem in the standard form (2.24), where all the constants bi are nonnegative. For this purpose, if some bi is negative, multiply the corresponding equation by −1. In order to set up an initial feasible solution for phase I, the linear programming problem in the standard form is augmented with a set of nonnegative variables xn+1 ≥ 0, xn+2 ≥ 0, …, xn+m ≥ 0, so that the problem becomes as follows:

Find values of xj ≥ 0, j = 1, 2, …, n, n+1, …, n+m so as to minimize z, satisfying the augmented system of linear equations

    a11 x1 + a12 x2 + ⋯ + a1n xn + xn+1           = b1 (≥ 0)
    a21 x1 + a22 x2 + ⋯ + a2n xn      + xn+2      = b2 (≥ 0)
      ⋯                                                              (2.55)
    am1 x1 + am2 x2 + ⋯ + amn xn          + xn+m  = bm (≥ 0)
    c1 x1 + c2 x2 + ⋯ + cn xn − z                 = 0.

The newly introduced nonnegative variables xn+1 ≥ 0, xn+2 ≥ 0, …, xn+m ≥ 0 are called artificial variables.

In the canonical form (2.55), using the artificial variables xn+1, xn+2, …, xn+m as basic variables, the following initial basic feasible solution is directly obtained:

    x1 = x2 = ⋯ = xn = 0,   xn+1 = b1 ≥ 0, …, xn+m = bm ≥ 0.         (2.56)


2.5 Two-Phase Method 29

Although a basic feasible solution to (2.55) such as (2.56) is not always feasible
to the original system, basic feasible solutions to (2.55) such that all the artificial
variables xnC1 ; xnC2 ; : : : ; xnCm are equal to zeros are also feasible to the original
system. Thus, one way to find a basic feasible solution to the original system is to
start from the initial basic solution (2.56) and use the simplex method to drive a
basic feasible solution such that all the artificial variables are equal to zeros. This
can be done by minimizing a function of the artificial variables

w D xnC1 C xnC2 C    C xnCm (2.57)

subject to the equality constraints (2.55) and the nonnegativity conditions for all
variables. By its very nature, the function (2.57) is sometimes called the infeasibility
form.
That is, the phase I problem is to find values of x1  0; x2  0; : : : ; xn  0,
xnC1  0; : : : ; andxnCm  0 so as to minimize w, satisfying the augmented system
of linear equations
9
a11 x1 C a12 x2 C  C a1n xn CxnC1 Db1 . 0/ >
>
>
a21 x1 C a22 x2 C  C a2n xn CxnC2 Db2 . 0/ >
>
>
>

=
am1 x1 Cam2 x2 C  Camn xn CxnCm Dbm . 0/>>
>
c1 x1 C c2 x2 C  C cn xn z D0 >
>
>
>
xnC1 CxnC2 C   CxnCm wD 0:
;
(2.58)
Since the artificial variables are nonnegative, the function w which is the sum of
the artificial variables is obviously larger than or equal to zero. In particular, if the
optimal value of w is zero, i.e., w D 0, then all the artificial variables are zeros, i.e.,
xnCi D 0 for all i D 1; 2; : : : ; m. In contrast, if it is positive, i.e., w > 0, then no
feasible solution to the original system exists because some artificial variables are
not zeros and then the corresponding original constraints are not satisfied. Given an
initial basic feasible solution, the simplex method generates other basic feasible
solutions in turn, and then the end product of phase I must be a basic feasible
solution to the original system if such a solution exists.
It should be mentioned that a full set of m artificial variables may not be
necessary. If the original system has some variables that can be used as initial basic
variables, then they should be chosen in preference to artificial variables. The result
is less work in phase I.
For obtaining an initial basic feasible solution through the minimization of $w$
with the simplex method, it is necessary to convert the augmented system (2.58)
into the canonical form with the row of $w$, where $w$ must be expressed by the
current nonbasic variables $x_1, x_2, \ldots, x_n$. Since from (2.58) the artificial variables
are represented by using the nonbasic variables, i.e.,

$$x_{n+i} = b_i - a_{i1}x_1 - a_{i2}x_2 - \cdots - a_{in}x_n, \quad i = 1, 2, \ldots, m,$$
(2.57) can be rewritten as

$$
w = \sum_{i=1}^{m} x_{n+i} = \sum_{i=1}^{m}\Bigl(b_i - \sum_{j=1}^{n} a_{ij}x_j\Bigr) = \sum_{i=1}^{m} b_i - \sum_{j=1}^{n}\Bigl(\sum_{i=1}^{m} a_{ij}\Bigr)x_j, \tag{2.59}
$$

which is now expressed by the nonbasic variables $x_1, x_2, \ldots, x_n$. Defining

$$
w_0 = \sum_{i=1}^{m} b_i \;(\ge 0), \qquad d_j = -\sum_{i=1}^{m} a_{ij}, \quad j = 1, 2, \ldots, n, \tag{2.60}
$$

the row of $w$ is compactly expressed as

$$d_1x_1 + d_2x_2 + \cdots + d_nx_n - w = -w_0. \tag{2.61}$$

In this way, the augmented system (2.58) is converted into the following initial
feasible canonical form for phase I with the row of $w$, in which the artificial
variables $x_{n+1}, x_{n+2}, \ldots, x_{n+m}$ are selected as basic variables:

$$
\left.\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n + x_{n+1} &= b_1 \;(\ge 0)\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n + x_{n+2} &= b_2 \;(\ge 0)\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n + x_{n+m} &= b_m \;(\ge 0)\\
c_1x_1 + c_2x_2 + \cdots + c_nx_n - z &= 0\\
d_1x_1 + d_2x_2 + \cdots + d_nx_n - w &= -w_0.
\end{aligned}\right\} \tag{2.62}
$$
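As a small numerical illustration of (2.60), the following sketch (assuming NumPy is available) builds the phase-I $w$ row for the diet problem of Example 2.3, whose full tableau appears later in Table 2.7:

```python
import numpy as np

# Diet problem of Example 2.3 in standard form: A x = b, x >= 0.
A = np.array([[1.0, 3.0, -1.0,  0.0,  0.0],
              [1.0, 2.0,  0.0, -1.0,  0.0],
              [2.0, 1.0,  0.0,  0.0, -1.0]])
b = np.array([12.0, 10.0, 15.0])

# Phase I row (2.60): d_j = -sum_i a_ij and w0 = sum_i b_i.
d = -A.sum(axis=0)
w0 = b.sum()
print(d)    # [-4. -6.  1.  1.  1.] -- the w row at cycle 0 of Table 2.7
print(w0)   # 37.0
```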
Now it becomes possible to solve the phase I problem as given by (2.62) using
the simplex method. Finding the pivot element $\bar a_{rs}$ by using the rule

$$\bar d_s = \min_{\bar d_j < 0} \bar d_j \tag{2.63}$$

and

$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is} > 0} \frac{\bar b_i}{\bar a_{is}} \tag{2.64}$$

and performing the pivot operation on it, we minimize the objective function $w$ in
phase I. When

$$\bar d_j \ge 0, \; j = 1, \ldots, n, n+1, \ldots, n+m; \qquad w = 0, \tag{2.65}$$
all the artificial variables become zero. In this case, if all the artificial variables
become nonbasic, an initial basic feasible solution to the original problem is
obtained. Hence, after eliminating all the artificial variables together with the row
of $w$, initiate phase II of the simplex method for minimizing the original objective
function $z$.
In the two-phase method, phase I finds an initial basic feasible solution or derives
the information that no feasible solution exists, and phase II then proceeds from this
starting point to an optimal solution or derives the information that the optimal value
is unbounded.
Whenever the original system contains redundancies and often when degenerate
solutions occur, artificial variables will remain in the basis at the end of phase I.
Thus, it is necessary to prevent their values from becoming positive in phase II. One
possible way is to drop all nonartificial variables whose relative cost coefficients for
w are positive and all nonbasic artificial variables before starting phase II. To see
this, we note that the equation for $w$ at the end of phase I satisfies

$$\sum_{j=1}^{n+m}\bar d_j x_j = w - w_0, \tag{2.66}$$

where $\bar d_j \ge 0$ and $w_0 = 0$ since feasible solutions to the original problem
exist. For feasibility, $w$ must remain zero in phase II, which means that every $x_j$
corresponding to $\bar d_j > 0$ must be zero; hence, all such $x_j$ can be set equal to
zero and eliminated from further consideration in phase II. We can also drop any
nonbasic artificial variables because we no longer need to consider them. That is,
eliminate the columns of the artificial variables that have left the basis and those of the
nonbasic variables $x_j$ with $\bar d_j > 0$ in the optimal simplex tableau of phase I. Owing to
this operation, the objective function $w$ of phase I cannot become positive again,
and the values of the artificial variables remaining in the basis cannot become
positive in phase II. This means that the basic solutions generated in phase II are always
feasible.
Before summarizing the procedure of the two-phase method, the following useful
remarks are given. In the simplex tableau, it is customary to omit the artificial
variable columns because these, once dropped from the basis, can be eliminated
from further consideration. Moreover, if the pivot operations for minimizing w in
phase I are also simultaneously performed on the row of z, the original objective
function z will be expressed in terms of nonbasic variables at each cycle. Thus, if an
initial basic feasible solution is found for the original problem, the simplex method
can be initiated immediately on z. Therefore, the row of z is incorporated into the
pivot operations in phase I.
Following the above discussions, the procedure of the two-phase method can be
summarized as follows.
Table 2.6 Initial tableau of two-phase method

Basis      x1    x2   ...  xj   ...  xn    Constants
x_{n+1}    a11   a12  ...  a1j  ...  a1n   b1
x_{n+2}    a21   a22  ...  a2j  ...  a2n   b2
 ...
x_{n+i}    ai1   ai2  ...  aij  ...  ain   bi
 ...
x_{n+m}    am1   am2  ...  amj  ...  amn   bm
z          c1    c2   ...  cj   ...  cn    0
w          d1    d2   ...  dj   ...  dn    -w0

Procedure of Two-Phase Method

Phase I Starting with the simplex tableau in Table 2.6, perform the simplex
method with the row of $w$ as the objective function in phase I, where the pivot
element is not selected from the row of $z$, but the pivot operation is applied
to the row of $z$ as well. When an optimal tableau is obtained, if $w > 0$, no feasible
solution exists to the original problem. Otherwise, i.e., if $w = 0$, proceed to
phase II.
Phase II After dropping all columns of $x_j$ such that $\bar d_j > 0$ and the row of $w$,
perform the simplex method with the row of $z$ as the objective function in
phase II.

Example 2.7 (Two-phase method for diet problem with two decision variables).
Using the two-phase method, solve the diet problem in the standard form given
in Example 2.3:

$$
\begin{aligned}
\text{minimize}\quad & z = 4x_1 + 3x_2\\
\text{subject to}\quad & x_1 + 3x_2 - x_3 = 12\\
& x_1 + 2x_2 - x_4 = 10\\
& 2x_1 + x_2 - x_5 = 15\\
& x_j \ge 0, \; j = 1, 2, 3, 4, 5.
\end{aligned}
$$

After introducing the artificial variables $x_6$, $x_7$, and $x_8$ as basic variables in phase I,
the two-phase method starts from cycle 0 as shown in Table 2.7, and the value
of $w$ becomes zero, i.e., $w = 0$, at cycle 3. In this example, when phase I has finished
at cycle 3, since all of the relative cost coefficients of the row of $z$ are positive,
phase II also finishes. Thus, an optimal solution

$$x_1 = 6.6, \; x_2 = 1.8 \;(x_3 = 0, x_4 = 0.2, x_5 = 0), \qquad z = 31.8$$

is obtained. ♦
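The same optimum can be cross-checked with an off-the-shelf LP solver; the following sketch assumes SciPy is available and is not part of the two-phase machinery itself:

```python
# Cross-check of Example 2.7 with SciPy's LP solver (the HiGHS method
# handles the standard form with equality constraints directly).
from scipy.optimize import linprog

c = [4, 3, 0, 0, 0]
A_eq = [[1, 3, -1, 0, 0],
        [1, 2, 0, -1, 0],
        [2, 1, 0, 0, -1]]
b_eq = [12, 10, 15]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
print(res.x)    # approximately [6.6, 1.8, 0., 0.2, 0.]
print(res.fun)  # approximately 31.8
```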
Table 2.7 Simplex tableau of two-phase method for Example 2.3

Cycle  Basis    x1      x2      x3      x4      x5     Constants
0      x6       1      [3]     -1                       12
       x7       1       2              -1               10
       x8       2       1                      -1       15
       z        4       3                                0
       w       -4      -6       1       1       1      -37
1      x2       1/3     1      -1/3                      4
       x7      [1/3]            2/3    -1                2
       x8       5/3             1/3            -1       11
       z        3               1                      -12
       w       -2              -1       1       1      -13
2      x2               1      -1       1                2
       x1       1               2      -3                6
       x8                      -3      [5]     -1        1
       z                       -5       9              -30
       w                        3      -5       1       -1
3      x2               1      -0.4             0.2      1.8
       x1       1               0.2            -0.6      6.6
       x4                      -0.6     1      -0.2      0.2
       z                        0.4             1.8    -31.8
       w                        0       0       0        0

Example 2.8 (Example of an infeasible problem with two decision variables and
four constraints). As an example of a simple infeasible problem, consider the linear
programming problem in the standard form for the diet problem of Example 2.7
with the additional inequality constraint

$$4x_1 + 5x_2 \le 8.$$

Introducing a slack variable $x_6$, the problem is converted into the following
standard form of linear programming:

$$
\begin{aligned}
\text{minimize}\quad & z = 4x_1 + 3x_2\\
\text{subject to}\quad & x_1 + 3x_2 - x_3 = 12\\
& x_1 + 2x_2 - x_4 = 10\\
& 2x_1 + x_2 - x_5 = 15\\
& 4x_1 + 5x_2 + x_6 = 8\\
& x_j \ge 0, \; j = 1, 2, \ldots, 6.
\end{aligned}
$$

Using the slack variable $x_6$ and the artificial variables $x_7$, $x_8$, and $x_9$ as initial
basic variables, phase I of the simplex method is performed. As shown in Table 2.8,
phase I is terminated at cycle 1 because $\bar d_1 > 0$, $\bar d_3 > 0$, $\bar d_4 > 0$, $\bar d_5 > 0$, and
$\bar d_6 > 0$. However, from $w = 27.4 > 0$, no feasible solution exists to this problem.
Table 2.8 Infeasible simplex tableau

Cycle  Basis    x1      x2      x3      x4      x5      x6     Constants
0      x7       1       3      -1                               12
       x8       1       2              -1                       10
       x9       2       1                      -1               15
       x6       4      [5]                              1        8
       z        4       3                                        0
       w       -4      -6       1       1       1              -37
1      x7      -1.4            -1                      -0.6      7.2
       x8      -0.6                    -1              -0.4      6.8
       x9       1.2                            -1      -0.2     13.4
       x2       0.8     1                               0.2      1.6
       z        1.6                                    -0.6     -4.8
       w        0.8             1       1       1       1.2    -27.4

It should be noted here that since the slack variable $x_6$ is used as a basic variable,
the row of $w$ is calculated only from the rows of $x_7$, $x_8$, and $x_9$ in cycle 0. For
example, $d_1 = -(1 + 1 + 2) = -4$. ♦

Example 2.9 (Example of artificial variables left in the basis). As an example
where artificial variables remain as part of the basic variables, consider the following
problem:

$$
\begin{aligned}
\text{minimize}\quad & z = 3x_1 + x_2 + 2x_3\\
\text{subject to}\quad & x_1 + x_2 + x_3 = 10\\
& 3x_1 + x_2 + 4x_3 - x_4 = 30\\
& 4x_1 + 3x_2 + 3x_3 + x_4 = 40\\
& x_j \ge 0, \; j = 1, 2, 3, 4.
\end{aligned}
$$

Using the artificial variables $x_5$, $x_6$, and $x_7$ as basic variables, phase I of the
simplex method is performed. As shown in Table 2.9, phase I terminates with
$w = 0$ at cycle 1. However, the artificial variables $x_6$ and $x_7$ still remain in the basis
as part of the basic variables. Since $\bar d_2 = 3 > 0$, after dropping the column of $x_2$ and
the row of $w$, phase II of the simplex method is performed. At cycle 3, an optimal
solution

$$x_1 = 0, \; x_2 = 0, \; x_3 = 10, \; x_4 = 10 \;(x_5 = 0, x_6 = 0, x_7 = 0), \qquad z = 20$$

is obtained. ♦
The procedure of the simplex method considered thus far provides a means of
going from one basic feasible solution to another such that the value of the objective
function $z$ is lower than the previous one if there is no degeneracy, or at least equal to it
(as can occur in the degenerate case). It continues until (i) the condition of the optimality
test (2.42) is satisfied, or (ii) the information that the optimal value is unbounded
is obtained. Therefore, in the case of no degeneracy, the following convergence theorem
can be easily understood.

Table 2.9 Example of artificial variables left in the basis

Cycle  Basis    x1      x2      x3      x4     Constants
0      x5      [1]      1       1               10
       x6       3       1       4      -1       30
       x7       4       3       3       1       40
       z        3       1       2       0        0
       w       -8      -5      -8       0      -80
1      x1       1       1       1               10
       x6              -2      [1]     -1        0
       x7              -1      -1       1        0
       z               -2      -1       0      -30
       w                3       0       0        0
2      x1       1               0      [1]      10
       x3                       1      -1        0
       x7                       0       0        0
       z        0               0      -1      -30
3      x4       1               0       1       10
       x3       1               1       0       10
       x7                       0       0        0
       z        1               0       0      -20
Theorem 2.4 (Finite convergence of simplex method (nondegenerate case)).
Assuming nondegeneracy at each iteration, the simplex method will terminate in
a finite number of iterations.

Proof. Since the number of basic feasible solutions is at most $\binom{n}{m}$, which is finite,
the algorithm of the simplex method can fail to terminate finitely only if the same basic
feasible solution appears repeatedly. Such repetition implies that the value of the
objective function $z$ is the same. Under nondegeneracy, however, each value
of $z$ is strictly lower than the previous one, so no repetition can occur, and therefore the
algorithm terminates finitely. $\square$
Recall that a degenerate basic feasible solution has at least one basic variable whose
value is zero. Such degeneracy may occur in an initial feasible canonical
form, and a degenerate basic feasible solution may also appear after some pivot
operations in the procedure of the simplex method.
For example, in step 3 of the procedure of the simplex method, if the minimum
of $\{\bar b_i/\bar a_{is} \mid \bar a_{is} > 0\}$ is attained by two or more basic variables, i.e.,

$$\min_{\bar a_{is}>0}\frac{\bar b_i}{\bar a_{is}} = \theta = \frac{\bar b_{r_1}}{\bar a_{r_1 s}} = \frac{\bar b_{r_2}}{\bar a_{r_2 s}}, \tag{2.67}$$

either $x_{r_1}$ or $x_{r_2}$ can be removed from the basis while the other remains in the basis.
In either case, both $x_{r_1}$ and $x_{r_2}$ become zero, i.e.,

$$x_{r_1} = \bar b_{r_1} - \bar a_{r_1 s}\theta = 0, \qquad x_{r_2} = \bar b_{r_2} - \bar a_{r_2 s}\theta = 0. \tag{2.68}$$

Thus, since there is at least one basic variable whose value is zero, the new basic
feasible solution is degenerate.
This in itself does not undermine the feasibility of the solution. However, if at
some iteration a basic feasible solution is degenerate, the value of the objective function
$z$ could remain the same for some number of subsequent iterations. Moreover, there
is a possibility that after a series of pivot operations without any decrease of $z$, the same
basis appears again, and then the simplex method may be trapped in an endless loop
without termination. This phenomenon is called cycling or circling.³

The following example given by H.W. Kuhn shows that the simplex method
can be trapped into cycling if the smallest index is used as the tie breaker.
Example 2.10 (Kuhn's example of cycling). As an example of cycling, consider the
following problem given by H.W. Kuhn:

$$
\begin{aligned}
\text{minimize}\quad & z = -2x_4 - 3x_5 + x_6 + 12x_7\\
\text{subject to}\quad & x_1 - 2x_4 - 9x_5 + x_6 + 9x_7 = 0\\
& x_2 + \tfrac13 x_4 + x_5 - \tfrac13 x_6 - 2x_7 = 0\\
& x_3 + 2x_4 + 3x_5 - x_6 - 12x_7 = 2\\
& x_j \ge 0, \; j = 1, 2, \ldots, 7.
\end{aligned}
$$

Using $x_1$, $x_2$, and $x_3$ as the initial basic variables and performing the simplex
method, we have the result shown in Table 2.10. Observing that the tableau of cycle
6 is completely identical to that of cycle 0 in Table 2.10, one finds that cycling
occurs. ♦
Some means of preventing the procedure from cycling is therefore required.
Observe that in the absence of degeneracy the objective function
values in a series of iterations of the simplex method form a strictly decreasing
monotone sequence, which guarantees that the same basis does not appear repeatedly. With
a degenerate basic solution, the sequence is no longer strictly decreasing. To prevent
the procedure from revisiting the same basis, we need to incorporate an additional rule
that restores this monotone behavior.

³ In his famous 1963 book, G.B. Dantzig adopted the term "circling" to avoid possible
confusion with the term "cycle," which was used synonymously with iteration.
Table 2.10 Simplex tableau for Kuhn's example of cycling

Cycle  Basis    x1      x2      x3      x4      x5      x6      x7    Constants
0      x1       1                      -2      -9       1       9       0
       x2               1              1/3     [1]    -1/3     -2       0
       x3                       1       2       3      -1     -12       2
       z                               -2      -3       1      12       0
1      x1       1       9             [1]              -2      -9       0
       x5               1              1/3      1     -1/3     -2       0
       x3              -3       1       1               0      -6       2
       z                3              -1               0       6       0
2      x4       1       9               1              -2      -9       0
       x5     -1/3     -2                       1      1/3     [1]      0
       x3      -1     -12       1                       2       3       2
       z        1      12                              -2      -3       0
3      x4      -2      -9               1       9      [1]              0
       x7     -1/3     -2                       1      1/3      1       0
       x3       0      -6       1              -3       1               2
       z        0       6                       3      -1               0
4      x6      -2      -9               1       9       1               0
       x7      1/3     [1]            -1/3     -2               1       0
       x3       2       3       1      -1     -12                       2
       z       -2      -3               1      12                       0
5      x6      [1]                     -2      -9       1       9       0
       x2      1/3      1             -1/3     -2               1       0
       x3       1               1       0      -6              -3       2
       z       -1                       0       6               3       0
6      x1       1                      -2      -9       1       9       0
       x2               1              1/3     [1]    -1/3     -2       0
       x3                       1       2       3      -1     -12       2
       z                               -2      -3       1      12       0

Several methods besides the random choice rule exist for avoiding cycling in
the simplex method. Among them, a very simple and elegant (but not necessarily
efficient) rule due to Bland (1977) is theoretically interesting. Bland’s rule is
summarized as follows:
(i) Among all candidates to enter the basis, choose the one with the smallest index.
(ii) Among all candidates to leave the basis, choose the one with the smallest index.
The procedure of the simplex method incorporating Bland's anticycling rule,
which merely specifies the choice of the entering and leaving variables, can now be given
as follows.
Procedure of Simplex Method Incorporating Bland’s Rule

Step 1B If all of the relative cost coefficients are nonnegative, i.e., $\bar c_j \ge 0$ for all
indices $j$ of the nonbasic variables, then the current solution is optimal, and stop.
Otherwise, by using the relative cost coefficients $\bar c_j$, find the index $s$ such that

$$\min\{\, j \mid \bar c_j < 0 \,\} = s. \quad (j:\ \text{nonbasic})$$

That is, if there are two or more indices $j$ such that $\bar c_j < 0$ among the indices of
the nonbasic variables, choose the smallest such index $s$ as the index of the nonbasic
variable newly entering the basis.
Step 2 If all of the coefficients in column $s$ are nonpositive, i.e., $\bar a_{is} \le 0$ for all
indices $i$ of the basic variables, then the optimal value is unbounded, and stop.
Step 3B If some of the $\bar a_{is}$ are positive, find the index $r$ such that

$$\min_{\bar a_{is}>0}\frac{\bar b_i}{\bar a_{is}} = \frac{\bar b_r}{\bar a_{rs}} = \theta.$$

If there is a tie in the minimum ratio test, choose the smallest index $r$ as the index
of the basic variable leaving the basis.
Step 4 Perform the pivot operation on $\bar a_{rs}$ to obtain a new feasible canonical
form with $x_s$ replacing $x_r$ as a basic variable. Return to step 1B.
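The selection rules of steps 1B and 3B are straightforward to state in code. The following is a minimal dense-tableau sketch (assuming NumPy and a tableau already in feasible canonical form; the layout and the tolerance 1e-9 are choices of this sketch, not prescribed by the text):

```python
import numpy as np

def simplex_bland(T, basis):
    """Sketch: rows 0..m-1 are constraints in canonical form, row m is the
    z row (relative costs with the constant in the last column); `basis`
    lists the basic variable index of each constraint row."""
    m = T.shape[0] - 1
    while True:
        costs = T[m, :-1]
        negative = np.where(costs < -1e-9)[0]
        if negative.size == 0:
            return T, basis                       # step 1B: optimal
        s = int(negative.min())                   # Bland: smallest entering index
        col = T[:m, s]
        rows = np.where(col > 1e-9)[0]
        if rows.size == 0:
            raise ValueError("optimal value is unbounded")   # step 2
        ratios = T[rows, -1] / col[rows]
        tied = rows[np.isclose(ratios, ratios.min())]        # step 3B: ratio test
        r = int(tied[np.argmin(np.asarray(basis)[tied])])    # smallest leaving index
        T[r] /= T[r, s]                           # step 4: pivot operation on a_rs
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s
```

Applied to the initial tableau of Kuhn's example with basis [x1, x2, x3], this loop stops after two pivots, in agreement with Table 2.11 below.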

It is interesting to note here that the use of Bland’s rule for cycling prevention
can be proven by contradiction on the basis of the following observation.
In a degenerate pivot operation, if some variable xq enters the basis, then xq
cannot leave the basis until some other variable with a higher index than q, which
was nonbasic when xq entered, also enters the basis. If this holds, then cycling
cannot occur because in a cycle any variable that enters must also leave the basis,
which means that there exists some highest indexed variable that enters and leaves
the basis. This contradicts the foregoing monotone feature.4
In practice, however, such a procedure is found to be unnecessary because the
simplex procedure generally does not enter a cycle even if degenerate solutions
are encountered. However, an anticycling procedure is simple, and therefore many
codes incorporate such a procedure for the sake of safety.
Example 2.11 (Simplex method incorporating Bland's rule for Kuhn's example).
Apply the simplex method incorporating Bland's rule to the example given by Kuhn.
After only two pivot operations, the algorithm stops, and the result is shown in
Table 2.11. At cycle 2 in Table 2.11, the degeneracy has been resolved, and an optimal solution

$$x_1 = 2, \; x_2 = 0, \; x_3 = 0, \; x_4 = 2, \; x_5 = 0, \; x_6 = 2, \; x_7 = 0, \qquad z = -2$$

is obtained. ♦

⁴ The interested reader should refer to the solution of Problem 2.8 for a full discussion of the proof.
Table 2.11 Simplex tableau incorporating Bland's rule

Cycle  Basis    x1      x2      x3      x4      x5      x6      x7    Constants
0      x1       1                      -2      -9       1       9       0
       x2               1            [1/3]      1     -1/3     -2       0
       x3                       1       2       3      -1     -12       2
       z                               -2      -3       1      12       0
1      x1       1       6                      -3      -1      -3       0
       x4               3               1       3      -1      -6       0
       x3              -6       1              -3      [1]      0       2
       z                6                       3      -1       0       0
2      x1       1       0       1              -6              -3       2
       x4              -3       1       1       0              -6       2
       x6              -6       1              -3       1       0       2
       z                0       1               0               0       2

2.6 Revised Simplex Method

In performing the simplex method, not all of the information contained in the tableau is
actually used. Only the following items are needed:

Information Needed for Updating the Simplex Tableau

(i) Using the relative cost coefficients $\bar c_j$, find the index $s$ such that

$$\min_{\bar c_j<0}\bar c_j = \bar c_s.$$

(ii) Assuming $\bar c_s < 0$, we require the elements of the $s$th column (pivot column)

$$\bar{\mathbf p}_s = (\bar a_{1s}, \bar a_{2s}, \ldots, \bar a_{ms})^T$$

and the values of the basic variables

$$\bar{\mathbf b} = (\bar b_1, \bar b_2, \ldots, \bar b_m)^T.$$

By using these values, the quotients

$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is}>0}\frac{\bar b_i}{\bar a_{is}}$$

are calculated for finding the index $r$. Then, a pivot operation is performed on
$\bar a_{rs}$ for updating the tableau.
From the above discussion, note that only one nonbasic column $\bar{\mathbf p}_s$ of the current
tableau is required. Since there are usually many more columns than rows in a linear
programming problem, dealing with all the columns $\bar{\mathbf p}_j$ wastes much computation
time and computer storage. A more efficient procedure is to calculate first the
relative cost coefficients $\bar c_j$ and then the pivot column $\bar{\mathbf p}_s$ from the data of the original
problem. The revised simplex method does precisely this, and the inverse of the
current basis matrix is what is needed to calculate them.

We assume again that the $m \times n$ rectangular matrix $A = [\,\mathbf p_1\ \mathbf p_2\ \cdots\ \mathbf p_n\,]$ of
the constraints has rank $m$ and $n > m$. Moreover, we assume that the linear
programming problem in the standard form is feasible. A basis matrix $B$ is defined
as an $m \times m$ nonsingular submatrix formed by selecting $m$ linearly independent
columns from the $n$ columns of matrix $A$. Note that matrix $A$ contains at least one
basis matrix $B$ because $\operatorname{rank}(A) = m$ and $n > m$.

For notational simplicity, without loss of generality, assume that the basis matrix
$B$ is formed by selecting the first $m$ columns of matrix $A$, i.e.,

$$B = [\,\mathbf p_1\ \mathbf p_2\ \cdots\ \mathbf p_m\,]. \tag{2.69}$$

Let

$$\mathbf x_B = (x_1, x_2, \ldots, x_m)^T \quad\text{and}\quad \mathbf c_B = (c_1, c_2, \ldots, c_m) \tag{2.70}$$

be the corresponding vectors of basic variables and of the coefficients of the objective
function, respectively. Note that $\mathbf c_B$ is a row vector. The vector $\mathbf x_B$ satisfies

$$B\mathbf x_B = \mathbf b, \tag{2.71}$$

and one finds that

$$\mathbf x_B = B^{-1}\mathbf b = \bar{\mathbf b}. \tag{2.72}$$

Assume that the basis matrix $B$ is feasible, i.e.,

$$\mathbf x_B \ge \mathbf 0. \tag{2.73}$$

As shown earlier, it is convenient to deal with the objective function $z$ as the
$(m+1)$th equation and keep the variable $z$ in the basis. This augmented system
can be written in column form as follows:

$$\sum_{j=1}^{n}\begin{pmatrix}\mathbf p_j\\ c_j\end{pmatrix}x_j + \begin{pmatrix}\mathbf 0\\ 1\end{pmatrix}(-z) = \begin{pmatrix}\mathbf b\\ 0\end{pmatrix}. \tag{2.74}$$

By using the corresponding basis $\mathbf x_B = (x_1, x_2, \ldots, x_m)^T$, $(-z)$ and the nonbasis
$\mathbf x_N = (x_{m+1}, \ldots, x_n)^T$, (2.74) is also rewritten as
" # ! " # !
p1 p2    pm 0 xB pmC1    pn b
C xN D : (2.75)
c 1 c 2    cm 1 z cmC1    cn 0

Since the basis matrix B is feasible, the .m C 1/  .m C 1/ matrix


" # " #
p1 p2    pm 0 B 0
BO D D (2.76)
c 1 c2    c m 1 cB 1

is also a feasible basis matrix for the enlarged system (2.74). It is easily verified by
direct matrix multiplication that the inverse of BO is
" #
1
B 0
BO 1
D : (2.77)
1
cB B 1

Such an .mC1/.mC1/ matrix BO is called an enlarged basis matrix and its inverse
BO 1 is called an enlarged basis inverse matrix.
Introducing a simplex multiplier vector
1
  D . 1 ;  2 ; : : : ;  m / D cB B (2.78)

associated with the basis matrix B, the enlarged basis inverse matrix BO 1
is written
more compactly as
" #
1
B 0
BO 1
D : (2.79)
  1

Premultiplying the enlarged system (2.75) by BO 1


, (2.75) becomes as
" # ! " # !
I 0 xB pmC1    pn b
T
C BO 1
xN D BO 1
(2.80)
0 1 z cmC1    cn 0

which results in the following canonical form:

n
! ! !
xB X pN j bN
C xj D ; (2.81)
z j DmC1 cNj zN

or equivalently
$$
\left.\begin{aligned}
\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_m\end{bmatrix} + \sum_{j=m+1}^{n}\bar{\mathbf p}_j x_j &= \begin{pmatrix}\bar b_1\\ \bar b_2\\ \vdots\\ \bar b_m\end{pmatrix}\\
-z + \sum_{j=m+1}^{n}\bar c_j x_j &= -\bar z,
\end{aligned}\right\} \tag{2.82}
$$

where

$$\begin{pmatrix}\bar{\mathbf p}_j\\ \bar c_j\end{pmatrix} = \hat B^{-1}\begin{pmatrix}\mathbf p_j\\ c_j\end{pmatrix} = \begin{bmatrix}B^{-1} & \mathbf 0\\ -\boldsymbol\pi & 1\end{bmatrix}\begin{pmatrix}\mathbf p_j\\ c_j\end{pmatrix} = \begin{pmatrix}B^{-1}\mathbf p_j\\ c_j - \boldsymbol\pi\mathbf p_j\end{pmatrix}, \tag{2.83}$$

$$\begin{pmatrix}\bar{\mathbf b}\\ -\bar z\end{pmatrix} = \hat B^{-1}\begin{pmatrix}\mathbf b\\ 0\end{pmatrix} = \begin{bmatrix}B^{-1} & \mathbf 0\\ -\boldsymbol\pi & 1\end{bmatrix}\begin{pmatrix}\mathbf b\\ 0\end{pmatrix} = \begin{pmatrix}B^{-1}\mathbf b\\ -\boldsymbol\pi\mathbf b\end{pmatrix}. \tag{2.84}$$

In particular, from (2.83), the updated column vector of the coefficients is
represented as

$$\bar{\mathbf p}_j = B^{-1}\mathbf p_j, \tag{2.85}$$

and the relative cost coefficient is

$$\bar c_j = c_j - \boldsymbol\pi\mathbf p_j. \tag{2.86}$$

These two formulas are fundamental to calculation in the revised simplex method;
as seen in (2.85) and (2.86), $\bar c_j$ and $\bar{\mathbf p}_j$, which are used in each iteration of the
simplex method, can be calculated from the original coefficients $c_j$ and $\mathbf p_j$ given
in the initial simplex tableau, provided that the enlarged basis inverse matrix $\hat B^{-1}$, or
equivalently the basis inverse matrix $B^{-1}$ and the simplex multipliers $\boldsymbol\pi$, are given.
Now, assume that the smallest $\bar c_s$ is found among the relative cost coefficients
$\bar c_j$ calculated by (2.86) for all nonbasic variables, and the corresponding column
vector $\bar{\mathbf p}_s = (\bar a_{1s}, \ldots, \bar a_{ms})^T$ is obtained by (2.85). If the vector $\bar{\mathbf b}$ of the values
of the basic variables is known at the beginning of each cycle, the pivot element
$\bar a_{rs}$ is immediately determined. Furthermore, however, we need the inverse matrix
$\hat B^{*\,-1}$ of the revised enlarged basis matrix $\hat B^*$ corresponding to the new basis made
by replacing $x_r$ in the current basis with the new basic variable $x_s$. From the current
$(m+1)\times(m+1)$ enlarged basis matrix

$$\hat B = \begin{bmatrix}\mathbf p_1 \cdots \mathbf p_{r-1} & \mathbf p_r & \mathbf p_{r+1} \cdots \mathbf p_m & \mathbf 0\\ c_1 \cdots c_{r-1} & c_r & c_{r+1} \cdots c_m & 1\end{bmatrix}, \tag{2.87}$$

by removing $\mathbf p_r$ and $c_r$ and entering $\mathbf p_s$ and $c_s$ instead, the new enlarged basis matrix
" #
p1    pr 1 ps prC1    pm 0
BO  D (2.88)
c1    cr 1 cs crC1    cm 1

is obtained.
It can be shown that, without the direct calculations of inverse matrices,5 the
new enlarged basis inverse matrix BO  1 can be obtained from BO 1 by performing
the pivot operation on aN rs in BO 1 . Since BO  1 is the same as BO 1 except the rth
column, the product of BO 1 and BO  can be represented as
2 3
1 aN 1s
6
:: :: 7
:
6 7
6 : 7
6 7
6 aN rs 7
BO 1 BO  D 6 7; (2.89)
6 7
:: ::
6
6 : : 7
7
6 7
6
4 aN ms 1 7
5
cNs 1
! !
N
p s ps
where the rth column .aN 1s ; : : : ; aN rs ; : : : ; aN ms ; cNs /T D is BO 1 , and
cNs cs
the i column (i 6D r) is the .m C 1/ dimensional unit vectors such that the i th
element is one.
After introducing the $(m+1)\times(m+1)$ nonsingular square matrix

$$
\hat E = \begin{bmatrix}
1 & & -\bar a_{1s}/\bar a_{rs} & &\\
 & \ddots & \vdots & &\\
 & & 1/\bar a_{rs} & &\\
 & & \vdots & \ddots &\\
 & & -\bar a_{ms}/\bar a_{rs} & 1 &\\
 & & -\bar c_s/\bar a_{rs} & & 1
\end{bmatrix} \tag{2.90}
$$

that differs from the $(m+1)\times(m+1)$ unit matrix $\hat I$ only in the $r$th column,
premultiplying (2.89) by $\hat E$ yields

$$\hat E\hat B^{-1}\hat B^* = \hat I. \tag{2.91}$$

Hence, by postmultiplying both sides of (2.91) by $\hat B^{*\,-1}$, we have the new enlarged
basis inverse matrix

⁵ Recomputing inverse matrices from scratch at every iteration would defeat the purpose of the revised method.
$$\hat B^{*\,-1} = \hat E\hat B^{-1}. \tag{2.92}$$

Setting

$$\hat B^{*\,-1} = \begin{bmatrix}\beta^*_{ij} & \mathbf 0\\ -\boldsymbol\pi^* & 1\end{bmatrix}, \qquad \hat B^{-1} = \begin{bmatrix}\beta_{ij} & \mathbf 0\\ -\boldsymbol\pi & 1\end{bmatrix} \tag{2.93}$$

and calculating the right-hand side of (2.92), we have

$$
\left.\begin{aligned}
\beta^*_{rj} &= \frac{\beta_{rj}}{\bar a_{rs}}, && j = 1, 2, \ldots, m,\\
\beta^*_{ij} &= \beta_{ij} - \frac{\bar a_{is}}{\bar a_{rs}}\beta_{rj}, && i = 1, 2, \ldots, m, \; i \ne r, \; j = 1, 2, \ldots, m,\\
\pi^*_j &= \pi_j + \frac{\bar c_s}{\bar a_{rs}}\beta_{rj}, && j = 1, 2, \ldots, m.
\end{aligned}\right\} \tag{2.94}
$$

This means that performing the pivot operation on $\bar a_{rs}$ in the current enlarged basis
inverse matrix $\hat B^{-1}$ gives the new enlarged basis inverse matrix $\hat B^{*\,-1}$.

By adding the superscript $*$ to the values of $\bar{\mathbf b}$ and $\bar z$ for the new enlarged basis
matrix $\hat B^*$ and premultiplying the enlarged system (2.75) corresponding to $\hat B^*$ by
$\hat B^{*\,-1}$, the right-hand side becomes

$$\begin{pmatrix}\bar{\mathbf b}^*\\ -\bar z^*\end{pmatrix} = \hat B^{*\,-1}\begin{pmatrix}\mathbf b\\ 0\end{pmatrix} = \hat E\hat B^{-1}\begin{pmatrix}\mathbf b\\ 0\end{pmatrix} = \hat E\begin{pmatrix}\bar{\mathbf b}\\ -\bar z\end{pmatrix}. \tag{2.95}$$

Each element of (2.95) is represented by

$$
\left.\begin{aligned}
\bar b^*_r &= \frac{\bar b_r}{\bar a_{rs}},\\
\bar b^*_i &= \bar b_i - \frac{\bar a_{is}}{\bar a_{rs}}\bar b_r, \quad (i \ne r)\\
\bar z^* &= \bar z + \frac{\bar c_s}{\bar a_{rs}}\bar b_r.
\end{aligned}\right\} \tag{2.96}
$$

This also means that performing the pivot operation on $\bar a_{rs}$ on the current $\bar{\mathbf b}$ and $\bar z$
gives the new constants $\bar{\mathbf b}^*$ and $\bar z^*$. As just described, since premultiplying the
current enlarged basis inverse matrix $\hat B^{-1}$ and the right-hand side of (2.81) by $\hat E$
corresponds to a pivot operation, $\hat E$ is called a pivot matrix or an elementary matrix.

Now, the procedure of the revised simplex method, starting with an initial basic
feasible solution, can be summarized as follows.
Procedure of the Revised Simplex Method

Assume that the coefficients $A$, $\mathbf b$, and $\mathbf c$ of the initial feasible canonical form and
the inverse matrix $B^{-1}$ of the initial feasible basis are available.

Step 0 By using $B^{-1}$, calculate

$$\boldsymbol\pi = \mathbf c_B B^{-1}, \quad \mathbf x_B = \bar{\mathbf b} = B^{-1}\mathbf b, \quad \bar z = \boldsymbol\pi\mathbf b,$$

and put them in the revised simplex tableau shown in Table 2.12.
Step 1 Calculate the relative cost coefficients $\bar c_j$ for all indices $j$ of the nonbasic
variables by

$$\bar c_j = c_j - \boldsymbol\pi\mathbf p_j.$$

If all of the relative cost coefficients are nonnegative, i.e., $\bar c_j \ge 0$, then the current
solution is optimal, and stop. Otherwise, find the index $s$ such that

$$\min_{\bar c_j<0}\bar c_j = \bar c_s.$$

Step 2 Calculate

$$\bar{\mathbf p}_s = B^{-1}\mathbf p_s.$$

If all of the elements of $\bar{\mathbf p}_s = (\bar a_{1s}, \bar a_{2s}, \ldots, \bar a_{ms})^T$ are nonpositive, i.e., $\bar a_{is} \le 0$
for all indices $i$ of the basic variables, then the optimal value is unbounded, and
stop.
Step 3 If some of the $\bar a_{is}$ are positive, put the values of $\hat{\mathbf p}_s = (\bar{\mathbf p}_s, \bar c_s)^T$ in the revised
simplex tableau of Table 2.12, and find the index $r$ such that

$$\frac{\bar b_r}{\bar a_{rs}} = \min_{\bar a_{is}>0}\frac{\bar b_i}{\bar a_{is}}.$$

Step 4 Perform the pivot operation on $\bar a_{rs}$, applied to $B^{-1}$, $\boldsymbol\pi$, $\bar{\mathbf b}$, and $\bar z$ of Table 2.12,
and replace $x_r$ with $x_s$ as a new basic variable. The pivot operation for $B^{-1}$
and $\boldsymbol\pi$ is given by (2.94), and that for $\bar{\mathbf b}$ and $\bar z$ is given by (2.96). After
updating the revised simplex tableau in Table 2.12, return to step 1.
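Collecting the steps, a compact sketch of the whole procedure might look as follows (NumPy assumed; for simplicity the initial $B^{-1}$ is computed once with a library inverse, and the update of step 4 uses the pivot matrix $\hat E$ of (2.90) restricted to its first $m$ rows):

```python
import numpy as np

def revised_simplex(A, b, c, basis):
    """Sketch of the revised simplex method; `basis` must index an initial
    feasible basis (e.g., slack variables). Only B^{-1} and b_bar are kept."""
    m, _ = A.shape
    B_inv = np.linalg.inv(A[:, basis])
    b_bar = B_inv @ b
    while True:
        pi = c[basis] @ B_inv                 # multipliers (2.78)
        c_bar = c - pi @ A                    # step 1: relative costs (2.86)
        s = int(np.argmin(c_bar))
        if c_bar[s] >= -1e-9:                 # optimality test
            x = np.zeros(len(c))
            x[basis] = b_bar
            return x, float(c @ x)
        p_bar = B_inv @ A[:, s]               # step 2: pivot column (2.85)
        if np.all(p_bar <= 1e-9):
            raise ValueError("optimal value is unbounded")
        rows = np.where(p_bar > 1e-9)[0]
        r = int(rows[np.argmin(b_bar[rows] / p_bar[rows])])   # step 3
        E = np.eye(m)                         # step 4: pivot matrix of (2.90)
        E[:, r] = -p_bar / p_bar[r]
        E[r, r] = 1.0 / p_bar[r]
        B_inv = E @ B_inv
        b_bar = E @ b_bar
        basis[r] = s

A = np.array([[2., 6., 1., 0., 0.], [3., 2., 0., 1., 0.], [4., 1., 0., 0., 1.]])
x, z = revised_simplex(A, np.array([27., 16., 18.]),
                       np.array([-3., -8., 0., 0., 0.]), [2, 3, 4])
print(x, z)   # x1 = 3, x2 = 3.5, x5 = 2.5, z = -37, as in Example 2.12 below
```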

It should be noted here that in the procedure of the revised simplex method, since
the $(m+1)$th column of the enlarged basis inverse matrix $\hat B^{-1}$ is always $\begin{pmatrix}\mathbf 0\\ 1\end{pmatrix}$, it
is recommended to omit it and to use the revised simplex tableau without the
$(m+1)$th column of $\hat B^{-1}$, as given in Table 2.12.
Table 2.12 Revised simplex tableau

Basis    Basis inverse matrix    Constants    p̂_s
x1
 :
xr            B^{-1}                b̄          p̄_s
 :
xm
z              π                    z̄          c̄_s

When starting the revised simplex method with phase I of the two-phase method,
the enlarged basis inverse matrix $\hat B^{-1}$ can be considered as

$$\hat B^{-1} = \begin{bmatrix}B^{-1} & \mathbf 0 & \mathbf 0\\ -\boldsymbol\pi & 1 & 0\\ -\boldsymbol\sigma & 0 & 1\end{bmatrix}, \tag{2.97}$$

where $\boldsymbol\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_m)$ is the vector of simplex multipliers for the objective
function $w$ of phase I, and the initial enlarged basis inverse matrix $\hat B^{-1}$ is an
$(m+2)\times(m+2)$ unit matrix.

In phase I, the relative cost coefficients $\bar d_j$ are calculated by

$$\bar d_j = d_j - \boldsymbol\sigma\mathbf p_j, \quad (j:\ \text{nonbasic}) \tag{2.98}$$

and the pivot column is determined by $\min_{\bar d_j<0}\bar d_j$. Including the row of $w$, the pivot
operations are performed according to step 4 of the procedure of the revised simplex
method. After eliminating the row of $w$, i.e., the $(m+2)$th row, at the beginning
of phase II, the above-mentioned revised simplex method is continued.
Example 2.12 (Revised simplex method for production planning problem of Example 1.1). Using the revised simplex method, solve the standard form of the
production planning problem of Example 1.1:

$$
\begin{aligned}
\text{minimize}\quad & z = -3x_1 - 8x_2\\
\text{subject to}\quad & 2x_1 + 6x_2 + x_3 = 27\\
& 3x_1 + 2x_2 + x_4 = 16\\
& 4x_1 + x_2 + x_5 = 18\\
& x_j \ge 0, \; j = 1, 2, \ldots, 5.
\end{aligned}
$$

Employing the slack variables $x_3$, $x_4$, and $x_5$ as basic variables, one finds that the
basis matrix $B$ is the $3 \times 3$ unit matrix and its inverse $B^{-1}$ is the same unit matrix.
Thus, from (2.78) and (2.84), it follows that

$$\boldsymbol\pi = \mathbf c_B B^{-1} = (0, 0, 0), \quad \bar{\mathbf b} = B^{-1}\mathbf b = \mathbf b = (27, 16, 18)^T, \quad \bar z = \boldsymbol\pi\mathbf b = 0.$$
Table 2.13 Revised simplex tableau of Example 1.1

Cycle  Basis   Basis inverse matrix        Constants    p̂_s
0      x3       1                            27          [6]
       x4               1                    16           2
       x5                       1            18           1
       z                                      0          -8
1      x2       1/6                           4.5         1/3
       x4      -1/3     1                     7          [7/3]
       x5      -1/6             1            13.5        11/3
       z       -4/3                         -36          -1/3
2      x2       3/14   -1/7                   3.5
       x1      -1/7     3/7                   3
       x5       5/14  -11/7     1             2.5
       z       -9/7    -1/7                 -37

These values are entered in the revised simplex tableau at cycle 0 of Table 2.13.
After calculating $\bar c_j$ for the nonbasic variables, the minimum of them is found
in order to select a new basic variable as follows:

$$\bar c_1 = c_1 - \boldsymbol\pi\mathbf p_1 = -3 - (0,0,0)(2,3,4)^T = -3,$$
$$\bar c_2 = c_2 - \boldsymbol\pi\mathbf p_2 = -8 - (0,0,0)(6,2,1)^T = -8,$$
$$\min_{\bar c_j<0}\bar c_j = \min(-3, -8) = \bar c_2 = -8 < 0.$$

Since $\bar c_2$ is the minimum, $x_2$ becomes a new basic variable. The corresponding
coefficient column vector $\bar{\mathbf p}_2$ is calculated as

$$\bar{\mathbf p}_2 = B^{-1}\mathbf p_2 = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{pmatrix}6\\2\\1\end{pmatrix} = \begin{pmatrix}6\\2\\1\end{pmatrix},$$

and it is filled in on the rightmost column of the revised simplex tableau. Since

$$\min\left(\frac{\bar b_3}{\bar a_{32}}, \frac{\bar b_4}{\bar a_{42}}, \frac{\bar b_5}{\bar a_{52}}\right) = \min\left(\frac{27}{6}, \frac{16}{2}, \frac{18}{1}\right) = \frac{27}{6} = 4.5,$$

$x_3$ becomes a nonbasic variable, and it follows that the pivot element is $\bar a_{32} = 6$,
which is bracketed by [ ] in Table 2.13. After replacing $x_3$ with $x_2$ as a new basic
variable, the pivot operation on $\bar a_{32} = 6$ is applied to the basis inverse matrix and
the constants at cycle 0 of Table 2.13, and the result of cycle 1 is obtained. These
values become $B^{-1}$, $\boldsymbol\pi$, $\bar{\mathbf b}$, and $\bar z$ for the new basis matrix $B = [\,\mathbf p_2\ \mathbf p_4\ \mathbf p_5\,]$, and
the procedure returns to step 1.
From

$$\bar c_1 = c_1 - \boldsymbol\pi\mathbf p_1 = -3 - \left(-\tfrac43, 0, 0\right)(2,3,4)^T = -\tfrac13,$$
$$\bar c_3 = c_3 - \boldsymbol\pi\mathbf p_3 = 0 - \left(-\tfrac43, 0, 0\right)(1,0,0)^T = \tfrac43,$$
$$\min_{\bar c_j<0}\bar c_j = \bar c_1 = -\tfrac13 < 0,$$

$x_1$ becomes a new basic variable. By using $B^{-1}$ at cycle 1, $\bar{\mathbf p}_1$ is calculated as

$$\bar{\mathbf p}_1 = B^{-1}\mathbf p_1 = \begin{bmatrix}1/6 & 0 & 0\\ -1/3 & 1 & 0\\ -1/6 & 0 & 1\end{bmatrix}\begin{pmatrix}2\\3\\4\end{pmatrix} = \begin{pmatrix}1/3\\7/3\\11/3\end{pmatrix},$$

and it is filled in on the rightmost column of the revised simplex tableau. Since

$$\min\left(\frac{\bar b_2}{\bar a_{21}}, \frac{\bar b_4}{\bar a_{41}}, \frac{\bar b_5}{\bar a_{51}}\right) = \min\left(\frac{4.5}{1/3}, \frac{7}{7/3}, \frac{13.5}{11/3}\right) = \frac{7}{7/3} = 3,$$

$x_4$ becomes a nonbasic variable. After replacing $x_4$ with $x_1$, the pivot operation
on $\bar a_{41} = 7/3$, bracketed by [ ], is applied to the basis inverse matrix and the
constants at cycle 1. This yields the result at cycle 2 in Table 2.13. The next basis
matrix becomes $B = [\,\mathbf p_2\ \mathbf p_1\ \mathbf p_5\,]$, and since
$$\bar c_3 = c_3 - \boldsymbol\pi\mathbf p_3 = 0 - \left(-\tfrac97, -\tfrac17, 0\right)(1,0,0)^T = \tfrac97 > 0,$$
$$\bar c_4 = c_4 - \boldsymbol\pi\mathbf p_4 = 0 - \left(-\tfrac97, -\tfrac17, 0\right)(0,1,0)^T = \tfrac17 > 0,$$

an optimal solution

$$x_1 = 3, \; x_2 = 3.5 \;(x_3 = x_4 = 0, \; x_5 = 2.5), \qquad z = -37$$

is obtained. ♦
2.7 Duality

The notion of duality is one of the most important concepts in linear programming.
Basically, associated with a linear programming problem (we may call it the primal
problem), defined by the constraint matrix A, the right-hand side constant vector
b, and the cost coefficient vector c, there is the corresponding linear programming
problem (called the dual problem) which is specified by the same set of coefficients
A, b, and c. These two problems bear interesting and useful relationships to one
another.
Consider the standard form of linear programming

$$
\left.\begin{aligned}
\text{minimize}\quad & z = \mathbf{cx}\\
\text{subject to}\quad & A\mathbf x = \mathbf b\\
& \mathbf x \ge \mathbf 0,
\end{aligned}\right\} \tag{2.99}
$$

where $\mathbf c$ is an $n$-dimensional row vector, $\mathbf x$ is an $n$-dimensional column vector, $A$
is an $m \times n$ matrix, and $\mathbf b$ is an $m$-dimensional column vector. Introducing an
$m$-dimensional row vector $\boldsymbol\pi = (\pi_1, \pi_2, \ldots, \pi_m)$, we define an associated linear
programming problem:

$$
\left.\begin{aligned}
\text{maximize}\quad & v = \boldsymbol\pi\mathbf b\\
\text{subject to}\quad & \boldsymbol\pi A \le \mathbf c.
\end{aligned}\right\} \tag{2.100}
$$

It should be noted here that problem (2.100) is a maximization problem with $m$
unrestricted variables and $n$ inequality constraints. The roles of the variables and
constraints are somewhat reversed in problems (2.99) and (2.100). Usually, the
original problem (2.99) is called the primal problem and the related problem (2.100)
is called the dual problem. The two problems make a primal–dual pair. Similarly, an
element of the vector $\mathbf x$ is called a primal variable, and an element of the vector $\boldsymbol\pi$ is
called a dual variable.

The constraint of the dual problem (2.100), which is a product of the $m$-dimensional
row vector $\boldsymbol\pi$ and the $m \times n$ constraint matrix $A$, can alternatively be
written out as

$$
\left.\begin{aligned}
a_{11}\pi_1 + a_{21}\pi_2 + \cdots + a_{m1}\pi_m &\le c_1\\
a_{12}\pi_1 + a_{22}\pi_2 + \cdots + a_{m2}\pi_m &\le c_2\\
&\;\;\vdots\\
a_{1n}\pi_1 + a_{2n}\pi_2 + \cdots + a_{mn}\pi_m &\le c_n,
\end{aligned}\right\} \tag{2.101}
$$

which implies that the coefficients of the system of inequalities (2.101) are given by
the transposed matrix $A^T$ of $A$.
Any primal problem can be changed into a linear programming problem in a
different format by using the following devices: (i) replace an unconstrained variable
with the difference of two nonnegative variables; (ii) replace an equality constraint
with two opposing inequalities; and (iii) replace an inequality constraint with an
equality by adding a slack or surplus variable.

Table 2.14 Primal–dual relationships

Minimization problem          Maximization problem
Constraints                   Variables
  ≥                     ↔       ≥ 0
  ≤                     ↔       ≤ 0
  =                     ↔       Unrestricted
Variables                     Constraints
  ≥ 0                   ↔       ≤
  ≤ 0                   ↔       ≥
  Unrestricted          ↔       =
For example, consider the following linear programming problem involving not
only equality constraints but also inequality constraints and free variables:

$$
\left.\begin{aligned}
\text{minimize}\quad & z = \mathbf c^1\mathbf x^1 + \mathbf c^2\mathbf x^2 + \mathbf c^3\mathbf x^3\\
\text{subject to}\quad & A_{11}\mathbf x^1 + A_{12}\mathbf x^2 + A_{13}\mathbf x^3 \ge \mathbf b^1\\
& A_{21}\mathbf x^1 + A_{22}\mathbf x^2 + A_{23}\mathbf x^3 \le \mathbf b^2\\
& A_{31}\mathbf x^1 + A_{32}\mathbf x^2 + A_{33}\mathbf x^3 = \mathbf b^3\\
& \mathbf x^1 \ge \mathbf 0, \; \mathbf x^2 \le \mathbf 0.
\end{aligned}\right\} \tag{2.102}
$$

By converting this problem to its standard form by introducing slack and surplus
variables, and substituting $\mathbf x^2 = -\mathbf x^{2+}$ ($\mathbf x^{2+} \ge \mathbf 0$) and $\mathbf x^3 = \mathbf x^{3+} - \mathbf x^{3-}$ ($\mathbf x^{3+} \ge \mathbf 0$,
$\mathbf x^{3-} \ge \mathbf 0$), it can be easily understood that its dual becomes

$$
\left.\begin{aligned}
\text{maximize}\quad & v = \boldsymbol\pi^1\mathbf b^1 + \boldsymbol\pi^2\mathbf b^2 + \boldsymbol\pi^3\mathbf b^3\\
\text{subject to}\quad & \boldsymbol\pi^1 A_{11} + \boldsymbol\pi^2 A_{21} + \boldsymbol\pi^3 A_{31} \le \mathbf c^1\\
& \boldsymbol\pi^1 A_{12} + \boldsymbol\pi^2 A_{22} + \boldsymbol\pi^3 A_{32} \ge \mathbf c^2\\
& \boldsymbol\pi^1 A_{13} + \boldsymbol\pi^2 A_{23} + \boldsymbol\pi^3 A_{33} = \mathbf c^3\\
& \boldsymbol\pi^1 \ge \mathbf 0, \; \boldsymbol\pi^2 \le \mathbf 0.
\end{aligned}\right\} \tag{2.103}
$$

Carefully comparing this dual problem (2.103) with the primal problem (2.102)
gives the relationships between the primal and dual pair summarized in Table 2.14.
For example, an unrestricted variable corresponds to an equality constraint.
By utilizing the relationships in Table 2.14, it is possible to write the dual problem
for a given linear programming problem without going through the intermediate step
of converting the problem to the standard form. From Table 2.14, the symmetric
primal-dual pair given in Table 2.15 is immediately obtained. In a symmetric form,
it is especially easy to see that the dual of the dual is the primal.
The relationship between the primal and dual problems is called duality. The
following theorem, sometimes called the weak duality theorem, is easily proven
and gives us an important relationship between the two problems. In the following,
it is convenient to deal with a primal problem in the standard form.
Table 2.15 Symmetric primal–dual pair

Primal                        Dual
Minimize  z = cx              Maximize  v = πb
Subject to  Ax ≥ b            Subject to  πA ≤ c
            x ≥ 0                         π ≥ 0

Theorem 2.5 (Weak duality theorem). If $\bar{\mathbf x}$ and $\bar{\boldsymbol\pi}$ are feasible primal and dual
solutions, then

$$\bar z = \mathbf c\bar{\mathbf x} \ge \bar{\boldsymbol\pi}\mathbf b = \bar v. \tag{2.104}$$

Proof. From the dual feasibility of $\bar{\boldsymbol\pi}$ and the primal feasibility of $\bar{\mathbf x}$, we have

$$\mathbf c \ge \bar{\boldsymbol\pi}A, \quad\text{and}\quad A\bar{\mathbf x} = \mathbf b, \; \bar{\mathbf x} \ge \mathbf 0,$$

which implies

$$\mathbf c\bar{\mathbf x} \ge \bar{\boldsymbol\pi}A\bar{\mathbf x} = \bar{\boldsymbol\pi}\mathbf b. \qquad\square$$
This theorem shows that the primal (minimization) problem is always bounded
below by the dual (maximization) problem and the dual (maximization) problem is
always bounded above by the primal (minimization) problem if they are feasible.
From the weak duality theorem, several corollaries can be immediately obtained.
Corollary 2.1. If $\bar{\mathbf x}^o$ and $\bar{\boldsymbol\pi}^o$ are feasible primal and dual solutions and $\mathbf c\bar{\mathbf x}^o = \bar{\boldsymbol\pi}^o\mathbf b$
holds, then $\bar{\mathbf x}^o$ and $\bar{\boldsymbol\pi}^o$ are optimal solutions to their respective problems.
This corollary implies that if a pair of feasible solutions can be found to the
primal and dual problems with the same objective value, then they are both optimal.
Corollary 2.2. If the primal problem is unbounded below, then the dual problem is
infeasible.
Corollary 2.3. If the dual problem is unbounded above, then the primal problem is
infeasible.
With these results, the following duality theorem, sometimes called the strong
duality theorem, can be established as a stronger result.
Theorem 2.6 (Strong duality theorem).
(i) If either the primal or the dual problem has a finite optimal solution, then so
does the other, and the corresponding values of the objective functions are the
same.
(ii) If one problem has an unbounded objective value, then the other problem has
no feasible solution.
Proof. (i) It is sufficient, in proving the first statement, to assume that the primal
has a finite optimal solution, and then we show that the dual has a solution with
the same value of the objective function.

To show that the optimal values are the same, let $\mathbf x^o$ solve the primal. Since the
primal must have a basic optimal solution, we may as well assume $\mathbf x^o$ is basic,
with the optimal basis matrix $B^o$ and the vector of basic variables $\mathbf x^o_{B^o}$. Thus

$$B^o\mathbf x^o_{B^o} = \mathbf b, \quad \mathbf x^o_{B^o} \ge \mathbf 0.$$

The simplex multiplier vector associated with $B^o$ is

$$\boldsymbol\pi^o = \mathbf c_{B^o}(B^o)^{-1},$$

where $\mathbf c_{B^o}$ is the vector of cost coefficients of the basic variables. Since $\mathbf x^o$ is optimal,
the relative cost coefficients $\bar c_j$ given by (2.86) are nonnegative:

$$\bar c_j = c_j - \boldsymbol\pi^o\mathbf p_j \ge 0, \quad j = 1, \ldots, n,$$

or, in matrix form,

$$\boldsymbol\pi^o A \le \mathbf c.$$

Thus, $\boldsymbol\pi^o$ satisfies the dual constraints, and the corresponding objective value is

$$v^o = \boldsymbol\pi^o\mathbf b = \mathbf c_{B^o}(B^o)^{-1}\mathbf b = \mathbf c_{B^o}\mathbf x^o_{B^o} = z^o.$$

Hence, from Corollary 2.1, it directly follows that $\boldsymbol\pi^o$ is an optimal solution to the
dual problem.
(ii) The second statement is an immediate consequence of Corollaries 2.2 and 2.3. $\square$

The preceding proof illustrates some important points.
(i) The constraints of the dual problem exactly represent the optimality conditions
of the primal problem, and the relative cost coefficients $\bar c_j$ can be interpreted as
slack variables in them.
(ii) The simplex multiplier vector $\boldsymbol\pi^o$ associated with a primal optimal basis solves
the corresponding dual problem. Since, as shown in the previous section, the
vector $\boldsymbol\pi$ is contained in the bottom row of the revised simplex tableau for the
primal problem, the optimal revised simplex tableau inherently provides a dual
optimal solution.
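Strong duality is also easy to observe numerically. The following sketch (assuming SciPy is available) solves the diet problem of Example 2.3 and its dual, which is worked out in Example 2.13 below, side by side:

```python
# Primal: min cx s.t. Ax >= b, x >= 0; dual: max pi b s.t. pi A <= c, pi >= 0
# (solved here as a minimization of -pi b, since linprog minimizes).
from scipy.optimize import linprog
import numpy as np

A = np.array([[1, 3], [1, 2], [2, 1]])
b = np.array([12, 10, 15])
c = np.array([4, 3])

primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")
dual = linprog(-b, A_ub=A.T, b_ub=c, method="highs")
print(primal.fun, -dual.fun)   # both 31.8: the optimal values coincide
```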
For interpreting the relationships between the primal and dual problems, recall
the two-variable diet problem of Example 2.3. Associated with this problem, we
examine the following problem, though it is somewhat contrived.
Example 2.13 (Dual problem for the diet problem of Example 2.3). A drug
company wants to maximize the total profit from producing three pure tablets $V_1$,
$V_2$, and $V_3$, which contain exactly one mg (milligram) of the nutrients $N_1$, $N_2$, and
$N_3$, respectively. To do so, the company attempts to determine prices for the three
tablets that compare favorably with those of the two foods $F_1$ and $F_2$. Let $\pi_1$, $\pi_2$,
and $\pi_3$ denote the prices in yen of one tablet of $V_1$, $V_2$, and $V_3$, respectively.

One gram of the food $F_1$ provides 1, 1, and 2 mg of $N_1$, $N_2$, and $N_3$ and costs 4
yen. If the housewife replaces one gram of this food $F_1$ with tablets, one tablet of $V_1$,
one tablet of $V_2$, and two tablets of $V_3$ are needed. This would
cost $\pi_1 + \pi_2 + 2\pi_3$, which should be less than or equal to the price of one gram of the
food $F_1$, i.e., $\pi_1 + \pi_2 + 2\pi_3 \le 4$. Similarly, one gram of the food $F_2$ provides 3, 2,
and 1 mg of $N_1$, $N_2$, and $N_3$ and costs 3 yen. Thus, the inequality $3\pi_1 + 2\pi_2 + \pi_3 \le 3$
is imposed. Since the housewife understands that the daily requirements of the
nutrients $N_1$, $N_2$, and $N_3$ are 12, 10, and 15 mg, respectively, the cost of meeting
these requirements by using the tablets would be $v = 12\pi_1 + 10\pi_2 + 15\pi_3$. Thus, the
company should determine the prices of the tablets $V_1$, $V_2$, and $V_3$ so as to maximize
this function subject to the above two inequalities. That is, the company determines
the prices of the three tablets which maximize the profit function

$$v = 12\pi_1 + 10\pi_2 + 15\pi_3$$

subject to the constraints

$$
\begin{aligned}
\pi_1 + \pi_2 + 2\pi_3 &\le 4\\
3\pi_1 + 2\pi_2 + \pi_3 &\le 3\\
\pi_1 \ge 0, \; \pi_2 \ge 0, \; \pi_3 &\ge 0.
\end{aligned}
$$

It should be noted here that this linear programming problem is precisely the dual
of the original diet problem of Example 2.3. ♦
As thus far discussed, the dual variables, corresponding to the constraints of
the primal problem, coincide with the simplex multipliers for the optimal basic
solution of the primal problem. Consider the economic interpretation of the simplex
multipliers. Let

$$\mathbf x^o = (x_1^o, x_2^o, \ldots, x_n^o)^T \quad\text{and}\quad \boldsymbol\pi^o = (\pi_1^o, \pi_2^o, \ldots, \pi_m^o)$$

be the optimal solutions of the primal and dual problems, respectively. From the
strong duality theorem, it follows that

$$z^o = c_1x_1^o + c_2x_2^o + \cdots + c_nx_n^o = \pi_1^ob_1 + \pi_2^ob_2 + \cdots + \pi_m^ob_m = v^o.$$

In this relation, one finds that

$$z^o = \pi_1^ob_1 + \pi_2^ob_2 + \cdots + \pi_m^ob_m, \tag{2.105}$$
and then it can be intuitively understood that when the right-hand side
constant $b_i$ of the $i$th constraint $a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i$ of the primal
problem is changed from $b_i$ to $b_i + 1$, the value of the objective function will increase
by $\pi_i^o$ as long as the basis does not change.

To be more precise, from (2.105), the amount of change in the objective function
$z$ for a small change in $b_i$ is obtained by partially differentiating $z$ with respect to
the right-hand side $b_i$, i.e.,

$$\pi_i^o = \frac{\partial z^o}{\partial b_i}, \quad i = 1, \ldots, m. \tag{2.106}$$

Thus, the simplex multiplier $\pi_i$ indicates how much the value of the objective
function varies for a small change in the right-hand side of the $i$th constraint, and
therefore it is referred to as the shadow price or the marginal price.
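The interpretation (2.106) can be checked by perturbing one right-hand side; the sketch below assumes SciPy and uses the diet problem, whose first shadow price is $\pi_1 = 0.4$:

```python
# Raising the first requirement b1 = 12 by one unit should raise the optimal
# cost by the shadow price pi_1 = 0.4, as long as the optimal basis is unchanged.
from scipy.optimize import linprog
import numpy as np

A = np.array([[1, 3], [1, 2], [2, 1]])
c = np.array([4, 3])

z = lambda b: linprog(c, A_ub=-A, b_ub=-np.asarray(b), method="highs").fun
print(z([13, 10, 15]) - z([12, 10, 15]))   # approximately 0.4
```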
Using the duality theorem, the following result, known as Farkas's theorem,
concerning systems of linear equalities and inequalities, can be easily proven.

Theorem 2.7 (Farkas's theorem). One and only one of the following two
alternatives holds.
(i) There exists a solution $\mathbf x \ge \mathbf 0$ such that $A\mathbf x = \mathbf b$.
(ii) There exists a solution $\boldsymbol\pi$ such that $\boldsymbol\pi A \le \mathbf 0^T$ and $\boldsymbol\pi\mathbf b > 0$.

Proof. Consider the (primal) linear programming problem

$$
\left.\begin{aligned}
\text{minimize}\quad & z = \mathbf 0^T\mathbf x\\
\text{subject to}\quad & A\mathbf x = \mathbf b\\
& \mathbf x \ge \mathbf 0,
\end{aligned}\right\} \tag{2.107}
$$

and its dual

$$
\left.\begin{aligned}
\text{maximize}\quad & v = \boldsymbol\pi\mathbf b\\
\text{subject to}\quad & \boldsymbol\pi A \le \mathbf 0^T.
\end{aligned}\right\} \tag{2.108}
$$

If statement (i) holds, the primal problem is feasible. Since the value of the
objective function $z$ is always zero, any feasible solution is optimal. From the strong
duality theorem, the optimal value of the objective function $v$ of the dual is zero. Thus,
statement (ii) does not hold.

Conversely, if statement (ii) holds, the dual problem has a feasible solution
whose objective value $v$ is positive. If the primal problem had a feasible solution, the
weak duality theorem would give $0 = z \ge v > 0$, which is impossible; therefore the
primal problem has no feasible solution. $\square$
Associated with Farkas’s theorem, Gordon’s theorem also plays an important role
for deriving the optimality conditions of nonlinear programming.
Theorem 2.8 (Gordon's theorem). One and only one of the following two
alternatives holds.
(i) There exists a solution $\mathbf x \ge \mathbf 0$, $\mathbf x \ne \mathbf 0$ such that $A\mathbf x = \mathbf 0$.
(ii) There exists a solution $\boldsymbol\pi$ such that $\boldsymbol\pi A < \mathbf 0^T$.
The following theorem, relating the primal and dual problems, is often useful.

Theorem 2.9 (Complementary slackness theorem). Let $\mathbf x^o$ be a feasible solution
to the primal problem (2.99) and $\boldsymbol\pi^o$ be a feasible solution to the dual problem (2.100). Then they are respectively optimal if and only if the complementary
slackness condition

$$(\mathbf c - \boldsymbol\pi^o A)\mathbf x^o = 0 \tag{2.109}$$

is satisfied.
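For the diet problem, condition (2.109) can be verified directly (a NumPy sketch; the dual optimum (0.4, 0, 1.8) is read off the $z$ row of the optimal tableau in Example 2.14 below):

```python
import numpy as np

# Complementary slackness (2.109) checked on the diet problem: x^o from
# Example 2.7, pi^o from the optimal tableau of the dual simplex method.
A = np.array([[1.0, 3.0], [1.0, 2.0], [2.0, 1.0]])
c = np.array([4.0, 3.0])
x_o = np.array([6.6, 1.8])
pi_o = np.array([0.4, 0.0, 1.8])

print((c - pi_o @ A) @ x_o)   # 0.0, so (2.109) holds
```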

2.8 Dual Simplex Method

There are a number of algorithms for linear programming which start with an
infeasible solution to the primal and iteratively force a sequence of solutions to
become feasible as well as optimal. The most prominent among such methods is
the dual simplex method (Lemke 1954). Operationally, its procedure still involves a
sequence of pivot operations, but with different rules for choosing the pivot element.
Consider a primal problem in the standard form

$$\text{minimize } z = \mathbf{cx}, \quad \text{subject to } A\mathbf x = \mathbf b, \; \mathbf x \ge \mathbf 0,$$

and its dual

$$\text{maximize } v = \boldsymbol\pi\mathbf b, \quad \text{subject to } \boldsymbol\pi A \le \mathbf c.$$

Consider the canonical form of the primal problem starting with the basis
$(x_1, x_2, \ldots, x_m)$, expressed as

$$
\left.\begin{aligned}
\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_m\end{bmatrix} + \sum_{j=m+1}^{n}\bar{\mathbf p}_j x_j &= \begin{pmatrix}\bar b_1\\ \bar b_2\\ \vdots\\ \bar b_m\end{pmatrix}\\
-z + \sum_{j=m+1}^{n}\bar c_j x_j &= -\bar z,
\end{aligned}\right\} \tag{2.110}
$$
where not all right-hand side constants $\bar b_i$ need be nonnegative, i.e., for some $i$,
$\bar b_i \ge 0$ may not hold.

In this canonical form, if $\bar c_j = c_j - \boldsymbol\pi\mathbf p_j \ge 0$ for all $j = m+1, \ldots, n$, which
can be alternatively expressed as $\boldsymbol\pi A \le \mathbf c$ in vector-matrix form, $\boldsymbol\pi$ is a feasible
solution to the dual problem. Thus, the canonical form of the primal problem (2.110)
satisfying $\bar c_j \ge 0$, $j = m+1, \ldots, n$ is called the dual feasible canonical form.
Obviously, if the dual feasible canonical form is also feasible to the primal problem,
i.e., $\bar b_i \ge 0$ holds for all $i$, then it is an optimal canonical form.

Now, in a quite similar way to the selection rule of $\bar c_s$ in the simplex method, find
the pivot row by

$$\min_{\bar b_i<0}\bar b_i = \bar b_r.$$

It should be noted that if $\bar b_i \ge 0$ for all $i$, an optimal solution has already been
obtained.
If $\bar a_{rj} \ge 0$ for all $j$, then from $\bar b_r < 0$, in the $r$th equation

$$x_r = \bar b_r - \sum_{j=m+1}^{n}\bar a_{rj}x_j,$$

the right-hand side is negative for $x_j \ge 0$, $j = m+1, \ldots, n$, which implies that
the value of the basic variable $x_r$ is negative, i.e., $x_r < 0$, for all nonnegative values
of the nonbasic variables $x_j$. This means that the primal problem is infeasible, and
the following theorem is obtained.
Theorem 2.10 (Infeasibility of primal problem). In the $r$th row of the canonical
form (2.110), if

$$\bar b_r < 0, \quad \bar a_{rj} \ge 0, \; j = m+1, m+2, \ldots, n, \tag{2.111}$$

then the primal problem is infeasible.

Now, in the dual feasible canonical form (2.110), let the $r$th row be the pivot row.
Moreover, assume that $\bar b_r$ is negative and at least one $\bar a_{rj}$ is negative.
Then, if the pivot column is found by

$$\min_{\bar a_{rj}<0}\frac{\bar c_j}{-\bar a_{rj}} = \frac{\bar c_s}{-\bar a_{rs}} = \theta, \tag{2.112}$$

$\bar a_{rs}$ is chosen as the pivot element.


By performing the pivot operation on aN rs which means that xr is replaced
with xs as a new basic variable, as shown in Table 2.3, the resulting new relative
cost coefficients cNj for nonbasic variables, which are discriminated by adding the
superscript , are represented as
2.8 Dual Simplex Method 57

aN rj
cNj D cNj 
cNs aN rj D cNj cNs :
aN rs

Obviously, for any column index j of a nonbasic variable such that aN rj  0


holds, from cNs > 0 and aN rs < 0, it directly follows that its relative cost coefficient is
nonnegative, i.e.,

aN rj
cNj D cNj cNs  cNj  0:
aN rs

For any column index j of a nonbasic variable such that aN rj < 0 holds, from (2.112),
it follows that its relative cost coefficient is also nonnegative, i.e.,

cNj
 
cNs
cNj D aN rj  0:
aN rs aN rj

Hence, it holds that cNj  0 for all j , the resulting new canonical form (tableau) is a
dual feasible canonical form.
Moreover, by the pivot operation on aN rs , we also have the updated value of the
objective function

bNr
zN D zN C cNs D zN bNr ;
aN rs

and from bNr < 0 and   0, the value is increased by jbNr j compared to the
previous value of zN. 6
After starting with the dual feasible canonical form, the dual simplex method
improves feasible solutions of the dual problem through a series of pivot operations
in order to seek an optimal solution. Although the dual simplex method uses
pivot operations in a similar way to the simplex method, it employs a different rule
for choosing the pivot element, and the value of the objective function increases with
the number of iterations. The procedure of the dual simplex method, starting with
the dual feasible canonical form, can be summarized as follows.

Procedure of the Dual Simplex Method

Start with the dual feasible canonical form. That is, assume that $\bar c_j \ge 0$ for all $j$.

Step 1 If $\bar b_i \ge 0$ for all indices $i$ of the basic variables, then the current solution
is optimal, and stop. Otherwise, choose the index $r$ for the pivot row such that

$$\min_{\bar b_i<0}\bar b_i = \bar b_r.$$

Step 2 If $\bar a_{rj} \ge 0$ for all indices $j$ of the nonbasic variables, then the primal
problem is infeasible, and stop.
Step 3 If some of the $\bar a_{rj}$ are negative, find the index $s$ for the pivot column such that

$$\min_{\bar a_{rj}<0}\frac{\bar c_j}{-\bar a_{rj}} = \frac{\bar c_s}{-\bar a_{rs}} = \theta.$$

Step 4 Perform the pivot operation on $\bar a_{rs}$ to obtain a new dual feasible
canonical form with $x_s$ replacing $x_r$ as a basic variable. Return to step 1.
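A minimal dense-tableau sketch of this procedure (NumPy assumed; the layout and tolerance are choices of the sketch, not prescribed by the text):

```python
import numpy as np

def dual_simplex(T, basis):
    """Sketch: rows 0..m-1 are constraints of a dual feasible tableau, row m
    holds the relative costs (all >= 0) with the constants in the last column."""
    m = T.shape[0] - 1
    while True:
        consts = T[:m, -1]
        r = int(np.argmin(consts))
        if consts[r] >= -1e-9:
            return T, basis                   # step 1: primal feasible, optimal
        row = T[r, :-1]
        cols = np.where(row < -1e-9)[0]
        if cols.size == 0:
            raise ValueError("primal problem infeasible")     # step 2
        s = int(cols[np.argmin(T[m, cols] / -row[cols])])     # step 3
        T[r] /= T[r, s]                                       # step 4: pivot
        for i in range(T.shape[0]):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s

# Usage on the canonical form of Example 2.14 below:
T = np.array([[-1., -3., 1., 0., 0., -12.],
              [-1., -2., 0., 1., 0., -10.],
              [-2., -1., 0., 0., 1., -15.],
              [ 4.,  3., 0., 0., 0.,   0.]])
T, basis = dual_simplex(T, [2, 3, 4])
print(T[:3, -1], basis)   # x2 = 1.8, x4 = 0.2, x1 = 6.6 -> z = 31.8
```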

Example 2.14 (Dual simplex method for the diet problem of Example 2.3). Using
the dual simplex method, solve the diet problem in the standard form given in
Example 2.3:

$$
\begin{aligned}
\text{minimize}\quad & z = 4x_1 + 3x_2\\
\text{subject to}\quad & x_1 + 3x_2 - x_3 = 12\\
& x_1 + 2x_2 - x_4 = 10\\
& 2x_1 + x_2 - x_5 = 15\\
& x_j \ge 0, \; j = 1, 2, 3, 4, 5.
\end{aligned}
$$

Multiplying both sides of the three equality constraints by $-1$ yields the
dual feasible canonical form

$$
\begin{aligned}
-x_1 - 3x_2 + x_3 &= -12\\
-x_1 - 2x_2 + x_4 &= -10\\
-2x_1 - x_2 + x_5 &= -15\\
4x_1 + 3x_2 - z &= 0\\
x_j \ge 0, \; j = 1, 2, 3, 4, 5.
\end{aligned}
$$

Since cN1 D 4 > 0 and cN2 D 3 > 0, this canonical form with basic variables x3 ,
x4 , and x5 is dual feasible. However, it is not primal feasible because bN1 D 12 < 0,
bN2 D 10 < 0 and bN3 D 9 < 0.
At cycle 0 in Table 2.16, from

min.bN3 ; bN4 ; bN5 / D min . 12; 10; 15/ D 15 < 0;

x5 becomes a nonbasic variable in the next cycle. From


   
cN1 cN2 4 3 4
min ; D min ; D ;
aN 51 aN 52 2 1 2
Table 2.16 Simplex tableau of Example 2.3 (dual simplex method)

Cycle  Basis    x1      x2      x3      x4      x5     Constants
0      x3      -1      -3       1                      -12
       x4      -1      -2               1              -10
       x5     [-2]     -1                       1      -15
       z        4       3                                0
1      x3            [-2.5]     1              -0.5     -4.5
       x4             -1.5              1      -0.5     -2.5
       x1       1       0.5                    -0.5      7.5
       z                1                       2      -30
2      x2               1      -0.4             0.2      1.8
       x4                      -0.6     1      -0.2      0.2
       x1       1               0.2            -0.6      6.6
       z                        0.4             1.8    -31.8

$x_1$ becomes a basic variable in the next cycle, and the pivot element is determined
as $\bar a_{51} = -2$, bracketed by [ ] in Table 2.16. After performing the pivot operation on
$\bar a_{51} = -2$, the tableau at cycle 1 is obtained. At cycle 1, from $\bar b_1 > 0$ and

$$\min(\bar b_3, \bar b_4) = \min(-4.5, -2.5) = -4.5 < 0,$$

$x_3$ becomes a nonbasic variable in the next cycle. From

$$\min\left(\frac{\bar c_2}{-\bar a_{32}}, \frac{\bar c_5}{-\bar a_{35}}\right) = \min\left(\frac{1}{2.5}, \frac{2}{0.5}\right) = \frac{1}{2.5},$$

$x_2$ becomes a basic variable in the next cycle, and the pivot element is determined at
$\bar a_{32} = -2.5$, bracketed by [ ]. After performing the pivot operation on $\bar a_{32} = -2.5$,
the tableau at cycle 2 is obtained. At cycle 2, all of the constants $\bar b_i$ are positive,
and an optimal solution

$$x_1 = 6.6, \; x_2 = 1.8 \;(x_3 = 0, x_4 = 0.2, x_5 = 0), \qquad z = 31.8$$

is obtained. Observe that the tableau of cycle 2 in Table 2.16 coincides with that of
cycle 3 in Table 2.7 when the row of $w$ is dropped. ♦
It should be noted here that the idea of the revised simplex method can be
employed in the dual simplex method as well. In the dual simplex method,
in addition to the data of the initial dual feasible canonical form $A$, $\mathbf b$, and $\mathbf c$, the
coefficients $\bar a_{rj}$ for all indices $j$ of the nonbasic variables with respect to the variable $x_r$
leaving the basis, and the relative cost coefficients $\bar c_j$ for all indices $j$ of the nonbasic
variables, are required, where $\bar c_j$ can be computed by the formula $\bar c_j = c_j - \boldsymbol\pi\mathbf p_j$ of
the revised simplex method. Hence, if a formula for calculating $\bar a_{rj}$ for all indices
$j$ of the nonbasic variables through the basis inverse matrix $B^{-1}$ is given, the dual
simplex method can be expressed in the style of the revised simplex method.
Since the coefficient $\bar a_{rj}$ is the $r$th element of $\bar{\mathbf p}_j$, by using the $r$th row vector of $B^{-1}$,
denoted by $[B^{-1}]_r$, it can be calculated as

$$\bar a_{rj} = [B^{-1}]_r\,\mathbf p_j, \quad j:\ \text{nonbasic}. \tag{2.113}$$

With the above discussion, the procedure of the revised dual simplex method can be
summarized as follows.

Procedure of the Revised Dual Simplex Method

Assume that the coefficients $A$, $\mathbf b$, and $\mathbf c$ of the initial dual feasible canonical form
and the inverse matrix $B^{-1}$ of the initial dual feasible basis are available.

Step 0 Using $B^{-1}$, calculate

$$\boldsymbol\pi = \mathbf c_B B^{-1}, \quad \mathbf x_B = \bar{\mathbf b} = B^{-1}\mathbf b, \quad \bar z = \boldsymbol\pi\mathbf b,$$

and put them in the revised simplex tableau shown in Table 2.12.
Step 1 If $\bar b_i \ge 0$ for all indices $i$ of the basic variables, then the current solution
is optimal, and stop. Otherwise, choose the index $r$ for the pivot row such that

$$\min_{\bar b_i<0}\bar b_i = \bar b_r.$$

Step 2 For all indices $j$ of the nonbasic variables, calculate

$$\bar a_{rj} = [B^{-1}]_r\,\mathbf p_j.$$

If $\bar a_{rj} \ge 0$ for all indices $j$ of the nonbasic variables, then the primal problem is
infeasible, and stop.
Step 3 If some of the $\bar a_{rj}$ are negative, calculate

$$\bar c_j = c_j - \boldsymbol\pi\mathbf p_j$$

and find the index $s$ for the pivot column such that

$$\min_{\bar a_{rj}<0}\frac{\bar c_j}{-\bar a_{rj}} = \frac{\bar c_s}{-\bar a_{rs}} = \theta.$$

In Table 2.12, replace $x_r$ with $x_s$ as a basic variable.
Step 4 Calculate

$$\bar{\mathbf p}_s = B^{-1}\mathbf p_s$$

and put the values of $\hat{\mathbf p}_s = (\bar{\mathbf p}_s, \bar c_s)^T$ in the column $\hat{\mathbf p}_s$ of Table 2.12. Perform the
pivot operation on $\bar a_{rs}$, applied to $B^{-1}$, $\boldsymbol\pi$, $\bar{\mathbf b}$, and $\bar z$ of Table 2.12, and return to step 1.

Table 2.17 Revised dual simplex tableau of Example 2.3

Cycle  Basis   Basis inverse matrix        Constants    p̂_s
0      x3       1                           -12           -1
       x4               1                   -10           -1
       x5                       1           -15          [-2]
       z                                      0            4
1      x3       1              -1/2          -9/2       [-5/2]
       x4               1      -1/2          -5/2        -3/2
       x1                      -1/2          15/2         1/2
       z                       -2            30            1
2      x2      -2/5             1/5           9/5
       x4      -3/5     1      -1/5           1/5
       x1       1/5            -3/5          33/5
       z       -2/5            -9/5         159/5

Example 2.15 (Revised dual simplex method for the diet problem of Example 2.3).
The canonical form

$$
\begin{aligned}
-x_1 - 3x_2 + x_3 &= -12\\
-x_1 - 2x_2 + x_4 &= -10\\
-2x_1 - x_2 + x_5 &= -15\\
4x_1 + 3x_2 - z &= 0\\
x_j \ge 0, \; j = 1, 2, 3, 4, 5,
\end{aligned}
$$

for the diet problem discussed in Examples 2.3 and 2.14, where $x_3$, $x_4$, and $x_5$ are
basic variables, is dual feasible because $\bar c_1 = 4 > 0$ and $\bar c_2 = 3 > 0$. However,
since $\bar b_1 = -12 < 0$, $\bar b_2 = -10 < 0$, and $\bar b_3 = -15 < 0$, the primal problem is not
feasible. The initial basis matrix $B$ is the $3 \times 3$ unit matrix and its inverse $B^{-1}$ is
the same unit matrix. Hence, from (2.78) and (2.84), it follows that

$$\boldsymbol\pi = \mathbf c_B B^{-1} = (0, 0, 0), \quad \bar{\mathbf b} = B^{-1}\mathbf b = \mathbf b = (-12, -10, -15)^T, \quad \bar z = \boldsymbol\pi\mathbf b = 0.$$

These values are entered in the revised dual simplex tableau at cycle 0 of Table 2.17.
At cycle 0 in Table 2.17, since

$$\min(\bar b_3, \bar b_4, \bar b_5) = \min(-12, -10, -15) = -15 < 0,$$

$x_5$ becomes a nonbasic variable in the next cycle, and the index $r$ of the variable
leaving the basis is determined as $r = 3$.

According to (2.113), we calculate the coefficients $\bar a_{rj}$, $r = 3$, $j = 1, 2$ for the
nonbasic variables. That is, using the third row $[B^{-1}]_3$ of $B^{-1}$, $\mathbf p_1$, and $\mathbf p_2$, we have

$$\bar a_{31} = [B^{-1}]_3\,\mathbf p_1 = (0, 0, 1)(-1, -1, -2)^T = -2,$$
$$\bar a_{32} = [B^{-1}]_3\,\mathbf p_2 = (0, 0, 1)(-3, -2, -1)^T = -1.$$

Also, the relative cost coefficients $\bar c_j$, $j = 1, 2$ are calculated as

$$\bar c_1 = c_1 - \boldsymbol\pi\mathbf p_1 = 4 - (0, 0, 0)(-1, -1, -2)^T = 4,$$
$$\bar c_2 = c_2 - \boldsymbol\pi\mathbf p_2 = 3 - (0, 0, 0)(-3, -2, -1)^T = 3.$$

From

$$\min_{\bar a_{rj}<0}\frac{\bar c_j}{-\bar a_{rj}} = \min\left(\frac{\bar c_1}{-\bar a_{31}}, \frac{\bar c_2}{-\bar a_{32}}\right) = \min\left(\frac{4}{2}, \frac{3}{1}\right) = \frac{4}{2},$$

$x_1$ becomes a basic variable in the next cycle. We calculate $\bar{\mathbf p}_1$ as

$$\bar{\mathbf p}_1 = B^{-1}\mathbf p_1 = \begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{pmatrix}-1\\-1\\-2\end{pmatrix} = \begin{pmatrix}-1\\-1\\-2\end{pmatrix}$$

and put $\bar{\mathbf p}_1$ and $\bar c_1 = 4$ in the column $\hat{\mathbf p}_s$ at cycle 0 in Table 2.17. Since $r = 3$, the
pivot element is $-2$, bracketed by [ ]. By performing the pivot operation on $-2$ at
cycle 0, the tableau at cycle 1 is obtained.
cycle 0, the tableau at cycle 1 is obtained.
At cycle 1, the variables $x_3$, $x_4$, and $x_1$ are basic variables, and in Table 2.17,
since $\bar b_1 > 0$ and

$$\min(\bar b_3, \bar b_4) = \min(-9/2, -5/2) = -9/2 < 0,$$

$x_3$ becomes a nonbasic variable in the next cycle, and the index $r$ of the variable
leaving the basis is determined as $r = 1$.

From (2.113), we calculate the coefficients $\bar a_{rj}$, $r = 1$, $j = 2, 5$ for the nonbasic
variables. Using the first row $[B^{-1}]_1$ of $B^{-1}$, $\mathbf p_2$, and $\mathbf p_5$, we have

$$\bar a_{12} = [B^{-1}]_1\,\mathbf p_2 = (1, 0, -1/2)(-3, -2, -1)^T = -5/2,$$
$$\bar a_{15} = [B^{-1}]_1\,\mathbf p_5 = (1, 0, -1/2)(0, 0, 1)^T = -1/2.$$

Also, the relative cost coefficients $\bar c_j$, $j = 2, 5$ are calculated as

$$\bar c_2 = c_2 - \boldsymbol\pi\mathbf p_2 = 3 - (0, 0, -2)(-3, -2, -1)^T = 1,$$
$$\bar c_5 = c_5 - \boldsymbol\pi\mathbf p_5 = 0 - (0, 0, -2)(0, 0, 1)^T = 2.$$
From

$$\min_{\bar a_{rj}<0}\frac{\bar c_j}{-\bar a_{rj}} = \min\left(\frac{\bar c_2}{-\bar a_{12}}, \frac{\bar c_5}{-\bar a_{15}}\right) = \min\left(\frac{1}{5/2}, \frac{2}{1/2}\right) = \frac{1}{5/2},$$

$x_2$ becomes a basic variable in the next cycle. We calculate $\bar{\mathbf p}_2$ as

$$\bar{\mathbf p}_2 = B^{-1}\mathbf p_2 = \begin{bmatrix}1 & 0 & -1/2\\ 0 & 1 & -1/2\\ 0 & 0 & -1/2\end{bmatrix}\begin{pmatrix}-3\\-2\\-1\end{pmatrix} = \begin{pmatrix}-5/2\\-3/2\\1/2\end{pmatrix}$$

and put $\bar{\mathbf p}_2$ and $\bar c_2 = 1$ in the column $\hat{\mathbf p}_s$ at cycle 1 in Table 2.17. Since $r = 1$, the
pivot element is $-5/2$, bracketed by [ ]. By performing the pivot operation on $-5/2$
at cycle 1, the tableau at cycle 2 is obtained.

At cycle 2, the variables $x_2$, $x_4$, and $x_1$ are basic variables. Since all of the
constants $\bar b_i$ are positive, an optimal solution

$$x_1 = \frac{33}{5}, \; x_2 = \frac{9}{5} \;\left(x_3 = 0, \; x_4 = \frac15, \; x_5 = 0\right), \qquad z = \frac{159}{5}$$

is obtained. ♦
Finally, consider sensitivity analysis, which examines the effects of small
changes in the parameters of a linear programming problem on its optimal solution.
In particular, we deal with the case where the right-hand side vector is changed, which
is closely related to the dual simplex method.

Assume that in the standard form of linear programming

$$
\left.\begin{aligned}
\text{minimize}\quad & z = \mathbf{cx}\\
\text{subject to}\quad & A\mathbf x = \mathbf b\\
& \mathbf x \ge \mathbf 0,
\end{aligned}\right\} \tag{2.114}
$$

an optimal basis B is known, and then the corresponding optimal basic solution
xB is

xB D bN D B 1
b: (2.115)

Moreover, the corresponding simplex multiplier vector   is


1
  D cB B ; (2.116)

and the value of the objective function zN is also calculated as

zN D cB xB D cB bN D  b: (2.117)
64 2 Linear Programming

Obviously, the optimality criterion

cNj D cj  pj  0 for all j of the nonbasic variables (2.118)

is satisfied.
In discussing changes in the right-hand side vector, assume that b is changed to
b C b. Consider the following linear programming problem:
9
minimize z D cx =
subject to Ax D b C b (2.119)
;
x  0:

Since the simplex multiplier vector   and the relative cost coefficients cNj for
all indices j of the nonbasic variables do not depend on b as shown in (2.116)
and (2.118), they remain the same even if b is changed to b C b. However, the
basic solution xB itself may no longer be feasible.
The new basic solution and the value of the objective function are calculated as

xB D B 1
.b C b/ D xB C B 1
b (2.120)

and

zN D  .b C b/ D zN C  b; (2.121)

respectively.
Therefore, the following statements hold:
(i) If xB  0 holds, then xB is an optimal solution, and the variation in the objective
function is  b.
(ii) If xB  0 does not hold, since the optimality condition cNj  0 for all indices
j of the nonbasic variables is satisfied, the dual simplex method can be used to
find a new optimal solution.
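To make the computation in (2.120) and (2.121) concrete, the following Python sketch evaluates a right-hand side change for the production planning problem of Example 1.1, which is treated in Example 2.16 below. It is a minimal illustration added here, assuming NumPy is available; it is not part of the original text.

    import numpy as np

    # Data of Example 1.1 in standard form (slack variables x3, x4, x5);
    # the optimal basis found in the text consists of the columns of x2, x1, x5.
    A = np.array([[2.0, 6.0, 1.0, 0.0, 0.0],
                  [3.0, 2.0, 0.0, 1.0, 0.0],
                  [4.0, 1.0, 0.0, 0.0, 1.0]])
    c = np.array([-3.0, -8.0, 0.0, 0.0, 0.0])
    b = np.array([27.0, 16.0, 18.0])

    basis = [1, 0, 4]                      # column indices of x2, x1, x5
    B_inv = np.linalg.inv(A[:, basis])
    pi = c[basis] @ B_inv                  # simplex multipliers, (2.116)

    def rhs_sensitivity(delta_b):
        """Evaluate (2.120) and (2.121) for a change b -> b + delta_b."""
        x_B = B_inv @ (b + delta_b)        # new basic solution, (2.120)
        z = pi @ (b + delta_b)             # new objective value, (2.121)
        return x_B, z, bool(np.all(x_B >= 0))   # if False, reoptimize dually

    print(rhs_sensitivity(np.array([5.0, 0.0, 0.0])))  # stays primal feasible
    print(rhs_sensitivity(np.array([0.0, 7.0, 0.0])))  # -17/2 appears: dual simplex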
Example 2.16 (Sensitivity analysis for the production planning problem of Example 1.1). In the production planning problem of Example 1.1, we calculate optimal solutions when the total amounts of available materials are changed as follows:

(i) The available amount of material M1 is changed from 27 tons to 32 tons.
(ii) The available amount of material M2 is changed from 16 tons to 23 tons.

Although the optimal solution of the original problem is given at cycle 2 of the revised simplex method in Table 2.13, for the sake of convenience, we rewrite the initial tableau (cycle 0) and the optimal tableau (cycle 2) in Table 2.18.

Table 2.18 Initial and optimal tableaux of Example 1.1

    Cycle               Basis   Basis inverse matrix        Constants   p̄_s
    Cycle 0 (initial)   x3      1       0       0           27          [6]
                        x4      0       1       0           16          2
                        x5      0       0       1           18          1
                        z       0       0       0           0           −8
    Cycle 2 (optimal)   x2      3/14    −1/7    0           3.5
                        x1      −1/7    3/7     0           3
                        x5      5/14    −11/7   1           2.5
                        z       9/7     1/7     0           37

From the optimal tableau, one finds that the basic variables are x_B = (x2, x1, x5)^T, the basis inverse matrix is

    B^{-1} = [  3/14   −1/7   0 ]
             [ −1/7     3/7   0 ]
             [  5/14  −11/7   1 ],

and the simplex multiplier vector is

    π = (−9/7, −1/7, 0).

(i) Let the amount of change be Δb = (5, 0, 0)^T; then, with b = (27, 16, 18)^T, it follows that

    x′_B = B^{-1}(b + Δb) = B^{-1}(32, 16, 18)^T = (32/7, 16/7, 30/7)^T
    z̄′ = π(b + Δb) = (−9/7, −1/7, 0)(32, 16, 18)^T = −304/7.

Since x′_B ≥ 0 holds, x′_B is an optimal basic solution, and then an optimal solution

    x2 = 32/7, x1 = 16/7, x5 = 30/7 (x3 = x4 = 0), z = −304/7

is obtained.

(ii) Let the amount of change be Δb = (0, 7, 0)^T; then, with b = (27, 16, 18)^T, it follows that

    x′_B = B^{-1}(b + Δb) = B^{-1}(27, 23, 18)^T = (5/2, 6, −17/2)^T
    z̄′ = π(b + Δb) = (−9/7, −1/7, 0)(27, 23, 18)^T = −38.

Since the negative component −17/2 appears in x′_B, using the revised dual simplex method, we can obtain the optimal tableau shown in Table 2.19.
That is, using the third row [B^{-1}]_3 of B^{-1}, p3, and p4, we have

    ā33 = [B^{-1}]_3 p3 = (5/14, −11/7, 1)(1, 0, 0)^T = 5/14
    ā34 = [B^{-1}]_3 p4 = (5/14, −11/7, 1)(0, 1, 0)^T = −11/7.

Since only ā34 is negative, x4 becomes a basic variable in the next cycle. The relative cost coefficient c̄4 is calculated as

    c̄4 = c4 − π p4 = 0 − (−9/7, −1/7, 0)(0, 1, 0)^T = 1/7.

We calculate p̄4 as

    p̄4 = B^{-1} p4 = (−1/7, 3/7, −11/7)^T.

These values are put in the column of p̄s in the tableau. By performing the pivot operation on [−11/7], a new tableau is obtained.

Table 2.19 Revised simplex tableau after the change to b2 = 23

    Cycle   Basis   Basis inverse matrix        Constants   p̄_s
    1       x2      3/14    −1/7    0           5/2         −1/7
            x1      −1/7    3/7     0           6           3/7
            x5      5/14    −11/7   1           −17/2       [−11/7]
            z       9/7     1/7     0           38          1/7
    2       x2      2/11    0       −1/11       36/11
            x1      −1/22   0       3/11        81/22
            x4      −5/22   1       −7/11       119/22
            z       29/22   0       1/11        819/22

In this example, after only one pivot operation, an optimal solution

    x2 = 36/11, x1 = 81/22 (x4 = 119/22, x3 = x5 = 0), z = −819/22

is obtained. ♦
When the coefficients of the objective function are changed, only the optimality criterion and the value of the objective function are affected by the changes in the cost coefficients c. Hence, the (revised) simplex method is used to find a new optimal solution only when some relative cost coefficients become negative, i.e., c̄j < 0 for some j.

Problems

2.1 Convert the following problems to the standard form of linear programming:

(i) (Absolute value problem)

    minimize    z = Σ_{j=1}^n cj |xj|
    subject to  Σ_{j=1}^n aij xj = bi, i = 1, 2, ..., m,

where cj > 0, j = 1, 2, ..., n, and xj, j = 1, 2, ..., n, are free variables.

(ii) (Fractional programming problem)

    minimize    z = ( Σ_{j=1}^n cj xj + c0 ) / ( Σ_{j=1}^n dj xj + d0 )
    subject to  Σ_{j=1}^n aij xj = bi, i = 1, 2, ..., m
                xj ≥ 0, j = 1, 2, ..., n,

where Σ_{j=1}^n dj xj + d0 > 0 holds for all feasible solutions.

(iii) (Minimax problem)

    minimize    z = max ( Σ_{j=1}^n cj^1 xj, Σ_{j=1}^n cj^2 xj, ..., Σ_{j=1}^n cj^L xj )
    subject to  Σ_{j=1}^n aij xj = bi, i = 1, 2, ..., m
                xj ≥ 0, j = 1, 2, ..., n.

2.2 Formulate the following problems as linear programming problems.

(i) A manufacturing company produces two products A and B. There are 40 h of labor available each day; 1 kg (kilogram) of product A requires 2 h of labor, whereas 1 kg of product B requires 5 h. There are up to 30 machine-hours available per day; the machine processing time for 1 kg of product A is 3 h and for 1 kg of product B is 1 h. There are 39 kg of raw material available each day; 1 kg of product A requires 3 kg of the material, whereas 1 kg of product B requires 4 kg. The profit for product A is 30 thousand yen per kilogram, while that for product B is 20 thousand yen per kilogram, and the manager wishes to maximize the daily profit.

(ii) A firm manufactures cattle feed by mixing two ingredients A and B. Each ingredient contains three nutrients C, D, and E. Each 1 g (gram) of ingredient A contains 9 mg (milligrams) of C, 1 mg of D, and 1 mg of E. Each 1 g of ingredient B contains 2 mg of C, 5 mg of D, and 1 mg of E. The feed must contain at least 54 mg, 25 mg, and 13 mg of C, D, and E, respectively. The costs per gram of ingredients A and B are 9 yen and 15 yen, respectively, and the manager wishes to find the feed mix that satisfies these requirements at minimum cost.
2.3 Assume that x^l = (x1^l, x2^l, ..., xn^l)^T, l = 1, 2, ..., L, are all optimal solutions of a certain linear programming problem. Show that x = Σ_{l=1}^L λl x^l is also an optimal solution of the problem, where λl, l = 1, ..., L, are nonnegative constants satisfying Σ_{l=1}^L λl = 1.

2.4 For a linear programming problem involving a free variable xk, assume that we substitute the difference xk+ − xk− of two nonnegative variables, xk+ ≥ 0, xk− ≥ 0, for xk. Explain why xk+ and xk− cannot both be in the same basis simultaneously.

2.5 Consider the two linear programming problems

    minimize    z = cx          minimize    z = (μc)x
    subject to  Ax = b    and   subject to  Ax = λb
                x ≥ 0,                      x ≥ 0,

where λ and μ are positive real numbers. Explain the relationships between these two problems. What happens if either λ or μ is negative?

2.6 Solve the following problems using the simplex method:

(i) Minimize −2x1 − 5x2
    subject to 2x1 + 6x2 ≤ 27
               8x1 + 6x2 ≤ 45
               3x1 + x2 ≤ 15
               xj ≥ 0, j = 1, 2

(ii) Minimize −3x1 − 2x2
     subject to 2x1 + 5x2 ≤ 130
                6x1 + 3x2 ≤ 110
                xj ≥ 0, j = 1, 2

(iii) Minimize −3x1 − 4x2
      subject to 3x1 + 12x2 ≤ 400
                 6x1 + 3x2 ≤ 600
                 8x1 + 7x2 ≤ 800
                 xj ≥ 0, j = 1, 2

(iv) Minimize −2.5x1 − 5x2 − 3.4x3
     subject to 5x1 + 10x2 + 6x3 ≤ 425
                2x1 − 5x2 + 4x3 ≤ 400
                3x1 − 10x2 + 8x3 ≤ 600
                xj ≥ 0, j = 1, 2, 3

(v) Minimize −12x1 − 18x2 − 8x3 − 40x4
    subject to 2x1 + 5.5x2 + 6x3 + 10x4 ≤ 80
               4x1 + x2 + 4x3 + 20x4 ≤ 50
               xj ≥ 0, j = 2, 3, 4; x1: a free variable

(vi) Minimize 2x1 − 3x2 − x3 + 2x4
     subject to 3x1 + 2x2 − x3 + 3x4 = 2
                x1 + 2x2 + x3 + 2x4 = 3
                xj ≥ 0, j = 1, 2, 3, 4
2.7 Solve the following problems using the simplex method:

(i) Minimize |x1| + 4|x2| + 2|x3|
    subject to 2x1 + x2 ≥ 3
               x1 + 2x2 + x3 = 5,
    where x1, x2, and x3 are free variables

(ii) Minimize (x1 + 4x2 + x3 + 1)/(x1 + 2x2 + x3 + 1)
     subject to 2x1 − 2x2 + x3 ≤ 1
                x1 + 2x2 − x3 ≤ 1.5
                xj ≥ 0, j = 1, 2, 3

(iii) Minimize max (−x1 + 2x2 − x3, 2x1 + 3x2 − 2x3, x1 − x2 − 2x3)
      subject to 2x1 + x2 + x3 ≤ 5
                 2x1 + 2x2 + 5x3 ≤ 10
                 xj ≥ 0, j = 1, 2, 3
2.8 Prove by contradiction that the use of Bland's rule prevents cycling, in the following way.

(i) Let T be the index set of all variables that enter the basis during cycling, and let q be the largest index in T, i.e., q = max{ j | j ∈ T }. The variable xq enters the basis during cycling, and then xq must also leave the basis. Let I be the index set of basic variables before xq enters the basis, and let J = {1, 2, ..., n} \ I be the index set of nonbasic variables. The corresponding canonical form is represented by

    xi + Σ_{j∈J} āij xj = b̄i, i ∈ I,    −z + Σ_{j∈J} c̄j xj = −z̄.

Furthermore, let I′ be the index set of basic variables when xq leaves the basis, and let J′ = {1, 2, ..., n} \ I′ be the index set of nonbasic variables. The corresponding canonical form is represented by

    xi + Σ_{j∈J′} ā′ij xj = b̄i, i ∈ I′,    −z + Σ_{j∈J′} c̄′j xj = −z̄.

Let t ∈ J′ be the index of the variable that enters the basis in place of xq. By the definitions of q and t, it follows that c̄q < 0, c̄′t < 0, ā′qt > 0, t ∈ T, and t < q. In the canonical form for I′ and J′, assume that xt = 1 and xj = 0 for all j ∈ J′ \ {t}. Explain that the relation c̄′t = Σ_{j∈J} c̄j xj holds.

(ii) From c̄′t < 0, there must be a negative term in Σ_{j∈J} c̄j xj of the above relation; note that the term for xq is positive, because c̄q < 0 and xq = −ā′qt < 0. Let the negative term be c̄r xr < 0, r ∈ J. Show that r < q.

(iii) Show that xr = −ā′rt < 0, so that ā′rt > 0, and derive the contradiction.
2.9 Apply the standard simplex method to the following linear programming problem due to E.M.L. Beale, starting with x5, x6, and x7 as the initial basic variables, and verify that the procedure of the simplex method cycles:

    minimize    −(3/4)x1 + 150x2 − (1/50)x3 + 6x4
    subject to  (1/4)x1 − 60x2 − (1/25)x3 + 9x4 + x5 = 0
                (1/2)x1 − 90x2 − (1/50)x3 + 3x4 + x6 = 0
                x3 + x7 = 1
                xj ≥ 0, j = 1, 2, ..., 7.

Then solve the problem using the simplex method incorporating Bland's rule.

2.10 A vector π of the simplex multipliers can also be defined as follows: multiply the ith equation of the original equation system by πi, and subtract all of these from the objective function row for z; then π is determined so that the coefficient of each basic variable xi becomes zero. Explain that this definition and the original definition of the simplex multipliers are equivalent.
2.11 Solve problem 2.6 by the revised simplex method.
2.12 Show that the dual of the linear programming problem

    minimize    x1 + x2 + x3
    subject to  x2 + x3 ≥ −1
                −x1 − x3 ≥ −1
                −x1 + x2 ≥ −1
                xj ≥ 0, j = 1, 2, 3

is equivalent to the primal problem. Such a pair of linear programming problems is known as self-dual. Assuming A is a square matrix, derive the conditions on c, A, and b under which the linear programming problem

    minimize    cx
    subject to  Ax ≥ b
                x ≥ 0

is self-dual.
2.13 Prove the complementary slackness theorem.
2.14 Prove Gordan's theorem.
2.15 Solve the following problems using the dual simplex method:

(i) Minimize 4x1 + 3x2
    subject to x1 + 3x2 ≥ 12
               x1 + 2x2 ≥ 10
               2x1 + x2 ≥ 9
               xj ≥ 0, j = 1, 2

(ii) Minimize 3x1 + 5x2
     subject to 2x1 + 3x2 ≥ 20
                2x1 + 5x2 ≥ 22
                5x1 + 3x2 ≥ 25
                xj ≥ 0, j = 1, 2

(iii) Minimize 4x1 + 2x2 + 3x3
      subject to 5x1 + 3x2 − 2x3 ≥ 10
                 3x1 + 2x2 + 4x3 ≥ 8
                 xj ≥ 0, j = 1, 2, 3

(iv) Minimize 4x1 + 8x2 + 3x3
     subject to 2x1 + 5x2 + 3x3 ≥ 185
                3x1 + 2.5x2 + 8x3 ≥ 155
                8x1 + 10x2 + 4x3 ≥ 600
                xj ≥ 0, j = 1, 2, 3
2.16 In the production planning problem of Example 1.1, assume that the total
amounts of available materials are changed as follows:
(i) The total amount of M1 is changed from 27 tons to 33 tons.
(ii) The total amount of M2 is changed from 16 tons to 21 tons.
In each case, find a new optimal solution starting from the last optimal tableau.
2.17 In the linear programming problem solved in problem 2.6 (i), assume that
the right-hand side constants are changed as follows:
(i) The right-hand side constant 27 is changed to 30.
(ii) The right-hand side constant 45 is changed to 51.
In each case, find a new optimal solution starting from the last optimal tableau.
Chapter 3
Multiobjective Linear Programming

The problem of optimizing multiple conflicting linear objective functions simultaneously under given linear constraints is called the multiobjective linear programming problem. This chapter begins with a discussion of fundamental notions and methods of multiobjective linear programming. After introducing the notion of Pareto optimality, several methods for characterizing Pareto optimal solutions, including the weighting method, the constraint method, and the weighted minimax method, are explained, and goal programming and compromise programming are also introduced. Extensive discussions of interactive multiobjective linear programming conclude this chapter.

3.1 Problem Formulation and Solution Concepts

Recall the production planning problem discussed in Example 1.1.


Example 3.1 (Production planning problem). A manufacturing company desires to
maximize the total profit from producing two products P1 and P2 utilizing three
different materials M1 , M2 , and M3 . The company knows that to produce 1 ton
of product P1 requires 2 tons of material M1 , 3 tons of material M2 , and 4 tons of
material M3 , while to produce 1 ton of product P2 requires 6 tons of material M1 ,
2 tons of material M2 , and 1 ton of material M3 . The total amounts of available
materials are limited to 27, 16, and 18 tons for M1, M2, and M3, respectively. It
also knows that product P1 yields a profit of 3 million yen per ton, while P2 yields
8 million yen. Given these limited materials, the company is trying to figure out
how many units of products P1 and P2 should be produced to maximize the total
profit. This production planning problem can be formulated as the following linear
programming problem:


    minimize    z1 = −3x1 − 8x2
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16        (3.1)
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0.

♦
Example 3.2 (Production planning with environmental consideration). Unfortunately, however, it is pointed out that, in the production process, product P1 yields 5 units of pollution per ton and product P2 yields 4 units of pollution per ton. Thus, the manager should not only maximize the total profit but also minimize the amount of pollution.
For simplicity, assume that the amount of pollution is a linear function of the two variables x1 and x2, namely

    5x1 + 4x2,

where x1 and x2 denote the numbers of tons produced of products P1 and P2, respectively.
Considering environmental quality, the production planning problem can be reformulated as the following two-objective linear programming problem:

    minimize    z1 = −3x1 − 8x2
    minimize    z2 = 5x1 + 4x2
    subject to  2x1 + 6x2 ≤ 27        (3.2)
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0.

♦
The problem of optimizing such multiple conflicting linear objective functions simultaneously under the given linear constraints is called the multiobjective linear programming problem, which can be generalized as follows:

    minimize    z1(x) = c1 x
    minimize    z2(x) = c2 x
                ⋮
    minimize    zk(x) = ck x        (3.3)
    subject to  Ax ≤ b
                x ≥ 0,

where

    ci = (ci1, ..., cin), i = 1, ..., k,
    x = (x1, x2, ..., xn)^T,  A = [aij] (an m × n matrix),  b = (b1, b2, ..., bm)^T.

Such a multiobjective linear programming problem is sometimes expressed as the following vector minimization problem:

    minimize    z(x) = (z1(x), z2(x), ..., zk(x))^T
    subject to  Ax ≤ b        (3.4)
                x ≥ 0,

where z(x) = (z1(x), ..., zk(x))^T = (c1 x, ..., ck x)^T is a k-dimensional vector.
Let the feasible region of the problem be denoted by

    X = { x ∈ R^n | Ax ≤ b, x ≥ 0 }.        (3.5)

Introducing the k × n matrix C = (c1, c2, ..., ck)^T of the coefficients of the objective functions, we can express the multiobjective linear programming problem (3.4) in the more compact form

    minimize    z(x) = Cx
    subject to  x ∈ X.        (3.6)

If we directly apply the notion of optimality for single-objective linear programming to this multiobjective linear programming problem, we arrive at the following notion of a complete optimal solution.

Definition 3.1 (Complete optimal solution). A point x* ∈ X is said to be a complete optimal solution if and only if zi(x*) ≤ zi(x), i = 1, ..., k, for all x ∈ X.

However, in general, such a complete optimal solution that simultaneously minimizes all of the multiple objective functions does not always exist when the objective functions conflict with each other. Thus, instead of a complete optimal solution, a new solution concept, called Pareto optimality, is introduced in multiobjective linear programming.
[Fig. 3.1 Feasible region and Pareto optimal solutions for Example 3.2 in the x1-x2 plane. The feasible region is the convex pentagon with extreme points A(0, 0), B(4.5, 0), C(4, 2), D(3, 3.5), and E(0, 4.5); the directions of decreasing z1 = −3x1 − 8x2 and z2 = 5x1 + 4x2 are indicated.]

Definition 3.2 (Pareto optimal solution). A point x* ∈ X is said to be a Pareto optimal solution if and only if there does not exist another x ∈ X such that zi(x) ≤ zi(x*) for all i and zj(x) ≠ zj(x*) for at least one j.

As can be seen from the definition, the set of Pareto optimal solutions generally consists of an infinite number of points. A Pareto optimal solution is sometimes called a noninferior solution, since it is not inferior to the other feasible solutions.
In addition to Pareto optimality, the following weak Pareto optimality is defined as a slightly weaker solution concept than Pareto optimality.

Definition 3.3 (Weak Pareto optimal solution). A point x* ∈ X is said to be a weak Pareto optimal solution if and only if there does not exist another x ∈ X such that zi(x) < zi(x*), i = 1, ..., k.

For notational convenience, let X^CO, X^P, and X^WP denote the complete optimal, Pareto optimal, and weak Pareto optimal solution sets, respectively. From their definitions, it can easily be understood that the following relation holds:

    X^CO ⊆ X^P ⊆ X^WP.        (3.7)
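Since these solution concepts rest only on componentwise comparisons of objective vectors, they are easy to check on any finite set of candidates. The following Python sketch, an illustration added here and not part of the original text, tests the definitions on the extreme-point objective vectors of Example 3.2, which are examined in Example 3.3 below. Note that a finite comparison of this kind is only illustrative, since optimality over the whole feasible region cannot be established from finitely many points.

    import numpy as np

    def is_pareto(z, candidates):
        # z is nondominated if no other vector is <= z componentwise and != z.
        return not any(np.all(o <= z) and np.any(o < z) for o in candidates)

    def is_weak_pareto(z, candidates):
        # Weak Pareto optimality only forbids strict improvement in every component.
        return not any(np.all(o < z) for o in candidates)

    # Objective vectors (z1, z2) at the extreme points A, B, C, D, E of Example 3.2.
    points = [np.array(p) for p in
              [(0.0, 0.0), (-13.5, 22.5), (-28.0, 28.0), (-37.0, 29.0), (-36.0, 18.0)]]
    for z in points:
        print(tuple(z), is_pareto(z, points), is_weak_pareto(z, points))

Running this reports A, D, and E as nondominated among the extreme points, in agreement with Example 3.3.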

Example 3.3 (Pareto optimal solutions to production planning of Example 3.2). To understand the notion of Pareto optimal solutions in multiobjective linear programming, consider the two-objective linear programming problem given in Example 3.2. The feasible region X for this problem in the x1-x2 plane consists of the boundary lines and interior points of the convex pentagon ABCDE in Fig. 3.1.
Among the five extreme points A, B, C, D, and E, observe that z1 is minimized at the extreme point D(3, 3.5), while z2 is minimized at the extreme point A(0, 0). These two extreme points A and D are obviously Pareto optimal solutions, since their respective objective functions z1 and z2 cannot be improved any further. In addition to the extreme points A and D, the extreme point E and all of the points of the segments AE and ED are Pareto optimal solutions, since they can be improved only at the expense of either z1 or z2. However, all of the remaining feasible points are not Pareto optimal, since there always exist other feasible points which improve both objective functions, or at least one of them without sacrificing the other.
This situation can be more easily understood by observing the feasible region

    Z = { (z1(x), z2(x)) | x ∈ X }        (3.8)

in the z1-z2 plane as depicted in Fig. 3.2.

[Fig. 3.2 Feasible region and Pareto optimal solutions for Example 3.2 in the z1-z2 plane, with extreme points A(0, 0), B(−13.5, 22.5), C(−28, 28), D(−37, 29), and E(−36, 18).]


It is not hard to check that if the objective functions of this problem are changed to

    z1 = −x1 − 4x2,  z2 = 2x1 − x2,

there exists a complete optimal solution with (z1, z2) = (−18, −4.5). Moreover, if they are changed to

    z1 = −4x1 − 5x2,  z2 = 2x1 − x2,

there exist weak Pareto optimal solutions. ♦

3.2 Scalarization Methods

Several computational methods have been proposed for characterizing Pareto optimal solutions, depending on how the multiobjective linear programming problem is scalarized. Among the many possible ways of scalarizing multiobjective linear programming problems, the weighting method, the constraint method, and the weighted minimax method have been studied as means of characterizing their Pareto optimal solutions.

3.2.1 Weighting Method

The weighting method for obtaining a Pareto optimal solution is to solve the weighting problem, formulated by taking the weighted sum of all of the objective functions of the original multiobjective linear programming problem (Kuhn and Tucker 1951; Zadeh 1963). The weighting problem is thus defined by

    minimize over x ∈ X:  wz(x) = Σ_{i=1}^k wi zi(x),        (3.9)

where w = (w1, ..., wk) is a vector of weighting coefficients assigned to the objective functions, assumed to satisfy

    w = (w1, ..., wk) ≥ 0,  w ≠ 0.

The relationships between an optimal solution x* of the weighting problem and the Pareto optimality concept of the multiobjective linear programming problem can be characterized by the following theorems.

Theorem 3.1. If x* ∈ X is an optimal solution of the weighting problem for some w > 0, then x* is a Pareto optimal solution of the multiobjective linear programming problem.

Proof. If an optimal solution x* of the weighting problem is not a Pareto optimal solution of the multiobjective linear programming problem, then there exists x ∈ X such that zj(x) < zj(x*) for some j and zi(x) ≤ zi(x*), i = 1, ..., k, i ≠ j. From w = (w1, ..., wk) > 0, this implies Σ_{i=1}^k wi zi(x) < Σ_{i=1}^k wi zi(x*), so that x* cannot be an optimal solution of the weighting problem, which is a contradiction. □

It should be noted here that the condition of Theorem 3.1 can be replaced with the condition that x* is a unique optimal solution of the weighting problem for w ≥ 0, w ≠ 0.
Theorem 3.2. If x* ∈ X is a Pareto optimal solution of the multiobjective linear programming problem, then x* is an optimal solution of the weighting problem for some w = (w1, ..., wk) ≥ 0, w ≠ 0.

Proof. First we prove that x* is an optimal solution of the linear programming problem

    minimize    1^T Cx
    subject to  Cx ≤ Cx*        (3.10)
                Ax ≤ b, x ≥ 0,

where 1 is a k-dimensional column vector whose elements are all ones.
If x* is not an optimal solution of this problem, then there exists x ∈ X such that Cx ≤ Cx* and 1^T Cx < 1^T Cx*. This means that there exists x ∈ X such that

    1^T Cx = Σ_{i=1}^k ci x < Σ_{i=1}^k ci x* = 1^T Cx*  and  ci x ≤ ci x*, i = 1, ..., k,

or equivalently

    cj x < cj x* for some j  and  ci x ≤ ci x*, i = 1, ..., k, i ≠ j,

which contradicts the fact that x* is a Pareto optimal solution. Hence, x* is an optimal solution of this problem.
Now consider the dual problem

    maximize    (−b^T, −x*^T C^T) y
    subject to  (−A^T, −C^T) y ≤ C^T 1        (3.11)
                y ≥ 0

of the linear programming problem (3.10).
From the (strong) duality theorem of linear programming, it holds that

    (−b^T, −x*^T C^T) y* = 1^T Cx*,

and thus

    (1^T + y2*^T) Cx* = −b^T y1*,  y*^T = (y1*^T, y2*^T).

Letting

    w = 1^T + y2*^T,

we have

    Σ_{i=1}^k wi ci x = w Cx = (1 + y2*)^T Cx  for all x ∈ X.

Next observe that the dual problem of the linear programming problem

    minimize    (1 + y2*)^T Cx
    subject to  Ax ≤ b, x ≥ 0        (3.12)

becomes

    maximize    −b^T u
    subject to  −A^T u ≤ C^T (1 + y2*)        (3.13)
                u ≥ 0.
[Fig. 3.3 Weighting method for Example 3.2 in the x1-x2 plane: the contour wz = 1.8x1 − 0.8x2 of the weighted objective supports the feasible pentagon ABCDE at the extreme point E(0, 4.5).]

Then x = x* and u = y1* are feasible solutions of the corresponding linear programming problems (3.12) and (3.13), and their objective function values are equal, so by duality both are optimal. Hence, for any x ∈ X, it follows that

    Σ_{i=1}^k wi ci x* = (1 + y2*)^T Cx* = min over x ∈ X of (1 + y2*)^T Cx ≤ (1 + y2*)^T Cx = Σ_{i=1}^k wi ci x.

This implies that x* is an optimal solution of the weighting problem for w = (w1, ..., wk) ≥ 0, w ≠ 0, which completes the proof of the theorem. □
The weighting coefficients of the weighting problem give trade-off information between the objective functions. They show how many units of value of one objective function have to be given up in order to obtain one additional unit of value of another objective function. This fact may be intuitively understood as follows. Geometrically, in the k-dimensional z = (z1, ..., zk) space,

    W = w1 z1(x) + w2 z2(x) + ... + wk zk(x) = c (constant)        (3.14)

represents a hyperplane (in the case of two objectives it is a line, and in the case of three objectives, a plane) with normal vector w = (w1, ..., wk). Solving the weighting problem for the given weighting coefficients w > 0 yields the minimum c such that this hyperplane has at least one common point with the feasible region Z in the z = (z1, ..., zk) space, and the corresponding Pareto optimal solution x* is obtained, as in Fig. 3.3.
The hyperplane for this minimum c is a supporting hyperplane of the feasible region Z at the point z(x*) on the Pareto optimal surface. The condition for a small displacement from the point z(x*) to remain on this supporting hyperplane is ΔW = 0, i.e.,

    w1 Δz1 + w2 Δz2 + ... + wk Δzk = 0.        (3.15)

For fixed values Δzj = 0, j = 2, 3, ..., k, j ≠ i, of all components except Δz1 and Δzi, we have

    w1 Δz1 + wi Δzi = 0.        (3.16)

Hence, it holds that

    −∂z1/∂zi = wi/w1.        (3.17)

Therefore, the ratio wi/w1 of the weighting coefficients gives the trade-off rate between the two objective functions at z(x*).
Example 3.4 (Weighting method for production planning of Example 3.2). To illustrate the weighting method, consider the problem of Example 3.2. The corresponding weighting problem becomes:

    minimize    wz(x) = w1(−3x1 − 8x2) + w2(5x1 + 4x2)
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0.

For this problem, if we choose, for example, w1 = 0.4 and w2 = 0.6, we obtain

    wz(x) = 1.8x1 − 0.8x2.

As depicted in Fig. 3.3, it can easily be understood that solving the corresponding weighting problem yields the extreme point E(0, 4.5) as a Pareto optimal solution. Also, as two extreme cases, if we set w1 = 1, w2 = 0 and w1 = 0, w2 = 1, the optimal solutions of the corresponding weighting problems become the extreme points D(3, 3.5) and A(0, 0), respectively. In these cases, although the condition w > 0 of Theorem 3.1 is not satisfied, it can be seen from Fig. 3.3 that these two extreme points are Pareto optimal solutions. ♦
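As a computational illustration, added here under the assumption that SciPy is available, the weighting problem of Example 3.4 can be solved with any LP solver; varying the weights traces out different Pareto optimal extreme points.

    import numpy as np
    from scipy.optimize import linprog

    A_ub = [[2, 6], [3, 2], [4, 1]]
    b_ub = [27, 16, 18]
    c1 = np.array([-3.0, -8.0])   # z1: negative total profit
    c2 = np.array([5.0, 4.0])     # z2: pollution

    def weighting_method(w1, w2):
        # Minimize w1*z1(x) + w2*z2(x) over the feasible region X.
        res = linprog(w1 * c1 + w2 * c2, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * 2)
        x = res.x
        return x, (c1 @ x, c2 @ x)

    print(weighting_method(0.4, 0.6))   # extreme point E(0, 4.5), z = (-36, 18)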

3.2.2 Constraint Method

The constraint method for characterizing Pareto optimal solutions is to solve the constraint problem, formulated by taking one objective function of the multiobjective linear programming problem as the objective function of the constraint problem and letting all the other objective functions be inequality constraints (Haimes and Hall 1974; Haimes et al. 1971). The constraint problem is defined by

    minimize    zj(x)
    subject to  zi(x) ≤ εi, i = 1, 2, ..., k, i ≠ j        (3.18)
                x ∈ X.

The relationships between an optimal solution x* of the constraint problem and Pareto optimality of the multiobjective linear programming problem can be characterized by the following theorems.

Theorem 3.3. If x* ∈ X is a unique optimal solution of the constraint problem for some εi, i = 1, ..., k, i ≠ j, then x* is a Pareto optimal solution of the multiobjective linear programming problem.

Proof. If a unique optimal solution x* of the constraint problem is not a Pareto optimal solution of the multiobjective linear programming problem, then there exists x ∈ X such that zl(x) < zl(x*) for some l and zi(x) ≤ zi(x*), i = 1, ..., k, i ≠ l. This means that either

    zi(x) ≤ zi(x*) ≤ εi, i = 1, ..., k, i ≠ j,  and  zj(x) < zj(x*)

or

    zi(x) ≤ zi(x*) ≤ εi, i = 1, ..., k, i ≠ j,  and  zj(x) = zj(x*),

which contradicts the assumption that x* is a unique optimal solution of the constraint problem for the given εi, i = 1, ..., k, i ≠ j. □

As can easily be understood from the proof of this theorem, in the absence of the uniqueness of a solution in the theorem, only weak Pareto optimality is guaranteed.

Theorem 3.4. If x* ∈ X is a Pareto optimal solution of the multiobjective linear programming problem, then x* is an optimal solution of the constraint problem for some εi, i = 1, ..., k, i ≠ j.

Proof. If a Pareto optimal solution x* ∈ X of the multiobjective linear programming problem is not an optimal solution of the constraint problem with εi = zi(x*), i = 1, ..., k, i ≠ j, then there exists x ∈ X such that

    zj(x) < zj(x*)  and  zi(x) ≤ εi = zi(x*), i = 1, ..., k, i ≠ j,

which contradicts the fact that x* is a Pareto optimal solution of the multiobjective linear programming problem. □
Example 3.5 (Constraint method for production planning of Example 3.2). To illustrate the constraint method, consider Example 3.2. The constraint problem for j = 1 becomes

    minimize    z1 = −3x1 − 8x2
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                z2 = 5x1 + 4x2 ≤ ε2
                x1 ≥ 0, x2 ≥ 0.

Here, for example, if we choose ε2 = 18, as illustrated in Fig. 3.4, it can be understood that the optimal solution of this constraint problem occurs at the extreme point E(−36, 18) and, hence, yields a Pareto optimal solution. Also, if we choose ε2 = 14, we obtain a Pareto optimal solution such that (z1, z2) = (−28, 14), as in Fig. 3.4. ♦

[Fig. 3.4 Constraint method for Example 3.2 in the z1-z2 plane: the constraint levels ε2 = 18 and ε2 = 14 yield the Pareto optimal points E(−36, 18) and (−28, 14), respectively.]
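A corresponding computational sketch for the constraint problem (3.18), again an illustration assuming SciPy is available, simply appends the row z2(x) ≤ ε2 to the constraints of Example 3.2:

    from scipy.optimize import linprog

    def constraint_method(eps2):
        # Feasible region of Example 3.2 plus the extra row 5x1 + 4x2 <= eps2.
        A_ub = [[2, 6], [3, 2], [4, 1], [5, 4]]
        b_ub = [27, 16, 18, eps2]
        res = linprog([-3, -8], A_ub=A_ub, b_ub=b_ub)
        x1, x2 = res.x
        return (x1, x2), (-3 * x1 - 8 * x2, 5 * x1 + 4 * x2)

    print(constraint_method(18))   # extreme point E: z = (-36, 18)
    print(constraint_method(14))   # z = (-28, 14)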

3.2.3 Weighted Minimax Method

The weighted minimax method for characterizing Pareto optimal solutions is to solve the following weighted minimax problem (Bowman 1976):

    minimize over x ∈ X:  max over i = 1, ..., k of { wi zi(x) },        (3.19)

or equivalently

    minimize    v
    subject to  wi zi(x) ≤ v, i = 1, 2, ..., k        (3.20)
                x ∈ X,

where v is an auxiliary variable.
Here, without loss of generality, it can be assumed that zi(x) > 0, i = 1, ..., k, for all x ∈ X: for objective functions not satisfying this condition, using their individual minima zi^min = min over x ∈ X of zi(x) and setting

    ẑi(x) = zi(x) − zi^min,

we obtain shifted objective functions with ẑi(x) ≥ 0 for all x ∈ X.

[Fig. 3.5 Graphical interpretation of the weighted minimax method in the z1-z2 plane: the contour max{w1 z1, w2 z2} = c consists of the two edges of a rectangle with corner (c/w1, c/w2).]

Geometrically, in the weighted minimax problem, the contours max_i {wi zi} = c (constant) in the objective function space are two edges of rectangles corresponding to the given weighting coefficients. Hence, solving the weighted minimax problem yields Pareto optimal solutions such that these rectangles support the feasible region

    Z = { z(x) | x ∈ X },

as depicted in Fig. 3.5.


The relationships between an optimal solution x* of the weighted minimax problem and Pareto optimality of the multiobjective linear programming problem can be characterized by the following theorems.

Theorem 3.5. If x* ∈ X is a unique optimal solution of the weighted minimax problem for some w = (w1, ..., wk) ≥ 0, then x* is a Pareto optimal solution of the multiobjective linear programming problem.

Proof. If a unique optimal solution x* of the weighted minimax problem for some w = (w1, ..., wk) ≥ 0 is not a Pareto optimal solution, then there exists x ∈ X such that zj(x) < zj(x*) for some j and zi(x) ≤ zi(x*), i = 1, ..., k, i ≠ j. In view of w = (w1, ..., wk) ≥ 0, it follows that

    wi zi(x) ≤ wi zi(x*), i = 1, ..., k.

Hence,

    max over i = 1, ..., k of wi zi(x) ≤ max over i = 1, ..., k of wi zi(x*).

This contradicts the fact that x* is a unique optimal solution of the weighted minimax problem for w = (w1, ..., wk) ≥ 0. □

From the proof of this theorem, in the absence of the uniqueness of a solution in Theorem 3.5, only weak Pareto optimality is guaranteed.
Theorem 3.6. If x* ∈ X is a Pareto optimal solution of the multiobjective linear programming problem, then x* is an optimal solution of the weighted minimax problem for some w = (w1, ..., wk) > 0.

Proof. For a Pareto optimal solution x* ∈ X of the multiobjective linear programming problem, choose w = (w1, ..., wk) > 0 such that wi zi(x*) = v*, i = 1, ..., k. Now assume that x* is not an optimal solution of the weighted minimax problem. Then there exists x ∈ X such that

    wi zi(x) < wi zi(x*) = v*, i = 1, ..., k.

Noting w = (w1, ..., wk) > 0, this implies the existence of x ∈ X such that

    zi(x) < zi(x*), i = 1, ..., k,

which contradicts the assumption that x* is a Pareto optimal solution. □
Example 3.6 (Weighted minimax method for production planning of Example 3.2). To illustrate the weighted minimax method, consider the problem of Example 3.2. Observe that the individual minima of z1(x) and z2(x) are min over x ∈ X of z1(x) = −37 and min over x ∈ X of z2(x) = 0, respectively. If we substitute ẑ1(x) = z1(x) − (−37) and choose w1 = 0.8, w2 = 0.4, then the weighted minimax problem becomes

    minimize    max (−2.4x1 − 6.4x2 + 29.6, 2x1 + 1.6x2)
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0,

or equivalently

    minimize    v
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                −2.4x1 − 6.4x2 + 29.6 ≤ v
                2x1 + 1.6x2 ≤ v
                x1 ≥ 0, x2 ≥ 0.

[Fig. 3.6 Weighted minimax method for Example 3.2, shown in the ẑ1-z2 plane and the z1-z2 plane; the supporting rectangle touches the feasible region at (ẑ1, z2) = (7.4, 14.8), i.e., at (z1, z2) = (−29.6, 14.8).]

Noting that ẑ1 = z1 + 37, as illustrated in Fig. 3.6, it can be understood that the vector (7.4, 14.8) of objective function values of this problem is the point obtained by moving the point (−29.6, 14.8) of the original problem by 37 along the z1 axis. Hence, the point (−29.6, 14.8) is the vector of the original objective function values which corresponds to the Pareto optimal solution (x1, x2) = (0, 3.7). ♦
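The equivalent form (3.20) of Example 3.6 is itself an ordinary linear program once the auxiliary variable v is added. The following sketch, an illustration assuming SciPy is available, reproduces the solution (x1, x2) = (0, 3.7):

    from scipy.optimize import linprog

    # Variables (x1, x2, v): minimize v.
    c = [0, 0, 1]
    A_ub = [
        [2, 6, 0], [3, 2, 0], [4, 1, 0],   # feasible region X
        [-2.4, -6.4, -1],                  # w1*(z1 + 37) <= v, moved to rhs -29.6
        [2, 1.6, -1],                      # w2*z2 <= v
    ]
    b_ub = [27, 16, 18, -29.6, 0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (None, None)])
    x1, x2, v = res.x
    print((x1, x2), (-3 * x1 - 8 * x2, 5 * x1 + 4 * x2))   # (0, 3.7), (-29.6, 14.8)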
From Theorems 3.3 and 3.5, if the uniqueness of the optimal solution x* of the scalarizing problem is not guaranteed, it is necessary to perform a Pareto optimality test on x*. The Pareto optimality test for x* can be performed by solving the following linear programming problem with the decision variables x = (x1, ..., xn)^T and ε = (ε1, ..., εk)^T:

    maximize    Σ_{i=1}^k εi
    subject to  zi(x) + εi = zi(x*), i = 1, ..., k        (3.21)
                x ∈ X,  ε = (ε1, ..., εk) ≥ 0.

For an optimal solution (x̄, ε̄) of this linear programming problem, the following theorem holds.

Theorem 3.7. For an optimal solution (x̄, ε̄) of the Pareto optimality test problem (3.21), the following statements hold.
(i) If ε̄i = 0 for all i = 1, ..., k, then x* is a Pareto optimal solution of the multiobjective linear programming problem.
(ii) If ε̄i > 0 for at least one i, then x* is not a Pareto optimal solution of the multiobjective linear programming problem. Instead of x*, x̄ is a Pareto optimal solution corresponding to the scalarization problem.

Proof. (i) If x* is not a Pareto optimal solution of the multiobjective linear programming problem, then there exists x ∈ X such that zj(x) < zj(x*) for some j and zi(x) ≤ zi(x*), i = 1, ..., k, i ≠ j; the pair (x, ε) with εi = zi(x*) − zi(x) ≥ 0 is then feasible for (3.21) with Σ εi > 0, which contradicts the assumption that ε̄i = 0 for all i = 1, ..., k.
(ii) If ε̄i > 0 for at least one i and x̄ is not a Pareto optimal solution of the multiobjective linear programming problem, then there exists x ∈ X such that zj(x) < zj(x̄) for some j and zi(x) ≤ zi(x̄), i = 1, ..., k, i ≠ j. Hence, there exists x ∈ X such that z(x) + ε′ = z(x̄) for some ε′ ≥ 0, ε′ ≠ 0, and then z(x) + ε′ + ε̄ = z(x*). This contradicts the optimality of ε̄. □

As discussed above, when an optimal solution x* is not unique, x* is not always Pareto optimal, and the test problem (3.21) is then solved. It should be noted here that, although the formulation is somewhat more complex, a Pareto optimal solution can be obtained directly by solving the following augmented minimax problem:

    minimize    v
    subject to  wi zi(x) + ρ Σ_{i=1}^k wi zi(x) ≤ v, i = 1, 2, ..., k        (3.22)
                x ∈ X,

where ρ is a sufficiently small positive number.
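The Pareto optimality test (3.21) is again a linear program. The sketch below, an illustration assuming SciPy and NumPy are available, applies it to the data of Example 3.2; the two test points are assumptions chosen for the demonstration.

    import numpy as np
    from scipy.optimize import linprog

    def pareto_test(x_star):
        C = np.array([[-3.0, -8.0], [5.0, 4.0]])    # objective matrix of (3.6)
        z_star = C @ x_star
        # Variables (x1, x2, eps1, eps2); linprog minimizes, so negate the sum.
        c = [0, 0, -1, -1]
        A_eq = np.hstack([C, np.eye(2)])            # z(x) + eps = z(x_star)
        A_ub = np.hstack([np.array([[2.0, 6], [3, 2], [4, 1]]), np.zeros((3, 2))])
        res = linprog(c, A_ub=A_ub, b_ub=[27, 16, 18], A_eq=A_eq, b_eq=z_star)
        eps = res.x[2:]                             # res.x[:2] is the point x-bar
        return eps, bool(np.all(eps <= 1e-9))

    print(pareto_test(np.array([0.0, 4.5])))   # E is Pareto optimal: eps = (0, 0)
    print(pareto_test(np.array([1.0, 1.0])))   # dominated: eps = (7, 0), not Pareto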

3.3 Linear Goal Programming

The term "goal programming" first appeared in the 1961 text by Charnes and Cooper to deal with multiobjective linear programming problems, under the assumption that the decision maker (DM) can specify goals or aspiration levels for the objective functions. Subsequent works on goal programming have been numerous, including texts on goal programming by Ijiri (1965), Lee (1972), and Ignizio (1976, 1982) and survey papers by Charnes and Cooper (1977) and Ignizio (1983).
The key idea behind goal programming is to minimize the deviations from the goals or aspiration levels set by the DM. In most cases, goal programming therefore seems to yield a satisficing solution, in the same spirit as March and Simon (1958), rather than an optimal solution.
As discussed in the previous section, the multiobjective linear programming problem can in general be formulated as follows:

    minimize over x ∈ X:  z(x) = (z1(x), ..., zk(x))^T,        (3.23)

where z1(x) = c1 x, ..., zk(x) = ck x are k distinct objective functions of the decision variable vector x and

    X = { x ∈ R^n | Ax ≤ b, x ≥ 0 }        (3.24)

is the linearly constrained feasible region.

For linear goal programming, however, a set of k goals is specified by the DM for the k objective functions zi(x), i = 1, ..., k, and the multiobjective linear programming problem is converted into the problem of coming "as close as possible" to the set of specified goals, which may not be simultaneously attainable. The general formulation of goal programming thus becomes

    minimize over x ∈ X:  d(z(x), ẑ),        (3.25)

where ẑ = (ẑ1, ..., ẑk) is the goal vector specified by the DM and d(z(x), ẑ) represents the distance between z(x) and ẑ in some selected norm.
The simplest version of (3.25), where the absolute value or the l1 norm is used, is

    minimize over x ∈ X:  d1(z(x), ẑ) = Σ_{i=1}^k |ci x − ẑi|.        (3.26)

More generally, using the l1 norm with weights (the weighted l1 norm), it becomes

    minimize over x ∈ X:  d1w(z(x), ẑ) = Σ_{i=1}^k wi |ci x − ẑi|,        (3.27)

where wi is a nonnegative weight for the ith objective function.
This linear goal programming problem can easily be converted into an equivalent linear programming problem by introducing the auxiliary variables

    di+ = (1/2){ |zi(x) − ẑi| + (zi(x) − ẑi) }        (3.28)

and

    di− = (1/2){ |zi(x) − ẑi| − (zi(x) − ẑi) }        (3.29)

for each i = 1, ..., k. Thus, the equivalent linear goal programming formulation of the problem (3.27) becomes

    minimize    Σ_{i=1}^k wi (di+ + di−)
    subject to  zi(x) − di+ + di− = ẑi, i = 1, ..., k
                Ax ≤ b, x ≥ 0        (3.30)
                di+ · di− = 0, i = 1, ..., k
                di+ ≥ 0, di− ≥ 0, i = 1, ..., k.
It is appropriate to consider here the practical significance of di+ and di−. From the definitions of di+ and di−, it can easily be understood that

    di+ = zi(x) − ẑi  if zi(x) ≥ ẑi,  di+ = 0  if zi(x) < ẑi        (3.31)

and

    di− = ẑi − zi(x)  if ẑi ≥ zi(x),  di− = 0  if ẑi < zi(x).        (3.32)

Thus, di+ and di− represent, respectively, the overachievement and underachievement of the ith goal and, hence, are called deviational variables. Obviously, overachievement and underachievement can never occur simultaneously: when di+ > 0, di− must be zero, and vice versa. This fact is reflected in the third constraint, di+ · di− = 0, i = 1, ..., k, of (3.30), which is automatically satisfied at every iteration of the simplex method of linear programming because di+ and di− never become basic variables simultaneously. Since the third constraint of (3.30) is thus always satisfied in the simplex method, it is clear that the simplex method can be applied to solve this type of linear goal programming problem.
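The equivalent problem (3.30), without its automatically satisfied complementarity rows, is directly solvable by any LP code. The following sketch, an illustration assuming SciPy and NumPy are available, applies it to the data of Example 3.2; the goal vector and the equal weights used here are assumptions for the demonstration, not values from the text.

    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[-3.0, -8.0], [5.0, 4.0]])   # objective coefficients
    z_hat = np.array([-26.0, 18.0])            # assumed goal vector of the DM
    w = np.array([1.0, 1.0])                   # assumed weights

    k, n = C.shape
    # Variables (x1, x2, d1+, d2+, d1-, d2-): minimize sum_i w_i*(d_i+ + d_i-).
    c = np.concatenate([np.zeros(n), w, w])
    A_eq = np.hstack([C, -np.eye(k), np.eye(k)])   # z_i(x) - d_i+ + d_i- = z_hat_i
    A_ub = np.hstack([np.array([[2.0, 6], [3, 2], [4, 1]]), np.zeros((3, 2 * k))])
    res = linprog(c, A_ub=A_ub, b_ub=[27, 16, 18], A_eq=A_eq, b_eq=z_hat)
    # Both goals happen to be exactly attainable here, so all deviations vanish.
    print(res.x[:n], res.fun)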
Depending on the decision situation, the DM may sometimes be concerned only with either the overachievement or the underachievement of a specified goal. Such a situation can be incorporated into the goal programming formulation by assigning over- and underachievement weights wi+ and wi− to di+ and di−, respectively. For example, if each zi(x) is a cost-type objective function with goal ẑi, overachievement is not desirable. For this case, we set wi+ = 1 and wi− = 0, and the problem (3.30) is modified as follows:

    minimize    Σ_{i=1}^k wi+ di+
    subject to  zi(x) − di+ + di− = ẑi, i = 1, ..., k
                Ax ≤ b, x ≥ 0        (3.33)
                di+ · di− = 0, i = 1, ..., k
                di+ ≥ 0, di− ≥ 0, i = 1, ..., k.

Conversely, for a benefit-type objective function, underachievement is not desirable. For such a case, we set wi− = 1 and wi+ = 0 and replace Σ_{i=1}^k wi+ di+ with Σ_{i=1}^k wi− di− as the objective function of (3.33). This particular type of goal programming is called one-sided goal programming.
The linear goal programming formulation can also be modified into a more general form by introducing preemptive priorities Pl in place of, or together with, the numerical weights wi+, wi− ≥ 0. When the objective functions z1(x), ..., zk(x) are divided into L ordinal ranking classes having the preemptive priorities P1, ..., PL in decreasing order, it is convenient to write

    Pl >> Pl+1, l = 1, ..., L − 1        (3.34)

to mean that no real number t, however large, can produce

    t Pl+1 ≥ Pl, l = 1, ..., L − 1,        (3.35)

where 1 ≤ L ≤ k.
By incorporating such preemptive priorities Pl together with the over- and underachievement weights wi+ and wi−, the general linear goal programming formulation takes on the following form:

    minimize    Σ_{l=1}^L Pl ( Σ_{i∈Il} (wi+ di+ + wi− di−) )
    subject to  zi(x) − di+ + di− = ẑi, i = 1, ..., k
                Ax ≤ b, x ≥ 0        (3.36)
                di+ · di− = 0, i = 1, ..., k
                di+ ≥ 0, di− ≥ 0, i = 1, ..., k,

where Il (≠ ∅) is the index set of the objective functions in the lth priority class. Observe that when there are k distinct ordinal ranking classes, with the ith objective function ci x belonging to the ith priority class, i.e., L = k, the objective function of (3.36) simply becomes

    Σ_{i=1}^k Pi (wi+ di+ + wi− di−).        (3.37)

To solve this type of linear goal programming problem, we begin by trying to achieve the goals of all objective functions in the first priority class. Having done that, we try to satisfy the goals in the second priority class, keeping the goals in the first class satisfied. The process is repeated until either a unique solution is obtained at some stage or all priority classes have been considered. This is equivalent to solving, at most, L linear programming problems sequentially, to which the simplex method can easily be applied with some modifications.
Further details concerning the algorithm, extensions, and applications can be found in the texts of Lee (1972) and Ignizio (1976, 1982).
Example 3.7 (Production planning problem with goals). To illustrate the linear goal programming method, assume that a manager of the manufacturing company establishes the following goals P1, P2, and P3 for the production planning problem incorporating environmental quality in Example 3.2. Here, to avoid confusion in the notation, let Q and R denote the product names instead of P1 and P2 used in the previous examples.

P1: To achieve at least 26 million yen of total profit.
P2: To keep the pollution level below 18 units.
P3: To produce at least 2 tons of Q and 3 tons of R. However, assign the weight ratio of 3 to 8 for Q and R by considering the profit contribution ratio of these two products.

The corresponding linear goal programming problem can be formulated as follows:

    minimize    P1 d1− + P2 d2+ + P3 (3d3− + 8d4−)
    subject to  3x1 + 8x2 − d1+ + d1− = 26
                5x1 + 4x2 − d2+ + d2− = 18
                x1 − d3+ + d3− = 2
                x2 − d4+ + d4− = 3
                2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0
                di+ ≥ 0, di− ≥ 0, i = 1, 2, 3, 4.

[Fig. 3.7 Graphical solution for Example 3.7 in the x1-x2 plane: the goal lines 3x1 + 8x2 = 26 and 5x1 + 4x2 = 18 are drawn on the feasible region ABCDE, and the final solution occurs at (x1, x2) = (1.2, 3).]

To obtain an optimal solution for this simple example graphically in the x1-x2 plane, the goal lines of the first two priorities are depicted together with the original feasible region in Fig. 3.7. Although only the decision variables x1 and x2 are used in this graph, the effect of increasing either di+ or di− is indicated by the arrows. The region that satisfies both the original constraints and the first priority goal, i.e., d1+ ≥ 0 and d1− = 0, is shown as the cross-hatched region. To achieve the second priority goal without degrading the achievement of the first priority goal, the feasible area must be limited to the crisscross-hatched area in Fig. 3.7. However, as can be seen, concerning the third priority goals, d3− cannot be reduced to zero without degrading the higher-priority goals. As just described, the final solution of this problem occurs at the point (x1, x2) = (1.2, 3), at which only the first and second priority goals are fully satisfied. ♦
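The sequential solution described above can be mimicked by solving one LP per priority class and freezing each attained level as an extra constraint. The following sketch, an illustration assuming SciPy and NumPy are available and not the book's own code, reproduces the final solution (x1, x2) = (1.2, 3) of Example 3.7:

    import numpy as np
    from scipy.optimize import linprog

    # Variables: (x1, x2, d1+, d1-, d2+, d2-, d3+, d3-, d4+, d4-).
    A_eq = np.array([
        [3, 8, -1, 1, 0, 0, 0, 0, 0, 0],    # profit goal 26
        [5, 4, 0, 0, -1, 1, 0, 0, 0, 0],    # pollution goal 18
        [1, 0, 0, 0, 0, 0, -1, 1, 0, 0],    # at least 2 tons of Q
        [0, 1, 0, 0, 0, 0, 0, 0, -1, 1],    # at least 3 tons of R
    ])
    b_eq = [26, 18, 2, 3]
    A_ub = [[2, 6] + [0] * 8, [3, 2] + [0] * 8, [4, 1] + [0] * 8]
    b_ub = [27, 16, 18]

    priorities = [
        np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0], float),   # P1: d1-
        np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0], float),   # P2: d2+
        np.array([0, 0, 0, 0, 0, 0, 0, 3, 0, 8], float),   # P3: 3*d3- + 8*d4-
    ]
    extra_rows, extra_rhs = [], []
    for c in priorities:
        res = linprog(c, A_ub=A_ub + extra_rows, b_ub=b_ub + extra_rhs,
                      A_eq=A_eq, b_eq=b_eq)
        # Freeze the attained level: c @ vars <= achieved optimum.
        extra_rows.append(list(c))
        extra_rhs.append(res.fun)
    print(res.x[:2])   # final solution (1.2, 3.0)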

3.4 Compromise Programming

A well-known extension of the goal programming approach is obtained if the goal vector ẑ = (ẑ1, ..., ẑk) is replaced with the so-called ideal or utopia vector z^min = (z1^min, ..., zk^min), where zi^min = min over x ∈ X of zi(x), i = 1, ..., k. The resulting problem can be interpreted as an attempt to minimize the deviation from the ideal or utopia vector (point). Realizing that the ideal point is generally infeasible in most multiobjective linear programming problems with conflicting objective functions, Yu (1973) and Zeleny (1973) introduced the concept of compromise solutions. Geometrically, the compromise solution defined by Yu and Zeleny is the solution that is closest to the ideal point.
To be more specific mathematically, given a weighting vector w, x_w^p is called a compromise solution of the multiobjective linear programming problem with respect to the parameter p of the norm if and only if it solves

    minimize over x ∈ X:  d_pw(z(x), z^min) = ( Σ_{i=1}^k wi |zi(x) − zi^min|^p )^{1/p}        (3.38)

or equivalently, for 1 ≤ p < ∞,

    minimize over x ∈ X:  d̃_pw(z(x), z^min) = Σ_{i=1}^k wi (zi(x) − zi^min)^p,        (3.39)

and, for p = ∞,

    minimize over x ∈ X:  d̃_∞w(z(x), z^min) = max over i = 1, ..., k of wi (zi(x) − zi^min).        (3.40)

Observe that for p = 1 all deviations from zi^min are taken into account in direct proportion to their magnitudes, while for 2 ≤ p < ∞ the largest deviation has the greatest influence. Ultimately, for p = ∞, only the largest deviation is taken into account.
It should be noted here that any solution of (3.39) for any 1 ≤ p < ∞, or a unique solution of (3.40), with wi > 0 for all i = 1, ..., k, is a Pareto optimal solution of the multiobjective linear programming problem.
The compromise set Cw, given the weighting vector w, is defined as the set of all compromise solutions x_w^p, 1 ≤ p ≤ ∞. To be more explicit,

    Cw = { x ∈ X | x solves (3.39) or (3.40) given w for some p, 1 ≤ p ≤ ∞ }.        (3.41)

In the context of linear programming problems, Zeleny (1973) suggested that the compromise set Cw can be approximated by the Pareto optimal solutions of the following two-objective problem:

    minimize over x ∈ X:  ( d̃_1w(z(x), z^min), d̃_∞w(z(x), z^min) ).        (3.42)

Although it can be seen that the compromise solution set Cw is a subset of the set of Pareto optimal solutions, Cw may still be too large for selecting the final solution and, hence, should be reduced further.
Zeleny (1973, 1976) suggests several methods to reduce the compromise solution set Cw. One possible reduction method without the DM's aid is to generate another compromise solution set C̄w, similar to Cw, by maximizing the distance from the so-called anti-ideal point z^max = (z1^max, ..., zk^max), where zi^max = max over x ∈ X of zi(x). The problem to be solved thus becomes

    maximize over x ∈ X:  ( Σ_{i=1}^k wi |zi^max − zi(x)|^p )^{1/p}        (3.43)

or equivalently, for 1 ≤ p < ∞,

    maximize over x ∈ X:  Σ_{i=1}^k wi (zi^max − zi(x))^p,        (3.44)

and, for p = ∞,

    maximize over x ∈ X:  min over i = 1, ..., k of wi (zi^max − zi(x)).        (3.45)

The compromise solution set C̄w is defined by

    C̄w = { x ∈ X | x solves (3.44) or (3.45) given w for some p, 1 ≤ p ≤ ∞ }.        (3.46)

The compromise solution set Cw based on the ideal point is not identical to the compromise solution set C̄w based on the anti-ideal point. Zeleny (1976) suggests using this fact to further reduce the compromise solution set by considering the intersection Cw ∩ C̄w.
An interactive strategy for reducing the compromise solution set, proposed by Zeleny (1976), is based on the concept of the so-called displaced ideal and is thus called the method of the displaced ideal. In this approach, the ideal point with respect to the new Cw displaces the previous ideal point, and the (reduced) compromise solution set eventually encloses the new ideal point, terminating the process.
The procedure of the method of the displaced ideal is summarized as follows.

Procedure of the Method of the Displaced Ideal

Step 1: Let Cw^(0) = X, and initialize the iteration index r = 1.
Step 2: Find the ideal point z^min(r) by solving

    minimize over x ∈ Cw^(r−1):  zi(x), i = 1, ..., k.

Step 3: Construct the compromise solution set Cw^(r) by finding the Pareto optimal solution set of

    minimize over x ∈ Cw^(r−1):  ( d̃_1w(z(x), z^min(r)), d̃_∞w(z(x), z^min(r)) ).

Step 4: If the DM can select the final solution from Cw^(r), or if Cw^(r) contains z^min(r), stop. Otherwise, set r = r + 1 and return to step 2.
It should be noted here that the method of the displaced ideal is best viewed as an ideal-guided process of seeking the best compromise solution, not as a search for the ideal itself. Further refinements and details can be found in Zeleny (1976, 1982).
Example 3.8 (Displaced ideal method for production planning of Example 3.2). To illustrate the method of the displaced ideal, consider the problem of Example 3.2:

    minimize    z1 = −3x1 − 8x2
    minimize    z2 = 5x1 + 4x2
    subject to  2x1 + 6x2 ≤ 27
                3x1 + 2x2 ≤ 16
                4x1 + x2 ≤ 18
                x1 ≥ 0, x2 ≥ 0.

Let w1 = w2 = 1 and Cw^(0) = X = { (x1, x2) ∈ R^2 | 2x1 + 6x2 ≤ 27, 3x1 + 2x2 ≤ 16, 4x1 + x2 ≤ 18, x1 ≥ 0, x2 ≥ 0 }. From the definition, we have

    z1^min(1) = min over x ∈ Cw^(0) of z1(x) = −37,  z2^min(1) = min over x ∈ Cw^(0) of z2(x) = 0.

In step 3, the following two-objective programming problem is formulated:

    minimize over x ∈ Cw^(0):  ( d̃_1w(z(x), z^min(1)), d̃_∞w(z(x), z^min(1)) ),

where

    d̃_1w(z(x), z^min(1)) = (−3x1 − 8x2 − (−37)) + (5x1 + 4x2 − 0),
    d̃_∞w(z(x), z^min(1)) = max{ −3x1 − 8x2 − (−37), 5x1 + 4x2 − 0 }.

From this problem, we obtain the compromise solution set Cw^(1), which is the straight-line segment between the points A(−36, 18) and B(−24.667, 12.333) shown in Fig. 3.8, where point A, corresponding to the solution x = (0, 4.5), minimizes d̃_1w(z(x), z^min(1)), and point B, corresponding to the solution x = (0, 3.0833), minimizes d̃_∞w(z(x), z^min(1)).
Suppose that the DM cannot select the final solution. In step 2, we have

    z1^min(2) = min over x ∈ Cw^(1) of z1(x) = −36,  z2^min(2) = min over x ∈ Cw^(1) of z2(x) = 12.333.

In step 3, the following two-objective programming problem is reformulated:

    minimize over x ∈ Cw^(1):  ( d̃_1w(z(x), z^min(2)), d̃_∞w(z(x), z^min(2)) ),

where

    d̃_1w(z(x), z^min(2)) = (−3x1 − 8x2 − (−36)) + (5x1 + 4x2 − 12.333),
    d̃_∞w(z(x), z^min(2)) = max{ −3x1 − 8x2 − (−36), 5x1 + 4x2 − 12.333 }.

[Fig. 3.8 Displaced ideal method for Example 3.2 in the z1-z2 plane: the first compromise set is the segment from A(−36, 18) to B(−24.7, 12.3), with ideal point (z1^min(1), z2^min(1)) = (−37, 0); the second is the segment from A to C(−32.2, 16.1), with displaced ideal point (z1^min(2), z2^min(2)) = (−36, 12.3).]

From this problem, we obtain the revised compromise solution set Cw^(2), which is the straight-line segment between the points A(−36, 18) and C(−32.222, 16.111) shown in Fig. 3.8, where point A minimizes d̃_1w(z(x), z^min(2)) and point C, corresponding to the solution x = (0, 4.028), minimizes d̃_∞w(z(x), z^min(2)).
One finds that the compromise solution set diminishes from Cw^(1) to Cw^(2). If the DM still cannot select the final solution in Cw^(2), the procedure continues. ♦
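For the first iteration of Example 3.8, the two endpoints of the compromise set are obtained from two ordinary LPs, one for d̃_1w and one for d̃_∞w. The following sketch, an illustration assuming SciPy is available, recovers the solutions x = (0, 4.5) and x = (0, 3.0833) corresponding to points A and B:

    from scipy.optimize import linprog

    A_x = [[2, 6], [3, 2], [4, 1]]
    b_x = [27, 16, 18]

    def compromise_endpoints(z1_min, z2_min):
        # p = 1: minimize (z1(x) - z1_min) + (z2(x) - z2_min) = 2x1 - 4x2 + const.
        res1 = linprog([2, -4], A_ub=A_x, b_ub=b_x)
        # p = infinity: minimize v with z_i(x) - z_i^min <= v, v a free variable.
        A_v = [row + [0] for row in A_x] + [[-3, -8, -1], [5, 4, -1]]
        b_v = b_x + [z1_min, z2_min]
        res_inf = linprog([0, 0, 1], A_ub=A_v, b_ub=b_v,
                          bounds=[(0, None), (0, None), (None, None)])
        return res1.x, res_inf.x[:2]

    print(compromise_endpoints(-37.0, 0.0))   # x = (0, 4.5) and x = (0, 3.0833)

A second iteration would restrict the same two LPs to the segment Cw^(1), which is not coded here.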

3.5 Interactive Multiobjective Linear Programming

The STEP method (STEM) proposed by Benayoun et al. (1971) seems to be known as one of the first interactive multiobjective linear programming techniques, and several modifications and extensions have been proposed [see, for example, Choo and Atkins (1980) and Fichefet (1976)]. Essentially, the STEM algorithm consists of two major steps. Step 1 seeks a Pareto optimal solution that is near the ideal point in the minimax sense. Step 2 requires the DM to compare a vector of the objective function values with the ideal point and to indicate which objectives can be sacrificed, and by how much, in order to improve the current levels of the unsatisfactory objectives. The STEM algorithm is quite simple to understand and implement, in the sense that the DM is required to give only the amounts to be sacrificed of some satisfactory objectives until all objectives become satisfactory. However, the DM will never arrive at the final solution if the DM is not willing to sacrifice any of the objectives. Moreover, in many practical situations, the DM will probably want to indicate directly an aspiration level for each objective rather than just specify the amounts by which satisfactory objectives can be sacrificed.
Wierzbicki (1980) developed a relatively practical interactive method, called the reference point method, by introducing the concept of a reference point suggested by the DM which reflects in some sense the desired values of the objective functions. The basic idea behind the reference point method is that the DM can specify reference values for the objective functions and change the reference objective levels interactively, owing to learning or improved understanding during the solution process. In this procedure, when the DM specifies a reference point, the corresponding scalarization problem is solved to generate a Pareto optimal solution which is, in a sense, close to the reference point, or better than that if the reference point is attainable. Then the DM either chooses the current Pareto optimal solution or modifies the reference point to find a satisficing solution.
Since then, several similar interactive multiobjective programming methods have been developed along this line [see, for example, Steuer and Choo (1983)]. However, it is important to point out here that, for dealing with the fuzzy goals of the DM for the objective functions of the multiobjective linear programming problem, Sakawa et al. (1987) developed an extended fuzzy version of the reference point method that supplies the DM with trade-off information. Although the details of that method will be discussed in the next chapter, it is appropriate to discuss here the reference point method with trade-off information rather than the original reference point method proposed by Wierzbicki.
Consider the following multiobjective linear programming problem:

    minimize    z1(x) = c1 x
    minimize    z2(x) = c2 x
                ⋮
    minimize    zk(x) = ck x        (3.47)
    subject to  x ∈ X = { x ∈ R^n | Ax ≤ b, x ≥ 0 },

where z1(x) = c1 x, ..., zk(x) = ck x are k distinct objective functions of the decision variable vector x and X is the linearly constrained feasible region.

[Fig. 3.9 Graphical interpretation of the minimax method in the z1-z2 plane: for two reference points ẑ^1 = (ẑ1^1, ẑ2^1)^T and ẑ^2 = (ẑ1^2, ẑ2^2)^T, the minimax problem yields the corresponding Pareto optimal points z(x^1) and z(x^2) on the Pareto frontier.]
For the multiple conflicting objective functions $z(x) = (z_1(x), \ldots, z_k(x))^T$, assume that the DM can specify a so-called reference point $\hat{z} = (\hat{z}_1, \ldots, \hat{z}_k)^T$ which reflects, in some sense, the desired values of the objective functions for the DM. Also assume that the DM can change the reference point interactively due to learning or improved understanding during the solution process. When the DM specifies the reference point $\hat{z} = (\hat{z}_1, \ldots, \hat{z}_k)^T$, the corresponding Pareto optimal solution, which is, in the minimax sense, nearest to the reference point, or better than it if the reference point is attainable, is obtained by solving the following minimax problem:
\[
\left.
\begin{aligned}
\text{minimize}\quad & \max_{i=1,\ldots,k} \{\, z_i(x) - \hat{z}_i \,\} \\
\text{subject to}\quad & x \in X,
\end{aligned}
\right\}
\tag{3.48}
\]
or equivalently

\[
\left.
\begin{aligned}
\text{minimize}\quad & v \\
\text{subject to}\quad & z_i(x) - \hat{z}_i \le v, \quad i = 1, \ldots, k \\
& x \in X,
\end{aligned}
\right\}
\tag{3.49}
\]

where $v$ is an auxiliary variable.
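Since (3.49) is an ordinary linear programming problem in the augmented variable vector $(x, v)$, it can be solved with any LP code. The following is a minimal sketch using SciPy's linprog; the helper name solve_minimax and the data layout (a matrix C whose rows are the cost vectors $c_i$) are our own conventions for illustration, not part of the text.

```python
import numpy as np
from scipy.optimize import linprog

def solve_minimax(C, A, b, z_hat):
    """Solve the minimax problem (3.49) for a reference point z_hat.

    C : (k, n) array whose i-th row is the objective cost vector c_i
    A, b : data of the feasible region X = {x | A x <= b, x >= 0}
    """
    k, n = C.shape
    # Decision vector is (x_1, ..., x_n, v); the objective is to minimize v.
    obj = np.zeros(n + 1)
    obj[-1] = 1.0
    # Goal constraints z_i(x) - v <= z_hat_i, i = 1, ..., k.
    A_goal = np.hstack([C, -np.ones((k, 1))])
    # Feasibility constraints A x <= b (v does not appear in these rows).
    A_feas = np.hstack([A, np.zeros((len(b), 1))])
    res = linprog(obj,
                  A_ub=np.vstack([A_goal, A_feas]),
                  b_ub=np.concatenate([z_hat, b]),
                  bounds=[(0, None)] * n + [(None, None)],  # x >= 0, v free
                  method="highs")
    x = res.x[:n]
    return x, C @ x, res
```

Note that $v$ must be declared free: by default linprog restricts every variable to be nonnegative, which would be wrong whenever the reference point is attainable and hence $v^* < 0$.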
The case of two objective functions in the $z_1$-$z_2$ plane is shown geometrically in Fig. 3.9. For two reference points $\hat{z}^1 = (\hat{z}_1^1, \hat{z}_2^1)^T$ and $\hat{z}^2 = (\hat{z}_1^2, \hat{z}_2^2)^T$ specified by the DM, the respective Pareto optimal points $z(x^1)$ and $z(x^2)$ are obtained by solving the minimax problems with $\hat{z}^1$ and $\hat{z}^2$.

The relationships between the optimal solutions of the minimax problem and the Pareto optimality of the multiobjective linear programming problem can be characterized by the following two theorems.
Theorem 3.8. If $x^* \in X$ is a unique optimal solution of the minimax problem (3.49) for any reference point $\hat{z}$, then $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem.

Proof. If a unique optimal solution $x^*$ of the minimax problem is not a Pareto optimal solution of the multiobjective linear programming problem, then there exists $x \in X$ such that $z_i(x) \le z_i(x^*)$, $i = 1, \ldots, k$, $i \ne j$, and $z_j(x) < z_j(x^*)$ for some $j$. Hence, it follows that

\[
\max_{i=1,\ldots,k} \{ z_i(x) - \hat{z}_i \} \le \max_{i=1,\ldots,k} \{ z_i(x^*) - \hat{z}_i \}.
\]

This contradicts the assumption that $x^*$ is a unique optimal solution of the minimax problem. $\square$
As the proof indicates, if the uniqueness assumption is dropped, only weak Pareto optimality of an optimal solution is guaranteed.
Theorem 3.9. If $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem, then $x^*$ is an optimal solution of the minimax problem (3.49) for some reference point $\hat{z}$.

Proof. For a Pareto optimal solution $x^* \in X$ of the multiobjective linear programming problem, choose a reference point $\hat{z} = (\hat{z}_1, \ldots, \hat{z}_k)^T$ such that $z_i(x^*) - \hat{z}_i = v^*$, $i = 1, \ldots, k$. For this reference point, if $x^*$ is not an optimal solution of the minimax problem, then there exists $x \in X$ such that

\[
z_i(x) - \hat{z}_i < z_i(x^*) - \hat{z}_i = v^*, \quad i = 1, \ldots, k.
\]

This implies the existence of $x \in X$ such that

\[
z_i(x) < z_i(x^*), \quad i = 1, \ldots, k,
\]

which contradicts the fact that $x^*$ is a Pareto optimal solution. $\square$
If an optimal solution $x^*$ of the minimax problem is not unique, then, as discussed in the previous section, the Pareto optimality test for $x^*$ can be performed by solving the following linear programming problem:

\[
\left.
\begin{aligned}
\text{maximize}\quad & \sum_{i=1}^{k} \varepsilon_i \\
\text{subject to}\quad & z_i(x) + \varepsilon_i = z_i(x^*), \quad i = 1, \ldots, k \\
& x \in X, \quad \varepsilon = (\varepsilon_1, \ldots, \varepsilon_k) \ge 0.
\end{aligned}
\right\}
\tag{3.50}
\]
For an optimal solution $(\bar{x}, \bar{\varepsilon})$ of this linear programming problem, as was shown in Theorem 3.7, (i) if $\bar{\varepsilon}_i = 0$ for all $i = 1, \ldots, k$, then $x^*$ is a Pareto optimal solution of the multiobjective linear programming problem, and (ii) if $\bar{\varepsilon}_i > 0$ for at least one $i$, then not $x^*$ but $\bar{x}$ is a Pareto optimal solution of the multiobjective linear programming problem.
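Problem (3.50) is again a linear program, now in the variables $(x, \varepsilon)$. A sketch under the same conventions and imports as the solve_minimax helper above (the name pareto_test is ours, and maximizing $\sum_i \varepsilon_i$ is expressed as minimizing its negative):

```python
def pareto_test(C, A, b, x_star):
    """Pareto optimality test (3.50) for a candidate solution x_star."""
    k, n = C.shape
    m = len(b)
    # Variables (x, eps); maximizing sum(eps) means minimizing -sum(eps).
    obj = np.concatenate([np.zeros(n), -np.ones(k)])
    # Equality constraints z_i(x) + eps_i = z_i(x_star), i = 1, ..., k.
    A_eq = np.hstack([C, np.eye(k)])
    b_eq = C @ x_star
    # Feasibility constraints A x <= b (eps does not appear in these rows).
    A_ub = np.hstack([A, np.zeros((m, k))])
    res = linprog(obj, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + k),  # x >= 0, eps >= 0
                  method="highs")
    x_bar, eps_bar = res.x[:n], res.x[n:]
    return x_bar, eps_bar
```

If every component of eps_bar is (numerically) zero, $x^*$ is Pareto optimal; otherwise the returned x_bar is a Pareto optimal solution, in accordance with (i) and (ii) above.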
Now, given a Pareto optimal solution obtained by solving the corresponding minimax problem for the reference point specified by the DM, the DM must either be satisfied with the current Pareto optimal solution or modify the reference point. To help the DM express a degree of preference, trade-off information between a standing objective function $z_1(x)$ and each of the other objective functions is very useful. Such a trade-off between $z_1(x)$ and $z_i(x)$ for each $i = 2, \ldots, k$ is easily obtainable, since it is closely related to the strictly positive simplex multipliers of the minimax problem (3.49). Let the simplex multipliers associated with the constraints $z_i(x) - \hat{z}_i \le v$ of (3.49) be denoted by $\pi_i$, $i = 1, \ldots, k$. If $\pi_i > 0$ for each $i$, it can be proved that the following expression holds:

\[
\frac{\partial z_i(x)}{\partial z_1(x)} = -\frac{\pi_1}{\pi_i}, \qquad i = 2, \ldots, k.
\tag{3.51}
\]
Geometrically, this relation can be understood as follows. In the $(z_1, \ldots, z_k, w)$ space, the tangent hyperplane at some point on the Pareto surface can be described by

\[
H(z_1, \ldots, z_k, w) = a_1 z_1 + \cdots + a_k z_k + b w = c.
\]

The necessary and sufficient condition for a small displacement from this point to belong to this tangent hyperplane is $\Delta H = 0$, i.e.,

\[
a_1 \Delta z_1 + \cdots + a_k \Delta z_k + b \Delta w = 0.
\]

For fixed values $\Delta z_j = 0$ $(j = 2, \ldots, k,\ j \ne i)$ and $\Delta w = 0$, i.e., allowing only $\Delta z_1$ and $\Delta z_i$ to vary, we have

\[
a_1 \Delta z_1 + a_i \Delta z_i = 0.
\]

Similarly, we have

\[
a_i \Delta z_i + b \Delta w = 0, \qquad a_1 \Delta z_1 + b \Delta w = 0.
\]

It follows from these relations that

\[
\frac{\Delta z_i}{\Delta z_1} = -\frac{a_1}{a_i} = -\frac{a_1/b}{a_i/b} = -\frac{\Delta w/\Delta z_1}{\Delta w/\Delta z_i}.
\]

Consequently, it holds that

\[
\frac{\partial z_i}{\partial z_1} = -\frac{\partial w/\partial z_1}{\partial w/\partial z_i}.
\]
Using the simplex multipliers $\pi_i$, $i = 1, \ldots, k$, associated with all of the active constraints of the minimax problem, since $\partial w/\partial z_i = -\pi_i$, we obtain (3.51).
It should be stressed here that, in order to obtain the trade-off rates from (3.51), all of the constraints of the minimax problem must be active. Therefore, if there are inactive constraints, it is necessary to replace $\hat{z}_i$ for each inactive constraint with $z_i(x^*)$ and to solve the corresponding minimax problem again to obtain the simplex multipliers.
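In an LP code, the simplex multipliers appear as the dual values of the goal constraints. As an illustration only, with the HiGHS backend of SciPy's linprog the duals of the A_ub rows are returned in res.ineqlin.marginals; for a minimization problem these are nonpositive, so the multipliers $\pi_i$ are their negatives. A hedged sketch reusing solve_minimax from above:

```python
def tradeoff_rates(C, A, b, z_hat):
    """Trade-off rates (3.51) at the minimax solution for z_hat.

    Valid only when all k goal constraints of (3.49) are active,
    i.e., when every multiplier pi_i is strictly positive.
    """
    k = C.shape[0]
    x, z, res = solve_minimax(C, A, b, z_hat)
    # Duals of the first k rows of A_ub (the goal constraints of (3.49)).
    pi = -res.ineqlin.marginals[:k]
    if np.any(pi <= 0):
        raise ValueError("inactive goal constraint: adjust z_hat and re-solve")
    # Element i holds d z_i / d z_1 = -pi_1 / pi_i (meaningful for i >= 2).
    return x, z, -pi[0] / pi
```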
We can now give the interactive algorithm to derive the satisficing solution for the DM from the Pareto optimal solution set. The steps marked with an asterisk involve interaction with the DM. Observe that this interactive multiobjective linear programming method can be interpreted as the reference point method with trade-off information.
Procedure of Interactive Multiobjective Linear Programming

Step 0 Calculate the individual minimum $z_i^{\min} = \min_{x \in X} z_i(x)$ and maximum $z_i^{\max} = \max_{x \in X} z_i(x)$ of each objective function under the given constraints.

Step 1* Ask the DM to select the initial reference point by considering the individual minima and maxima. If the DM finds it difficult or impossible to identify such a point, the ideal point, $\hat{z}_i = z_i^{\min}$, $i = 1, \ldots, k$, can be used for that purpose.

Step 2 For the reference point specified by the DM, solve the corresponding minimax problem (3.49) to obtain a Pareto optimal solution together with the trade-off rates between the objective functions.

Step 3* If the DM is satisfied with the current objective function values of the Pareto optimal solution, stop; the current Pareto optimal solution is the satisficing solution for the DM. Otherwise, ask the DM to update the reference point by considering the current values of the objective functions together with the trade-off rates between the objective functions, and return to Step 2.
It should be stressed to the DM that any improvement of one objective function
can be achieved only at the expense of at least one of the other objective functions.
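In code, the procedure above reduces to a simple loop around the helpers sketched earlier, with everything except Steps 1* and 3* being mechanical. The sketch below stubs the DM's interaction with input() purely for illustration; the helper names are again our own assumptions rather than anything prescribed by the text.

```python
def single_objective(c, A, b, sense=1.0):
    """Optimal value of (sense * c) x over X; sense=+1 for min, -1 for max."""
    n = len(c)
    res = linprog(sense * np.asarray(c), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * n, method="highs")
    return sense * res.fun

def interactive_molp(C, A, b):
    """Steps 0-3 of the procedure; Steps 1* and 3* are read from stdin."""
    # Step 0: individual minima and maxima of each objective over X.
    z_min = np.array([single_objective(c, A, b, +1.0) for c in C])
    z_max = np.array([single_objective(c, A, b, -1.0) for c in C])
    print("objective ranges:", list(zip(z_min, z_max)))
    # Step 1*: initial reference point (defaults to the ideal point).
    raw = input("reference point, comma-separated (blank = ideal point): ")
    z_hat = np.array([float(t) for t in raw.split(",")]) if raw else z_min.copy()
    while True:
        # Step 2: Pareto optimal solution and trade-off rates for z_hat.
        x, z, rate = tradeoff_rates(C, A, b, z_hat)
        print("z =", z, " trade-offs dz_i/dz_1 =", rate[1:])
        # Step 3*: stop if satisfied, otherwise update and repeat.
        raw = input("new reference point (blank = satisfied): ")
        if not raw:
            return x, z
        z_hat = np.array([float(t) for t in raw.split(",")])
```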
Example 3.9 (Production planning with environmental consideration). To demonstrate interactive multiobjective linear programming, consider the following production planning problem with environmental consideration, as previously discussed in Example 3.2:
\[
\begin{aligned}
\text{minimize}\quad & z_1 = -3x_1 - 8x_2 \\
\text{minimize}\quad & z_2 = 5x_1 + 4x_2 \\
\text{subject to}\quad & 2x_1 + 6x_2 \le 27 \\
& 8x_1 + 6x_2 \le 45 \\
& 3x_1 + x_2 \le 15 \\
& x_1 \ge 0, \; x_2 \ge 0.
\end{aligned}
\]
Recall that the two objectives in this problem are to minimize both the opposite of the total profit ($z_1$) and the amount of pollution ($z_2$).
[Fig. 3.10 Interactive multiobjective linear programming for Example 3.2: the feasible region in the $z_1$-$z_2$ plane with vertices $A(0, 0)$, $B(-13.5, 22.5)$, $C(-28, 28)$, $D(-37, 29)$, and $E(-36, 18)$; the reference points $(-35, 7)$ and $(-33, 15)$; the Pareto optimal point $(-28, 14)$; and the satisficing solution $(-32, 16)$]
First, observe that the individual minima and maxima for the objective functions are
\[
z_1^{\min} = -37, \qquad z_1^{\max} = 0, \qquad z_2^{\min} = 0, \qquad z_2^{\max} = 29.
\]
Considering these values, suppose that the DM specifies the reference point as
\[
\hat{z}_1 = -35, \qquad \hat{z}_2 = 7.
\]
For this reference point, as can easily be seen from Fig. 3.10, solving the corresponding minimax problem yields the Pareto optimal solution

\[
z_1 = -28, \quad z_2 = 14 \qquad (x_1 = 0, \; x_2 = 3.5)
\]

and the trade-off rate between the objective functions

\[
\frac{\partial z_2}{\partial z_1} = -0.5.
\]

On the basis of such information, suppose that the DM updates the reference point to

zO1 D 33; zO2 D 15

in order to improve the satisfaction level of the profit at the expense of that of
the pollution amount. For the updated reference point, solving the corresponding
minimax problem yields a Pareto optimal solution

z1 D 32; z2 D 16 .x1 D 0:75; x2 D 4/

and the trade-off rate

@z2
D 0:5:
@z1

If the DM is satisfied with the current values of the objective functions, the
procedure stops. Otherwise, a similar procedure continues in this fashion until the
satisficing solution of the DM is derived. ˙
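As a numerical check, the interaction of Example 3.9 can be replayed with the helpers sketched earlier (again assuming SciPy is available); the expected outputs in the final comment are exactly the values derived above.

```python
# Data of Example 3.9: z1 = -3 x1 - 8 x2 (opposite of the profit),
# z2 = 5 x1 + 4 x2 (pollution), and the three linear constraints.
C = np.array([[-3.0, -8.0],
              [ 5.0,  4.0]])
A = np.array([[2.0, 6.0],
              [8.0, 6.0],
              [3.0, 1.0]])
b = np.array([27.0, 45.0, 15.0])

# The two reference points specified by the DM in the text.
for z_hat in (np.array([-35.0, 7.0]), np.array([-33.0, 15.0])):
    x, z, rate = tradeoff_rates(C, A, b, z_hat)
    print(z_hat, "->", z, "trade-off:", rate[1])
# Expected: z = (-28, 14) and then z = (-32, 16), with the
# trade-off rate dz2/dz1 = -0.5 in both cases.
```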
It may be appropriate to point out here that several interactive multiobjective programming methods, including the methods presented in this chapter, were developed from the 1970s to the 1980s (Chankong and Haimes 1983; Miettinen 1999; Miettinen et al. 2008; Sakawa 1993; Steuer 1986; Vanderpooten and Vincke 1989), and a basic distinction has been made concerning the underlying approach. In particular, Vanderpooten and Vincke (1989) highlighted an evolution from search-oriented methods to learning-oriented procedures.
The recently published book entitled Multiple Criteria Decision Making: From Early History to the 21st Century (Köksalan et al. 2011), which begins with the early history of multiple criteria decision making and proceeds to give a decade-by-decade account of major developments in the field from the 1970s until now, would be very useful for interested readers.
Problems
3.1 Graph the following two-objective linear programming problem in the $x_1$-$x_2$ plane and the $z_1$-$z_2$ plane, and find all Pareto optimal solutions.

\[
\begin{aligned}
\text{minimize}\quad & z_1 = -x_1 - 3x_2 \\
\text{minimize}\quad & z_2 = x_1 - x_2 \\
\text{subject to}\quad & x_1 + 8x_2 \le 112 \\
& x_1 + 2x_2 \le 34 \\
& 9x_1 + 2x_2 \le 162 \\
& x_1 \ge 0, \; x_2 \ge 0.
\end{aligned}
\]

Verify that if the objective functions are changed to $z_1 = -x_1 - 8x_2$ and $z_2 = x_1 - x_2$, weak Pareto optimal solutions exist, and that a complete optimal solution exists if they are changed to $z_1 = -2x_1 - 3x_2$ and $z_2 = -x_1 - x_2$.
3.2 Prove that if $x \in X$ is an optimal solution of the weighting problem for some weight vector $w > 0$, then $x$ is a Pareto optimal solution of the multiobjective linear programming problem.
3.3 Graph the following two-objective linear programming problem in the $x_1$-$x_2$ plane and the $z_1$-$z_2$ plane, and find all Pareto optimal solutions.

\[
\begin{aligned}
\text{minimize}\quad & z_1 = -2x_1 - 5x_2 \\
\text{minimize}\quad & z_2 = 3x_1 + 2x_2 \\
\text{subject to}\quad & 2x_1 + 6x_2 \le 27 \\
& 8x_1 + 6x_2 \le 45 \\
& 3x_1 + x_2 \le 15 \\
& x_1 \ge 0, \; x_2 \ge 0.
\end{aligned}
\]
3.4 For the two-objective linear programming problem discussed in Problem 3.3, solve the following problems.
(i) Obtain a Pareto optimal solution by the weighting method with $w_1 = 0.5$ and $w_2 = 0.5$.
(ii) Obtain a Pareto optimal solution by the constraint method with $\varepsilon_2 = 8$.
(iii) Setting $\hat{z}_1 = z_1^{\min} \; (= -23.5)$, obtain a Pareto optimal solution by the minimax method with $w_1 = 0.5$ and $w_2 = 0.5$.
3.5 Find an optimal solution to the following linear goal programming problem graphically.

\[
\begin{aligned}
\text{minimize}\quad & P_1 d_1^- + P_2 d_2^+ + P_3 (d_3^- + 2 d_4^-) \\
\text{subject to}\quad & x_1 + 2x_2 - d_1^+ + d_1^- = 8 \\
& 3x_1 + 2x_2 - d_2^+ + d_2^- = 10 \\
& x_1 - d_3^+ + d_3^- = 2 \\
& x_2 - d_4^+ + d_4^- = 3 \\
& 2x_1 + 6x_2 \le 27 \\
& 8x_1 + 6x_2 \le 45 \\
& 3x_1 + x_2 \le 15 \\
& x_1 \ge 0, \; x_2 \ge 0 \\
& d_i^+ \ge 0, \; d_i^- \ge 0, \quad i = 1, 2, 3, 4.
\end{aligned}
\]
3.6 Setting $w_1 = w_2 = 1$, employ the method of the displaced ideal for the two-objective linear programming problem

\[
\begin{aligned}
\text{minimize}\quad & z_1 = 5x_1 - 5x_2 \\
\text{minimize}\quad & z_2 = 5x_1 + x_2 \\
\text{subject to}\quad & 5x_1 + 7x_2 \le 12 \\
& 9x_1 + x_2 \le 10 \\
& 5x_1 + 3x_2 \le 3 \\
& x_1 \ge 0, \; x_2 \ge 0.
\end{aligned}
\]
3.7 Using the Excel solver, solve Examples 3.2–3.4 and confirm the Pareto
optimal solutions.
3.8 Apply the interactive multiobjective linear programming method to the two-objective linear programming problem of Problem 3.3.