LINEAR PROGRAMMING
Now that we have covered the entire analytical spectrum of statics (including
optimization), comparative statics, and dynamics, let us return to the problem of
optimization and present some relatively recent developments in that area of
analysis. These consist of mathematical programming (a general term covering
linear programming as well as nonlinear programming) and game theory.
Mathematical programming differs from classical optimization in that it
seeks to tackle problems in which the optimizer faces inequality constraints―
constraints in the form of, say, g(x,y) ≤ c rather than g(x,y) = c. As a specific
illustration, instead of requiring a consumer to spend the exact amount of $250,
the mathematical-programming framework will allow him the freedom of
spending either $250 or less if he chooses. By thus liberalizing the constraint
requirement, this new framework of optimization makes the problem at once
more interesting and more realistic. But it also calls for the development of new
methods of solution, since inequality constraints cannot be handled by the
classical techniques of calculus.
Game theory, the other development, departs from our earlier framework
of optimization in a still more radical way. Instead of seeking a maximum or
minimum value of a variable, the optimizer will be assumed to be looking for a
so-called “minimax” or “maximin” value. Thus, there is even to be a new
optimization criterion. However, as we shall see, even though it was developed
independently of mathematical programming, game theory nevertheless bears a
close relationship to the latter.
We shall first discuss the simpler variety of mathematical programming,
known as linear programming, in which the objective function as well as the
constraint inequalities are all linear.
(18.1)   Minimize    C = 0.6x1 + x2
         subject to  10x1 + 4x2 ≥ 20
                      5x1 + 5x2 ≥ 20
                      2x1 + 6x2 ≥ 12
         and         x1, x2 ≥ 0
The first equation in (18.1), which is a cost function based on the price information
in Table 18.1, constitutes the objective function of the linear program; here the
function is to be minimized. The three inequalities that follow are the constraints
necessitated by the daily requirements; these are readily translated from the last
three rows of Table 18.1.
TABLE 18.1

                    Food I (per lb)   Food II (per lb)   Minimum daily requirement
Price ($)                0.60              1.00
Calcium (unit*)            10                 4                     20
Protein (unit*)             5                 5                     20
Calories (unit*)            2                 6                     12

* Hypothetical units are employed to allow the use of convenient round numbers in the example.
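Before turning to the graphical solution, it may help to verify (18.1) numerically. The following is a minimal sketch using Python's scipy.optimize.linprog (our choice of tool, not part of the original exposition); since linprog expects ≤ constraints, each ≥ row is multiplied through by −1:

```python
# Verify the diet program (18.1) with an off-the-shelf LP solver.
from scipy.optimize import linprog

c = [0.6, 1.0]                 # cost per lb of Food I and Food II (Table 18.1)
A_ub = [[-10, -4],             # calcium:  10*x1 + 4*x2 >= 20, negated for <=
        [-5, -5],              # protein:   5*x1 + 5*x2 >= 20
        [-2, -6]]              # calories:  2*x1 + 6*x2 >= 12
b_ub = [-20, -20, -12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)          # expect roughly (3, 1) at a cost of 2.80
```

The optimum (x1, x2) = (3, 1) with C = $2.80 agrees with the simplex solution reported at the end of this chapter.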
the graphical solution Since our problem involves only two choice
variables, it is amenable to graphical analysis. In Fig. 18.1, let us plot x1 and
x2 on the two axes. Because of the nonnegativity restrictions, we need only
consider the nonnegative quadrant.
To see what the constraints will entail graphically, let us first pretend that
the three constraints are given as equations and plot them as three straight lines
as seen in Fig. 18.1a. Each of these lines―labeled as calcium border, protein
border, and calorie border, respectively―divides the quadrant into two nonoverlapping
regions. Since each constraint is of the ≥ type, only the points (ordered
pairs) lying in the northeast region or on the border line itself will satisfy the
particular constraint involved. To satisfy all three constraints simultaneously we
can thus accept only those ordered pairs (x1,x2) which do not lie to the southeast
of any border line we have constructed. The point (1,2), for instance, does satisfy
the calorie constraint, but it fails the other two; it is therefore infeasible in our
linear program. On the other hand, all the points located in the shaded area in
diagram b do satisfy all three constraints simultaneously. For this reason, the
shaded area is called the feasible region and each point (ordered pair) in that
region is known as a feasible solution. It should be understood that the feasible
region includes the points on its (heavy) kinked boundary. In particular, the set
of points on the horizontal axis {(x1,x2) | x1 ≥ 6, x2 = 0}, as well as the set of
points on the vertical axis {(x1,x2) | x1 = 0, x2 ≥ 5}, are also members of the
feasible region. Therefore, the feasible region can be considered a closed set (of
ordered pairs), a concept which, akin to that of a closed interval, means a set
that includes all its boundary points.
The reader will note that the kinked boundary of the feasible region is
composed of selected segments of the three constraint borders and of the axes.
Note also that, in our present (two-dimensional) case, the corner points on the
Note that, although the cutting constraint should, according to Table 18.2, be
½x1 ≤ 8, we have multiplied both sides by 2 to get rid of the fractional expression.
TABLE 18.2

             Hours of processing needed per ton of
             Product I      Product II            Daily capacity (hours)
Cutting          ½               0                          8
Mixing           0               1                          8
Packaging        ⅓               ⅔                          8
FIGURE 18.2
(a) The three constraint borders: the cutting border x1 = 16, the mixing border x2 = 8, and the packaging border x1 + 2x2 = 24, which intersect at (8,8) and (16,4).
(b) The feasible region bounded by the three borders and the axes.
the graphical solution The linear program in (18.2) can again be solved
graphically, as is shown in Fig. 18.2. By virtue of the nonnegativity restrictions,
the problem is again confined to the nonnegative quadrant in which we can draw
three constraint borders. The cutting border (x1 = 16) and the mixing border
FIGURE 18.3
(a) Two constraint borders, I and II, whose relative positions determine whether a feasible region exists.
(b) A feasible region cut down to a line segment by an equational constraint.
below border II, the feasible region will be a null set, and the problem cannot
be solved.
A linear program may in fact also include one or more equational constraints.
Figure 18.3b shows a (shaded) feasible region similar to the one in Fig. 18.2b, but
if an equational constraint is added, the set of feasible solutions will shrink from
the shaded area to the dotted-line segment―the intersection of the shaded area
and the new constraint line. Nevertheless, the same method of solution will apply.
In (18.3), we have borrowed the π symbol from the production example to serve
as a general symbol for the maximand (the object to be maximized), even though
in many contexts the objective function may be something other than a profit
function. The choice variables are denoted by xj (with j = 1, 2, ..., n), and their
coefficients in the objective function are symbolized by cj (with j = 1, 2, ..., n),
which are a set of given constants. The ri symbols (i = 1, 2, ... , m)―another
set of constants―represent, on the other hand, the “restrictions” imposed in the
program. For the sake of uniformity, we have written all the m constraints as ≤
inequalities, but no loss of generality is thereby entailed. In particular, it may
be noted that, in case a ≥ constraint appears, we can always convert it into the
≤ form simply by multiplying both sides by –1. Lastly, the reader will note
that the coefficients of the choice variables in the constraints are denoted by aij,
where the double subscripts can serve to pinpoint the specific location of each
coefficient. Since there are altogether m constraints in n variables (where m can
be greater than, equal to, or less than n), the coefficients aij will form a rectangular
matrix of dimension m × n.
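Assembled from these symbols, the maximization program (18.3) reads as follows (our reconstruction from the description just given, mirroring the minimization program (18.4) below):

         Maximize    π = c1x1 + c2x2 + ... + cnxn
         subject to  a11x1 + a12x2 + ... + a1nxn ≤ r1
(18.3)               a21x1 + a22x2 + ... + a2nxn ≤ r2
                     .........................................
                     am1x1 + am2x2 + ... + amnxn ≤ rm
         and         xj ≥ 0   (j = 1, 2, ..., n)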
Analogously, a minimization problem may be written in longhand as
follows:
         Minimize    C = c1x1 + c2x2 + ... + cnxn
         subject to  a11x1 + a12x2 + ... + a1nxn ≥ r1
(18.4)               a21x1 + a22x2 + ... + a2nxn ≥ r2
                     .........................................
                     am1x1 + am2x2 + ... + amnxn ≥ rm
         and         xj ≥ 0   (j = 1, 2, ..., n)
Again, we have borrowed the C symbol from the diet problem to serve as a
general symbol for the minimand (the object to be minimized), even though the
objective function in many contexts may not be a cost function. The cj in the
objective function still represent a set of given constant coefficients, as are the ri in
the constraints, but the symbol r in the present context will signify requirements
rather than restrictions. The symbolism for the choice variables and the
constraint coefficients carries over unchanged.
matrix notation To see how matrix notation can be applied, let us first
define the following four matrices:
            | c1 |        | x1 |        | a11  a12  ...  a1n |        | r1 |
(18.5)  c = | c2 |    x = | x2 |    A = | a21  a22  ...  a2n |    r = | r2 |
            | .. |        | .. |        | ...  ...  ...  ... |        | .. |
            | cn |        | xn |        | am1  am2  ...  amn |        | rm |
Three of these are column vectors―c and x being of dimension n × 1, but r
being m × 1. Matrix A is an m × n array.
Upon these definitions, the objective function in (18.3) can be expressed by
the equation
π = c'x      [c' is 1 × n; x is n × 1]
where, it will be recalled, the vector product c'x is 1 × 1 and, therefore,
represents a scalar. It is in regard to the m constraints, however, that the
advantage of matrix notation manifests itself distinctly, for the entire set of
constraints in (18.3) can be summarized in a single inequality as follows:
Ax ≤ r      [A is m × n; x is n × 1; r is m × 1]
The inequality sign ≦, when applied to numbers, is often used interchangeably with the sign ≤. When applied
to vectors, however, the two signs may be assigned different meanings. For a discussion, see Kelvin Lancaster,
Mathematical Economics, The Macmillan Company, New York, 1968, p. 250.
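To make the matrix notation concrete, here is a minimal sketch in Python (scipy and the variable names are our assumptions) casting the production program of Fig. 18.2, with c = [40, 30]', r = [16, 8, 24]', and a 3 × 2 coefficient matrix A, into the form π = c'x subject to Ax ≤ r:

```python
# The production program in matrix form: maximize pi = c'x subject to Ax <= r, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([40, 30])           # marginal profit rates of x1 and x2, as in (18.13)
A = np.array([[1, 0],            # cutting:    x1        <= 16
              [0, 1],            # mixing:           x2  <= 8
              [1, 2]])           # packaging:  x1 + 2x2  <= 24
r = np.array([16, 8, 24])

res = linprog(-c, A_ub=A, b_ub=r, bounds=[(0, None)] * 2)  # negate c to maximize
print(res.x, -res.fun)           # expect (16, 4) with pi = 760
```

The reported optimum matches the extreme point (16, 4) and profit 760 obtained by the simplex method later in this chapter.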
FIGURE 18.4 (a feasible region with labeled points a, c, d, g, and h)
a particular point in the feasible region―(x̄1, x̄2, ... , x̄n)―as the optimal
solution.
As was intimated earlier, however, the optimal solution is always to be
found at one of the extreme points of the feasible region: as will be explained
below, this happens to be true even for the n-variable case. Consequently,
instead of finding the entire feasible region, all we need is a method of deter-
mining the set of all extreme points, from among which we can then select the
optimal solution.
This convenient result is based on the fact that, regardless of the number
of choice variables, the feasible region in a linear program is always what is
referred to as a closed convex set. Since the theory of convex sets plays a significant
part in mathematical programming (and in game theory), it is advisable to get
acquainted with it before proceeding to the development of a method of solution
for the general n-variable linear program.
Generally speaking, a set can be a collection of any kind of objects, but in the
present context our concern is with sets of points in an n-space, which
includes, as special cases, points on a line (1-space) and points in a plane (2-space).
Since a point in an n-space may also be considered an n-tuple or an n-vector,
point sets are also sets of n-tuples or of n-vectors. Convex sets represent a special
genre of point sets.
As an illustration, the combination (1/3)[2, 0]' + (2/3)[4, 9]' is a convex combination. In
view of the fact that these two scalar multipliers are positive fractions adding up
to 1, such a convex combination may be interpreted as a weighted average of the
two vectors.1
The unique characteristic of the combination in (18.6) is that, for every
acceptable value of ζ, the resulting sum vector always lies on the line segment
connecting the points u and v. This can be demonstrated by means of Fig. 18.5,
FIGURE 18.5 (vectors u = (u1, u2) and v = (v1, v2) plotted in the x1x2-plane, together with q and its scalar multiple ζq)
where we have plotted the two vectors u = [u1, u2]' and v = [v1, v2]' as two points with
coordinates (u1, u2) and (v1, v2), respectively. If we plot another vector q such that
Oquv forms a parallelogram, then we have (by virtue of the discussion in Fig. 4.3)

u = q + v      or      q = u − v
It follows that a convex combination of vectors u and v (let us call it w) can be
expressed in terms of vector q, because
w = ζu + (1 − ζ)v = ζu + v − ζv = ζ(u − v) + v = ζq + v
¹ The reader will recall that this interpretation has been made use of earlier, in the discussion of concave and
convex functions in Sec. 11.4.
FIGURE 18.6 (a) the set S≤ associated with the curve f(x); (b) the set S≥ associated with the curve g(x)
Hence, to plot the vector w we can simply add ζq and v by the familiar paral-
lelogram method. If the scalar ζ is a positive fraction, the vector ζq will merely
be an abridged version of vector q; thus ζq must lie on the line segment Oq.
Adding ζq and v, therefore, we must find vector w lying on the line segment uv,
for the new, smaller parallelogram is nothing but the original parallelogram with
the qu side shifted downward. The exact location of vector w will, of course, vary
according to the value of the scalar ζ; by varying ζ from zero to unity, the
location of w will shift from v to u. Thus the set of all points on the line segment
uv, including u and v themselves, corresponds to the set of all convex combina-
tions of vectors u and v.
In view of the above, a convex set may be defined as follows: A set S is
convex if and only if, for any two points u ∈ S and v ∈ S, and for every scalar
0 ≤ ζ ≤ 1, it is true that w = ζu + (1 − ζ)v ∈ S. This definition is applicable
regardless of the dimension of the space in which the vectors u and v are located.
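As a small numeric illustration of this definition, the sketch below (plain Python with numpy; the helper name is ours) traces w = ζu + (1 − ζ)v for several values of ζ, using the two vectors from the earlier example:

```python
# Convex combinations w = zeta*u + (1 - zeta)*v sweep out the segment uv.
import numpy as np

def convex_combination(u, v, zeta):
    """Return the convex combination of u and v for 0 <= zeta <= 1."""
    assert 0.0 <= zeta <= 1.0
    return zeta * np.asarray(u) + (1.0 - zeta) * np.asarray(v)

u = np.array([2.0, 0.0])
v = np.array([4.0, 9.0])
for zeta in (0.0, 1/3, 0.5, 1.0):
    print(zeta, convex_combination(u, v, zeta))
# zeta = 0 gives v, zeta = 1 gives u; intermediate values lie on the segment uv
```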
convex set versus convex function Even though the identical word
convex appears in both the term convex set and the term convex function, it has a
widely different connotation in each context. In describing a set, the word convex
is concerned with whether the set has any holes in it, whereas, in describing a
function, the word has to do with how a curve or surface bends. What, then, is
the rationale for using the same adjective? The answer will become clear if we
compare the definition of convex set (given in the preceding paragraph) with that
of convex function [see (11.14) and the ensuing sentence]. As the reader will note,
both definitions depend on the concept of convex combinations (weighted
the set H (the hyperplane) can be more simply specified by (18.9) or by its vector version, π0 = c'x.
FIGURE 18.7
As a preliminary, the reader will note that the feasible region always represents
the intersection of a total of m + n closed convex sets. In general, any point in
the feasible region must by definition simultaneously satisfy a system of m + n
linear (weak) inequalities―the m constraints plus the n nonnegativity restric-
tions. Thus it must simultaneously be a member of m + n closed halfspaces,
i.e., must be a point in the intersection of those m + n closed convex sets. This
being the case, the following theorem will establish the feasible region as a closed
convex set: The intersection of a finite number of convex sets is a convex set,
and if each of the sets is closed, the intersection will also be closed.
The essence of this theorem can be grasped from Fig. 18.7, where the set S
(a solid square) and the set T (a solid triangle) are both convex. Their intersection
S ∩ T, represented by the heavy-shaded area, evidently is also convex. Moreover,
if S and T are both closed, then S ∩ T will also be closed, because the boundary
points of the intersection set, which are merely a subset of the boundary points
of S and of T, do belong to the intersection set.¹
For a formal proof of this theorem, let u and v be any two points² in the
intersection of two convex sets S and T. This means that

u, v ∈ S      and concurrently      u, v ∈ T

If w is any convex combination of u and v, then, since S is convex by assumption,
we must find w ∈ S. Similarly, since T is also assumed convex, it is true that
w ∈ T. But the concurrent membership in S and T implies that w ∈ (S ∩ T); that
is, any convex combination of u and v will, like u and v themselves, be in the
intersection set. This proves that the intersection is convex. By repeating the
process, it can then be proved that the intersection of any finite number of
convex sets must be a convex set.
¹ Observe that the union of two convex sets is not necessarily convex. In Fig. 18.7, the union set S ∪ T consists
of the entire shaded area, which is reentrant.
² The special cases of S ∩ T being a null set or a set with only one point are trivial, because such sets are
considered to be convex by convention.
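The theorem can also be checked numerically for the feasible region of the production program, which is the intersection of five closed halfspaces (three constraints plus two nonnegativity restrictions). The sketch below (our construction in Python) verifies that every convex combination of two feasible points remains feasible:

```python
# The feasible region as an intersection of closed halfspaces: convex combinations
# of two feasible points never leave the region.
import numpy as np

A = np.array([[1, 0], [0, 1], [1, 2]])    # production constraints (Fig. 18.2)
r = np.array([16, 8, 24])

def feasible(x):
    x = np.asarray(x)
    return bool(np.all(A @ x <= r) and np.all(x >= 0))

u = np.array([16.0, 0.0])                  # two extreme points of the region
v = np.array([8.0, 8.0])
for zeta in (0.0, 0.25, 0.5, 0.75, 1.0):
    w = zeta * u + (1 - zeta) * v
    print(w, feasible(w))                  # True for every zeta in [0, 1]
```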
FIGURE 18.8
(a) Supporting lines 1, 2, and 3 of a solid polygon F, each touching F only at boundary points, including the extreme points f, g, and h.
(b) A supporting plane H̄ of a solid polyhedron F, touching it at the extreme point u.
A supporting hyperplane (say, H̄) is a hyperplane which has one or more
points in common with a convex set (F) but which is so situated that the set F
lies on one side of H̄ exclusively. Figure 18.8 illustrates this concept for the
2-space and 3-space cases. In diagram a, lines 1, 2, and 3 are examples of
supporting hyperplanes (here, lines). Line 1 (or line 2) has only one point in
common with the solid polygon F, but line 3 has several. In either case, however,
the set F lies exclusively on one side of the supporting line; consequently, only
boundary points of F can appear on each of these lines. Note that each supporting
line contains at least one extreme point of the set F, such as f, g, and h. The
3-space illustration in diagram b is similar, except that the supporting line is now
replaced by a supporting plane. Again, the intersection of H̄ and F (this time a
solid polyhedron) can consist only of the boundary points of F; as illustrated,
only one point (u) is involved, and that point is an extreme point of the set F.
THEOREM II For a closed convex set bounded from below, there is at least
one extreme point in every supporting hyperplane.
¹ George B. Dantzig, “Maximization of a Linear Function of Variables Subject to Linear Inequalities,” in Tjalling
C. Koopmans (ed.), Activity Analysis of Production and Allocation, John Wiley & Sons, Inc., New York, 1951,
pp. 339–347.
slacks and surpluses The extreme points in Figs. 18.1 and 18.2 fall into
three major types. These can be adequately illustrated in Fig. 18.9, which
reproduces the feasible region of Fig. 18.2b.
FIGURE 18.9 (the feasible region of Fig. 18.2b, bounded by the cutting border x1 = 16, the mixing border x2 = 8, the packaging border x1 + 2x2 = 24, and the two axes; among its extreme points are (8,8) and (16,4))
The first type consists of those which occur at the intersection of two
constraint borders; examples of these are found in the points (8,8) and (16,4).
While such points do fulfill two of the constraints exactly, the remaining constraint
is inexactly fulfilled. The point (16, 4), for instance, exactly fulfills the cutting and
packaging constraints but not the mixing constraint, because the point lies below
the mixing border. With the inexact fulfillment of some constraint, there must
be an underutilization of capacity (or, in the diet problem, an overintake of some
nutrient beyond the minimum requirement). That is, a slack in capacity utiliza-
tion (or a surplus in nutrient intake) will develop.
Extreme points of the second type, exemplified by (0,8) and (16,0), occur
where a constraint border intersects an axis. Being located on one constraint
border only, such points can fulfill only one constraint exactly. Viewed differently,
there will now develop slacks in two constraints.
Lastly, as the third type of extreme point, we have the point of origin
(0,0), which fulfills none of the constraints exactly. The (0,0) extreme point,
however, is only found in a maximization program, for the feasible region of a
minimization program normally excludes the point of origin, as the reader can
see from Fig. 18.1b.
The upshot is that in the present example―where the number of constraints
(3) exceeds the number of choice variables (2)―each and every extreme point
will involve a slack in at least one of the constraints. Furthermore, as should be
evident from Fig. 18.9, the specific magnitude of the slacks entailed by each
extreme point is easily calculable. When settling on a particular extreme point
as the optimal solution, therefore, we are in effect deciding not only the values
of x̄1 and x̄2 but also the optimal values of the slacks. Let us try to consider the
slacks explicitly, denoting the slack in the ith constraint by the symbol si.
If we set any two of the five variables equal to zero, thereby in effect deleting
two terms from the left of (18.14), there will result a system of three equations
in three variables. A unique solution can then be obtained, provided that the
(retained) coefficient vectors on the left are linearly independent. When the
solution values of these three variables are combined with the arbitrarily assigned
zero values of the other two variables, the result will be an ordered quintuple
such as shown in Table 18.3.
basic feasible solutions and extreme points However, from the process
just described, there can arise two possible situations. First, a negative
¹ It may happen, however, that fewer than m variables actually take nonzero values in the solution space. This
is known as the case of degeneracy, which will be discussed briefly in Sec. 18.6.
TABLE 18.3

Output space (x1, x2)     Solution space (x1, x2, s1, s2, s3)
(0, 0)                    (0, 0, 16, 8, 24)
(16, 0)                   (16, 0, 0, 8, 8)
(16, 4)                   (16, 4, 0, 4, 0)
(8, 8)                    (8, 8, 8, 0, 0)
(0, 8)                    (0, 8, 16, 0, 8)
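Table 18.3 can be generated mechanically. The sketch below (our Python rendering of the procedure just described) sets each pair of the five variables to zero, solves the resulting 3 × 3 system whenever the retained coefficient columns are linearly independent, and keeps the nonnegative solutions:

```python
# Enumerate basic solutions of [A | I] z = r with z = (x1, x2, s1, s2, s3),
# keeping the feasible ones; the output reproduces Table 18.3.
import itertools
import numpy as np

M = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],   # x1       + s1           = 16
              [0.0, 1.0, 0.0, 1.0, 0.0],   #      x2       + s2      = 8
              [1.0, 2.0, 0.0, 0.0, 1.0]])  # x1 + 2x2           + s3 = 24
r = np.array([16.0, 8.0, 24.0])

for keep in itertools.combinations(range(5), 3):
    B = M[:, keep]
    if abs(np.linalg.det(B)) < 1e-9:       # retained columns must be independent
        continue
    z = np.zeros(5)
    z[list(keep)] = np.linalg.solve(B, r)
    if np.all(z >= 0):                     # discard basic solutions with negatives
        print(z)
```

Exactly the five quintuples of Table 18.3 survive; the basic solutions with a negative entry correspond to border intersections lying outside the feasible region.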
TABLE 18.4
Tableau I π x1 x2 s1 s2 s3 Constant
Row 0 1 –40 –30 0 0 0 0
Row 1 0 1 0 1 0 0 16
Row 2 0 0 1 0 1 0 8
Row 3 0 1 2 0 0 1 24
a pivot step Now let us endeavor to improve upon the profit by switching
to a new BFS, by forming a new basis. The main idea of the basis-changing
process, known as pivoting, is to replace a column vector currently in the basis
by another column vector that is currently excluded. Or, what amounts to the
same thing, we must expel a currently included variable (s1, s2, or s3) in favor
of a currently excluded one (x1 or x2). What criterion can we employ in the
selection of the outgoing and incoming vectors (or variables)?
Since our purpose is to improve upon the profit, it is natural to turn to the
objective function for a clue. As written in (18.13), the objective function indicates
that the marginal profit rate of x1 is $40 and that of x2 is $30. It stands to reason
that the selection of x1 as the incoming variable is more promising as a profit
booster. In terms of the tableau, the criterion is that we should select the
variable which has the negative entry with the largest absolute value in row 0. Here,
the appropriate entry is –40, and the incoming variable is to be x1; let us call
the x1 column the pivot column.
The pivot column is destined to replace one of the si columns, but this
raises two problems. First, we must decide which of the latter columns is to go.
Second, we must see to it that, as the new member of the basis, the pivot column
is linearly independent of the old vectors that are retained. It happens that both
problems can be resolved in one sweep, by transforming the pivot column into a
unit vector, with 1 in one of the three bottom rows, and 0s elsewhere. Leaving
TABLE 18.5
Tableau II π x1 x2 s1 s2 s3 Constant
Row 0 1 0 –30 40 0 0 640
Row 1 0 1 0 1 0 0 16
Row 2 0 0 1 0 1 0 8
Row 3 0 0 2 –1 0 1 8
another pivot step In row 0 of Tableau II, there is an entry –30 asso-
ciated with the variable x2. Since a negative entry in row 0 indicates a positive
marginal profit rate, further profit improvement is possible by letting x2 replace
a zero-profit-rate or a negative-profit-rate variable in the basis. Therefore, we
adopt the x2 column as the next pivot column. As for the pivot row, since the
smallest displacement quotient is

min{8/1, 8/2} = min{8, 4} = 4

row 3 should be chosen. Hence the pivot element is 2, which we have encircled
to call attention to it.
In order to transform the elements in the x2 column into 0, 0, 0, and 1 (in
that order), we must: (1) add 15 times row 3 (the pivot row) to row 0; (2) leave
row 1 intact; (3) subtract 1/2 times row 3 from row 2; and (4) divide row 3 by 2.
The reader is urged to carry out these operations and check the results against
Tableau III in Table 18.6.
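Readers who prefer to let the machine do the bookkeeping can reproduce the step with elementary row operations. The sketch below (our Python transcription, starting from the rows of Tableau II) performs the same pivot:

```python
# One pivot step on Tableau II: pivot row 3, pivot column x2 (pivot element 2).
import numpy as np

# Columns: pi, x1, x2, s1, s2, s3, constant
T = np.array([[1.0, 0.0, -30.0, 40.0, 0.0, 0.0, 640.0],
              [0.0, 1.0,   0.0,  1.0, 0.0, 0.0,  16.0],
              [0.0, 0.0,   1.0,  0.0, 1.0, 0.0,   8.0],
              [0.0, 0.0,   2.0, -1.0, 0.0, 1.0,   8.0]])

pr, pc = 3, 2                     # pivot row and pivot column
T[pr] /= T[pr, pc]                # divide the pivot row by the pivot element
for i in range(T.shape[0]):       # clear the rest of the pivot column
    if i != pr:
        T[i] -= T[i, pc] * T[pr]
print(T)                          # should agree with Tableau III in Table 18.6
```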
In Tableau III, the columns with unit vectors pertain to the variables
π, x1, x2, and s2, whose solution values can be read off as follows: π = 760,
x1 = 16, x2 = 4 (from row 3), and s2 = 4 (from row 2). Consequently, we
can write
TABLE 18.6

Tableau III   π    x1    x2     s1     s2     s3     Constant
Row 0         1     0     0     25      0     15        760
Row 1         0     1     0      1      0      0         16
Row 2         0     0     0    1/2      1   –1/2          4
Row 3         0     0     1   –1/2      0    1/2          4
Two products are now being produced, and the profit has been raised to 760.
Again, we can see from Table 18.3 that the second pivot step has taken us to a
new extreme point of the feasible region.
We are ready for another pivot step, but since row 0 of Tableau III contains
no more negative entries, no further pivoting will prove profitable. To appreciate
this fact, let us convert row 0 into the equation

π = 760 – 25s1 – 15s3

To maximize π in this equation we must set s1 = s3 = 0; but the solution of
Tableau III does just that. Hence that solution must be optimal! As expected, it is
identical with the results obtained graphically in Sec. 18.1.
In terms of Fig. 18.9, the simplex method has led us systematically from
the point of origin―the initial extreme point―to the next extreme point (16,0),
followed by a move to the optimal extreme point (16,4). Note that we have
arrived at the optimal solution without having to compare all five extreme
points. Also note that, by choosing x1 as the pivot column in Tableau I (in
accordance with the marginal-profit-rate criterion), we have traveled to the
optimum via a shorter route; had we chosen x2 as the pivot column, we would
have moved to the point (0,8) in Fig. 18.9 first, and that would have required
three pivot steps to reach the point (16,4).
¹ We have inserted two broken lines in the 3 × 8 coefficient matrix to clarify the association between the three
groups of column vectors and the three groups of variables (choice, surplus, and artificial).
If we are to choose one of the five variables as the incoming variable to replace
an artificial variable in the initial basis, x1 can obviously contribute most to cost
minimization because it has the negative coefficient with the highest absolute
value. In terms of Tableau II, however, x1 is the variable with the largest positive
coefficient in row 0―hence the above criterion. Accordingly, the x1 column
should be picked as the pivot column.
Here, since the smallest displacement quotient is

min{20/10, 20/5, 12/2} = min{2, 4, 6} = 2

row 1 is the pivot row, and the (encircled) element 10 is the pivot element. By
transforming the pivot column into a unit vector, we end up with Tableau III,
which gives us the solution (C, x1, v2, v3) = (9006/5, 2, 10, 8). Now that x1 has replaced
TABLE 18.7

                     Choice variables    Surplus variables              Artificial variables
Tableau  Row   C     x1       x2         s1        s2       s3         v1         v2        v3          Constant

I        1     0     10       4          –1        0        0          1          0         0           20
         2     0     5        5          0         –1       0          0          1         0           20
         3     0     2        6          0         0        –1         0          0         1           12

II       0     1     8497/5   1499       –100      –100     –100       0          0         0           5200
         1     0     10       4          –1        0        0          1          0         0           20
         2     0     5        5          0         –1       0          0          1         0           20
         3     0     2        6          0         0        –1         0          0         1           12

III      0     1     0        20481/25   3497/50   –100     –100       –8497/50   0         0           9006/5
         1     0     1        2/5        –1/10     0        0          1/10       0         0           2
         2     0     0        3          1/2       –1       0          –1/2       1         0           10
         3     0     0        26/5       1/5       0        –1         –1/5       0         1           8

IV       0     1     0        0          2498/65   –100     7481/130   –8998/65   0         –20481/130  35154/65
         1     0     1        0          –3/26     0        1/13       3/26       0         –1/13       18/13
         2     0     0        0          5/13      –1       15/26      –5/13      1         –15/26      70/13
         3     0     0        1          1/26      0        –5/26      –1/26      0         5/26        20/13

V        0     1     0        0          1/15      –19/75   0          –1501/15   –7481/75  –100        56/15
         1     0     1        0          –1/6      2/15     0          1/6        –2/15     0           2/3
         2     0     0        0          2/3       –26/15   1          –2/3       26/15     –1          28/3
         3     0     0        1          1/6       –1/3     0          –1/6       1/3       0           10/3

VI       0     1     0        0          0         –2/25    –1/10      –100       –2498/25  –999/10     14/5
         1     0     1        0          0         –3/10    1/4        0          3/10      –1/4        3
         2     0     0        0          1         –13/5    3/2        –1         13/5      –3/2        14
         3     0     0        1          0         1/10     –1/4       0          –1/10     1/4         1
v1 in the basis, the cost of the diet is substantially reduced: from $5200 to only
slightly over $1800.
The subsequent steps are nothing but the same process repeated. But note
that it will take two more pivot steps to drive out the remaining two artificial
variables. Inasmuch as the introduction of the vi variables inevitably lengthens
the computation process, we should try whenever possible to reduce the number
of such variables added. This can be done, for instance, if the first column of the
coefficient matrix happens to contain the elements 0, 1 and 0 (rather than
10, 5, 2), for then we may omit v2 and let x1 take its place in the initial basis.
Similarly, if the elements of that column are 0, 0 and 1, we can instead omit v3.
From Tableau II on, each successive tableau in Table 18.7 shows a reduction
in cost. When the cost is reduced to 14/5 = 2.80 in Tableau VI, we can tell from
row 0 that no further reduction is possible because no more positive entries are
present in the xi, si and vi columns. Our optimal solution is therefore
(x̄1, x̄2, s̄1, s̄2, s̄3) = (3, 1, 14, 0, 0)
C̄ = $2.80
which is identical with the graphical solution obtained in Sec. 18.1.
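The optimal surpluses can also be confirmed by direct substitution, as in the short check below (our Python arithmetic on the data of Table 18.1):

```python
# Substituting the optimum (x1, x2) = (3, 1) into the nutrient constraints.
import numpy as np

A = np.array([[10, 4], [5, 5], [2, 6]])   # nutrient content per lb (Table 18.1)
req = np.array([20, 20, 12])              # minimum daily requirements
x = np.array([3, 1])

print(A @ x - req)                        # surpluses (s1, s2, s3) -> [14  0  0]
print(0.6 * x[0] + 1.0 * x[1])            # cost C -> 2.8
```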
¹ Algorithm is a fancy word, meaning a routinized computational procedure.