
JOURNAL OF RESEARCH of the National Bureau of Standards - B. Mathematics and Mathematical Physics
Vol. 71B, No. 4, October-December 1967

A Pseudo Primal-Dual Integer Programming Algorithm*

Fred Glover**

(November 16, 1966)

The Pseudo Primal-Dual Algorithm solves the pure integer programming problem in two stages, systematically violating and restoring dual feasibility while maintaining an all-integer matrix. The algorithm is related to the Gomory All-Integer Algorithm and the Young Primal Integer Programming Algorithm, differing from the former in the dual feasible stage by the choice of cuts and pivot variable, and from the latter in the dual infeasible stage by the use of a more rigid (and faster) rule for restoring dual feasibility.

The net advance in the objective function value produced by the algorithm between two consecutive stages of dual infeasibility is shown to be at least as great as that produced by pivoting with the dual simplex method. Example problems are given that illustrate basic features and variations of the method.

Key Words: Gomory algorithm, integer programming, linear inequalities, maximization.

1. Introduction

The algorithm of this paper alternates between a dual feasible stage related to the Gomory All-Integer Integer Programming Algorithm [4]1 and a dual infeasible stage related to the Young Primal Integer Programming Algorithm [5]. The Pseudo Primal-Dual algorithm departs from the Gomory and Young algorithms, however, in its choice of cuts and pivot rules, and produces an objective function change between two consecutive stages of dual feasibility at least as great as that produced by a pivot with the dual simplex method. In addition, the number of iterations of the dual infeasible stage is less than a particular coefficient in the preceding dual feasible matrix.

Key features and variations of the algorithm are illustrated by detailed solution of example problems in the concluding section.

2. Description of the Problem

Using matrix notation, the problem we are concerned with may be written

Maximize x_0
Subject to X = AT,

where

X = (x_0, x_1, ..., x_m)^T,  T = (1, t_1, ..., t_n)^T,  x_i ≥ 0 for i = 1, ..., m,

A = (A_0, A_1, ..., A_n) = (a_ij),  i = 0, 1, ..., m,  j = 0, 1, ..., n,

and the general row equation of X = AT is represented

x_i = a_i0 + Σ_{j=1}^{n} a_ij(−t_j),  i = 0, 1, ..., m   (1)

The last n equations initially have the form

x_{m−n+j} = −(−t_j),  j = 1, ..., n.

The above problem represents the ordinary linear programming problem P1 when the x_i may assume fractional values and the pure integer programming problem P2 when the x_i are required to be integers. As is well known, X = A_0 provides an optimal solution to P1 when A is both primal and dual feasible, i.e., when a_i0 ≥ 0 for i = 1, ..., m and a_0j ≥ 0 for j = 1, ..., n. If in addition A_0 is all-integer, X = A_0 provides an optimal solution to P2.
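As a concreteness check, the row equations (1) can be evaluated directly from the column matrix A. The sketch below is mine (helper name and data layout are assumptions, not the paper's); the columns are those of Example Problem 1 in section 5.

```python
# Columns A_0..A_4 of Example Problem 1 (section 5); rows are x_0, x_1, ..., x_6.
A = [
    [0, -128, -45, 0, 0, 0, 0],   # A_0
    [23, -27, -22, -1, 0, 0, 0],  # A_1 (column of -x_3)
    [17, -20, -14, 0, -1, 0, 0],  # A_2 (column of -x_4)
    [3, -16, 9, 0, 0, -1, 0],     # A_3 (column of -x_5)
    [7, -17, 2, 0, 0, 0, -1],     # A_4 (column of -x_6)
]

def rows(A, t):
    """Row equations (1): x_i = a_i0 + sum_j a_ij * (-t_j)."""
    return [A[0][i] + sum(A[j][i] * (-t[j - 1]) for j in range(1, len(A)))
            for i in range(len(A[0]))]

print(rows(A, (0, 0, 0, 0)))  # t = 0 gives X = A_0
print(rows(A, (3, 0, 2, 1)))  # -> [-82, 2, 1, 3, 0, 2, 1]: the optimum found later
```

With all t_j = 0 the solution is read off as X = A_0, which is exactly the optimality situation described above once A is primal and dual feasible.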
3. The Dual Simplex Algorithm and the Gomory All-Integer Algorithm

The Dual Simplex Algorithm for solving P1 and the Gomory All-Integer Algorithm for solving P2 are closely related. A basic idea of these methods is to employ a nonsingular transformation of A and T to obtain a new representation X = ĀT̄ for X. Thereupon, Ā and T̄ assume the role of A and T, and the process repeats until an A matrix is obtained that satisfies the appropriate optimality criteria.

* An invited paper. This report was prepared as part of the activities of the Management Sciences Research Group, Carnegie Institute of Technology, under Contract NONR 760(24) NR 047-048 with the U.S. Office of Naval Research. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.
** Present address: School of Business, University of Texas, Austin, Texas 78712.
1 Figures in brackets indicate the literature references at the end of this paper.

In applying the Dual Simplex Algorithm, A begins and remains dual feasible. The precise rules of this method are as follows.

The Dual Simplex Algorithm (DSA) for Solving P1

1. If a_i0 ≥ 0 for i = 1, ..., m, then X = A_0 is optimal. Otherwise select r ≥ 1 such that a_r0 < 0.
2. If a_rj ≥ 0 for j = 1, ..., n, then P1 has no feasible solution. Otherwise, select u ≥ 1 such that a_ru < 0 and2 A_u/a_ru ≻ A_j/a_rj for all j ≥ 1, j ≠ u, such that a_rj < 0.
3. Determine Ā by the rules:

Ā_u = −(1/a_ru)A_u,  Ā_j = A_j − (a_rj/a_ru)A_u,  j = 0, ..., n, j ≠ u.

4. Let t̄_u ≡ x_r and t̄_j ≡ t_j for j ≠ u. Designate Ā and T̄ to be the current A matrix and T vector and return to 1.

The All-Integer Integer Programming Algorithm of Ralph Gomory modifies the DSA by introducing new equations called cuts to give the transformation of A into Ā. Specifically, in the context of P2, eq (1) implies the cut3

s_i = [a_i0/λ] + Σ_{j=1}^{n} [a_ij/λ](−t_j)   (2)

where s_i is a nonnegative integer variable. For an appropriate value of λ > 0, one may use (2) to determine Ā by replacing each occurrence of a_rj and a_ru in instructions 2 and 3 of the DSA by [a_rj/λ] and [a_ru/λ]. The initial A matrix is assumed not only to be lexicographically dual feasible, but also to consist entirely of integer coefficients. To keep Ā all-integer it evidently suffices to select λ so that [a_ru/λ] = −1 (although the index u may not be the same for the DSA and the all-integer algorithm). Thus, the Gomory algorithm can be described as follows.

The Gomory All-Integer Algorithm

1. If a_i0 ≥ 0 for i = 1, ..., m, then X = A_0 is optimal. Otherwise, select r ≥ 1 so that a_r0 < 0.
2. If a_rj ≥ 0 for j = 1, ..., n, then P2 has no feasible solution. Otherwise, define a′_rj = [a_rj/λ], j = 0, ..., n. Select v ≥ 1 and λ > 0 so that a′_rv = −1 and −A_v ≻ A_j/a′_rj for all j ≥ 1, j ≠ v, such that a′_rj ≤ −1.
3. Determine Ā so that

Ā_v = A_v,  Ā_j = A_j + a′_rj A_v,  j = 0, ..., n, j ≠ v.

4. Let t̄_v ≡ s_r and4 t̄_j ≡ t_j for j ≠ v. Designate Ā and T̄ to be the current A and T and return to 1.

It is evident that the Gomory algorithm results simply by applying instructions 2, 3, and 4 of the DSA to the cut eq (2) instead of the source eq (1) (for i = r). Simplifications in the computation of λ and v are given by Gomory in [4], where it is shown that finite convergence is guaranteed by periodically selecting r in instruction 1 to be the least i ≥ 1 such that a_i0 < 0.

The fundamental ideas underlying the all-integer algorithm and the DSA provide the conceptual starting points for the Pseudo Primal-Dual Algorithm, whose strategy and special characteristics we develop to follow.

4. The Pseudo Primal-Dual Algorithm

The Pseudo Primal-Dual Algorithm involves a sequence of "major iterations," each of which consists of several pivot steps using cuts of the form (2) derived from a single source row (or value of i). Each major iteration is divided into 2 stages. The first stage consists of a single pivot step. However, instead of selecting the pivot column v by applying the DSA criterion to (2) (as in the all-integer algorithm), the method selects v to be the same as u, i.e., by applying the DSA criterion to (1). In addition, λ no longer truly serves as a parameter, but is always −a_ru.5 If Stage 1 does not destroy dual feasibility, then Stage 2 is vacuous. Otherwise dual feasibility is restored by a sequence of "pseudo-primal" pivot steps using the column that is lexicographically most negative when divided by the corresponding coefficient in the source row (restricting attention to positive coefficients).

To specify the algorithm more precisely, we introduce the following additional notation. Relative to a selected equation r, let

A*_j = A_j/a_rj,  j = 1, ..., n, provided a_rj ≠ 0.

(Likewise, for the matrix Ā we define Ā*_j = Ā_j/ā_rj.)

Let u be determined so that6

u ≥ 1, a_ru < 0 and A*_u ≻ A*_j for all j ≥ 1, j ≠ u, such that a_rj < 0.   (3)

2 A vector α is defined to be lexicographically larger than a vector β (symbolized α ≻ β or β ≺ α) if the first nonzero component of α − β is positive. The condition of instruction 2 implies the more familiar condition a_0u/a_ru = Max {a_0j/a_rj : a_rj < 0}, and also provides a rule that assures a finite algorithm in the case of degeneracy. It is assumed here that A begins lexicographically dual feasible; that is, A_j ≻ 0 for all j ≥ 1. Note that A_p/a_rp = A_q/a_rq for p ≠ q and p, q ≥ 1 is impossible since the initial A_j vectors for j ≥ 1 contain the −I matrix, hence these A_j begin and remain linearly independent.
3 [y] denotes the greatest integer ≤ y.
4 The cut variable s_r should in strictness be additionally subscripted (e.g., with the iteration number) to avoid ambiguity; s_3 from iteration 5 may not be the same variable as s_3 from iteration 9.
5 Motivation for these choices is provided in [2], where it is shown that once u is selected, selecting λ = −a_ru may be interpreted as applying the Bound Escalation Method [1] to an equation that is less constraining than (1). One of the consequences of this is manifested in Theorem 5, below.
6 Note that this definition corresponds to the one given for u in instruction 2 of the DSA.
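The cut (2) is just componentwise floor division of the source row by λ, and Python's integer // operator is exactly the bracket operator [.] of footnote 3. A small sketch (function name is mine), using source row r = 2 of Example Problem 1 from section 5 with λ = −a_ru = 22:

```python
def gomory_cut(row, lam):
    """Cut (2): replace each a_ij by [a_ij / lam], where [.] is the
    greatest-integer (floor) operator; // implements it for integers."""
    return [a // lam for a in row]

row_r = [-45, -22, -14, 9, 2]   # row r = 2 of Example Problem 1, Tableau 0
lam = 22                        # lambda = -a_ru with u = 1
cut = gomory_cut(row_r, lam)
print(cut)                      # -> [-3, -1, -1, 0, 0]
```

Note that cut[1] = [a_ru/λ] = −1, so pivoting on that entry keeps the matrix all-integer, which is the requirement discussed above.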

In addition, let s be determined so that

s ≥ 1, a_rs > 0, and A*_s ≺ A*_j for all j ≥ 1, j ≠ s, such that a_rj > 0.   (4)

Beginning with A all-integer and lexicographically dual feasible, the Pseudo Primal-Dual Algorithm is then as follows.

The Pseudo Primal-Dual Algorithm (PPDA)

STAGE 1

1. If a_i0 ≥ 0 for i = 1, ..., m, then X = A_0 is optimal. Otherwise, select r ≥ 1 such that a_r0 < 0 (periodically, r = Min {i : a_i0 < 0}).
2. If a_rj ≥ 0 for all j ≥ 1, then P2 has no feasible solution. Otherwise, identify u by (3).
3. Determine Ā so that

Ā_u = A_u,  Ā_j = A_j + [a_rj/(−a_ru)]A_u,  j = 0, ..., n, j ≠ u.

4. Let t̄_u ≡ s_r and t̄_j ≡ t_j for j ≠ u. Designate Ā and T̄ to be the current A matrix and T vector. If Ā_j ≻ 0 for all j ≥ 1, return to instruction 1. Otherwise,

STAGE 2

5. Retain the index r unchanged, identify s by (4), and let

Ā_s = −A_s,  Ā_j = A_j − [a_rj/a_rs]A_s,  j = 0, ..., n, j ≠ s.

6. Let t̄_s ≡ s_r and t̄_j ≡ t_j for j ≠ s. Designate Ā and T̄ to be the current A and T. If Ā_j ≻ 0 for all j ≥ 1, return to instruction 1. Otherwise, return to instruction 5.

We observe that instruction 3 of the PPDA employs the Gomory cut (2) for i = r and λ = −a_ru, while instruction 5 employs (2) for i = r and λ = a_rs. Also the definition of s assures A*_s ≺ 0 if there exists a j ≥ 1 such that A_j ≺ 0 and a_rj > 0.

THEOREMS AND PROOFS7

To justify the algorithm and develop its properties, we will undertake to establish the validity of two very simple and important relationships for every A matrix it generates:

A*_u ≺ A*_s8   (5)

(a_rj = 0) ⇒ A_j ≻ 0 for all j ≥ 1.   (6)

We observe to start that (5) and (6) must hold whenever instruction 3 is initiated since the fact that A is always lexicographically dual feasible in Stage 1 implies A_j ≻ 0 for all j ≥ 1, hence A*_u ≺ 0 and A*_s ≻ 0. To show that (5) and (6) hold throughout the algorithm we introduce

LEMMA 1: Let Ā_w = K_w A_w and Ā_j = A_j − K_j A_w (j ≠ w), for any scalars K_j, K_w such that K_w ≠ 0. Then, if a_rw ≠ 0,

a_rj A*_w ≺ A_j   (7)

if and only if

ā_rj Ā*_w ≺ Ā_j.   (8)

PROOF: From the definitions, ā_rj Ā*_w = ā_rj A*_w = (a_rj − K_j a_rw)A*_w = a_rj A*_w − K_j A_w. Also Ā_j = A_j − K_j A_w. Thus ā_rj Ā*_w − Ā_j = a_rj A*_w − A_j, and the lemma follows at once.9

The definitions of Ā_w and Ā_j in Lemma 1 may be seen to accord with the definitions of Ā_w and Ā_j at instructions 3 and 5 of the algorithm for w = u and w = s respectively. Furthermore, if a_rj < 0, then condition (7) of Lemma 1 is the same as A*_w ≻ A*_j, permitting w to be identified with u, while if a_rj > 0, then (7) is the same as A*_w ≺ A*_j, permitting w to be identified with s. With these observations as a foundation, we state and prove the key result alluded to above.

THEOREM 1: Let Ā_w and Ā_j be given as in Lemma 1 for w = u or w = s, and let ū and s̄ be defined relative to the matrix Ā in the same way that u and s are defined relative to A. Then, if

(5) A*_u ≺ A*_s, and (6) (a_rj = 0) ⇒ A_j ≻ 0 for all j ≥ 1,

it follows that

(5̄) Ā*_ū ≺ Ā*_s̄, and (6̄) (ā_rj = 0) ⇒ Ā_j ≻ 0 for all j ≥ 1.

PROOF: Letting either w = u or w = s, A*_u ≺ A*_s is equivalent to

A*_p ⪯ A*_w ⪯ A*_q for all p, q ≥ 1, p, q ≠ w, such that a_rp < 0 and a_rq > 0.

This immediately implies (7) of Lemma 1 for a_rj ≠ 0, and (6) implies (7) for a_rj = 0. Thus (8) of Lemma 1 is true, which establishes (6̄) and

Ā*_p ≺ Ā*_w ≺ Ā*_q for all p, q ≥ 1, p, q ≠ w, such that ā_rp < 0 and ā_rq > 0.

7 Those wishing to defer consideration of the theorems and proofs can skip to the preliminary illustrative material of the next section with few sacrifices in understanding.
8 By convention, we assume that A*_u ≺ A*_s holds trivially if either u or s is not well-defined.
9 This proof evidently permits the lexicographic inequality signs in (7) and (8) to be reversed or replaced by equality signs. Also, the requirement a_rw ≠ 0 can be dropped after multiplying (7) through by a_rw and (8) through by ā_rw, thus replacing A*_w and Ā*_w with A_w and Ā_w.
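The Stage 1 update can be checked numerically against Example Problem 1 of the next section (its Tableau 0 and Tableau 1). The sketch below is mine (function and variable names are assumptions); it applies the instruction 3 rule with r = 2, u = 1:

```python
def stage1_pivot(A, r, u):
    """Instruction 3 of the PPDA: lambda = -a_ru; A_u is kept, and
    A_j becomes A_j + [a_rj / lambda] * A_u for j != u (all-integer)."""
    lam = -A[u][r]
    assert lam > 0
    new = []
    for j, col in enumerate(A):
        if j == u:
            new.append(list(col))
        else:
            k = col[r] // lam  # the bracket operator [.] via floor division
            new.append([col[i] + k * A[u][i] for i in range(len(col))])
    return new

# Tableau 0 of Example Problem 1, columns A_0..A_4, rows x_0..x_6:
A = [[0, -128, -45, 0, 0, 0, 0], [23, -27, -22, -1, 0, 0, 0],
     [17, -20, -14, 0, -1, 0, 0], [3, -16, 9, 0, 0, -1, 0],
     [7, -17, 2, 0, 0, 0, -1]]
B = stage1_pivot(A, r=2, u=1)
print(B[0])               # -> [-69, -47, 21, 3, 0, 0, 0] (column A_0 of Tableau 1)
print([c[2] for c in B])  # -> [21, -22, 8, 9, 2]: the updated source row
```

Note that in the updated source row every entry with j ≠ u lies in [0, −a_ru) = [0, 22), which is the content of Theorem 2 below.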

But the existence of a w satisfying this last relationship also implies that it must hold for w = ū and w = s̄, and hence is equivalent to (5̄).

We now restrict the Ā_w and Ā_j given in Lemma 1 to bring them into closer correspondence with the definitions provided by the algorithm. In doing so we establish the additional results required to demonstrate the properties claimed for the method in section 1.

COROLLARY 1: Let Ā_w and Ā_j be given as in Lemma 1. If K_w < 0 for w = s, and K_w > 0 for w = u, then (5) and (6) imply

A*_w ≺ Ā*_s̄.

PROOF: Since a_rs > 0 and a_ru < 0, the sign restrictions on K_w for w = u and w = s imply ā_rw < 0, and hence either Ā*_w ≺ Ā*_ū or w = ū. The relations Ā*_w = A*_w and Ā*_ū ≺ Ā*_s̄ immediately establish the corollary.

COROLLARY 2: Let Ā_w and Ā_j be given as in Lemma 1. If A*_w ≺ 0 for w = u or w = s, then (5) and (6) imply

A_j ≺ 0 ⇒ a_rj > 0 and Ā_j ≺ 0 ⇒ ā_rj > 0 for all j ≥ 1,   (9)

and

A*_u, A*_s, Ā*_ū, and Ā*_s̄   (10)

are lexicographically negative (if they exist).

PROOF: By the proof of Theorem 1, (5) and (6) imply (7) of Lemma 1 for both w = u and w = s. Thus if A*_w ≺ 0 for either w = u or w = s, it follows that A_w ≺ 0 ⇒ a_rw > 0, since A*_u ≺ 0 implies A_u ≻ 0, and a_rs > 0 holds by the definition of s. Also, A_j ≺ 0 ⇒ a_rj > 0 for j ≠ w by (7). Then, since (7) implies (8), and Ā*_w = A*_w, the same argument applied to (8) yields the second half of (9) above. Finally (10) follows from (9).

Corollary 1 establishes the important fact that Ā*_s̄ (letting w = s) is always lexicographically increasing in Stage 2 of the PPDA. Corollary 2 implies that s will always be meaningfully defined at instruction 5 of the algorithm. We require one additional result to establish finiteness for Stage 2.

THEOREM 2:10 For Ā given by instruction 3 of the PPDA,

−a_ru > ā_rj ≥ 0 for all j ≠ u,

and for Ā given by instruction 5 of the PPDA,

a_rs > ā_rj ≥ 0 for all j ≠ s.

PROOF: From instruction 5, ā_rj = a_rj − [a_rj/a_rs]a_rs. Since y ≥ [y] > y − 1 for all numbers y,

a_rj ≥ [a_rj/a_rs]a_rs > a_rj − a_rs,

and hence 0 ≤ ā_rj < a_rs. The proof for Ā given by instruction 3 is analogous.

THEOREM 3:11 Let δ be the value of a_rs on the first visit to instruction 5 during any execution of Stage 2. Then instruction 5 will be visited at most δ times before the current execution of Stage 2 is terminated with Ā_j ≻ 0 for all j ≥ 1.

PROOF: From instruction 5, ā_rs = −a_rs. Hence by Theorem 2, ā_rs̄ < a_rs. This can occur at most δ times, since if s is still meaningfully defined at the δ + 1st step, then a_rs ≤ 0, which is impossible. But by Corollary 2 to Theorem 1, s must be meaningfully defined unless Ā_j ≻ 0 for all j ≥ 1. This completes the proof.

It should be noted that, while the value of δ given by the preceding theorem provides an upper bound on the number of iterations of Stage 2, a larger value of δ will not necessarily entail a greater number of iterations than a smaller one.

Before stating and proving the theorem that establishes finiteness for the complete algorithm, we give two theorems that disclose additional properties of Stages 1 and 2.

THEOREM 4: Let w = u if Ā is defined by instruction 3 of the PPDA and w = s if Ā is defined by instruction 5. Then

Ā_j ≺ 0, j ≠ w

implies

−ā_pw < ā_pj, where p = Min {i : ā_iw ≠ 0}.

PROOF: By Corollary 2 to Theorem 1, ā_rj > 0, and hence by the definition of s̄, Ā*_s̄ ⪯ Ā*_j or j = s̄. From Corollary 1 to Theorem 1 it follows that A*_w ≺ Ā*_s̄ and a_pw/a_rw ≤ ā_pj/ā_rj. If a_0w = 0, then ā_0j = 0 is implied by Ā_j ≺ 0, and from Ā_w = K_w A_w we conclude ā_iw = a_iw = 0 for i < p. If ā_pj = 0, the theorem is immediately true. Thus, suppose ā_pj < 0. For w = s, we have a_rs > ā_rj (Theorem 2) and ā_ps = −a_ps < 0. Consequently −ā_ps < ā_pj. The theorem is similarly proved for w = u.

Theorem 4 implies that dual feasibility (though not necessarily lexicographic dual feasibility) must be restored in Stage 2 in at most −a_0s iterations, since a_0s < ā_0s̄ must occur at every visit to instruction 5 as long as a_0s < 0. This rate of progression toward dual feasibility in Stage 2 is significant in that it exceeds any that can be proved for the primal all-integer algorithms.12

10 An equivalent result was first given in the context of a primal algorithm by R. D. Young [5] and in the context of a dual algorithm at about the same time by the author in [1]. An interesting and easily proved consequence of this theorem, which we do not exploit here, is that the "converse" of Theorem 2 is valid when Ā_j is given by instruction 5 of the algorithm with w replacing s at that instruction; i.e., if Ā_j = A_j − [a_rj/a_rw]A_w and Ā_w = −A_w, then (5̄) and (6̄) imply (5), (6) and w = s. A corresponding result holds when Ā_j is given by instruction 3 with w replacing u.
11 More restrictively, δ can be the value of a_rs divided by the greatest common divisor of the a_rj for j ≥ 1. Also, note that Theorem 2 implies −a_ru > δ, where a_ru is given at instruction 3, thus providing a known upper bound on the number of iterations in Stage 2 before it is initiated.
12 The form of this progression has also led to a choice rule that guarantees finite convergence for a simplified primal method [3].
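Theorem 2 is the floor-remainder identity in disguise: with [.] realized by Python's // operator, ā_rj = a_rj − [a_rj/a_rs]·a_rs is the nonnegative remainder of a_rj modulo a_rs. A quick exhaustive spot-check over a sample range (the chosen pivot values are taken from the examples; the range itself is my own choice):

```python
# Spot-check of Theorem 2's bounds: 0 <= a_rj - [a_rj/a_rs]*a_rs < a_rs
# for integer data; // is the greatest-integer operator [.] of the paper.
for ars in (1, 6, 8, 22, 23):
    for arj in range(-130, 131):
        rem = arj - (arj // ars) * ars
        assert 0 <= rem < ars
print("Theorem 2 remainder bounds hold on the sampled range")
```

This is also why each Stage 2 step strictly decreases a_rs (Theorem 3): every new candidate pivot entry is a remainder modulo the previous one.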

Our next theorem shows that the advance (decrease) in the objective function value in Stage 1 is always greater than or equal to that produced by the Gomory all-integer algorithm.

THEOREM 5: Let a′_00 be the value of ā_00 obtained at instruction 3 of the Gomory algorithm and a″_00 be the value of ā_00 obtained at instruction 3 of the PPDA (for the same choice of r). Then a″_00 ≤ a′_00.

PROOF: a′_00 = a_00 + a_0v[a_r0/λ] and a″_00 = a_00 + a_0u[a_r0/(−a_ru)]. From the choice of λ and v specified in instruction 2 of the Gomory algorithm, we have

−[a_ru/λ] ≤ [a_0u/a_0v], and hence a_ru/λ ≥ −[a_0u/a_0v].

Since a_r0 < 0 and a_ru/λ ≥ −[a_0u/a_0v], it follows that

[a_0u/a_0v][a_r0/(−a_ru)] ≤ [a_r0/λ].

But a_0v[a_0u/a_0v] ≤ a_0u and [a_r0/(−a_ru)] < 0, and thus a_0v[a_r0/λ] ≥ a_0u[a_r0/(−a_ru)]. This implies a″_00 ≤ a′_00, completing the proof.

We now show that the net lexicographic decrease in A_0 brought about by the PPDA is at least as great as that produced by the dual simplex algorithm. As remarked earlier, this immediately implies that the PPDA is finite, thereby completing the justification of the algorithm.

THEOREM 6: Given the same A matrix and the same choice of r, the amount of the lexicographic decrease in A_0 resulting from two successive visits to instruction 1 of the PPDA equals or exceeds that resulting from two successive visits to instruction 1 of the DSA.

PROOF: Let A^0 denote the A matrix on the first of two consecutive visits to instruction 1 of the PPDA, and let Ā denote A on the second of these visits. In addition, for each iteration h at instruction 5, let A^h (h ≥ 1) denote the current A matrix and let K_h = [a^h_r0/a^h_rs],13 so that A^1 is the matrix produced at instruction 3. Then we may write

Ā_0 = A^1_0 − Σ_{h≥2} K_h A^h_s   (11)

and

ā_r0 = a^1_r0 − Σ_{h≥2} K_h a^h_rs.   (12)

Since ā_r0 ≥ 0, from (12) we obtain

a^1_r0 ≥ Σ_{h≥2} K_h a^h_rs.   (13)

Also, from the definition of A^h*_s,

Σ_{h≥2} K_h A^h_s = Σ_{h≥2} K_h a^h_rs A^h*_s.   (14)

By Corollary 1 to Theorem 1, it follows that A^h*_s ≻ A^0*_u for all h ≥ 1. Since K_h ≥ 0 and a^h_rs ≥ 1, (14) implies

Σ_{h≥2} K_h A^h_s ⪰ A^0*_u Σ_{h≥2} K_h a^h_rs.

Also, since A^0*_u ≺ 0, we have by (13) that

A^0*_u Σ_{h≥2} K_h a^h_rs ⪰ a^1_r0 A^0*_u.

Thus, from (11) we conclude

Ā_0 ⪯ A^1_0 − a^1_r0 A^0*_u.   (15)

Finally, using the fact that a^0_ru A^0*_u = A^0_u, the definitions of A^1_0 and a^1_r0 yield

A^1_0 − a^1_r0 A^0*_u = A^0_0 − a^0_r0 A^0*_u.   (16)

By the definition of u, the A_0 vector that replaces A^0_0 on the second of the two visits to instruction 1 of the DSA is A^0_0 − a^0_r0 A^0*_u. (We note that this results in a lexicographic decrease in A_0 since a^0_r0 < 0 and A^0*_u ≺ 0.) Thus, to prove the theorem we must show that

Ā_0 ⪯ A^0_0 − a^0_r0 A^0*_u,

and hence (15) and (16) establish the desired result.

13 We do not bother to represent the fact that s depends on h.

5. Example Problems and Comments

Three problems are solved in this section to illustrate the fundamental characteristics of the PPDA. In addition, variations of the PPDA are developed by informal example.

EXAMPLE PROBLEM 1:14

Maximize x_0 = 0 + 23(−x_3) + 17(−x_4) + 3(−x_5) + 7(−x_6)

subj. to x_1 = −128 − 27(−x_3) − 20(−x_4) − 16(−x_5) − 17(−x_6)
         x_2 = −45 − 22(−x_3) − 14(−x_4) + 9(−x_5) + 2(−x_6),  x_j ≥ 0 for j ≥ 1.

14 This problem may also be written in the form

Minimize 23x_3 + 17x_4 + 3x_5 + 7x_6
s.t. 27x_3 + 20x_4 + 16x_5 + 17x_6 ≥ 128
     22x_3 + 14x_4 − 9x_5 − 2x_6 ≥ 45

for nonnegative integer x_j, where x_1 and x_2 are introduced as slack variables to change the inequalities into equalities.
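The footnote's minimization form is small enough to verify by exhaustive search. The sketch below is mine; the variable ranges are an assumption justified by the fact that any value outside them already costs more than the solution found:

```python
from itertools import product

# Brute-force check of Example Problem 1 in its footnote-14 form:
# minimize 23*x3 + 17*x4 + 3*x5 + 7*x6 over nonnegative integers
# subject to 27*x3 + 20*x4 + 16*x5 + 17*x6 >= 128
#            22*x3 + 14*x4 -  9*x5 -  2*x6 >= 45.
best = min(
    (23*x3 + 17*x4 + 3*x5 + 7*x6, (x3, x4, x5, x6))
    for x3, x4, x5, x6 in product(range(4), range(5), range(28), range(12))
    if 27*x3 + 20*x4 + 16*x5 + 17*x6 >= 128
    and 22*x3 + 14*x4 - 9*x5 - 2*x6 >= 45
)
print(best)  # -> (82, (3, 0, 2, 1)), matching x_0 = -82 in the solution below
```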

Representing X = AT in detached coefficient (i.e., tableau) form, we have

0.          1   −x_3  −x_4  −x_5  −x_6
  x_0 =     0    23    17     3     7
  x_1 =  −128   −27   −20   −16   −17
→ x_2 =   −45   −22   −14     9     2
  x_3 =     0    −1     0     0     0
  x_4 =     0     0    −1     0     0
  x_5 =     0     0     0    −1     0
  x_6 =     0     0     0     0    −1

Beginning with instruction 1 of the PPDA, we observe that a_i0 < 0 for i = 1 and 2, and hence proceed to instruction 2. Equation r is selected at this step to be the one with the fewest negative components.15 Thus, r = 2, as indicated by the arrow in the tableau above pointing to row 2. From the definition of A*_u, u = 1 when r = 2, hence the pivot column is column 1 (A_1). Instructions 3 and 4 then yield the new tableau

15 Our choice here is based on considerations developed in [1].

1.          1   −s_2  −x_4  −x_5  −x_6
  x_0 =   −69    23    −6     3     7
  x_1 =   −47   −27     7   −16   −17
→ x_2 =    21   −22     8     9     2
  x_3 =     3    −1     1     0     0
  x_4 =     0     0    −1     0     0
  x_5 =     0     0     0    −1     0
  x_6 =     0     0     0     0    −1

In this tableau Ā_2 ≺ 0. Therefore, with r still at 2, we proceed to instruction 5 and apply the indicated transformation for s = 2. The updated tableau obtained at instruction 6 is then

2.
  x_0 =   −57     5     6     9     7
  x_1 =   −61    −6    −7   −23   −17
  x_2 =     5     2    −8     1     2
  x_3 =     1     2    −1    −1     0
  x_4 =     2    −3     1     1     0
  x_5 =     0     0     0    −1     0
  x_6 =     0     0     0     0    −1

Since the x_i along the left margin are unchanging and the t_j along the top margin are irrelevant, we do not bother in this and subsequent tableaus to specify the identity of these variables. In Tableau 2, Ā_j ≻ 0 for all j ≥ 1, requiring a return to instruction 1. Since a_10 = −61 < 0, instruction 2 is visited next, and r = 1 is the only choice. Now u = 3, and by instructions 3 and 4 we obtain

3.
  x_0 =   −84    −4    −3     9    −2
  x_1 =     8    17    16   −23     6
  x_2 =     2     1    −9     1     1
  x_3 =     4     3     0    −1     1
  x_4 =    −1    −4     0     1    −1
  x_5 =     3     1     1    −1     1
  x_6 =     0     0     0     0    −1

Once again Ā is dual infeasible. From its definition, s = 4, thereby at instructions 5 and 6 yielding the tableau

4.
  x_0 =   −82     0     1     1     2
  x_1 =     2     5     4     1    −6
  x_2 =     1    −1   −11     5    −1
  x_3 =     3     1    −2     3    −1
  x_4 =     0    −2     2    −3     1
  x_5 =     2    −1    −1     3    −1
  x_6 =     1     2     2    −4     1

Ā is now both primal and dual feasible, and the problem is solved. From X = Ā_0 we obtain the optimal solution x_0 = −82, x_1 = 2, x_2 = 1, x_3 = 3, x_4 = 0, x_5 = 2, x_6 = 1.

The next example problem illustrates additional features of the algorithm.

EXAMPLE PROBLEM 2:

Maximize x_0 = 0 + 3(−x_2) + 5(−x_3) + 9(−x_4) + 7(−x_5) + 13(−x_6)

subj. to x_1 = −398 − 6(−x_2) − 15(−x_3) − 36(−x_4) − 23(−x_5) − 41(−x_6),  x_j ≥ 0, j = 1, ..., 6.

For convenience we will not bother to write down the last five rows of the tableau corresponding to the −I matrix and zero vector, but will explicitly represent these rows only when they are changed from their original form. Thus, for the initial tableau we have

0.
  x_0 =     0     3     5     9     7    13
→ x_1 =  −398    −6   −15   −36   −23   −41

The arrows accompanying the tableau point to row r and column u. Since u = 3, the third row of the original −I matrix (corresponding to x_4) will be modified
by the transformation defined at instruction 3. Thus, in the resulting tableau below this modified row is included following the modified rows 0 and 1. We keep track of the components of X in the left margin since their order has been shuffled by our bookkeeping conventions.

1.
  x_0 =  −108    −6    −4     9    −2    −5
  x_1 =    34    30    21   −36    13    31
  x_4 =    12     1     1    −1     1     2

At instruction 5 the transformations are initiated to restore Ā to dual feasibility. Since s = 1, the first row of the original −I matrix (corresponding to x_2) will now be changed, yielding the new last row in the tableau below.

2.
  x_0 =  −102     6    −4    −3    −2     1
→ x_1 =     4   −30    21    24    13     1
  x_4 =    11    −1     1     1     1     1
  x_2 =     1     1     0    −2     0     1

This tableau is still dual infeasible and instruction 5 must therefore be repeated.

3.
  x_0 =  −102    −2     4     1    −2     1
→ x_1 =     4    12   −21     3    13     1
  x_4 =    11     1    −1     0     1     1
  x_2 =     1     1     0    −2     0     1
  x_3 =     0    −2     1     1     0     0

Once again repeating instruction 5 we obtain

4.
  x_0 =  −102     2     0     1     0     1
  x_1 =     4   −12     3     3     1     1
  x_4 =    11    −1     1     0     0     1
  x_2 =     1    −1     2    −2    −1     1
  x_3 =     0     2    −3     1     2     0

The problem is now solved, and an optimal solution is given by x_0 = −102, x_1 = 4, x_4 = 11, x_2 = 1, and x_3 = x_5 = x_6 = 0.

For the preceding problem, we note that the optimal solution was already given in Tableau 2. Since Ā was not dual feasible, however, the solution was not identified as optimal at that point. Nevertheless, it would have been possible to make this identification in the following way.

We create a new column from A_s (s = 2) in Tableau 2 by dividing A_s through by −a_0s (= 4)16 and then adjoin this column to the right of the others in the tableau. Since the variable associated with this column (call it z) must be zero in the final solution, we also adjoin the two equations to assure z ≥ 0 and −z ≥ 0 at the end of the tableau. Thus we obtain

2′.
  x_0 =  −102     6    −4    −3    −2     1  |   −1
→ x_1 =     4   −30    21    24    13     1  | 21/4
  x_4 =    11    −1     1     1     1     1  |  1/4
  x_2 =     1     1     0    −2     0     1  |    0
   z  =     0     0     0     0     0     0  |   −1
  −z  =     0     0     0     0     0     0  |    1

The new column is segregated by the added partition. It is evident by its construction that this column must qualify as the new A_s.17

It is unnecessary to carry out the computations at instruction 5 in order to predict two things about the matrix Ā that will result.

First, since the first component of the new column in Tableau 2′ is −1 and the components of row 0 are integers,18 it follows from Theorem 4 that ā_0j = 0 for all j ≥ 1 such that Ā_j ≺ 0. Consequently, then, Ā must be dual feasible (though perhaps not lexicographically dual feasible).

The second thing to observe is that [a_r0/a_rs] = 0 (for a_r0 = 4 and a_rs = 21/4), and hence Ā_0 = A_0. This fact and the one just established assure that the (primal) feasible solution values for the x_i given in Tableau 2 (and 2′) must also be optimal. In short, we have established that [a_r0/a_rs] = 0 is a sufficient condition for a feasible solution to be optimal.

If adjoined rows and columns are actually employed in solving the problem, and not simply as a means of checking for optimality, then it will eventually be possible to restore the tableau to its original size.19 This approach of adjoining columns may also be used at Step 2 to prevent Ā from becoming dual infeasible in the first place. There are clearly a number of possible variations, and by following appropriate rules the tableau need not be expanded to the extent depicted by our illustration each time a new variable is added. To insure convergence it is of course necessary to forbid the addition of an unlimited number of rows and columns to the tableau.

Our last example problem is a very simple one that poses considerable difficulty for Stage 2 of the PPDA.

16 If a_0s = 0, we instead divide through by the negative of the first nonzero component of A_s.
17 More generally, of course, we could adjoin any column A_h ≺ 0 (to qualify as the new A_s) such that A*_u ≺ A*_h ⪯ A*_s.
18 By permitting rational numbers in the tableau, it suffices more generally to select the first component of the adjoined column to be −1/k, where k·a_ij is an integer for all i and j. If row 0 of the adjoined column is zero, then our remarks have reference instead to the first row i such that a_is ≠ 0. To demonstrate that a feasible solution is optimal, however, consideration may be limited as above to row 0.
19 By selecting row r to be one or the other of the adjoined rows (which will always be negatives of each other), and persisting in this, eventually there will remain only one column of the tableau with nonzero components in these rows, at which time the indicated column and rows may be dropped. The optimal solution may of course be obtained before this size reduction process is completed.
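The Stage 2 arithmetic used throughout these examples can be replayed mechanically. The sketch below is mine (names are assumptions); it applies the instruction 5 rule to the columns of Example Problem 1's Tableau 1 and reproduces Tableau 2:

```python
def stage2_pivot(A, r, s):
    """Instruction 5 of the PPDA: A_s becomes -A_s and
    A_j becomes A_j - [a_rj / a_rs] * A_s for j != s."""
    ars = A[s][r]
    assert ars > 0
    new = []
    for j, col in enumerate(A):
        if j == s:
            new.append([-x for x in col])
        else:
            k = col[r] // ars  # the bracket operator [.] via floor division
            new.append([col[i] - k * A[s][i] for i in range(len(col))])
    return new

# Columns A_0..A_4 of Tableau 1 (Example Problem 1), rows x_0..x_6:
T1 = [[-69, -47, 21, 3, 0, 0, 0], [23, -27, -22, -1, 0, 0, 0],
      [-6, 7, 8, 1, -1, 0, 0], [3, -16, 9, 0, 0, -1, 0],
      [7, -17, 2, 0, 0, 0, -1]]
T2 = stage2_pivot(T1, r=2, s=2)
print(T2[0])               # -> [-57, -61, 5, 1, 2, 0, 0] (column A_0 of Tableau 2)
print([c[0] for c in T2])  # -> [-57, 5, 6, 9, 7]: row 0 of Tableau 2
```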

- 1
(We will later show how to overcome the difficulty Two interacting features of Tableau 1 bequeathed
by making the algorithm more flexible than the ver- by Stage 1 appear to have contributed to the difficulty
sion of section 3_) encountered in Stage 2: (i) and Ai
are nearly the Ai
EXAMPLE PROBLEM 3: same, and (ii) the components a03 an d a r 3 of the vector
A3 = A 1+ A2 are small in absolute value relative to the
corresponding components of both Al and Az. 20
Maximize Xo = 0 + 1(- X2) + 28(- X3)
Conditions such as these may be taken to indicate
subj_ to that the transformation employed in Stage 1 should
be modified to provide a different A matrix for Stage 2.
Xj ~ 0 for j ~ 1. Specifically, we interpret (i) and (ii) to imply that the
choi ce of u in Tableau 0 would better be given by
u = 1 instead of u=2_
How is such an altered choice possible? Note that,
0_ o 1 28 if the coefficient of al1 in Tableau 0 were decreased
~ -98 -1 -45 sufficiently, then u = 1 would result by definition.
o -1 o Thus we wish to adjoin to the tableau a new equation
o o -1 (to be designated equation r) which is the same as
eq (1) except that the coefficient of - tl is appropri-
The next three tableaus are written without addi- ately decreased. Such an equation can always be
tional comment. created by adding a sufficient- positive multiple M of
tl=-I(- tl) to eq (1). Defining X4=X I +Mt l (~0) in
Tableau 0, we have I
I
L -84 -27 28
~ 37 -45 X4 = - 98 - (1 + M) ( - t I) - 45 (- t2 ) .
44
o -1 o To assure u = 1 when equation 4 ass umes the role of
3 1 -1
equation r , we require - 1/ (1 + M) > - 28/45 or
1 + M > 45/28. Consequently we assign - al4 the value
45/28 + E(- al4 = 1 + M), for E > 0_
Adjoining the new equation to Tableau 0 yields the
2. -84 27 -26
augmented tableau. 2 1
~ 37 -44 43
o 1 -2
3 -1 1
0'. XO= 0 1 28
XI = -98 -1 -45
X2 = 0 -1 0
3. -84 -25 26 X3 = 0 0 -1
~ 37 42 -43
o -3 2
3 1 -1 -98 - (~~ +E)-45
One may infer from the structure of this problem Note that by letting al4 be a,,1t in this example, the
that after six more steps we will obtain process of selecting a value of al4 corresponds pre-
cisely to selecting a value of A for a Gomory cut, when-
ever the transformation of A into II is carried out as
-64 -19 20 specified in the PPDA.22 In particular, - a14= ~~ gives
~ 0 36 -37
8 -9 8 the (largest permissible) A value prescribed by the
2 1 -1 All Integer Algorithm (using eq (1) as source equation).
Suppose instead, however, that we wish to select
This tableau gives an optimal solution by the re- al4 by taking E arbitrarily small. The effect of this in

marks relating to the previous example problem. How- transforming A into II with the PPDA can easily be
ever, to restore dual feasibility we may project by given without specifying a value of E at all, provided
inference that 20 additional iterations of Stage 2 are the updated form of equation 4 itself is disregarded.
²⁰ One can readily make these conditions more precise by the type of "difference" analysis that permits the last two tableaus above to be inferred without carrying out the intervening computations. (There is of course no difficulty in defining (i) and (ii) for more general situations. For simplicity, however, we continue our discussion by reference to the example problem.)
²¹ The transformations of the PPDA and the theorems of the preceding section do not require xr to be an integer variable or equation r to contain integer coefficients.
²² The Gomory all-integer algorithm can be described in terms of the algorithm of [1] in this way. See, e.g., [2].
To see this we observe that for any number y > 0 and -ar4 = y + ε (with ε arbitrarily small), we have

[arj/(y + ε)] = [arj/y]

if arj <= 0 or arj/y is noninteger, while

[arj/(y + ε)] = arj/y - 1

otherwise (where [x] denotes the greatest integer not exceeding x). Thus, by instruction 3 of the PPDA we obtain from Tableau 0'

1'.   x0 =  -61     1     0
      x1 =  -37    -1   -17
   →  x2 =   61    -1    28
      x3 =    0     0    -1

where we have dropped equation 4 from the new tableau.
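These two cases can be verified mechanically. The following sketch is our own check (not part of the PPDA): it compares [arj/(y + ε)] for a small rational ε against the closed forms above, using exact rational arithmetic so that integrality of arj/y is tested reliably.

```python
from fractions import Fraction
from math import floor

def limit_floor(a, y):
    """Closed form for floor(a/(y + eps)) as eps -> 0+ (y > 0)."""
    q = Fraction(a) / Fraction(y)
    if a <= 0 or q.denominator != 1:   # arj <= 0, or arj/y noninteger
        return floor(q)
    return q - 1                       # arj > 0 and arj/y an integer

y = Fraction(45, 28)
eps = Fraction(1, 10**9)               # stand-in for "arbitrarily small"
for a in (-98, -45, -1, 0, 7, 45, 90):
    assert floor(Fraction(a) / (y + eps)) == limit_floor(a, y)
print("all cases agree")
```

In particular, the entries -98, -1, -45 of eq (1) give -61, -1, -28 under either form, as in the cut used above.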
However, dropping this equation is not permissible by a straightforward application of the PPDA, since A is not lexicographically dual feasible and equation r is required to define the transformation in Stage 2. To remedy this apparent difficulty, we note that Theorem 1 and its corollaries immediately imply that the equation tk = -1(-tk) will be transformed by instruction 3 into an equation that satisfies the criteria for equation r.

In our present example, eq (3) corresponds to tk = -1(-tk)²³ in Tableau 0' and hence qualifies as equation r in Tableau 1'. Applying instruction 5 of the PPDA to Tableau 1' for r = 3, we obtain

2'.   x0 =  -61     1     0
      x1 =   -3   -18    17
      x2 =    5    27   -28
      x3 =    2    -1     1

The PPDA now obtains an optimal solution after six more steps, considerably improving upon the solution attempt that disregarded the form of the Ā matrix encountered in Stage 2 (as a result of the Stage 1 transformations).²⁴

Other related ways of increasing the range of alternatives available to the PPDA are suggested by the foregoing discussion. For example, one may create a new equation to take the role of equation r by decreasing more than one of the arj (or even increasing ar0), applying this in Stage 2 as well as Stage 1, provided Āj ≺ 0 whenever arj > 0 and Āj ≺ Aj for the new equation r. Also, if there is a second equation that has the appropriate form for equation r in Stage 2 and has the same value for s, then it is easily proved that any convex combination of the two equations will qualify as the new equation r.
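The pivot mechanics behind Tableau 1' can also be replayed in a few lines. The sketch below is our illustration (not a verbatim transcription of instruction 3): it takes the cut row s = -61 - 1(-t1) - 28(-t2) obtained from eq (1) with λ = 45/28, solves it for (-t1) using the -1 pivot entry, and substitutes into each row xi = ai0 + ai1(-t1) + ai2(-t2) of Tableau 0.

```python
# Rows of Tableau 0 (x0, x1, x2, x3); columns: constant, coeff of (-t1), of (-t2)
T = [[0, 1, 28], [-98, -1, -45], [0, -1, 0], [0, 0, -1]]
cut = [-61, -1, -28]   # cut row s; its pivot entry cut[1] = -1

# Since cut[1] = -1:  (-t1) = cut[0] + (-s) + cut[2]*(-t2).
# Substituting into a row x = c + m*(-t1) + b*(-t2) gives (c + m*cut[0], m, b + m*cut[2]).
new_T = [[c + m * cut[0], m, b + m * cut[2]] for c, m, b in T]
print(new_T)  # rows of Tableau 1': [[-61, 1, 0], [-37, -1, -17], [61, -1, 28], [0, 0, -1]]
```

Because the pivot entry is -1, every new coefficient is an integer combination of old ones, so the tableau stays all-integer.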
²³ If this equation does not appear in the tableau, it can always be added.
²⁴ Tableau 2' illustrates the applicability of an additional solution strategy that can be employed in conjunction with the PPDA. The submatrix consisting of the two middle rows of the tableau is a special instance of a structure called the bounding form, which frequently appears in certain "hard" integer programs and can be exploited efficiently with the algorithm of [1].

6. References

[1] Glover, Fred, A bound escalation method for the solution of integer linear programs, Cahiers du Centre d'Études de Recherche Opérationnelle, 6, Brussels (1964).
[2] Glover, Fred, An extension of the bound escalation method for integer programming: A pseudo primal-dual algorithm of the Gomory All-Integer variety, Management Sciences Research Report No. 49, Carnegie Institute of Technology (July 1965).
[3] Glover, Fred, A new foundation for a simplified primal integer programming algorithm, ORC 66-39, University of California, Berkeley (Nov. 1966), to appear in JORSA.
[4] Gomory, Ralph E., All-Integer Integer Programming Algorithm, Industrial Scheduling, J. F. Muth and G. L. Thompson, Eds. (Prentice-Hall, 1963).
[5] Young, R. D., A primal (all-integer) integer programming algorithm, J. Res. NBS 69B (Math. and Math. Phys.), No. 3, 213 (1965).

(Paper 71B4-243)
