
Comp. Appl. Math., V. 12, no. 3, pp. 199-226, 1993
© Sociedade Brasileira de Matemática Aplicada e Computacional, printed in U.S.A.

ACTIVE SET METHODS FOR PROBLEMS
IN COLUMN BLOCK ANGULAR FORM

J.M. STERN¹ and S.A. VAVASIS²

¹Departamento de Ciência da Computação
Universidade de São Paulo
Cx.P. 20570, CEP 01498
São Paulo, Brasil
and
²Computer Science Department
Cornell University
Ithaca, NY 14853, USA

ABSTRACT: We study active set methods for optimization problems in Block Angular Form (BAF). We begin by reviewing some standard basis factorizations, including Saunders' orthogonal factorization, and updates for the simplex method that do not impose any restriction on the pivot sequence and maintain the basis factorization structured in BAF throughout the algorithm. We then suggest orthogonal factorization and updating procedures that allow coarse grain parallelization, pivot updates local to the affected blocks, and independent block reinversion. A simple parallel environment appropriate to the description and complexity analysis of these procedures is defined in Section 5. The factorization and updating procedures are presented in Sections 6 and 7. Our update procedure outperforms conventional updating procedures even in a purely sequential environment.

RESUMO: ACTIVE SET METHODS FOR PROBLEMS IN BLOCK ANGULAR FORM. Basis factorizations are reviewed, including Saunders' orthogonal factorization. Orthogonal factorizations and updating procedures in a parallel environment are then suggested.

KEYWORDS: Linear programming, simplex method.


¹Supported by Fundação de Amparo à Pesquisa do Estado de São Paulo grant 87-0546-9. Part of this work was done while the author was visiting a National Laboratory for the Summer Institute in Parallel Programming 1990. Part of this work was also supported by the School of Operations Research and Industrial Engineering at Cornell University.
²Part of this work was done while the author was visiting Sandia National Laboratories, Albuquerque, New Mexico, supported by the U.S. Department of Energy under contract DE-AC04-76DP00789. Part of this work was also supported by the Applied Mathematical Sciences Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000 and in part by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS 8920550.

#278/92. Received: 20/VI/92. Accepted: IV/93.



1. ACTIVE SET METHODS

Consider the linear program

    LP: \min \{ f x : x \ge 0 \text{ and } A x = d \}.

If A is m × n, a (nondegenerate) vertex of the feasible region has n − m active inequality constraints, i.e. n − m variables are set to 0. These are the residual variables for that vertex. Permuting the vector x to separate its basic, i.e. nonzero, and residual entries, and partitioning f and A accordingly, we can write LP as:

    \min \left\{ [f^b\ f^r] \begin{bmatrix} x^b \\ x^r \end{bmatrix} : x \ge 0,\ [B\ R] \begin{bmatrix} x^b \\ x^r \end{bmatrix} = d \right\}.
Using the basis inverse, $B^{-1}$, we can isolate the basic variables of this vertex,

    x^b = \tilde{d} - \tilde{R} x^r = B^{-1} d - B^{-1} R x^r,

and the value of the objective function at this vertex is

    \phi = f^b x^b + f^r x^r = f^b \tilde{d} + (f^r - f^b \tilde{R}) x^r.


If we make a single element of $x^r$ positive, $x^r_j > 0$, the value of $x^b$, the basic solution, becomes

    x^b = \tilde{d} - \tilde{R}_{:,j}\, x^r_j,

and remains feasible if nonnegative. This suggests the simplex method, for going from a feasible vertex to a better feasible vertex. In one step of the simplex we:
- Look for a residual index j such that $z_j < 0$, where $z = f^r - f^b \tilde{R}$ is the vector of reduced costs.
- Compute $i = \arg\min_{i \,|\, \tilde{R}_{i,j} > 0} \{ \tilde{d}_i / \tilde{R}_{i,j} \}$.
- Make variable $x^r_j$ basic, and $x^b_i$ residual.
- Compute the new basis inverse.
The simplex method cannot proceed if $z \ge 0$ in the first step, or if it takes the minimum of an empty set in the second step. The second case corresponds to an unbounded LP, and in the first case the current vertex is an optimal solution. Swapping the basic/residual status of a pair of variables is called (to) pivot. At every pivot we have to update $\tilde{R}$, i.e. recompute the fundamental basis $[I;\, -\tilde{R}]$ of the null space of $[R\ B]$. Historically the first version of the simplex algorithm, the tableau simplex, did exactly that, updating the tableau matrix $[\tilde{d}\ \tilde{R}]$ at every pivot. But we really do not need to carry the fundamental null space basis explicitly. It suffices to have a factorization of B that allows us to compute the one column of $\tilde{R}$, i.e. the one fundamental feasible direction that we need at every iteration: see [46]. This is the revised simplex method of [13].
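As an illustration of one such iteration, here is a minimal dense sketch in Python/NumPy; it is our own illustration, not the paper's implementation, and the generic np.linalg.solve stands in for the structured factorizations of B developed in the following sections.

```python
import numpy as np

def revised_simplex_step(B, R, f_b, f_r, d):
    """One revised-simplex iteration (dense sketch).

    Returns None at an optimal vertex, or the pivot pair (i, j):
    residual column j enters the basis, basic variable i leaves."""
    d_tilde = np.linalg.solve(B, d)              # current basic solution
    z = f_r - f_b @ np.linalg.solve(B, R)        # reduced costs f^r - f^b R~
    j = int(np.argmin(z))
    if z[j] >= 0:
        return None                              # optimal vertex
    r_j = np.linalg.solve(B, R[:, j])            # the one column of R~ we need
    pos = r_j > 0
    if not pos.any():
        raise ValueError("LP is unbounded")
    ratios = np.full_like(d_tilde, np.inf)
    ratios[pos] = d_tilde[pos] / r_j[pos]
    i = int(np.argmin(ratios))                   # ratio test: leaving index
    return i, j
```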
We can generalize this simple strategy to problems with nonlinear objective functions, see [3] and [29], or even to problems with nonlinear constraints, see [31] and [8]. These are called active set methods (ASMs) and they all need, in explicit or implicit form, the fundamental null space basis of $[R\ B]$. The best form in which to maintain and update the fundamental null space basis is highly dependent on the form and structure of the problem. In this paper we examine this question for problems whose constraint matrix, A, is in column block angular form. Minor variations of all our considerations and algorithms apply to problems in row block angular form.

2. THE COLUMN BLOCK ANGULAR FORM

Optimization problems in BAF are very common in practice, like multiperiod control, scenario investment planning, stochastic programming, truss or circuit optimization, economic stabilization, etc. [2], [15], [19], [26], [27], [30], [38], [39], and [43].
A matrix in CBAF is a block matrix A with b × (b+1) blocks, $A^{k,l}$, 1 ≤ k ≤ b, 1 ≤ l ≤ b+1, where only the diagonal blocks, $D^k = A^{k,k}$, and the (column) angular blocks, $E^k = A^{k,b+1}$, contain nonzero elements (NZEs). Block $D^k$ has dimension m(k) × na(k), and block $E^k$ has dimension m(k) × na(b+1).
We define the concatenations, shown in Figure 1,

    A^{:,k} = [A^{1,k}; \ldots; A^{b,k}]   for 1 ≤ k ≤ b+1,
    A^{k,:} = [A^{k,1}, \ldots, A^{k,b+1}]   for 1 ≤ k ≤ b.

The element in row i and column j of the matrix $D^k$ is $D^k_{i,j}$, and $D^k_{i,:}$ and $D^k_{:,j}$ are, respectively, the block's ith row and jth column. In the same way, $A^{:,k}_{:,j}$ and $A^{k,:}_{i,:}$ are the jth column and the ith row of $A^{:,k}$ and $A^{k,:}$, respectively.
In an ASM we always have a special square and nonsingular matrix of columns of A, the basis B. The nonbasic columns of A form the residual matrix, R. We always assume that the CBAF structure is maintained in B and R, as illustrated in Figure 1. If the kth diagonal block of B, $B^k$, has n(k) columns of $D^k$, and the kth diagonal block of R has the remaining da(k) = na(k) − n(k), we define $bp^k(1 \ldots n(k))$ and $rp^k(1 \ldots da(k))$ as the basic and residual column indices of $D^k$, in the order they appear, respectively, in $B^k$ and in the kth diagonal block of R. So $bp^k$ and $rp^k$, for 1 ≤ k ≤ b+1, are a complete characterization of B and R: the diagonal blocks of the basis are $B^k = D^k(:, bp^k(1:n(k)))$ and the corresponding angular blocks of the basis are $C^k = E^k(:, bp^{b+1}(1:n(b+1)))$. In the same way, the diagonal and angular blocks of the residual matrix are $D^k(:, rp^k(1:da(k)))$ and $E^k(:, rp^{b+1}(1:da(b+1)))$. In the last equations the colon (:) is to be interpreted as an index range operator: $1:n = [1, 2, \ldots, n]$, and if A is m × n, $A(:,:) = A(1:m, 1:n)$ [12].
We define d(k) = m(k) − n(k), and since we assumed B to be nonsingular, d(k) ≥ 0. Also, since B is square, $n(b+1) = \sum_{k=1}^{b} d(k)$.
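To make the $bp^k$ bookkeeping concrete, the following small Python/NumPy sketch (our own illustration, with 0-based indices) assembles a dense basis with the CBAF layout of Figure 1 from the blocks $D^k$, $E^k$:

```python
import numpy as np

def assemble_basis(D, E, bp):
    """Assemble a dense CBAF basis B from its bp^k characterization.

    D[k], E[k] are the diagonal and angular blocks of A (k = 0..b-1,
    0-based); bp[k] lists the basic columns of D^k, and bp[b] the basic
    angular columns, so B^k = D^k(:, bp^k) and C^k = E^k(:, bp^{b+1})."""
    b = len(D)
    Bk = [D[k][:, bp[k]] for k in range(b)]
    Ck = [E[k][:, bp[b]] for k in range(b)]
    rows = [np.hstack([Bk[k] if j == k else
                       np.zeros((D[k].shape[0], len(bp[j])))
                       for j in range(b)] + [Ck[k]])
            for k in range(b)]
    return np.vstack(rows)
```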

3. BASIS FACTORIZATIONS

Figure 1. The Column Block Angular Form.

The basic numerical operation in an ASM is to compute and update the inverse, or a factorization, of the current basis B. The factorization most commonly used is the Gaussian, $LU = B$, which allows us to easily compute $B^{-1}x = U^{-1}(L^{-1}x)$ from the lower and upper triangular factors L and U. In fact, we usually have the LU factorization of QBP, where Q and P are (row and column) permutation matrices. We need or can use the permutations Q and P in order to:
- Maintain numerical stability.
- Preserve factors' sparsity.
- Preserve factors' structure.
Our main goal in this paper is to preserve the block structure in the factorization. We want to use the CBAF structure to parallelize independent "block operations" in our algorithms, as explained in the next sections. Preserving structure is also a first step to preserve sparsity, i.e., structure can be seen as a block scale or "macroscopic" sparsity: see [5] and [25]. Structure and sparsity are aspects of a combinatorial nature, whereas stability is an analytical one. Not surprisingly, the criteria for choosing P and Q that would optimize the combinatorial and the analytical properties of the factorization are conflicting. Let us examine this point more carefully:
In order to preserve the CBAF structure in the factorization, we shall restrict the choice of P, allowing column permutations only within each diagonal block, or within the angular columns; refer to Figure 1. So doing we can implicitly give the permutation P by the vectors $bp^k$, $rp^k$, 1 ≤ k ≤ b+1, as they are defined in Section 2.
The row permutation Q will divide each diagonal block in two:
- an upper square block, chosen to be nonsingular, to be factored;
- a lower rectangular block, of dimension d(k) × n(k).
As the ASM progresses, we have to pivot, i.e. we have to replace a column of B by a column of R, and then update the factorization. If we could guarantee that, at each diagonal block, the upper block remains nonsingular, then it would be easy to do these updates preserving the CBAF structure of the factors. Unfortunately no such guarantee exists. In fact, in order to achieve numerical stability, we will need to permute upper and lower rows: see [20] and [44].
These permutations between upper and lower rows, in successive updates, lead to a degeneration of the CBAF structure in the factorization. Some strategies have been suggested to preserve block angular structures in the LU factorization, see [5] and [45], and they all have to forbid this type of permutation. Some remedies are suggested to preserve stability; nevertheless, some pivots cannot be handled. This is very inconvenient for, as explained in Section 1, we want the pivot sequence to be determined by the ASM, and to have nothing to do with the details of how we are carrying the implicit fundamental null space basis of $[R\ B]$.

4. THE ORTHOGONAL FACTORIZATION

In order to preserve the CBAF structure of B in the factorization, we will use the orthogonal factorization $QU = B$, where Q is orthogonal and U upper triangular. The orthogonal factorization of B is uniquely defined, up to a choice of signs [24]. Furthermore a permutation matrix, P, is itself orthogonal. So the upper triangular factor, U, in the orthogonal factorization $QU = PB$, or $(P^t Q)U = Q'U = B$, must be independent of the row permutation.
Once we have the QU factorization of B, only the U factor needs to be stored. What we need is a factorization of the inverse, and instead of using

    B^{-1} = U^{-1} Q^t

we can use

    Q^t = U^{-t} B^t

and

    B^{-1} = U^{-1} U^{-t} B^t.

Now, since $(QU)^t QU = U^t Q^t Q U = U^t U = B^t B$, we can also compute U by the Cholesky factorization of the symmetric matrix $B^t B$; so the orthogonal factor, Q, never has to be explicitly computed: see [22].
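A minimal dense sketch of this observation in Python (our own illustration): the Cholesky factor of $B^tB$ plays the role of U, and a solve with B then takes only two triangular back solves.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def solve_via_gram_cholesky(B, d):
    """Solve B x = d using only the Cholesky factor U of B^t B,
    exploiting B^{-1} = U^{-1} U^{-t} B^t; Q is never formed."""
    U = cholesky(B.T @ B, lower=False)             # U^t U = B^t B
    y = solve_triangular(U, B.T @ d, trans='T')    # y = U^{-t} B^t d
    return solve_triangular(U, y)                  # x = U^{-1} y
```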
We can see that U is itself in CBAF, with b n(k) × n(k) upper triangular diagonal blocks, $V^k$, b corresponding n(k) × n(b+1) rectangular angular blocks, $W^k$, and the final south-east n(b+1) × n(b+1) triangular block, S.
Bodewig has already considered the idea of symmetrizing a matrix in order to solve a linear system, i.e. solve $B^tBx = B^td$ instead of $Bx = d$, in order to get a more stable procedure that is independent of the row permutation, [7]. For the same reasons Saunders considered the use of the QU factorization in the simplex method, and observed that the $B = LQ$ factorization would preserve the block angular structure of problems in row block angular form, [40].

5. BLOCK OPERATIONS

In this and the next sections we derive some procedures to compute and update the Cholesky factor, U, for bases in CBAF using only a few simple block operations. Moreover our procedures allow many of these block operations to be performed in parallel. Our procedures have better complexity bounds than factorizations and updates that do not explicitly use the block structure of the basis, as [22], [24] or [40]. Our update procedure will give us much better bounds even in a purely sequential environment.
Let us consider an efficient direct block QR factorization procedure, bqr(), illustrated in Figure 2, that takes advantage of the CBAF structure of B in order to parallelize several steps in the basis' factorization:
1. Compute (in parallel) the QU factorizations of the b diagonal blocks,

    [V^k; 0] = (Q^k)^t B^k.

2. Apply (in parallel) the orthogonal transformations to the angular blocks, computing

    [W^k; Z^k] = (Q^k)^t C^k.
Figure 2. The QR Factorization.

3. Form and factor the "south-east" block Z, i.e.,

    S = (Q^{b+1})^t Z,   Z = [Z^1; \ldots; Z^b].
As we can see, almost all the work in bqr() consists of the repetitive application of some simple block matrix operations. In order to take advantage of this block modularity in the procedures presented in the next sections, we now define a few simple "block operations". A detailed analysis of each one of the basic block operations we need, and the number of floating point operations (FLOPs) they require, can be found in [24]:
1. Compute the partial Cholesky factorization, eliminating the first n columns of the block matrix

    [F  G; G^t  0]

to get

    [V  W; 0  Z]

where $F = F^t$ is n × n and G is n × l. This requires $(1/6)n^3 + (1/2)n^2 l + (1/2)n l^2 + O(n^2 + l^2)$ FLOPs.
2. Compute the partial back transformation, i.e. u in

    [u; y] = [V  W; 0  I]^{-t} [g; h]

where V is n × n upper triangular and W is n × l, 0 and I are the zero and the identity matrices, and u and y are column vectors. This requires $(1/2)n^2 + nl + O(n + l)$ FLOPs.
3. Reduce to upper triangular form an upper Hessenberg matrix, i.e., apply a sequence of Givens rotations to the row pairs {1,2}, {2,3}, ..., {n−1, n} of the block matrix

    [V  W]

where V is n × n upper Hessenberg and W is n × l, in order to reduce V to upper triangular. This requires $2n^2 + 4nl + O(n + l)$ FLOPs.
4. Reduce to upper triangular form a column-upper triangular block matrix, i.e., apply a sequence of Givens rotations to the row pairs {n, n−1}, {n−1, n−2}, ..., {2, 1} of the block matrix

    [u  V]

where u is an n × 1 column vector, and V is n × n upper triangular, in order to reduce u to a single NZE in the first row, so transforming V from triangular to upper Hessenberg. This requires $2n^2 + O(n)$ FLOPs.
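For concreteness, here is a small dense NumPy sketch of block operation 3 (our own illustration; an actual implementation would exploit sparsity):

```python
import numpy as np

def hessenberg_to_triangular(V, W):
    """Block operation 3: reduce the upper Hessenberg V to upper
    triangular by Givens rotations on row pairs {1,2},...,{n-1,n},
    applying the same rotations to the trailing block W."""
    V, W = V.copy(), W.copy()
    n = V.shape[0]
    for i in range(n - 1):
        a, b = V[i, i], V[i + 1, i]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        V[i:i+2, i:] = G @ V[i:i+2, i:]   # rotate rows i, i+1 of V
        W[i:i+2, :] = G @ W[i:i+2, :]     # and the corresponding rows of W
    return V, W
```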

In the block QR factorization, many of the block operations we have to execute are independent. Therefore, the block angular structure gives us not only the possibility of preserving sparsity, but also the opportunity to perform several independent block operations in parallel. In order to study the advantages of parallelizing the independent block operations required in our procedures we define a simple parallel computer. In our parallel complexity analysis we use a network environment, consisting of b + 1 nodes, every node having a processor (CPU) and local memory. For k = 1...b, we allocate the blocks of matrices A and U to specific nodes, as follows:
- The blocks $D^k$, $E^k$, $V^k$ and $W^k$ are allocated at node k.
- The south-east blocks Z and S are allocated at node 0 (or b+1).
In the next sections we will express our complexity bounds in terms of the sum and the maximal block dimensions:

    dbsum = \sum_{k=1}^{b} m(k),
    dbmax = \max\{ m(1), \ldots, m(b), n(b+1) \}.

In our complexity analysis we will not only account for the processing time measured in FLOP-time units, pTime, but also for the necessary internode communication, INC. When b block operations, $bop^1, \ldots, bop^b$, can proceed in parallel (at different nodes), we bound their processing time by $flops(bop^1) \wedge \ldots \wedge flops(bop^b)$, where ∧ is the maximum operator, and $flops(bop^k)$ is the number of floating point operations necessary at $bop^k$. In the equations that follow, ∧ has lower precedence than any multiplicative or additive operator. The expressions "At node k = 1:b compute", or "From node k = 1:b send", mean "At (from) all the nodes 1 ≤ k ≤ b, in parallel, compute (send)". In the complexity upper bounds we give in the next sections, we will always neglect lower order terms.

6. BLOCK CHOLESKY FACTORIZATION

We can take advantage of the CBAF of B in order to do the Cholesky factorization of $B^tB$ with better performance, parallelizing several block operations. We now give an algorithmic description of the block Cholesky factorization, bch(), in the simple parallel environment defined in Section 5. At each step we indicate by pTime = ... an upper bound to the required processing time, in FLOP units, and by INC = ... an upper bound to the required internode communication. As shown in Figure 3, the steps of bch() are as follows:
1. At node k = 1:b compute the blocks $(B^k)^tB^k$, $(B^k)^tC^k$, and $(C^k)^tC^k$.
pTime = $m(k)n(k)^2 + m(k)n(k)n(b+1) + m(k)n(b+1)^2 \le 3\,dbmax^3$, INC = 0.
2. Send $(C^k)^tC^k$ from node k to node 0, where we accumulate $Z^0 = \sum_{k=1}^{b} (C^k)^tC^k$.
pTime = $b\,n(b+1)^2 \le b\,dbmax^2$, INC = $b\,n(b+1)^2 \le b\,dbmax^2$.
3. At node k compute the partial Cholesky factorization, eliminating the first n(k) columns, of the block matrix

    [(B^k)^tB^k  (B^k)^tC^k; (C^k)^tB^k  0]

to get

    [V^k  W^k; 0  Z^k].

pTime = $(1/6)n(k)^3 + (1/2)n(k)^2 n(b+1) + (1/2)n(k)n(b+1)^2 \le (7/6)\,dbmax^3$, INC = 0.

Figure 3. The Block Cholesky Factorization.
4. Send $Z^k$ from node k to node 0, where we accumulate $Z = \sum_{k=0}^{b} Z^k$.
pTime = $b\,n(b+1)^2 \le b\,dbmax^2$, INC = $b\,n(b+1)^2 \le b\,dbmax^2$.
5. At node 0 factor the south-east corner S = chol(Z), where chol() indicates the standard Cholesky factorization.
pTime = $(1/6)n(b+1)^3 \le (1/6)\,dbmax^3$, INC = 0.
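A dense, sequential Python sketch of the five steps above (our own illustration; in bch() steps 1-4 run in parallel over the blocks):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def block_cholesky(Bs, Cs):
    """Cholesky factor of B^t B for a CBAF basis.

    Bs[k] is the diagonal basis block B^k, Cs[k] the angular block C^k.
    Returns the blocks V^k, W^k and the south-east factor S of the CBAF
    upper triangular factor U."""
    Vs, Ws = [], []
    Z = sum(C.T @ C for C in Cs)                 # steps 1-2: Z^0
    for B, C in zip(Bs, Cs):
        V = cholesky(B.T @ B, lower=False)       # V^t V = (B^k)^t B^k
        W = solve_triangular(V, B.T @ C, trans='T')  # W = V^{-t} (B^k)^t C^k
        Z -= W.T @ W                             # steps 3-4: Schur complement
        Vs.append(V); Ws.append(W)
    S = cholesky(Z, lower=False)                 # step 5: factor Z
    return Vs, Ws, S
```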

Theorem 6.1. The block Cholesky factorization, bch(), requires no more than $(4 + 1/3)\,dbmax^3 + b\,dbmax^2$ processing time, and $b\,dbmax^2$ internode communication.

At Steps 2 and 4, if the network allows parallel internode communications, and the topology of the network is rich enough, we can "fan in" the accumulated matrices in only log(b) phases, see [6] and [10]. Each phase requires at most b/2 parallel and independent tasks, where at each task we transmit and add $dbmax^2$ reals. With this interpretation in mind we can, in Theorem 6.1, substitute b by log(b).
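A generic sketch of such a fan-in accumulation (our own illustration):

```python
def fan_in(partials, add):
    """Tree ("fan-in") reduction of b partial results: each phase pairs
    up the survivors, so only ceil(log2(b)) phases are needed; within a
    phase the pairwise additions are independent and could run on
    different nodes."""
    while len(partials) > 1:
        partials = [add(partials[i], partials[i + 1])
                    if i + 1 < len(partials) else partials[i]
                    for i in range(0, len(partials), 2)]
    return partials[0]

# e.g., accumulating the blocks Z^k of Section 6:
# Z = fan_in(Zk_list, lambda X, Y: X + Y)
```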

7. BLOCK UPDATE PROCEDURE

We now address the problem of updating the QU factorization of the basis when a basic column, namely the outj column of the outk block of B, gets "out" of the basis, and the inj column of the ink block of R comes "in" to the basis. Actually, as explained in Section 4, we only really need to maintain the triangular factor, U. Therefore we want an update procedure that recomputes U after a pivot. There are intuitive reasons for us to hope for an efficient update:
- A pivot can be seen as two rank one modifications of B, namely we delete a column and then add a column to B. There are several procedures to update the QU factorization of B in this case, like [22], [24] or [40]. Therefore we could use a generic "delete column, add column" two step update procedure, even without taking into consideration the block structure of B.
- We know, from Section 4, that the (block) structure of U is uniquely defined by the structure of B. Also, deleting a column from one block of B and adding a column to a second block results in a "small" structural change, in B or U. Therefore there ought to be an efficient way to update the factor U after a pivot.
Let us now present the block update procedure, bup(), that explicitly uses the column block angular structure of B, in order to achieve a much better performance than a generic rank one update. The block update procedure is described in terms of the block operations defined in Section 5. In bup() we consider five different cases:

Case I.   ink ≠ outk, ink ≠ b+1, outk ≠ b+1.
Case II.  ink = outk, ink ≠ b+1.
Case III. ink ≠ b+1, outk = b+1.
Case IV.  ink = b+1, outk ≠ b+1.
Case V.   ink = outk, ink = b+1.
Let us examine in detail case I, i.e., when ink and outk are distinct diagonal blocks, as shown in Figure 4. In this case the only NZEs in the outgoing column are in $B^{outk}_{:,outj} = D^{outk}(:, bp^{outk}(outj))$, and the only NZEs in the incoming column, a, are in $a^{ink} = D^{ink}(:, rp^{ink}(inj))$; see Figure 1.
Let us define $g = B^t a$, and $u = Q^t a = U^{-t} B^t a = U^{-t} g$. We notice that vector g has the block structure of a row in the ink block of B, and so does vector u. Namely, the only NZEs of u are in the blocks $u^{ink}$ and $u^{b+1}$:

    [u^{ink}; u^{b+1}] = [V^{ink}  W^{ink}; 0  S]^{-t} [g^{ink}; g^{b+1}].

In order to update U we remove the outj column of the outk block, i.e. $V^{outk}_{:,outj}$, and insert u as the last column of $U^{:,ink}$. Then we only have to reduce U to an upper triangular matrix, by means of orthogonal transformations. The only orthogonal transformations we use are permutations and the reductions by Givens rotations defined in Section 5. Namely we:
- Reduce $[V^{outk}\ W^{outk}]$ from upper Hessenberg to upper triangular.
- Reduce $[u^{b+1}\ S]$ to upper triangular.
- Insert the first row of $U^{b+1,:}$ as the last row of $U^{ink,:}$. Then insert the last row of $U^{outk,:}$ as the first row of $U^{b+1,:}$.
- Reduce S from upper Hessenberg to upper triangular.
block
The other casesare very similar. We now give an algorithmic description of the
procedure, bup\, inthe simple parallel environment defined in section 5' The steps
update
At each step
oi arp(fur" permutations, or the basic block operations defined in section 5.
processing time, in FLOP
we indìcate by pTi.me: . . . an upper bound to the required
units, and bv-INC: . . . arÌ upper bound to the required internode communication'

These are the steps for case I:
1. At node ink, compute $g^{ink} = (B^{ink})^t a^{ink}$ and $g^{b+1} = (C^{ink})^t a^{ink}$.
pTime = $m(ink)n(ink) + m(ink)n(b+1) \le 2\,dbmax^2$, INC = 0.
2. At node ink, compute the partial back transformation

    [u^{ink}; z] = [V^{ink}  W^{ink}; 0  I]^{-t} [g^{ink}; g^{b+1}].

Then insert $u^{ink}$ as the last column of $V^{ink}$.
pTime = $(1/2)n(ink)^2 + n(ink)n(b+1) \le (3/2)\,dbmax^2$, INC = 0.

Figure 4. The Block Update Procedure, Case I.

3. From node ink send z to node 0.

pTime = 0, INC = $n(b+1) \le dbmax$.
4. At node 0 compute $u^{b+1} = S^{-t} z$.
pTime = $(1/2)n(b+1)^2 \le (1/2)\,dbmax^2$, INC = 0.
5. (a) At node outk, remove the column $V^{outk}_{:,outj}$ from $V^{outk}$. Then reduce $[V^{outk}\ W^{outk}]$ from upper Hessenberg to upper triangular.
(b) At node 0, reduce $[u^{b+1}\ S]$ to upper triangular.
Observe that the block operations at steps 5a and 5b are independent, so
pTime = $2n(outk)^2 + 4n(outk)n(b+1) \wedge 2n(b+1)^2 \le 6\,dbmax^2$, INC = 0.
6. From node 0 send vector $S_{1,:}$ to node ink, where it is inserted as the last row of $W^{ink}$. From node 0 send element $u^{b+1}_1$ to node ink, where it is inserted as $V^{ink}_{n(ink)+1,\,n(ink)+1}$. From node outk send vector $W^{outk}_{n(outk)+1,:}$ to node 0, where it is inserted as the first row of S.
pTime = 0, INC = $2n(b+1) + n(outk) \le 3\,dbmax$.
7. At node 0, reduce S from upper Hessenberg to upper triangular.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = 0.
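As a concrete illustration of steps 2-4, here is a dense Python sketch (our own code) of the spike computation $u = U^{-t}g$, which touches only the ink and south-east blocks:

```python
import numpy as np
from scipy.linalg import solve_triangular

def case1_spike(V_ink, W_ink, S, g_ink, g_b1):
    """Compute the nonzero blocks of u = U^{-t} g for case I:
        u_ink = V_ink^{-t} g_ink              (step 2)
        z     = g_b1 - W_ink^t u_ink          (step 2, sent to node 0)
        u_b1  = S^{-t} z                      (step 4)
    """
    u_ink = solve_triangular(V_ink, g_ink, trans='T')
    z = g_b1 - W_ink.T @ u_ink
    u_b1 = solve_triangular(S, z, trans='T')
    return u_ink, u_b1
```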
These are the steps for case II:
Steps 1-5 are exactly as in case I.
6. (a) From node ink send $V^{ink}_{n(ink),n(ink)}$ and $W^{ink}_{n(ink),:}$ to node 0.
(b) At node 0, reduce to upper triangular the 2 × (n(b+1)+1) matrix

    [V^{ink}_{n(ink),n(ink)}  W^{ink}_{n(ink),:}; 0  S_{1,:}].

pTime = $4n(b+1) \le 4\,dbmax$, INC = $n(b+1) \le dbmax$.
7. (a) From node 0 send the modified vector $[V^{ink}_{n(ink),n(ink)}\ \ W^{ink}_{n(ink),:}]$ back to node ink.
(b) At node 0, reduce S from upper Hessenberg to upper triangular.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = $n(b+1) \le dbmax$.
These are the steps of case III:
Steps 1-4 are exactly as in case I.
5. (a) At node k = 1:b, remove the column $W^k_{:,outj}$ from $W^k$.
(b) At node 0, reduce $[u^{b+1}\ S]$ to upper triangular. Then remove $S_{:,outj}$ from S.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = 0.
6. From node 0 send to node ink: $u^{b+1}_1$, to be inserted in $V^{ink}$ as $V^{ink}_{n(ink)+1,\,n(ink)+1}$, and $S_{1,:}$, to be inserted as the last row of $W^{ink}$.
pTime = 0, INC = $n(b+1) \le dbmax$.
7. At node 0, reduce S from upper Hessenberg to upper triangular.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = 0.
Figure 5. The Block Update Procedure, Case II and III.

These are the steps of case IV:
1. At node k = 1:b, compute $g^k = (B^k)^t a^k$ and $z^k = (C^k)^t a^k$.
pTime = $m(k)n(k) + m(k)n(b+1) \le 2\,dbmax^2$, INC = 0.
2. At node k = 1:b, compute the partial back transformation

    [u^k; z^k] = [V^k  W^k; 0  I]^{-t} [g^k; z^k]

and insert $u^k$ as the last column of $W^k$.
pTime = $(1/2)n(k)^2 + n(k)n(b+1) \le (3/2)\,dbmax^2$, INC = 0.
3. From node k = 1:b send $z^k$ to node 0, where we accumulate $z = \sum_{k=1}^{b} z^k$.
pTime = $b\,n(b+1) \le b\,dbmax$, INC = $b\,n(b+1) \le b\,dbmax$.
4. At node 0 compute $u^{b+1} = S^{-t} z$, and insert $u^{b+1}$ as the last column of S.
pTime = $(1/2)n(b+1)^2 \le (1/2)\,dbmax^2$, INC = 0.
5. Remove the column $V^{outk}_{:,outj}$ from $V^{outk}$, and reduce $[V^{outk}\ W^{outk}]$ to upper triangular.
pTime = $2n(outk)^2 + 4n(outk)n(b+1) \le 6\,dbmax^2$, INC = 0.
6. Send vector $W^{outk}_{n(outk)+1,:}$ from node outk to node 0, where we insert it as the first row of S, and reduce S to upper triangular.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = $n(b+1) \le dbmax$.

These are the steps of case V:
Steps 1-4 are exactly as in case IV.
5. At node k = 1:b, remove the column $W^k_{:,outj}$ from $W^k$, and insert $u^k$ as the last column of $W^k$. At node 0, remove the column $S_{:,outj}$ from S, and insert $u^{b+1}$ as the last column of S.
pTime = 0, INC = 0.
6. At node 0, reduce S from upper Hessenberg to upper triangular.
pTime = $2n(b+1)^2 \le 2\,dbmax^2$, INC = 0.
We summarize the complexity of bup() in the following theorem:

Theorem 7.1. In the block update procedure, bup(), neglecting lower order terms, we have the following upper bounds for the required processing time and internode communication:

    Case   Processing Time          Internode Communication
    I      12 dbmax²                4 dbmax
    II     12 dbmax²                3 dbmax
    III    8 dbmax²                 2 dbmax
    IV     12 dbmax² + b dbmax      b dbmax
    V      6 dbmax² + b dbmax       b dbmax

As in Theorem 6.1, if the network allows parallel internode communication, we can substitute b by log(b) in our complexity expressions.
In light of Theorem 7.1, and the previous sections, we can compare bup() with other basis factorization and update techniques:
- In the standard LU factorization updates used in ASMs, the original factorization $B = LU$ is replaced, after s pivots, by a sequence $B = L\,L^1 L^2 \cdots L^s U^s$, see [1], [31], [41], and [42], where each $L^i$ is lower triangular with a single nontrivial column. So the LU factorization is maintained as a product sequence with an increasing number of factors. In our case we would also have the progressive degrading of the CBAF structure in $U^s$, as explained in Section 3. That makes it undesirable to continue to update the factorization for large values of s, even in the absence of numerical errors. Instead we would frequently have to start a fresh factorization, i.e. "reinvert" the basis. In contrast, the QU factorization maintains the CBAF structure of B, and the factorization is given by a single matrix, U, instead of the product sequence in the LU factorization. Moreover we know that the QU factorization has much better numerical stability than the LU factorization: see [24] and [28].

Figure 6. The Block Update Procedure, Case IV and V.
- A generic delete-column, add-column QU update as [22], [24] or [40] would require $O(dbsum^2)$ time. Even using sparse data structures for the matrices involved, the generic updates require the rotation of all rows of U, so they would still require $O(dbsum\ dbmax)$ processing time in all the 5 pivot cases. Moreover those dbsum rotations have to be done sequentially. The careful use of the CBAF structure of B by bup() gives us the much better bounds of Theorem 7.1. From computational experience with ASMs for large block angular problems, we know that in real application problems most of the updates will be of case II, i.e. ink = outk, ink ≠ b+1, see [34] and [14], where bup() requires only $O(dbmax^2)$ time, even in a purely sequential environment!
From the above we can expect bup() to outperform the standard LU and QU updating techniques, especially if dbsum ≫ dbmax and s ≫ dbmax, i.e., when the basis is much larger than its largest block, and we have to pivot many more times than there are columns in a single block.
When pivoting, by putting the entering column at the end of its block, we are imposing a particular column permutation on B. We prefer this fixed column ordering strategy for its small overhead and the simplicity of the subsequent Hessenberg updates. It is however possible to consider more elaborate column ordering strategies at updates and reinversions [11], [21].

8. REINVERSIONS

In any ASM, after a given number of pivots, the accumulation of errors in the updates forces us to "reinvert" the basis, i.e. recompute the Cholesky factor U directly from B. From Section 6 we know that most of the factorization work consists of independent factorizations of the diagonal blocks $B^k$. Moreover, it is a well known fact that in ASMs for problems in CBAF, the basis pivots frequently replace a column from one block by a column from the same block, i.e. usually ink = outk, see [34] and [14]. Therefore, we will probably have some blocks that have been updated more times, and have accumulated larger errors, than others. When reinverting we can take advantage of these facts by checking the accuracy of each diagonal block factorization, $(V^k)^t V^k = (B^k)^t B^k$, and only reinvert the diagonal blocks that have accumulated large errors. Of course, we always have to reinvert the final south-east block Z.
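A possible sketch of this selective accuracy check (our own illustration; the tolerance and the norm are assumptions, not prescribed by the text):

```python
import numpy as np

def blocks_to_reinvert(Bs, Vs, tol=1e-8):
    """Return the indices of the diagonal blocks whose factorization
    residual ||(V^k)^t V^k - (B^k)^t B^k|| has grown beyond tol,
    relative to ||(B^k)^t B^k||."""
    out = []
    for k, (B, V) in enumerate(zip(Bs, Vs)):
        G = B.T @ B
        err = np.linalg.norm(V.T @ V - G) / max(np.linalg.norm(G), 1.0)
        if err > tol:
            out.append(k)
    return out
```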
Before a reinversion, we should address the question of how to order the columns of the basis B. As explained in Section 4, the orthogonal factorization of a row and column permutation of B, QBP, is independent of the row permutation, Q. But we can still take advantage of the column permutation, P, in order to preserve sparsity. As mentioned above, at each reinversion only a few blocks of the basis may need a fresh refactorization. Therefore we do not want to pay the time to run a column ordering algorithm for each individual block $B^k$ to be reinverted. This situation is studied in [40]: at the beginning of the simplex, we order the columns of each rectangular block, $A^{:,k}$, k = 1...b, into a "near upper triangular form" (NUTF), and then at each reinversion, order the columns in $B^k$ as they are ordered in $A^{:,k}$ [33].

9. NUMERICAL EXPERIMENTS

In this section we compare the performance of the simplex algorithm, using the LU or the QR factorization, when solving linear programs in CBAF. Our test problems have the structure of the b-scenarios investment problems, like the example we gave in Section 1. Our test problems have diagonal blocks of dimension dbmax × 2 dbmax, b angular columns, plus an embedded identity matrix. In order to have a large set of test problems, we use random numbers to generate (admissible) numerical values for the NZEs. The simplex always begins at the identity basis which, from the way the problem is formulated, gives us a feasible vertex.
In the following, qr-simplex is an implementation of the simplex algorithm using the QR factorization and updates, as described in the previous sections, and lu-simplex is an implementation of the simplex algorithm using a sparse LU factorization and rank one updates [1]. The two algorithms were implemented in different environments, so we cannot directly compare running times, nor do we have direct access to a FLOPs counter. However the running time of both algorithms is dominated by the back solves of the form Bx = d, where B is the current basis, using the available factorization of B. But the number of FLOPs necessary for those back solves is essentially proportional to the number of NZEs in the factors. Therefore we will use the fill in the basis factors as an indirect measure of the cost, in FLOPs or running time, of a step in the simplex. Our analysis will not take into account all the parallelism intrinsic to the qr-simplex. Even so, in a purely sequential environment, the qr-simplex seems to be a better alternative to the standard lu-simplex.
In the qr-simplex we only carry the upper triangular Cholesky factor R: $R^tR = B^tB$. So we only monitor the number of NZEs in R: ρ = nze(R). In the lu-simplex we carry the upper triangular factor U, the initial lower triangular factor, L, and a sequence of rank one updates, $L^1, L^2, \ldots, L^{cup}$, where cup is the number of times we updated the basis. Each rank one update is a lower triangular matrix that only differs from the identity at one column. We keep the nontrivial columns of this sequence of rank-one transformations in a dbsum × cup matrix, LSEQ, where dbsum = b dbmax. Since our starting basis is the identity, the initial lower triangular factor is trivial, and we only monitor υ = nze(U), λ = nze(LSEQ), and the total fill τ = υ + λ.
Before we analyze statistical data, let us examine in detail a small example. In this example we have dbmax = 6 and b = 3. Each row in Table 1 is one pivot step. The columns in Table 1 are as follows. Column 6 is the value of the objective function. Column 2 is the fill in the Cholesky factor, ρ. Columns 3, 4 and 5 are the fill in U, in LSEQ, and the total, i.e., υ, λ, and τ. Column 7 is the pivot's "case", as defined in Section 7. Column 1 is the order in which the vertex was visited by the simplex.
In Figure 7 we plot columns 2, 3, 4 and 5 of Table 1, i.e. ρ, υ, λ and τ, versus the pivot sequence order in column 1. The values in these four columns are plotted, respectively, with a solid line, a dashed line, a dotted line, and a dash-dotted line.

Table 1. Vertex sequence of the example in Figure 7.

    Pivot    ρ    υ    λ    τ    Cost Funct.   Case
    0       18   18    0   18    100.0000
    1       35   35    0   35     87.4386      4
    2       40   39    1   40     76.7990      2
    3       45   43    2   45     56.3810      2
    4       49   48    4   52     47.4169      3
    5       65   62    7   69     39.1938      4
    6       69   65   12   77     36.1511      2
    7       76   67   18   85     32.8104      2
    8       82   75   24   99     31.6119      2
    9       82   77   25  102     26.1915      1
    10      67   62   28   90     25.6787      3
    11      68   66   28   94     24.7787      2
    12      75   69   34  103     23.9997      2
    13      77   71   40  111     21.6801      1
    14      80   75   40  115     20.7741      2
    15      84   77   47  124     19.7974      2
    16      72   74   53  127     19.1210      3
    17      71   76   57  133     19.1169      2
    18      74   73   64  137     18.4936      2
    19      69   81   73  154     17.0904      3
    20      68   78   73  151     16.8475      2
    21      68   83   82  165     16.7994      2

Let us first compare nze(R) versus nze(U). There are two important effects contributing to fill the upper triangular factor, each favoring one of the factorizations:
1. Let us consider the factorization of M, a 2 × n matrix. In the LU factorization we add a multiple of the first row to the second row, in order to eliminate M(2,1). After this "elementary row transformation" the sparsity structure of the first row remains unchanged, and the sparsity structure of the second row becomes the Boolean sum of the structures of the two rows. In the QR factorization we apply a Givens rotation to eliminate M(2,1). But now the sparsity pattern of both rows becomes the Boolean sum of the sparsity structures of the original rows of M (except for the eliminated element, of course). From this we can see why orthogonal transformations tend to produce much more fill than elementary row transformations, which tends to favor the LU factorization; a toy illustration follows this list.
2. The QR factorization preserves the CBAF structure of B in the Cholesky factor, R, as extensively analyzed in the previous sections. Therefore the fill in R is confined to the nontrivial blocks of its CBAF structure. On the other hand, the LU factorization progressively degenerates the CBAF structure, allowing fill to occur anywhere in U. That tends to favor the QR factorization.
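A toy sketch of effect 1 on a hypothetical 2 × 5 sparsity pattern (our own illustration):

```python
import numpy as np

# Hypothetical 2 x 5 sparsity patterns (True = structural nonzero).
row1 = np.array([True, False, True, False, False])
row2 = np.array([True, True, False, False, True])

# Elementary row transformation (LU): only row 2's pattern grows.
lu_row1 = row1.copy()
lu_row2 = row1 | row2
lu_row2[0] = False                    # eliminated entry M(2,1)

# Givens rotation (QR): both rows get the Boolean sum of the patterns.
qr_row1 = row1 | row2
qr_row2 = (row1 | row2).copy()
qr_row2[0] = False                    # eliminated entry M(2,1)

# The orthogonal elimination never produces less fill.
assert lu_row1.sum() + lu_row2.sum() <= qr_row1.sum() + qr_row2.sum()
```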

We could observe, in our experiments, that the first effect is more important at the first pivots of the simplex. But as the nontrivial blocks of the Cholesky factor become denser, there is a saturation of this effect, and the fill in R stabilizes, or grows very slowly. The second effect has a cumulative nature, and as the algorithm progresses it tends to fill U at increasing rates. We also observe that as the simplex approaches optimality, the block structure of the basis becomes very well equilibrated, i.e. all diagonal blocks have approximately the same number of columns; this benefits the qr-simplex, but not necessarily the lu-simplex. In accordance with the comments above, we usually observe a sparser U factor at the first steps of the simplex, and a sparser R at the end of the algorithm. We could also observe that the greater the ratio dbsum/dbmax, or just b if, as in our test problems, the blocks have a constant number of rows, the sooner the second effect dominates the first.

Figure 7. Fill in the factors at the first example in Table 2.

The fill in LSEQ of the lu-simplex is easier to analyze; nze(LSEQ) is a monotonically increasing function, beginning at zero, but growing always faster. When nze(LSEQ) becomes larger than nze(U), a basis reinversion is probably due, as explained in Section 8.

In our test problems that usually happens at the final steps of the algorithm. Moreover, because the non-identity part of the diagonal blocks in the investment problem is dense, and because close to optimality many identity columns have been driven out of the basis, the P3 heuristic applied to a basis at the final steps of the simplex produces almost only spikes, making the reinversion very expensive. As expected from these reasons, in our test problems more frequent reinversions do not improve the running time of the lu-simplex. For the reasons above, and in order to simplify the comparative analysis, we reinvert neither the lu-simplex nor the qr-simplex.
In Table 2 we present ratios for the total fill, ρ/τ, and for the upper triangular fill, ρ/υ, after 20%, 40%, 60%, 80% and 100% of the pivots, for some test problems. The first of these problems is the problem of Figure 7, and they all have the same structure, with dbmax = 6 and b = 3.

Table 2. Examples with 3 blocks of size 6. For each problem, the first line gives ρ/τ, the second line ρ/υ, and the third line the number of pivots of each case (I-V); "?" marks an unreadable entry.

             20%      40%      60%      80%      100%
  1   ρ/τ    1.0000   0.8941   0.7282   0.5669   0.4121
      ρ/υ    1.0465   1.1343   1.0870   0.9730   0.8193
      cases  2 13 4 2 0
  2   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0000   1.0465   0.9130   ?        1.0000
      cases  ?
  3   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0370   0.9600   0.8586   0.8687   0.7636
      cases  6 ? 4 2 ?
  4   ρ/τ    1.0250   1.0167   0.7922   0.7027   0.6429
      ρ/υ    1.1026   1.1509   1.0339   1.0196   1.0189
      cases  1 10 3 1 0
  5   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0545   0.9672   1.0000   0.9420   0.8902
      cases  2 10 4 3 0
  6   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0196   1.1311   1.1143   0.9844   0.9863
      cases  3 7 4 3 0
  7   ρ/τ    1.0???   ?        ?        ?        ?
      ρ/υ    1.0256   1.0000   1.0308   1.0476   1.0145
      cases  3 10 1 0 2
  8   ρ/τ    1.0000   0.9273   0.7581   0.8125   0.6864
      ρ/υ    1.0256   1.0625   0.9400   1.0833   1.0000
      cases  2 8 3 2 1
  9   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0256   1.1389   1.1356   0.9344   1.1212
      cases  1 6 5 3 1
 10   ρ/τ    1.0000   ?        ?        ?        ?
      ρ/υ    1.0000   1.0408   1.0000   1.0426   1.0426
      cases  ?



Table 3 is similar to Table 2, only for larger test problems, with dbmax = 6 and b = 9. The first problem in Table 3 is the one in Figure 8.

Table 3. Examples with 9 blocks of size 6. Rows per problem as in Table 2 ("?" marks an unreadable digit or entry).

             20%      40%      60%      80%      100%
  1   ρ/τ    1.0296   0.7903   0.4128   0.3179   0.2650
      ρ/υ    1.1767   1.0692   0.7237   0.5924   0.5619
      cases  1 42 10 4 4
  2   ρ/τ    0.9297   0.7085   0.4409   0.3148   0.2878
      ρ/υ    1.0620   1.0641   0.8363   0.6706   0.6588
      cases  4 32 11 6 3
  3   ρ/τ    0.9852   0.8939   0.5500   0.3288   0.2981
      ρ/υ    1.1050   1.16?1   0.9747   0.7259   0.6602
      cases  15 25 11 7 5
  4   ρ/τ    0.8667   0.6829   0.5379   0.3699   0.2538
      ρ/υ    1.0443   0.9?94   0.8765   0.7370   0.5651
      cases  8 28 13 8 11
  5   ρ/τ    0.9866   0.7103   0.3546   0.2887   0.2382
      ρ/υ    1.0889   1.0510   0.6429   0.5869   0.5272
      cases  7 31 11 5 4
  6   ρ/τ    1.0617   0.9603   0.5714   0.4980   0.4295
      ρ/υ    1.1570   1.0?46   0.9947   0.9213   0.8281
      cases  2 ? 9 5 1
  7   ρ/τ    0.9845   0.6675   0.4959   0.3758   0.3284
      ρ/υ    1.0641   0.95??   0.8237   0.7740   0.7696
      cases  5 26 13 8 1
  8   ρ/τ    0.9731   0.8320   0.6735   0.4325   0.3830
      ρ/υ    1.0824   1.1514   1.15??   0.9463   0.8968
      cases  10 27 9 6 6
  9   ρ/τ    1.0089   0.8049   0.3984   0.2277   0.1942
      ρ/υ    1.0900   1.1898   0.7297   0.5074   0.4709
      cases  7 26 13 7 5
 10   ρ/τ    1.0000   0.6098   0.4745   0.3037   0.2495
      ρ/υ    1.1565   0.9336   0.8381   0.6138   0.5482
      cases  4 31 14 7 7

Table 4 has even larger problems, with dbmax = 6 and b = 18. The first problem in Table 4 is the one in Figure 9.

Table 4. Examples with 18 blocks of size 6. Rows per problem as in Table 2 ("?" marks an unreadable digit or entry).

             20%      40%      60%      80%      100%
  1   ρ/τ    0.9564   0.5???   ?        ?        ?
      ρ/υ    1.1573   0.8922   0.6236   0.6014   0.5339
      cases  16 62 25 14 11
  2   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0950   0.9196   0.5169   0.4335   0.4247
      cases  6 74 23 12 8
  3   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.1434   0.9576   0.7011   0.4828   0.4270
      cases  35 64 27 15 9
  4   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0925   1.0121   0.6651   0.5664   0.5139
      cases  19 61 28 17 13
  5   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.0462   1.0309   0.6279   0.4?15   0.4512
      cases  10 57 21 9 12
  6   ρ/τ    ?        ?        ?        ?        ?
      ρ/υ    1.2098   0.9335   0.6482   0.5010   0.4219
      cases  23 69 28 16 17
  7   ρ/τ    0.8868   0.????   ?        ?        ?
      ρ/υ    1.0583   0.8081   0.7282   0.7308   0.7053
      cases  20 62 20 13 9
  8   ρ/τ    0.9822   0.5???   ?        ?        ?
      ρ/υ    1.1156   0.9425   0.6216   0.4157   0.3907
      cases  15 71 23 11 7
  9   ρ/τ    0.8241   0.3???   ?        ?        ?
      ρ/υ    1.0898   0.7560   0.4140   0.3401   0.3077
      cases  16 83 26 14 13
 10   ρ/τ    0.9782   0.7???   ?        ?        ?
      ρ/υ    1.1681   1.0203   0.7610   0.5525   0.5600
      cases  ?

After the back solves, the most time consuming operation in a simplex step is the update of the upper triangular factor. In general this update can be as time-consuming as a back solve. However we know that for some of the pivot "cases" in the qr-simplex, namely cases I, II or III but not IV or V, the qr-update is very inexpensive, involving only block operations local to the blocks receiving or losing a column. We have argued that, in real problems, most of the pivots should be of case II. In Tables 2 and 3 the third line for each test problem gives the number of pivots of each case, confirming this hypothesis.

Figure 8. Fill in the factors at the first example in Table 3.

The reported numerical experiments had the QR-simplex implemented in Sparse-Matlab (Matlab is a trademark of The MathWorks, Inc.). We are currently implementing the QR-simplex in C and PVM, a network "Parallel Virtual Machine" process manager [4]. This implementation, on a heterogeneous Sun SPARCstation network, is intended to solve large portfolio planning financial problems [19], [43].

Figure 9. Fill in the factors in the first example in Table 4.

REFERENCES

[1] R.H. BARTELS and G.H. GOLUB, The simplex method of linear programming using LU decomposition, Comm. ACM, 12 (1969), pp. 266-268.
[2] M. BASTIAN, Aspects of basis factorization for block angular systems with coupling rows. In [14].
[3] M.S. BAZARAA and C.M. SHETTY, Nonlinear Programming, John Wiley, Chichester, 1979.
[4] A. BEGUELIN, J. DONGARRA, A. GEIST, R. MANCHEK, and V. SUNDERAM, A Users' Guide to PVM Parallel Virtual Machine, ORNL-TM-11826, Oak Ridge National Laboratory, 1992.
[5] J.M. BENNETT, An approach to some structured linear programming problems, Operations Research, 14(4) (1966), pp. 636-645.
[6] D.P. BERTSEKAS and J.N. TSITSIKLIS, Parallel and Distributed Computation: Numerical Methods, Prentice Hall, Englewood Cliffs, 1989.
[7] E. BODEWIG, Matrix Calculus, North-Holland, Amsterdam, 1956.
[8] A.G. BUCKLEY and J.L. GOFFIN, Algorithms for Constrained Minimization of Smooth Nonlinear Functions, Mathematical Programming Study 16, North Holland, Amsterdam, 1982.
[9] J.R. BUNCH and D.J. ROSE, Sparse Matrix Computations, Academic Press, New York, 1976.
[10] G.F. CAREY, Parallel Supercomputing, John Wiley, Chichester, 1989.
[11] T.F. COLEMAN and A. POTHEN, The null space problem II: algorithms, SIAM J. Alg. Disc. Meth., 8 (1987), pp. 544-563.
[12] T.F. COLEMAN and C. VAN LOAN, Handbook for Matrix Computations, SIAM, Philadelphia, 1988.
[13] G.B. DANTZIG, Linear Programming and Extensions, Princeton University Press, Princeton, 1963.
[14] G.B. DANTZIG, M.A.H. DEMPSTER, and M.J. KALLIO, Eds., Large-Scale Linear Programming, IIASA Collaborative Proceedings Series CP-81-S1, Laxenburg, Austria.
[15] G.B. DANTZIG and P.W. GLYNN, Parallel processors for planning under uncertainty. In [38].
[16] I.S. DUFF and G.W. STEWART, Sparse Matrix Proceedings, SIAM, Philadelphia, 1978.
[17] I.S. DUFF, Direct Methods for Sparse Matrices, Clarendon Press, Oxford, 1986.
[18] A.M. ERISMAN, R.G. GRIMES, J.G. LEWIS, and W.G. POOLE, A structurally stable modification of Hellerman-Rarick's P4 algorithm for reordering unsymmetric sparse matrices, SIAM J. Numer. Anal., 22(2) (1985), pp. 369-385.
[19] E. ERMOLIEV and R.J.B. WETS, Numerical Procedures for Stochastic Optimization, Springer-Verlag, Berlin, 1987.
[20] G.E. FORSYTHE and C.B. MOLER, Computer Solution of Linear Algebraic Systems, Prentice Hall, Englewood Cliffs, 1967.
[21] J. GILBERT and M. HEATH, Computing a sparse basis for the null space, SIAM J. Alg. Disc. Meth., 8 (1987), pp. 446-459.
[22] P.E. GILL, G.H. GOLUB, W. MURRAY, and M. SAUNDERS, Methods for modifying matrix factorizations, Math. Comp., 28 (1974), pp. 505-535.
[23] P.E. GILL, G.H. GOLUB, W. MURRAY, and M. SAUNDERS, Methods for computing and modifying LDV factorizations of a matrix, Math. of Comp., 29 (1975), pp. 1051-1077.
[24] G.H. GOLUB and C.F. VAN LOAN, Matrix Computations, Johns Hopkins University Press, Baltimore, 1983.
[25] F. GUSTAVSON, Finding the block lower triangular form of a sparse matrix. In [9].
[26] D. KENDRICK, Stochastic Control for Economic Models, McGraw-Hill, New York, 1980.
[27] L.S. LASDON, Optimization Theory for Large Systems, The Macmillan Company, New York, 1970.
[28] C.L. LAWSON and R.J. HANSON, Solving Least Squares Problems, Prentice Hall, Englewood Cliffs, 1974.
[29] D.G. LUENBERGER, Linear and Nonlinear Programming, Addison-Wesley, Reading, Massachusetts, 1984.
[30] J.M. MULVEY and H. VLADIMIROU, Stochastic Network Programming for Financial Planning Problems, Report SOR-89-7, Department of Civil Engineering and Operations Research, Princeton University, Princeton, 1989.
[31] B.A. MURTAGH, Advanced Linear Programming, McGraw-Hill, New York, 1981.
[32] B.A. MURTAGH and M.A. SAUNDERS, A projected Lagrangean algorithm and its implementations for sparse nonlinear constraints. In [8].
[33] W. ORCHARD-HAYS, Advanced Linear-Programming Computing Techniques, McGraw-Hill, New York, 1968.
[34] A.F. PEROLD and G.B. DANTZIG, A basis factorization method for block triangular linear programs. In [16].
[35] S. PISSANETZKY, Sparse Matrix Technology, Academic Press, London, 1984.
[36] A. PREKOPA, Studies on Mathematical Programming, Akademiai Kiado, Budapest, 1980.
[37] D.J. ROSE and R.A. WILLOUGHBY, eds., Sparse Matrices and Applications, Plenum Press, New York, 1971.
[38] J.B. ROSEN, ed., Supercomputers and Large-Scale Optimization, Baltzer AG, Basel, 1990.
[39] J.B. ROSEN and R.S. MAIER, Parallel solution of large-scale block angular linear programs. In [38].
[40] M.A. SAUNDERS, Large Scale Linear Programming Using the Cholesky Factorization, Computer Science Department, Stanford University, 1972.
[41] M.A. SAUNDERS, A fast, stable implementation of the simplex method using Bartels and Golub updating. In [9].
[42] J.M. STERN, Algoritimos Eficientes de Programação Linear, Instituto de Matemática e Estatística da Universidade de São Paulo, São Paulo, 1987.
[43] J.M. STERN, Sparse Null Bases for Structured Optimization Problems, Ph.D. thesis, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, 1991.
[44] J.H. WILKINSON, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
[45] C. WINKLER, Basis factorization for block angular linear programs. In [36].
[46] G. ZOUTENDIJK, Methods of Feasible Directions, Elsevier, Amsterdam, 1960.
