DANTZIG-WOLFE DECOMPOSITION WITH GAMS
ERWIN KALVELAGEN
Abstract. This document illustrates the Dantzig-Wolfe decomposition algorithm using GAMS.
1. Introduction
Dantzig-Wolfe decomposition [2] is a classic solution approach for structured
linear programming problems. In this document we illustrate how Dantzig-Wolfe
decomposition can be implemented in a GAMS environment. The GAMS language is
rich enough to implement fairly complex algorithms, as is illustrated by GAMS
implementations of Benders decomposition [10], cutting-stock column
generation [11] and branch-and-bound algorithms [12].
Dantzig-Wolfe decomposition has been an important tool for solving large
structured models that could not be handled by a standard simplex algorithm
because they exceeded the capacity of those solvers. With the current
generation of simplex and interior point LP solvers, and the enormous progress
in standard hardware (both in terms of raw CPU speed and in the availability
of large amounts of memory), the Dantzig-Wolfe algorithm has become less
popular.
Implementations of the Dantzig-Wolfe algorithm have been described in [5, 6, 7].
Some renewed interest in decomposition algorithms was inspired by the availability
of parallel computer architectures [8, 13]. A recent computational study is [16].
[9] discusses formulation issues when applying decomposition on multi-commodity
network problems. Many textbooks on linear programming discuss the principles
of the Dantzig-Wolfe decomposition [1, 14].
2. Block-angular models
Consider the LP:
\begin{equation}
\min\; c^T x \quad\text{subject to}\quad Ax = b,\quad x \ge 0 \tag{1}
\end{equation}
where the matrix $A$ has the block-angular structure
\begin{equation}
Ax =
\begin{pmatrix}
B_0 & B_1 & B_2 & \cdots & B_K \\
    & A_1 &     &        &     \\
    &     & A_2 &        &     \\
    &     &     & \ddots &     \\
    &     &     &        & A_K
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_K \end{pmatrix}
=
\begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_K \end{pmatrix}
\tag{2}
\end{equation}
The constraints
\begin{equation}
\sum_{k=0}^{K} B_k x_k = b_0 \tag{3}
\end{equation}
corresponding to the top row of sub-matrices are called the coupling constraints.
The idea of the Dantzig-Wolfe approach is to decompose this problem so that we
never have to solve a problem containing all sub-problems $A_k x_k = b_k$ at
once. Instead, a master problem is devised that concentrates only on the
coupling constraints, while the sub-problems are solved individually. As a
result, only a series of smaller problems needs to be solved.
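To make the block-angular structure of equation (2) concrete, here is a small Python sketch. The block contents and sizes are made-up example data, not taken from the text; the point is only to show how the coupling blocks $B_k$ sit in the top row while each $A_k$ occupies its own diagonal position.

```python
# Illustrative sketch: assemble the block-angular matrix of equation (2)
# from made-up coupling blocks B[k] and diagonal blocks A[k].

def block_angular(B, A):
    """B: list of K+1 coupling blocks B_0..B_K (each m0 x n_k, as lists of rows).
       A: list of K diagonal blocks A_1..A_K (each m_k x n_k)."""
    m0 = len(B[0])
    ncols = sum(len(b[0]) for b in B)
    rows = []
    # top row of blocks: [B_0 B_1 ... B_K]
    for i in range(m0):
        row = []
        for b in B:
            row.extend(b[i])
        rows.append(row)
    # diagonal blocks: A_k touches only the columns of x_k, zeros elsewhere
    col = len(B[0][0])                      # skip the x_0 columns
    for k, a in enumerate(A):
        nk = len(B[k + 1][0])
        for i in range(len(a)):
            row = [0.0] * ncols
            row[col:col + nk] = a[i]
            rows.append(row)
        col += nk
    return rows

# Two sub-problems and one coupling row (made-up numbers):
B = [[[1.0]], [[1.0, 1.0]], [[2.0]]]        # B_0, B_1, B_2
A = [[[1.0, 2.0]], [[3.0]]]                 # A_1, A_2
M = block_angular(B, A)
for r in M:
    print(r)
```

The zero blocks off the diagonal are exactly what Dantzig-Wolfe exploits: each $A_k$ involves only the variables $x_k$, so the sub-problems are independent once the coupling row is handled by the master.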
3. Minkowski's Representation Theorem
Consider the feasible region of an LP problem:
\begin{equation}
P = \{x \mid Ax = b,\ x \ge 0\} \tag{4}
\end{equation}
If $P$ is bounded, any point $x \in P$ can be written as a convex combination
of the extreme points $x^{(j)}$ of $P$:
\begin{equation}
x = \sum_j \lambda_j x^{(j)},\qquad \sum_j \lambda_j = 1,\qquad \lambda_j \ge 0 \tag{5}
\end{equation}
If the feasible region cannot be assumed to be bounded, we need to introduce
the following:
\begin{equation}
x = \sum_j \lambda_j x^{(j)} + \sum_i \mu_i r^{(i)},\qquad
\sum_j \lambda_j = 1,\qquad \lambda_j \ge 0,\qquad \mu_i \ge 0 \tag{6}
\end{equation}
where $r^{(i)}$ are the extreme rays of $P$. The above expression for $x$ is
sometimes called Minkowski's Representation Theorem [15]. The constraint
$\sum_j \lambda_j = 1$ is also known as the convexity constraint.
A more compact formulation is sometimes used:
\begin{equation}
x = \sum_j \lambda_j x^{(j)},\qquad \sum_j \delta_j \lambda_j = 1,\qquad \lambda_j \ge 0 \tag{7}
\end{equation}
where
\begin{equation}
\delta_j =
\begin{cases}
1 & \text{if } x^{(j)} \text{ is an extreme point} \\
0 & \text{if } x^{(j)} \text{ is an extreme ray}
\end{cases}
\tag{8}
\end{equation}
I.e., we can describe the problem in terms of the variables $\lambda_j$
instead of the original variables $x$. In practice this reformulation cannot
be applied directly, as the number of variables $\lambda_j$ becomes very
large.
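As a tiny numeric illustration of equation (5), the following Python sketch writes a point as a convex combination of extreme points. The polyhedron and the target point are made-up: $P$ is the unit square, whose extreme points are its four corners.

```python
# Illustrative sketch of Minkowski's representation: any point of a bounded
# polyhedron is a convex combination of its extreme points. Here P is the
# unit square; the target point and weights are made-up example data.

vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # extreme points x^(j)

def combine(lams, verts):
    """Return sum_j lambda_j * x^(j); requires sum(lams) == 1, lams >= 0."""
    assert abs(sum(lams) - 1.0) < 1e-9 and all(l >= 0 for l in lams)
    return tuple(sum(l * v[d] for l, v in zip(lams, verts)) for d in range(2))

a, b = 0.3, 0.7                                  # target point (a, b) in the square
lams = [(1 - a) * (1 - b), a * (1 - b), (1 - a) * b, a * b]  # bilinear convexity weights
x = combine(lams, vertices)
print(x)   # approximately (0.3, 0.7)
```

The weights here come from a bilinear formula specific to the square; for a general polytope the $\lambda_j$ are whatever the master LP chooses, subject only to the convexity constraint.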
4. The decomposition
The $K$ sub-problems deal with the constraints
\begin{equation}
A_k x_k = b_k,\qquad x_k \ge 0 \tag{9}
\end{equation}
while the coupling constraints are
\begin{equation}
B_0 x_0 + \sum_{k=1}^{K} B_k x_k = b_0,\qquad x_0 \ge 0 \tag{10}
\end{equation}
We can substitute equation (7) into (10), resulting in:
\begin{equation}
\begin{aligned}
\min\;& c_0^T x_0 + \sum_{k=1}^{K} \sum_{j=1}^{p_k} \bigl(c_k^T x_k^{(j)}\bigr)\,\lambda_{k,j} \\
& B_0 x_0 + \sum_{k=1}^{K} \sum_{j=1}^{p_k} \bigl(B_k x_k^{(j)}\bigr)\,\lambda_{k,j} = b_0 \\
& \sum_{j=1}^{p_k} \delta_{k,j}\,\lambda_{k,j} = 1 \qquad k = 1,\dots,K \\
& x_0 \ge 0,\qquad \lambda_{k,j} \ge 0
\end{aligned}
\tag{11}
\end{equation}
This is a huge LP. Although the number of rows is reduced, the number of
extreme points and rays $x_k^{(j)}$ of each sub-problem is very large,
resulting in an enormous number of variables $\lambda_{k,j}$. However, many of
these variables will be non-basic at zero and need not be part of the problem.
The idea is that only variables with a promising reduced cost will be
considered, in what is also known as a delayed column generation algorithm.
The model with only a small subset of the $\lambda_{k,j}$ variables, compactly
written as
\begin{equation}
\begin{aligned}
\min\;& c_0^T x_0 + \tilde{c}^T \tilde{\lambda} \\
& B_0 x_0 + \tilde{B} \tilde{\lambda} = b_0 \\
& \Delta \tilde{\lambda} = 1 \\
& x_0 \ge 0,\qquad \tilde{\lambda} \ge 0
\end{aligned}
\tag{12}
\end{equation}
is called the restricted master problem. The missing variables are fixed at
zero. The restricted master problem is not fixed in size: variables will be
added to this problem during execution of the algorithm.
[Figure: information flow in the Dantzig-Wolfe algorithm. The master problem
passes duals to the sub-problems 1 through K; the sub-problems return new
columns (proposals) to the master.]
For each sub-problem $k$ we look for the proposal with the most negative
reduced cost. Given the duals $\pi_1$ of the coupling constraints and the dual
$\pi_2^{(k)}$ of the $k$-th convexity constraint, this amounts to solving:
\begin{align}
\min\;& \sigma_k = \bigl(c_k^T - \pi_1^T B_k\bigr) x_k - \pi_2^{(k)} \tag{14}\\
& A_k x_k = b_k,\qquad x_k \ge 0 \tag{15}
\end{align}
The operation to find these reduced costs is often called pricing. If
$\sigma_k < 0$ we can introduce a new column $\lambda_{k,j}$ to the master,
with a cost coefficient of $c_k^T x_k$.
A basic Dantzig-Wolfe decomposition algorithm can now be formulated:

Dantzig-Wolfe decomposition algorithm.
{initialization}
Choose initial subsets of variables.
while true do
    {Master problem}
    Solve the restricted master problem.
    $\pi_1$ := duals of the coupling constraints
    $\pi_2^{(k)}$ := duals of the $k$-th convexity constraint
    {Sub-problems}
    for $k = 1,\dots,K$ do
        Plug $\pi_1$ and $\pi_2^{(k)}$ into sub-problem $k$
        Solve sub-problem $k$
        if $\sigma_k < 0$ then
            Add proposal $x_k$ to the restricted master
        end if
    end for
    if no proposals were generated then
        Stop: optimal
    end if
end while

$^1$The reduced cost of a variable $x_j$ is
\begin{equation}
\sigma_j = c_j - \pi^T A_j \tag{13}
\end{equation}
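The pricing step of this loop can be sketched in a few lines of Python. In the illustrative fragment below all data (extreme points, costs, coupling blocks, duals) are made-up numbers, and each sub-problem is "solved" by enumerating a known list of its extreme points instead of calling an LP solver; a real implementation would minimize $(c_k^T - \pi_1^T B_k)x_k$ subject to $A_k x_k = b_k$, $x_k \ge 0$.

```python
# Illustrative pricing sketch for the Dantzig-Wolfe loop: each sub-problem is
# "solved" by enumerating a (made-up) list of its extreme points.

def price(extreme_points, c, B, pi1, pi2_k):
    """Return (sigma_k, best extreme point) for one sub-problem with a single
       coupling row: sigma_k = (c - pi1*B) . x - pi2_k, minimized over x."""
    def modified_cost(x):
        return sum((ci - pi1 * bi) * xi for ci, bi, xi in zip(c, B, x))
    best = min(extreme_points, key=modified_cost)
    return modified_cost(best) - pi2_k, best

# Two sub-problems, one coupling row (all numbers made up):
subs = [
    {"X": [(0.0, 4.0), (2.0, 1.0)], "c": (3.0, 1.0), "B": (1.0, 1.0)},
    {"X": [(1.0, 0.0), (0.0, 3.0)], "c": (2.0, 2.0), "B": (1.0, 2.0)},
]
pi1, pi2 = 1.5, [1.0, 2.0]     # duals taken from the restricted master (assumed)

proposals = []
for k, s in enumerate(subs):
    sigma, x = price(s["X"], s["c"], s["B"], pi1, pi2[k])
    if sigma < -1e-4:          # promising column: add it to the restricted master
        proposals.append((k, x))
print(proposals)
```

With these numbers both $\sigma_k$ values are negative, so both proposals would be added to the restricted master before the next master solve.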
5. Initialization
So far we have paid no attention to the initialization of the decomposition.
The first thing we can do is solve each sub-problem:
\begin{equation}
\min\; c_k^T x_k \qquad A_k x_k = b_k,\qquad x_k \ge 0 \tag{16}
\end{equation}
If any of the subproblems is infeasible, the original problem is infeasible. Otherwise,
we can use the optimal values xk (or the unbounded rays) to generate an initial set
of proposals.
6. Phase I/II algorithm
The initial proposals may violate the coupling constraints. We can formulate a
Phase I problem by introducing artificial variables and minimizing those. The
use of artificial variables is explained in any textbook on linear programming
(e.g. [1, 14]). Note that the reduced costs for a Phase I problem are slightly
different from those for the Phase II problem.
As an example, consider coupling constraints of the form
\begin{equation}
\sum_j x_j \le b \tag{17}
\end{equation}
Introducing an artificial variable $x_a \ge 0$, the Phase I problem becomes
\begin{equation}
\min\; x_a \qquad \sum_j x_j \le b + x_a \tag{18}
\end{equation}
The reduced cost of a variable $x_j$ is now as in equation (14) but with
$c_k^T = 0$.
Note that it is important to remove the artificials once Phase II starts. We
do this in the example code by fixing the artificial variables to zero.
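The Phase I/Phase II switch can be illustrated numerically. In this small Python sketch (all numbers made up) the coupling constraint is $\sum_j x_j \le b$; the smallest artificial value needed to restore feasibility is $\max(0, \sum_j x_j - b)$, and once that value is (numerically) zero the artificial can be fixed to zero and Phase II begins.

```python
# Illustrative sketch of the Phase I test: the artificial (excess) variable
# measures the violation of the coupling constraint sum_j x_j <= b.

def excess_needed(x, b):
    """Smallest artificial value x_a such that sum(x) <= b + x_a."""
    return max(0.0, sum(x) - b)

b = 10.0
infeasible_proposal = (4.0, 5.0, 3.0)   # sum = 12 > b: coupling row violated
feasible_proposal = (4.0, 3.0, 2.0)     # sum = 9 <= b: coupling row satisfied

phase = 1 if excess_needed(infeasible_proposal, b) >= 1e-4 else 2
print(phase)   # prints 1: stay in Phase I
phase = 1 if excess_needed(feasible_proposal, b) >= 1e-4 else 2
print(phase)   # prints 2: fix the artificial to zero and switch to Phase II
```

The tolerance 1e-4 mirrors the test used in the GAMS code below (`excess.l < 0.0001`).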
The example model is a multi-commodity network problem: each commodity
$k \in K$ has its own flow conservation constraints, while the arc capacities
couple the commodities:
\begin{equation}
\sum_{(i,j)\in A} x^k_{i,j} - \sum_{(j,i)\in A} x^k_{j,i} = b^k_j,\qquad
\sum_{k\in K} x^k_{i,j} \le u_{i,j} \tag{20}
\end{equation}
In the transportation model used below this specializes to
\begin{equation}
\sum_j x^k_{i,j} = \mathit{supply}^k_i,\qquad
\sum_i x^k_{i,j} = \mathit{demand}^k_j,\qquad
\sum_{k\in K} x^k_{i,j} \le u_{i,j} \tag{21}
\end{equation}
where the capacity constraints $\sum_{k\in K} x^k_{i,j} \le u_{i,j}$ are the
coupling constraints.
$ontext
Dantzig-Wolfe Decomposition with GAMS
Reference:
http://www.gams.com/~erwin/dw/dw.pdf
Erwin Kalvelagen, April 2003
$offtext
sets
i origins
j destinations
p products
;
table supply(p,i)
          GARY   CLEV   PITT
bands      400    700    800
coils      800   1600   1800
plate      200    300    300
;

(The complete model is available at
http://www.amsterdamoptimization.com/models/dw/dw.gms.)
table demand(p,j)
         FRA   DET   LAN   WIN   STL   FRE   LAF
bands    300   300   100    75   650   225   250
coils    500   750   400   250   950   850   500
plate    100   100     0    50   200   100   250
;
parameter limit(i,j);
limit(i,j) = 625;
table cost(p,i,j) unit cost
              FRA   DET   LAN   WIN   STL   FRE   LAF
BANDS.GARY     30    10     8    10    11    71     6
BANDS.CLEV     22     7    10     7    21    82    13
BANDS.PITT     19    11    12    10    25    83    15
COILS.GARY     39    14    11    14    16    82     8
COILS.CLEV     27     9    12     9    26    95    17
COILS.PITT     24    14    17    13    28    99    20
PLATE.GARY     41    15    12    16    17    86     8
PLATE.CLEV     29     9    13     9    28    99    18
PLATE.PITT     26    14    17    13    31   104    20
;
positive variable
   x(i,j,p)   shipments
;
variable
   z          objective variable
;
equations
   obj
   supplyc(i,p)
   demandc(j,p)
   limitc(i,j)
;
obj..           z =e= sum((i,j,p), cost(p,i,j)*x(i,j,p));
supplyc(i,p)..  sum(j, x(i,j,p)) =e= supply(p,i);
demandc(j,p)..  sum(i, x(i,j,p)) =e= demand(p,j);
limitc(i,j)..   sum(p, x(i,j,p)) =l= limit(i,j);
model m/all/;
solve m minimizing z using lp;
*-----------------------------------------------------------------------
* subproblems
*-----------------------------------------------------------------------
positive variables xsub(i,j);
variables zsub;
parameters
   s(i)       supply
   d(j)       demand
   c(i,j)     cost coefficients
   pi1(i,j)   dual of limit
   pi2(p)     dual of convexity constraint
   pi2p
;
equations
   supply_sub(i)
   demand_sub(j)
   rc1_sub
   rc2_sub
;

supply_sub(i)..  sum(j, xsub(i,j)) =e= s(i);
demand_sub(j)..  sum(i, xsub(i,j)) =e= d(j);
rc1_sub..        zsub =e= sum((i,j), -pi1(i,j)*xsub(i,j)) - pi2p;
rc2_sub..        zsub =e= sum((i,j), (c(i,j)-pi1(i,j))*xsub(i,j)) - pi2p;

model sub1 phase 1 subproblem /rc1_sub, supply_sub, demand_sub/;
model sub2 phase 2 subproblem /rc2_sub, supply_sub, demand_sub/;

set k proposals /proposal1*proposal25/;
set pk(p,k) active proposals;
parameters
   proposal(i,j,p,k)   proposal shipments
   proposalcost(p,k)   proposal cost
;
positive variables
   lambda(p,k)
   excess      artificial variable
;
variable zmaster;

equations
   obj1_master         phase 1 objective
   obj2_master         phase 2 objective
   limit_master(i,j)
   convex_master(p)
;
obj1_master..       zmaster =e= excess;
obj2_master..       zmaster =e= sum(pk, proposalcost(pk)*lambda(pk));
limit_master(i,j)..
sum(pk, proposal(i,j,pk)*lambda(pk)) =l= limit(i,j) + excess;
convex_master(p).. sum(pk(p,k), lambda(p,k)) =e= 1;
model master1 phase 1 master /obj1_master, limit_master, convex_master/;
model master2 phase 2 master /obj2_master, limit_master, convex_master/;
sub2.solprint = 2;

*-----------------------------------------------------------------------
* options to speed up solver execution
*-----------------------------------------------------------------------
master1.solvelink = 2;
master2.solvelink = 2;
sub1.solvelink = 2;
sub2.solvelink = 2;
*-----------------------------------------------------------------------
* DANTZIG-WOLFE INITIALIZATION PHASE
*    test subproblems for feasibility
*    create initial set of proposals
*-----------------------------------------------------------------------
display "-----------------------------------------------------------------",
        "INITIALIZATION PHASE",
        "-----------------------------------------------------------------";

set kk(k) current proposal;
kk('proposal1') = yes;
loop(p,
*
* solve subproblem, check feasibility
*
c(i,j) = cost(p,i,j);
s(i) = supply(p,i);
d(j) = demand(p,j);
pi1(i,j) = 0;
pi2p = 0;
solve sub2 using lp minimizing zsub;
abort$(sub2.modelstat = 4) "SUBPROBLEM IS INFEASIBLE: ORIGINAL MODEL IS INFEASIBLE";
abort$(sub2.modelstat <> 1) "SUBPROBLEM NOT SOLVED TO OPTIMALITY";
*
* proposal generation
*
proposal(i,j,p,kk) = xsub.l(i,j);
proposalcost(p,kk) = sum((i,j), c(i,j)*xsub.l(i,j));
pk(p,kk) = yes;
kk(k) = kk(k-1);
);
option proposal:2:2:2;
display proposal;
*-----------------------------------------------------------------------
* DANTZIG-WOLFE ALGORITHM
*
*    while (true) do
*       solve restricted master
*       solve subproblems
*    until no more proposals
*-----------------------------------------------------------------------
*
* solve master problem to get duals
*
if (phase=1,
solve master1 minimizing zmaster using lp;
abort$(master1.modelstat <> 1) "MASTERPROBLEM NOT SOLVED TO OPTIMALITY";
if (excess.l < 0.0001,
display "Switching to phase 2";
phase = 2;
excess.fx = 0;
);
);
if (phase=2,
solve master2 minimizing zmaster using lp;
abort$(master2.modelstat <> 1) "MASTERPROBLEM NOT SOLVED TO OPTIMALITY";
);
pi1(i,j) = limit_master.m(i,j);
pi2(p) = convex_master.m(p);
count = 0;
loop(p$(not done),
*
* solve each subproblem
*
c(i,j) = cost(p,i,j);
s(i) = supply(p,i);
d(j) = demand(p,j);
pi2p = pi2(p);
   if (phase=1,
      solve sub1 using lp minimizing zsub;
      abort$(sub1.modelstat = 4) "SUBPROBLEM IS INFEASIBLE";
      abort$(sub1.modelstat <> 1) "SUBPROBLEM NOT SOLVED TO OPTIMALITY";
   else
      solve sub2 using lp minimizing zsub;
      abort$(sub2.modelstat = 4) "SUBPROBLEM IS INFEASIBLE";
      abort$(sub2.modelstat <> 1) "SUBPROBLEM NOT SOLVED TO OPTIMALITY";
);
*
* proposal
*
if (zsub.l < -0.0001,
count = count + 1;
display "new proposal", count,xsub.l;
proposal(i,j,p,kk) = xsub.l(i,j);
proposalcost(p,kk) = sum((i,j), c(i,j)*xsub.l(i,j));
pk(p,kk) = yes;
kk(k) = kk(k-1);
);
);
*
* no new proposals?
*
abort$(count = 0 and phase = 1) "PROBLEM IS INFEASIBLE";
done$(count = 0 and phase = 2) = 1;
);
abort$(not done) "Out of iterations";
parameter xsol(i,j,p);
xsol(i,j,p) = sum(pk(p,k), proposal(i,j,pk)*lambda.l(pk));
display xsol;
parameter totalcost;
totalcost = sum((i,j,p), cost(p,i,j)*xsol(i,j,p));
display totalcost;
[Display output: the optimal shipments xsol for each origin.destination pair
and each product (bands, coils, plate), followed by the total cost:]

----    PARAMETER totalcost  =  199500.000
References
1. V. Chvátal, Linear programming, Freeman, 1983.
2. G. B. Dantzig and P. Wolfe, Decomposition principle for linear programs, Operations Research 8 (1960), 101–111.
3. R. Fourer and D. Gay, Looping with AMPL, http://www.netlib.org/ampl/looping/, 1995.
4. R. Fourer, D. Gay, and B. Kernighan, AMPL: A modeling language for mathematical programming, Boyd & Fraser, 1993.
5. J. K. Ho and E. Loute, An advanced implementation of the Dantzig-Wolfe decomposition algorithm for linear programming, Mathematical Programming 20 (1981), 303–326.
6. ______, Computational experience with advanced implementation of decomposition algorithms for linear programming, Mathematical Programming 27 (1983), 283–290.
7. James K. Ho and R. P. Sundarraj, DECOMP: an implementation of Dantzig-Wolfe decomposition for linear programming, Lecture Notes in Economics and Mathematical Systems, vol. 338, Springer-Verlag, 1989.
8. ______, Distributed nested decomposition of staircase linear programs, ACM Transactions on Mathematical Software (TOMS) 23 (1997), no. 2, 148–173.
9. K. L. Jones, I. J. Lustig, J. M. Farvolden, and W. B. Powell, Multicommodity network flows: The impact of formulation on decomposition, Mathematical Programming 62 (1993), 95–117.
10. Erwin Kalvelagen, Benders decomposition with GAMS, http://www.amsterdamoptimization.com/pdf/benders.pdf, December 2002.
11. ______, Column generation with GAMS, http://www.amsterdamoptimization.com/pdf/colgen.pdf, December 2002.
12. ______, Branch-and-bound methods for an MINLP model with semi-continuous variables, http://www.amsterdamoptimization/pdf/bb.pdf, April 2003.
13. Deepankar Medhi, Decomposition of structured large-scale optimization problems and parallel optimization, Tech. Report 718, Computer Sciences Department, University of Wisconsin, September 1987.