Module 21 - DantzigWolfe

ISyE 6669

Dantzig-Wolfe Decomposition
Andy Sun

Spring 2020

In the last module, we introduced the general framework of column generation and constraint
generation for solving large-scale linear optimization problems, and we applied the column generation
algorithm to the cutting stock problem. In this lecture, we study another application of the column
generation technique, the so-called Dantzig-Wolfe decomposition. It is designed to deal with large-scale
LPs with special structures. We will first derive the Dantzig-Wolfe decomposition principle,
then see an example.

1 The Dantzig-Wolfe Decomposition


Consider the following LP:

    (LP)  min  c^T x
          s.t. Dx = b0
               Fx = b
               x ≥ 0

where D is an m1 × n matrix and F is an m2 × n matrix. So there are m1 + m2 equality constraints
and n variables.

Step 1: Separate the Constraint Set into Two


The above problem is written with two explicit sets of equality constraints. Suppose the first
set Dx = b0 is harder to deal with. We can define a polyhedron P using
the constraints F x = b and x ≥ 0 as follows:
P = {x | F x = b, x ≥ 0}.
Then, the original problem can be written in the following way, which singles out the hard constraint
Dx = b0 :
    (LP2)  min  c^T x
           s.t. Dx = b0
                x ∈ P
This separation usually depends on the problem structure. We will talk more about this later.

Step 2: Use the Extreme Points Representation (Key Idea!)
The following idea of reformulating (LP2 ) will probably blow your mind. Just sit back and watch.
As we already discussed in earlier lectures, any point in a polytope (i.e., a bounded polyhedron)
can be represented as a convex combination of its extreme points. Let us denote all the extreme
points of P as x^1, x^2, . . . , x^N. Here each x^i is a vector in R^n, and the number of extreme points N
can be huge.
Example: Think about how many extreme points a two-dimensional cube has. A cube in R^2 can
be described as the set {x : 0 ≤ xi ≤ 1, ∀i = 1, 2}. The answer is 4. How many extreme points
does a three-dimensional cube have? The answer is 8. How many extreme points does an
n-dimensional cube have? The answer is 2^n. When n = 10, the number is 1024. The count grows
exponentially as n grows.
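The counts above are easy to verify. Here is a quick sketch in Python (not part of the notes, purely illustrative): the extreme points of the n-dimensional unit cube are exactly the 0/1 vectors, so enumerating them gives 2^n points.

```python
# Extreme points of the n-dimensional cube {x : 0 <= x_i <= 1} are the
# 0/1 vectors; itertools.product enumerates all of them.
from itertools import product

for n in (2, 3, 10):
    corners = list(product([0, 1], repeat=n))
    print(n, len(corners))  # 2 -> 4, 3 -> 8, 10 -> 1024
```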
However, we already know that, regardless of how large N is, we can always write any point x
in the polytope P as a convex combination of its extreme points as:
    x = Σ_{i=1}^{N} λi x^i                    (1)

    Σ_{i=1}^{N} λi = 1

    λi ≥ 0, ∀i = 1, . . . , N.

The weights λi's depend on where the point x is. Different x's will have different λi's. But the
extreme points x^i's are fixed for the polyhedron P.
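To make the representation (1) concrete, here is a small sketch (assuming Python with numpy and scipy are available; the point x = (0.3, 0.7) and the unit square are made up for illustration). It recovers weights λ for a point of the square by solving a feasibility LP over the four corners.

```python
# Write the point x = (0.3, 0.7) in the unit square as a convex
# combination of the square's four extreme points: find lambda >= 0 with
# sum_i lambda_i * corner_i = x and sum_i lambda_i = 1.
import numpy as np
from scipy.optimize import linprog

corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
x = np.array([0.3, 0.7])

A_eq = np.vstack([corners.T, np.ones(4)])   # rows: the two coordinates, then the convexity row
b_eq = np.concatenate([x, [1.0]])
res = linprog(np.zeros(4), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)

recovered = res.x @ corners
print(np.allclose(recovered, x))  # True: the weights reproduce x
```

The zero objective makes this a pure feasibility problem; any feasible λ is a valid set of convex weights (they are not unique in general).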
Now comes the key idea of Dantzig and Wolfe: In the problem (LP2 ), we can replace the
variable x by its convex representation (1). The new variables are λi ’s.
(LP2) now becomes

    min_{λ1,...,λN}  c^T ( Σ_{i=1}^{N} λi x^i )

    s.t.  D ( Σ_{i=1}^{N} λi x^i ) = b0

          Σ_{i=1}^{N} λi = 1

          λi ≥ 0, ∀i = 1, . . . , N.

Multiplying c and D into the parentheses, we have

    (MP)  min_{λ1,...,λN}  Σ_{i=1}^{N} λi (c^T x^i)

          s.t.  Σ_{i=1}^{N} λi (D x^i) = b0

                Σ_{i=1}^{N} λi = 1

                λi ≥ 0, ∀i = 1, . . . , N.

(MP) stands for the master problem. (MP) is exactly equivalent to the original problem (LP):
if (λ1, . . . , λN) is optimal for (MP), then we can use (1) to recover an optimal solution x for
(LP).

Step 3: Apply Column Generation to (MP)


Let us look at the master problem (MP). It has N variables λ1, λ2, . . . , λN, where N can be very
large, and (m1 + 1) equality constraints. Note that c^T x^i is a scalar, and D x^i is a vector in
R^{m1}. We are in a perfect situation to apply the Column Generation Algorithm to solve (MP).

1. Choose a subset I of columns and variables. Solve the restricted master problem:
    (RMP)  min_{λi, i∈I}  Σ_{i∈I} λi (c^T x^i)

           s.t.  Σ_{i∈I} λi (D x^i) = b0      ← ŷ

                 Σ_{i∈I} λi = 1               ← r̂

                 λi ≥ 0, ∀i ∈ I.

Obtain an optimal solution λ̂ of (RMP), and an optimal dual solution (ŷ, r̂).

2. Check the optimality of λ̂ using the dual information (recall Notes 6). It is a good exercise
to write down the reduced cost; see if you can do it without looking at the solution below.
Find the minimum reduced cost:

    Z = min_{i=1,...,N} { c^T x^i − ŷ^T (D x^i) − r̂ }        (2)

If Z ≥ 0, then λ̂ is optimal for (MP). Terminate the algorithm. Otherwise, add the column
with the minimum reduced cost to (RMP), and continue.

The above Column Generation Algorithm is clear. However, to make it effective, we need to be
able to find the minimum reduced cost efficiently. Enumerating all the extreme points in (2) would
be too slow; moreover, we do not even know all the extreme points. We need to be able to generate
them on the fly.
Now comes the second key idea of Dantzig and Wolfe, which is essentially the same as what we
did in the pricing subproblem for the cutting stock problem. Namely, enumerating through all
the extreme points of P in (2) is equivalent to minimizing the reduced cost over the
polytope P directly!

    Z = min_{i=1,...,N} { c^T x^i − ŷ^T (D x^i) − r̂ }

      = min_x { c^T x − ŷ^T (D x) − r̂ }     s.t. x ∈ P

      = min_x { (c^T − ŷ^T D) x − r̂ }       s.t. F x = b, x ≥ 0

Note that the dual variable r̂ is fixed, so we can pull it outside of the minimization problem. This
step here is a reverse application of the extreme point representation: going from the extreme point
representation of a point back to its polytope representation.
Now an efficient Column Generation Algorithm can be written as:

1. Choose a subset I of columns and variables. Solve the restricted master problem:
    (RMP)  min_{λi, i∈I}  Σ_{i∈I} λi (c^T x^i)

           s.t.  Σ_{i∈I} λi (D x^i) = b0      ← ŷ

                 Σ_{i∈I} λi = 1               ← r̂

                 λi ≥ 0, ∀i ∈ I.

Obtain an optimal solution λ̂ of (RMP), and an optimal dual solution (ŷ, r̂).

2. To check the optimality of λ̂, we solve the following subproblem (SP):


 
    (SP)  w = min_x  (c^T − ŷ^T D) x
          s.t.  F x = b, x ≥ 0.

If w ≥ r̂, then λ̂ is optimal for (MP). Terminate the column generation algorithm.
Otherwise, an optimal basic feasible solution of the above problem is an extreme point of the
polyhedron P. Denote it as x^i. Add the column [D x^i ; 1] (that is, D x^i stacked on top of a 1)
to the constraint matrix of the restricted master problem, and add c^T x^i to the cost coefficients.

Remarks:

• Note that the master problem (MP) can be written in matrix form as follows:

      (MP)  min_{λ1,...,λN}  [ c^T x^1  c^T x^2  · · ·  c^T x^N ] (λ1, λ2, . . . , λN)^T

            s.t.  [ D x^1   D x^2   · · ·   D x^N ] (λ1, . . . , λN)^T = [ b0 ]
                  [   1       1     · · ·     1   ]                      [ 1  ]

            λi ≥ 0, ∀i = 1, . . . , N.

  The restricted master problem (RMP) involves a subset of the variables and columns of (MP).

• The dual variables ŷ and r̂ can be obtained by computing c_B^T B^{-1}, as we did in the simplex
  method. In particular, ŷ is the first m1 components of c_B^T B^{-1}, and r̂ is the last component
  of c_B^T B^{-1}. Here c_B is the vector of cost coefficients of the λi's corresponding to the basic
  variables, and B is a basis matrix of the big matrix

      [ D x^1   D x^2   · · ·   D x^N ]
      [   1       1     · · ·     1   ]
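As a small illustration of this matrix form, here is a Python sketch (assuming numpy is available; not part of the notes) that builds the big matrix column by column for the cube example of the next section, with D = [3 2 4] and P the cube {x : 1 ≤ xi ≤ 2}.

```python
# Build the (m1 + 1) x N matrix whose i-th column is [D x^i ; 1], over
# the eight extreme points of the cube {x : 1 <= x_i <= 2} in R^3.
import numpy as np
from itertools import product

D = np.array([[3.0, 2.0, 4.0]])
extreme_points = [np.array(p, dtype=float) for p in product([1, 2], repeat=3)]

A = np.column_stack([np.concatenate([D @ x, [1.0]]) for x in extreme_points])
print(A.shape)                  # (2, 8): m1 + 1 = 2 rows, N = 8 columns
print(A[0].min(), A[0].max())   # D x^i ranges from 9 at (1,1,1) to 18 at (2,2,2)
```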

2 A Numerical Example
We now consider a numerical example and go through the details of the algorithm. This will give
us some hands-on experience with the algorithm.

min − 4x1 − x2 − 6x3


s.t. 3x1 + 2x2 + 4x3 = 17
1 ≤ x1 ≤ 2
1 ≤ x2 ≤ 2
1 ≤ x3 ≤ 2

We divide the constraints into two groups. The first group consists of the constraint Dx = b0,
where D = [3 2 4] and b0 = 17. The second group is the constraint x ∈ P, where the polyhedron
P = {x ∈ R^3 : 1 ≤ xi ≤ 2, ∀i = 1, 2, 3}. P is a 3-dimensional cube, which has eight extreme
points x^1, . . . , x^8; it is bounded and has no extreme rays.

The master problem (MP) is written as
    (MP)  min_{λ1,...,λ8}  Σ_{i=1}^{8} λi (c^T x^i)

          s.t.  Σ_{i=1}^{8} λi (D x^i) = 17

                Σ_{i=1}^{8} λi = 1

                λi ≥ 0, ∀i = 1, . . . , 8.
Note that (MP) has two equality constraints. So let us pick two columns, or equivalently two
extreme points of P, to start the column generation algorithm. Pick x^1 = (2, 2, 2) and
x^2 = (1, 1, 2), with corresponding convex weights λ1, λ2. The specific indices do not matter: we
can always order the extreme points so that the first is (2, 2, 2) and the second is (1, 1, 2). (In
class we used another ordering, where the first extreme point is (1, 1, 1) and the eighth is
(2, 2, 2), but the ordering is not material.) We have
   
    c^T x^1 = [−4 −1 −6] (2, 2, 2)^T = −22,      c^T x^2 = [−4 −1 −6] (1, 1, 2)^T = −17,

    D x^1 = [3 2 4] (2, 2, 2)^T = 18,            D x^2 = [3 2 4] (1, 1, 2)^T = 13.
So the restricted master problem (RMP) can be written explicitly as

    (RMP)  min_{λ1,λ2}  −22λ1 − 17λ2

           s.t.  18λ1 + 13λ2 = 17

                 λ1 + λ2 = 1

                 λ1, λ2 ≥ 0.
The basis matrix and its inverse are

    B = [ 18  13 ]        B^{-1} = [  0.2  −2.6 ]
        [  1   1 ]                 [ −0.2   3.6 ]

The optimal solution is (λ̂1, λ̂2)^T = B^{-1} (17, 1)^T = (0.8, 0.2)^T. Form the dual variables:

    [ŷ, r̂] = c_B^T B^{-1} = [−22 −17] B^{-1} = [−1 −4],

so ŷ = −1 and r̂ = −4.
To compute the minimum reduced cost, we form the following subproblem:

    (SP)  min_x  (c^T − ŷ^T D) x
          s.t.  x ∈ P
which is written more explicitly (note c^T − ŷ^T D = [−4, −1, −6] − (−1)[3, 2, 4] = [−1, 1, −2]):

    w = min  −x1 + x2 − 2x3
        s.t. 1 ≤ x1 ≤ 2, 1 ≤ x2 ≤ 2, 1 ≤ x3 ≤ 2.

The optimal solution is x^3 = (2, 1, 2), which is another extreme point of the cube P. The
optimal cost is w = −5, which is less than r̂ = −4; therefore the reduced cost of λ3 is
w − r̂ = −1, and this variable enters the restricted problem. The associated column is
[D x^3 ; 1] = [16 ; 1], and the associated cost coefficient is c^T x^3 = −21. The updated (RMP) is

    (RMP)  min_{λ1,λ2,λ3}  −22λ1 − 17λ2 − 21λ3

           s.t.  18λ1 + 13λ2 + 16λ3 = 17

                 λ1 + λ2 + λ3 = 1

                 λ1, λ2, λ3 ≥ 0.

The optimal solution is λ̂1 = 0.5, λ̂2 = 0, λ̂3 = 0.5. The optimal basis matrix (for the basic
variables λ1, λ3) and its inverse are

    B = [ 18  16 ]        B^{-1} = [  0.5  −8 ]
        [  1   1 ]                 [ −0.5   9 ]

The dual variables are

    [ŷ, r̂] = c_B^T B^{-1} = [−22, −21] B^{-1} = [−0.5, −13].

To check the optimality of the solution λ̂i ’s, we need to form the subproblem.

    c^T − ŷ^T D = [−4, −1, −6] − (−0.5)[3, 2, 4] = [−2.5, 0, −4]

The subproblem is given as

    w = min  −2.5x1 − 4x3
        s.t. 1 ≤ x1 ≤ 2, 1 ≤ x2 ≤ 2, 1 ≤ x3 ≤ 2.

An optimal solution is x^4 = (2, 2, 2), with optimal cost w = −13. Note that r̂ = −13, so w = r̂,
which means the current solution is optimal for the master problem. In terms of the variables xi,
the optimal solution is

    x* = 0.5 x^1 + 0.5 x^3 = 0.5 (2, 2, 2)^T + 0.5 (2, 1, 2)^T = (2, 1.5, 2)^T.
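The whole computation above can be reproduced with a short script. This is a minimal sketch, not part of the notes: it assumes numpy and scipy are available, uses linprog's eqlin.marginals as the dual solution (ŷ, r̂), and caps the loop at 20 iterations as a safeguard.

```python
# Dantzig-Wolfe column generation on the numerical example:
# min c^T x s.t. Dx = b0, x in P = {1 <= x_i <= 2}.
import numpy as np
from scipy.optimize import linprog

c = np.array([-4.0, -1.0, -6.0])
D = np.array([[3.0, 2.0, 4.0]])
b0 = np.array([17.0])
bounds = [(1, 2)] * 3                      # the easy constraints: x in P

# Start from the same two extreme points as in the text.
points = [np.array([2.0, 2.0, 2.0]), np.array([1.0, 1.0, 2.0])]

for _ in range(20):
    # Restricted master problem over the current set of extreme points.
    costs = [c @ x for x in points]
    A_eq = np.vstack([np.array([D @ x for x in points]).T,
                      np.ones((1, len(points)))])
    b_eq = np.concatenate([b0, [1.0]])
    rmp = linprog(costs, A_eq=A_eq, b_eq=b_eq, method="highs")
    y_hat = rmp.eqlin.marginals[:-1]       # duals of the Dx = b0 rows
    r_hat = rmp.eqlin.marginals[-1]        # dual of the convexity row
    # Pricing subproblem (SP): minimize (c^T - y^T D) x over P.
    sp = linprog(c - D.T @ y_hat, bounds=bounds, method="highs")
    if sp.fun >= r_hat - 1e-9:             # minimum reduced cost Z >= 0
        break
    points.append(sp.x)                    # add the new extreme point column

x_star = sum(lam * x for lam, x in zip(rmp.x, points))
print(x_star, rmp.fun)                     # recovers x* = (2, 1.5, 2), cost -21.5
```

As in the text, the loop adds the single column x^3 = (2, 1, 2) and then terminates with w = r̂ on the next pricing round.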
