
Introduction to Column Generation

Eduardo Uchoa
Departamento de Engenharia de Produção
Universidade Federal Fluminense, Brazil
INRIA International Chair 2022-2026, Bordeaux

Hausdorff School – Bonn 2022 Introduction to Column Generation 1 / 84


Outline

1 Revised Simplex Algorithm

2 Dantzig-Wolfe decomposition for LP

3 Dantzig-Wolfe decomposition for IP

4 DW decomposition with multiple subproblems


Example: Generalized Assignment Problem

5 DW decomposition with identical subproblems


Example: Cutting Stock Problem

6 Guidelines on when trying DW decomp for IP

Hausdorff School – Bonn 2022 Introduction to Column Generation 2 / 84


Linear Programming
An LP has the following format:

min z = cx

subject to

Ax = b
x ≥0

c: 1 × n vector of objective function coefficients
A: m × n matrix of constraint coefficients
x: n × 1 vector of decision variables
b: m × 1 vector of right-hand side constants

Hausdorff School – Bonn 2022 Introduction to Column Generation 3 / 84


Revised Simplex Algorithm for LP (Dantzig, 1953)

More efficient than the original Simplex (Dantzig, 1947)

Takes advantage of the fact that n is usually significantly larger than m

Hausdorff School – Bonn 2022 Introduction to Column Generation 4 / 84


Revised Simplex

Definition: Basic solution, basic variables


Let (B N) be a partition of the columns in A, such that B has
dimension m × m and is invertible. Let x = (xB xN ) and
c = (cB cN ) be the corresponding partitions of x and c. A feasible
solution x = (xB xN ) is said to be basic if xN = 0. Variables in xB
are basic variables, those in xN are non-basic variables.

Hausdorff School – Bonn 2022 Introduction to Column Generation 5 / 84


Revised Simplex
EXAMPLE
min z = 24x1 + 29x2 + 10x3 + 38x4
s.t. x1 + 4x2 + 5x3 = 60
2x2 + x3 ≤ 12
2x1 + x2 − x3 + 4x4 ≥ 10
x1 , x2 , x3 , x4 ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 6 / 84


Revised Simplex
EXAMPLE
min z = 24x1 + 29x2 + 10x3 + 38x4
s.t. x1 + 4x2 + 5x3 = 60
2x2 + x3 ≤ 12
2x1 + x2 − x3 + 4x4 ≥ 10
x1 , x2 , x3 , x4 ≥ 0

Simplex algorithms (both the original and the revised variants) require converting all inequalities into equalities, so slack/surplus variables must be added.

Hausdorff School – Bonn 2022 Introduction to Column Generation 6 / 84


Revised Simplex
EXAMPLE
min z = 24x1 + 29x2 + 10x3 + 38x4
s.t. x1 + 4x2 + 5x3 = 60
2x2 + x3 + x5 = 12
2x1 + x2 − x3 + 4x4 − x6 = 10
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 6 / 84


Revised Simplex
EXAMPLE
min z = 24x1 + 29x2 + 10x3 + 38x4
s.t. x1 + 4x2 + 5x3 = 60
2x2 + x3 + x5 = 12
2x1 + x2 − x3 + 4x4 − x6 = 10
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0

 

c = [ 24  29  10  38   0   0 ]

    [ 1  4   5  0  0   0 ]        [ 60 ]
A = [ 0  2   1  0  1   0 ]    b = [ 12 ]
    [ 2  1  −1  4  0  −1 ]        [ 10 ]

x = (x1, x2, x3, x4, x5, x6)^T

Hausdorff School – Bonn 2022 Introduction to Column Generation 6 / 84


Revised Simplex Algorithm

Step 1: Find an initial basic feasible solution


Find m × m submatrix B of A such that linear system BxB = b has
solution x B ≥ 0.

Suppose that x1 , x3 and x5 are the variables chosen to be basic


         
         [ 1   5  0 ]   [ x1 ]   [ 60 ]           [ x1 ]   [ 10 ]
B · xB = [ 0   1  1 ] · [ x3 ] = [ 12 ]   ⇒  xB = [ x3 ] = [ 10 ]
         [ 2  −1  0 ]   [ x5 ]   [ 10 ]           [ x5 ]   [  2 ]

z = 340

Hausdorff School – Bonn 2022 Introduction to Column Generation 7 / 84


Revised Simplex Algorithm

Step 1: Find an initial basic feasible solution


Find m × m submatrix B of A such that linear system BxB = b has
solution x B ≥ 0.

Suppose that x1 , x3 and x5 are the variables chosen to be basic



              x1 + 5x3      = 60        x1 = 10
B · xB = b ⇔        x3 + x5 = 12   ⇒   x3 = 10
             2x1 −  x3      = 10        x5 = 2

z = 340
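A quick numerical check of Step 1 (a sketch using NumPy; the matrix and cost data are transcribed from the example, the variable names are my own):

```python
import numpy as np

# Columns of A for the chosen basic variables x1, x3, x5
B = np.array([[1.0,  5.0, 0.0],
              [0.0,  1.0, 1.0],
              [2.0, -1.0, 0.0]])
b = np.array([60.0, 12.0, 10.0])
c_B = np.array([24.0, 10.0, 0.0])    # costs of x1, x3, x5

x_B = np.linalg.solve(B, b)          # basic solution
print(x_B)                           # [10. 10.  2.]
print(c_B @ x_B)                     # 340.0  (objective value z)
```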

Hausdorff School – Bonn 2022 Introduction to Column Generation 7 / 84


Revised Simplex

Step 2: Find the dual solution


The reduced cost of a variable xj is given by c j = cj − πAj , where
Aj is the j-th column of A. Basic variables have zero reduced
cost. Dual solution π is the solution of linear system πB = cB .

 |    |  |  |
π1 1 5 0 24 π1 4
πB = π2  · 0 1 1 = 10 ⇒ π2  =  0 
π3 2 −1 0 0 π3 10

Hausdorff School – Bonn 2022 Introduction to Column Generation 8 / 84


Revised Simplex

Step 2: Find the dual solution


The reduced cost of a variable xj is given by c j = cj − πAj , where
Aj is the j-th column of A. Basic variables have zero reduced
cost. Dual solution π is the solution of linear system πB = cB .


             π1       + 2π3 = 24        π1 = 4
π B = cB ⇔  5π1 + π2  −  π3 = 10   ⇒   π2 = 0
                  π2        =  0        π3 = 10
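The duals can be checked the same way, since πB = cB is equivalent to Bᵀπᵀ = cBᵀ (a sketch using NumPy with the same data):

```python
import numpy as np

B = np.array([[1.0,  5.0, 0.0],
              [0.0,  1.0, 1.0],
              [2.0, -1.0, 0.0]])
c_B = np.array([24.0, 10.0, 0.0])    # costs of the basic variables x1, x3, x5

pi = np.linalg.solve(B.T, c_B)       # solves pi * B = c_B
print(pi)                            # [ 4.  0. 10.]
```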

Hausdorff School – Bonn 2022 Introduction to Column Generation 8 / 84


Revised Simplex

Step 3: Pricing (finding a variable to enter the basis)


Calculate the reduced cost (c̄j = cj − πAj) of the non-basic variables. If no variable has a negative reduced cost, the current solution is optimal. Otherwise, one of those variables can be chosen to enter the basis.

Non-basic variables are x2, x4, and x6:

c̄2 = 29 − 4π1 − 2π2 − π3 = 3
c̄4 = 38 − 4π3 = −2
c̄6 = 0 + π3 = 10

Hausdorff School – Bonn 2022 Introduction to Column Generation 9 / 84


Revised Simplex

Step 3: Pricing (finding a variable to enter the basis)


Calculate the reduced cost (c̄j = cj − πAj) of the non-basic variables. If no variable has a negative reduced cost (a positive reduced cost for maximization problems), the current solution is optimal. Otherwise, one of those variables can be chosen to enter the basis.

Non-basic variables are x2, x4, and x6:

c̄2 = 29 − 4π1 − 2π2 − π3 = 3
c̄4 = 38 − 4π3 = −2
c̄6 = 0 + π3 = 10

x4 is the only variable that can enter the basis

Hausdorff School – Bonn 2022 Introduction to Column Generation 9 / 84


Revised Simplex

Step 4: Finding the direction of improvement


Calculate the direction vector d that leads the current basic
solution into the next basic solution. Solve linear system Bd = Aj ,
where Aj is the column of A corresponding to the entering variable.

      

      [ 1   5  0 ]   [ d1 ]   [ 0 ]            [ 20/11 ]
B d = [ 0   1  1 ] · [ d3 ] = [ 0 ]   ⇒   d =  [ −4/11 ]
      [ 2  −1  0 ]   [ d5 ]   [ 4 ]            [  4/11 ]

Hausdorff School – Bonn 2022 Introduction to Column Generation 10 / 84


Revised Simplex

Step 5: Choose a variable to leave the basis


One would like to walk in direction d as much as possible.
Determine θ∗ = max{θ ≥ 0 : x B − θ · d ≥ 0}. One of the variables
that most limited θ∗ should be chosen to leave the basis. If no
variable limits θ∗ , the LP is unbounded.

Variables x1 and x5 are eligible to leave the basis


   
                  [ 10 ]       [ 20/11 ]
max θ such that   [ 10 ]  − θ  [ −4/11 ]  ≥ 0   ⇒   θ∗ = 11/2 = 5.5
                  [  2 ]       [  4/11 ]

Assume that x1 is chosen.
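Steps 3–5 of this iteration can also be verified numerically (a sketch; the columns and costs are copied from the example above):

```python
import numpy as np

B = np.array([[1.0,  5.0, 0.0],       # columns of x1, x3, x5
              [0.0,  1.0, 1.0],
              [2.0, -1.0, 0.0]])
x_B = np.array([10.0, 10.0, 2.0])
pi = np.array([4.0, 0.0, 10.0])

# Step 3: reduced costs of the non-basic variables x2, x4, x6
nonbasic = {"x2": (29.0, np.array([4.0, 2.0,  1.0])),
            "x4": (38.0, np.array([0.0, 0.0,  4.0])),
            "x6": (0.0,  np.array([0.0, 0.0, -1.0]))}
for name, (cj, Aj) in nonbasic.items():
    print(name, cj - pi @ Aj)         # x2: 3.0, x4: -2.0, x6: 10.0

# Step 4: direction for the entering variable x4 (its column in A)
d = np.linalg.solve(B, np.array([0.0, 0.0, 4.0]))
print(d)                              # [20/11, -4/11, 4/11]

# Step 5: ratio test, only over the components with d > 0
theta = min(x_B[i] / d[i] for i in range(3) if d[i] > 1e-9)
print(theta)                          # 5.5
```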

Hausdorff School – Bonn 2022 Introduction to Column Generation 11 / 84


Revised Simplex
Step 6: Update B and xB
Basis B is updated by replacing the column of the leaving variable
by the column of the entering variable. Calculate new basic
solution and go to Step 2.

New basis is formed by x4 , x3 and x5


     
     [ 0   5  0 ]          [ x4 ]   [ 5.5 ]
B =  [ 0   1  1 ]     xB = [ x3 ] = [ 12  ]
     [ 4  −1  0 ]          [ x5 ]   [  0  ]

z = 329

Step 2 in the next RSA iteration would obtain the dual variables π = [ 3.9  0  9.5 ].
Step 3 would then calculate the reduced costs c̄1 = 1.1, c̄2 = 3.9, and c̄6 = 9.5.
Since all of them are non-negative, the current solution is optimal.
Hausdorff School – Bonn 2022 Introduction to Column Generation 12 / 84
Advantage of Revised Simplex

Why is the Revised Simplex usually better than the original Simplex?


Only Step 3 (pricing) has complexity depending on n
All other steps have complexities that only depend on m

Significant advantage when n is much larger than m

Hausdorff School – Bonn 2022 Introduction to Column Generation 13 / 84


A fundamental insight

It is possible to solve LPs with not so many constraints but


with a HUGE number of variables, as long as those variables
have a special structure that allows their efficient pricing
Instead of calculating the reduced cost for each individual
variable, the whole pricing step should be solved as another
optimization problem!

Hausdorff School – Bonn 2022 Introduction to Column Generation 14 / 84


Dantzig-Wolfe Decomposition (1960)

Consider an LP (O) in the following format:

(O) max z = cx

subject to

Ax = b
Dx = d
x ≥0

LP (O) has m constraints (not counting the non-negativities)


and n variables; submatrix A has p rows and submatrix D has
q rows

Hausdorff School – Bonn 2022 Introduction to Column Generation 15 / 84


Dantzig-Wolfe Decomposition

Defining polyhedron P = {Dx = d, x ≥ 0}, (O) is equivalent


to:

(O′) max z = cx

subject to

Ax = b
x ∈ P

Assuming that P is bounded, any solution x ∈ P can be represented as a convex combination of the points in the set R of extreme points of P.

Hausdorff School – Bonn 2022 Introduction to Column Generation 16 / 84




Dantzig-Wolfe Decomposition

Any x ∈ P can be represented by a vector of |R| variables λ that forms a convex combination of its extreme points:

x = Σ_{r∈R} r λr
Σ_{r∈R} λr = 1
λ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 17 / 84


Dantzig-Wolfe Decomposition
Replacing the x variables in LP (O′) by their equivalent representation in the λ variables, one gets:

(MP) max z = Σ_{r∈R} (c r) λr

subject to

Σ_{r∈R} (A r) λr = b     (π)
Σ_{r∈R} λr = 1           (ν)
λ ≥ 0

New LP (MP) is called a Master Problem and is equivalent to LP


(O). Vector π and scalar ν are the dual variables of the
corresponding constraints.
Hausdorff School – Bonn 2022 Introduction to Column Generation 18 / 84
Dantzig-Wolfe Decomposition

Consequences of the reformulation


(MP) has p + 1 constraints, fewer than the p + q constraints of (O)
(MP) has many more variables (columns) than (O)
The number of columns in (MP) is |R|, the number of
extreme points of P. That number can be exponentially large

Hausdorff School – Bonn 2022 Introduction to Column Generation 19 / 84


Solving the Master Problem by Column Generation

No matter how large |R| is, the pricing step can be efficiently performed by solving the following LP:

(SP) max c̄ = (c − πA)x − ν

     s.t. Dx = d
          x ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 20 / 84


Solving the Master Problem by Column Generation

Step 1: Initialize the Restricted Master Problem (RMP)


Use a small subset of the variables in (MP), only enough to
provide a feasible basis, for creating (RMP). If necessary, use
artificial variables.

Step 2: Solve the current (RMP)


Besides the primal solution, also get the dual variables π and ν.

Hausdorff School – Bonn 2022 Introduction to Column Generation 21 / 84


Solving the Master Problem by Column Generation

Step 3: Pricing
Solve the pricing subproblem (SP); its objective function depends on the dual solution found in Step 2. The optimal solution x∗ of (SP) is a point r ∈ R. If c̄∗ ≥ 0, no variable in (MP) is suitable to enter the current basis of (RMP), so the current solution of (RMP) is also an optimal solution for (MP) and the column generation stops.

Step 4: Update the RMP


Add the variable λr (and its corresponding column in the
matrices), associated to x ∗ , to (RMP). Go to Step 2.

Hausdorff School – Bonn 2022 Introduction to Column Generation 22 / 84


Solving the Master Problem by Column Generation

Even if the number of variables in (MP) is huge, only a very


small subset of its variables is likely to be added to (RMP).

So, Column Generation can solve (MP)

Hausdorff School – Bonn 2022 Introduction to Column Generation 23 / 84


Is it worth using DW decomposition for solving an LP?

DW actually believed that it would work for problems with


block-diagonal structure, where the pricing subproblem
decomposes into many independent LPs. Those smaller LPs
would be easier to solve, especially if they have a nice
particular structure (like defining network flows, as in Ford Jr
and Fulkerson (1958)).

[figure: pricing constraint matrix with independent diagonal blocks D1, D2, D3]

Hausdorff School – Bonn 2022 Introduction to Column Generation 24 / 84


Is it worth using DW decomposition for solving an LP?

Usually not!
Even when the LP has a block-diagonal structure, it is usually
faster to solve (O) directly than to apply DW decomposition
and solve (MP) by Column Generation
In fact, top solvers like CPLEX, Gurobi and XPRESS do not
even offer DW decomposition for LP

Hausdorff School – Bonn 2022 Introduction to Column Generation 25 / 84


Dantzig-Wolfe decomposition for Integer Programming

An Integer Program (IP) has the following format:

(IP) max z = cx
s.t. Ax = b
x ∈ Z+^n

An IP is an LP with additional integrality constraints over the


variables.

Hausdorff School – Bonn 2022 Introduction to Column Generation 26 / 84


Integer Programming

An IP often arises as a formulation for a Combinatorial


Optimization Problem (COP)
Each solution of the COP is mapped into an integer point in
the n-dimensional space
Let X be the set of those points

If it were possible to know all the inequalities that define Conv(X), the COP could be solved as a simple LP

Hausdorff School – Bonn 2022 Introduction to Column Generation 27 / 84


Set of integer points and its convex hull

[figure: a 5 × 5 grid of integer points and their convex hull]

Hausdorff School – Bonn 2022 Introduction to Column Generation 28 / 84




Integer Programming for solving COPs

If a COP is NP-hard, there is no efficient way to separate all


inequalities that define Conv (X ) (unless P = NP)
In practice, one defines a formulation, a set of inequalities
that contain all points in X , but no integer point not in X
The same COP can have many different formulations.

Branch-and-Bound algorithms
Work with the linear relaxation of an IP. Better formulations lead to smaller gaps (the difference between the LP value and the IP value). That integrality gap has an exponential impact on the size of the B&B tree

Hausdorff School – Bonn 2022 Introduction to Column Generation 29 / 84


Two possible formulations for the same set X

[figure: two different polyhedra (formulations) containing the same set X of integer points]

Hausdorff School – Bonn 2022 Introduction to Column Generation 30 / 84


Dantzig-Wolfe decomposition for IP

(O)  max  cx                    (O′)  max  cx
     s.t. Ax = b          ⇒           s.t. Ax = b
          Dx = d                           x ∈ P
          x ∈ Z+^n

P = {Dx = d, x ∈ Z+^n}

is assumed to be a finite set of points, numbered from p1 to pQ.

Hausdorff School – Bonn 2022 Introduction to Column Generation 31 / 84


Dantzig-Wolfe decomposition for IP

A point x ∈ P can be (trivially) described as an integer convex combination of those Q points:

x = Σ_{j=1}^{Q} pj λj
Σ_{j=1}^{Q} λj = 1
λ ∈ {0, 1}^Q

Hausdorff School – Bonn 2022 Introduction to Column Generation 32 / 84


Dantzig-Wolfe decomposition for IP

Replacing x in (O′) by its equivalent, the following Integer Master Problem is obtained:

(IMP) max z = Σ_{j=1}^{Q} (c pj) λj

subject to

Σ_{j=1}^{Q} (A pj) λj = b
Σ_{j=1}^{Q} λj = 1
λ ∈ {0, 1}^Q

Hausdorff School – Bonn 2022 Introduction to Column Generation 33 / 84


Dantzig-Wolfe decomposition for IP

The linear relaxation of (IMP) is the following Master LP:

(MP) max z = Σ_{j=1}^{Q} (c pj) λj

subject to

Σ_{j=1}^{Q} (A pj) λj = b     (π)
Σ_{j=1}^{Q} λj = 1            (ν)
λ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 34 / 84


Dantzig-Wolfe decomposition for IP

Consequences of the reformulation


(MP) usually has a huge number of variables. Yet, it can be
solved by column generation
The value of (MP) can be better than the linear
relaxation of (O)!
This may happen because the integrality is not relaxed in the
subproblem:

(SP) max c̄ = (c − πA)x − ν

     s.t. Dx = d
          x ∈ Z+^n

Hausdorff School – Bonn 2022 Introduction to Column Generation 35 / 84


Dantzig-Wolfe decomposition for IP
max z = Σ_{j=1}^{Q} (c pj) λj
s.t.  Σ_{j=1}^{Q} (A pj) λj = b            max  cx
      Σ_{j=1}^{Q} λj = 1            ⇔      s.t. Ax = b
      λ ≥ 0                                     x ∈ Conv{Dx = d, x ∈ Z+^n}

The reformulation is equivalent to convexifying part of the


constraints in (O)

Hausdorff School – Bonn 2022 Introduction to Column Generation 36 / 84


How DW decomposition for IP improves a formulation

Suppose we partition the set of constraints of an Original Formulation into two sets: green and orange

[figure: grid of integer points with the green and the orange constraint sets drawn]

Hausdorff School – Bonn 2022 Introduction to Column Generation 37 / 84


How DW decomposition for IP improves a formulation

The Original Formulation is the intersection of the two sets

[figure: the intersection of the green and orange sets over the grid of integer points]

Hausdorff School – Bonn 2022 Introduction to Column Generation 37 / 84


How DW decomposition for IP improves a formulation

Convexifying the orange constraints: obtaining the convex hull of the integer points in that set

[figure: the convex hull of the integer points that satisfy the orange constraints]

Hausdorff School – Bonn 2022 Introduction to Column Generation 37 / 84




How DW decomposition for IP improves a formulation

New Improved Formulation

[figure: the new improved formulation over the grid of integer points]

Hausdorff School – Bonn 2022 Introduction to Column Generation 37 / 84


How DW decomposition for IP improves a formulation

Original formulation vs Improved formulation

[figure: the original formulation and the improved formulation, side by side]

Hausdorff School – Bonn 2022 Introduction to Column Generation 37 / 84


DW decomposition with multiple subproblems
Consider an IP decomposable into K independent
subproblems:

min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
Dk xk = dk,        k = 1, . . . , K
xk ∈ Z+^{nk},      k = 1, . . . , K

For each k = 1, . . . , K, Ak is a p × nk matrix and Dk is a qk × nk matrix; the remaining vectors have compatible dimensions

Hausdorff School – Bonn 2022 Introduction to Column Generation 38 / 84


DW decomposition with multiple subproblems
Consider an IP decomposable into K independent
subproblems:

min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
Dk xk = dk,        k = 1, . . . , K
xk ∈ Z+^{nk},      k = 1, . . . , K

When K = 1 we have the case already considered


min  cx
s.t. Ax = b
     Dx = d
     x ∈ Z+^n
Hausdorff School – Bonn 2022 Introduction to Column Generation 38 / 84
DW decomposition with multiple subproblems
Consider an IP decomposable into K independent
subproblems:

min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
Dk xk = dk,        k = 1, . . . , K
xk ∈ Z+^{nk},      k = 1, . . . , K

When K > 1 the problem is said to have a block-diagonal structure


[figure: block-diagonal structure — linking constraints A1 A2 A3 across the top, independent diagonal blocks D1, D2, D3]
Hausdorff School – Bonn 2022 Introduction to Column Generation 38 / 84
DW with Multiple subproblems
min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
xk ∈ Pk,        k = 1, . . . , K

Pk = {Dk xk = dk, xk ∈ Z+^{nk}}

is assumed to be a finite set of points, numbered from p1^k to p_{Qk}^k.

Hausdorff School – Bonn 2022 Introduction to Column Generation 39 / 84


DW with Multiple subproblems
min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
xk ∈ Pk,        k = 1, . . . , K

A point xk ∈ Pk can be described as:

xk = Σ_{j=1}^{Qk} pj^k λj^k
Σ_{j=1}^{Qk} λj^k = 1
λ^k ∈ {0, 1}^{Qk}

Hausdorff School – Bonn 2022 Introduction to Column Generation 39 / 84


Dantzig-Wolfe decomposition for IP

Replacing every xk by its equivalent, the following Integer Master Problem is obtained:

(IMP) min z = Σ_{k=1}^{K} Σ_{j=1}^{Qk} (ck pj^k) λj^k

subject to

Σ_{k=1}^{K} Σ_{j=1}^{Qk} (Ak pj^k) λj^k = b
Σ_{j=1}^{Qk} λj^k = 1,          k = 1, . . . , K
λ^k ∈ {0, 1}^{Qk},              k = 1, . . . , K

Hausdorff School – Bonn 2022 Introduction to Column Generation 40 / 84


Master LP Problem

The linear relaxation of (IMP) is the following Master LP Problem:

(MP) min z = Σ_{k=1}^{K} Σ_{j=1}^{Qk} (ck pj^k) λj^k

subject to

Σ_{k=1}^{K} Σ_{j=1}^{Qk} (Ak pj^k) λj^k = b     (π)
Σ_{j=1}^{Qk} λj^k = 1,          k = 1, . . . , K     (ν^k)
λ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 41 / 84


A remark on the Master LP Problem
For the subproblems where x k = 0 is a solution, it is possible
to relax the corresponding convexity constraint to ≤ 1. If all
subproblems have that property, we can write:

(MP) min z = Σ_{k=1}^{K} Σ_{j=1}^{Qk} (ck pj^k) λj^k

subject to

Σ_{k=1}^{K} Σ_{j=1}^{Qk} (Ak pj^k) λj^k = b     (π)
Σ_{j=1}^{Qk} λj^k ≤ 1,          k = 1, . . . , K     (ν^k)
λ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 42 / 84


Pricing subproblems

For each k = 1, . . . , K, there is a pricing subproblem:

(SPk) min c̄k = (ck − πAk) xk − ν^k

      s.t. Dk xk = dk
           xk ∈ Z+^{nk}

The restricted master LP is optimal when c̄k ≥ 0 for all k = 1, . . . , K

Hausdorff School – Bonn 2022 Introduction to Column Generation 43 / 84


Generalized Assignment Problem (GAP)

Set J of tasks; set K of machines; capacity W k , k ∈ K ;


assignment cost cjk and load wjk , k ∈ K , j ∈ J
Find an assignment of tasks to machines such that the total
load in each machine does not exceed its capacity, with
minimum total cost

Hausdorff School – Bonn 2022 Introduction to Column Generation 44 / 84


Example of instance

            cost (cj^k)        load (wj^k)       Wk
 jobs:       1   2   3   4      1   2   3   4
 machine 1:  8   3   2   9      2   3   3   1     5
 machine 2:  1   7   5   2      5   1   1   3     8

[figure: the jobs j1–j4 with their costs and loads on the two machines]

Optimal solution value: 18


Hausdorff School – Bonn 2022 Introduction to Column Generation 45 / 84
Generalized Assignment Problem (GAP)

Set J of tasks; set K of machines; capacity W k , k ∈ K ;


assignment cost cjk and load wjk , k ∈ K , j ∈ J
Find an assignment of tasks to machines such that the total
load in each machine does not exceed its capacity, with
minimum total cost
Original formulation (O):

Min z = Σ_{k∈K} Σ_{j∈J} cj^k xj^k                           (1a)
S.t.  Σ_{k∈K} xj^k = 1,              j ∈ J;                 (1b)
      Σ_{j∈J} wj^k xj^k ≤ Wk,        k ∈ K;                 (1c)
      xj^k ∈ {0, 1},                 j ∈ J, k ∈ K.          (1d)

Hausdorff School – Bonn 2022 Introduction to Column Generation 46 / 84


Example: Generalized Assignment Problem (GAP)

            cost (cj^k)        load (wj^k)       Wk
 jobs:       1   2   3   4      1   2   3   4
 machine 1:  8   3   2   9      2   3   3   1     5
 machine 2:  1   7   5   2      5   1   1   3     8

Hausdorff School – Bonn 2022 Introduction to Column Generation 47 / 84


Example: Generalized Assignment Problem (GAP)

            cost (cj^k)        load (wj^k)       Wk
 jobs:       1   2   3   4      1   2   3   4
 machine 1:  8   3   2   9      2   3   3   1     5
 machine 2:  1   7   5   2      5   1   1   3     8

Original formulation (O):

Min z = 8x1^1 + 3x2^1 + 2x3^1 + 9x4^1 + x1^2 + 7x2^2 + 5x3^2 + 2x4^2
S.t.  x1^1 + x1^2 = 1
      x2^1 + x2^2 = 1
      x3^1 + x3^2 = 1
      x4^1 + x4^2 = 1
      2x1^1 + 3x2^1 + 3x3^1 + x4^1 ≤ 5
      5x1^2 + x2^2 + x3^2 + 3x4^2 ≤ 8
      0 ≤ x ≤ 1,  x ∈ Z^8

Hausdorff School – Bonn 2022 Introduction to Column Generation 47 / 84


Example: Generalized Assignment Problem (GAP)

Linear relaxation of the original formulation:

Min z = 8x1^1 + 3x2^1 + 2x3^1 + 9x4^1 + x1^2 + 7x2^2 + 5x3^2 + 2x4^2
S.t.  x1^1 + x1^2 = 1
      x2^1 + x2^2 = 1
      x3^1 + x3^2 = 1
      x4^1 + x4^2 = 1
      2x1^1 + 3x2^1 + 3x3^1 + x4^1 ≤ 5
      5x1^2 + x2^2 + x3^2 + 3x4^2 ≤ 8
      x ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 48 / 84


Example: Generalized Assignment Problem (GAP)

Linear relaxation of the original formulation:

Min z = 8x1^1 + 3x2^1 + 2x3^1 + 9x4^1 + x1^2 + 7x2^2 + 5x3^2 + 2x4^2
S.t.  x1^1 + x1^2 = 1
      x2^1 + x2^2 = 1
      x3^1 + x3^2 = 1
      x4^1 + x4^2 = 1
      2x1^1 + 3x2^1 + 3x3^1 + x4^1 ≤ 5
      5x1^2 + x2^2 + x3^2 + 3x4^2 ≤ 8
      x ≥ 0

z = 9.69    x1^1 = 0.077   x2^1 = 1   x3^1 = 0.615   x4^1 = 0
            x1^2 = 0.923   x2^2 = 0   x3^2 = 0.385   x4^2 = 1
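As a sanity check, this LP relaxation can be solved with an off-the-shelf solver; a sketch using scipy.optimize.linprog (variable order x1^1,...,x4^1, x1^2,...,x4^2; the expected value is the 9.69 reported above):

```python
import numpy as np
from scipy.optimize import linprog

c = [8, 3, 2, 9, 1, 7, 5, 2]                      # costs: machine 1 then machine 2
A_eq = np.zeros((4, 8))
for j in range(4):                                # each job assigned exactly once
    A_eq[j, j] = 1
    A_eq[j, 4 + j] = 1
b_eq = [1, 1, 1, 1]
A_ub = [[2, 3, 3, 1, 0, 0, 0, 0],                 # capacity of machine 1
        [0, 0, 0, 0, 5, 1, 1, 3]]                 # capacity of machine 2
b_ub = [5, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 8, method="highs")
print(round(res.fun, 2))                          # expected: 9.69
```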

Hausdorff School – Bonn 2022 Introduction to Column Generation 48 / 84


Applying DW decomposition to (O)

Let Pk = {p1^k, p2^k, . . . , p_{Qk}^k} be the set of all possible allocations of tasks to machine k.

pq^k = (p_{q1}^k, p_{q2}^k, . . . , p_{q|J|}^k) is a feasible solution to:

Σ_{j∈J} wj^k p_{qj}^k ≤ Wk                        (2a)
p_{qj}^k ∈ {0, 1},        j ∈ J                   (2b)

Let λq^k, k ∈ K, q = 1, . . . , Qk, be a binary variable indicating whether allocation pq^k is selected for machine k

Hausdorff School – Bonn 2022 Introduction to Column Generation 49 / 84


Applying DW decomposition to (O)

Resulting Integer Master Problem:

Min z = Σ_{k∈K} Σ_{q∈Pk} ( Σ_{j∈J} cj^k p_{qj}^k ) λq^k                   (3a)
S.t.  Σ_{k∈K} Σ_{q∈Pk} p_{qj}^k λq^k = 1,        j ∈ J;                   (3b)
      Σ_{q∈Pk} λq^k ≤ 1,                         k ∈ K;                   (3c)
      λq^k ∈ {0, 1},                             k ∈ K, q = 1, . . . , Qk (3d)

Hausdorff School – Bonn 2022 Introduction to Column Generation 50 / 84


Applying DW decomposition to (O)

Master LP:

Min z = Σ_{k∈K} Σ_{q∈Pk} ( Σ_{j∈J} cj^k p_{qj}^k ) λq^k                   (4a)
S.t.  Σ_{k∈K} Σ_{q∈Pk} p_{qj}^k λq^k = 1,        j ∈ J;                   (4b)
      Σ_{q∈Pk} λq^k ≤ 1,                         k ∈ K;                   (4c)
      λq^k ≥ 0,                                  k ∈ K, q = 1, . . . , Qk (4d)

Hausdorff School – Bonn 2022 Introduction to Column Generation 51 / 84


Applying DW decomposition to (O)

The pricing subproblems are, for each machine k, the following binary knapsack problem:

Min c̄k = Σ_{j∈J} (cj^k − πj) xj^k − ν^k           (5a)
S.t.  Σ_{j∈J} wj^k xj^k ≤ Wk,                      (5b)
      xj^k ∈ {0, 1},        j ∈ J,                 (5c)

where πj and ν^k are the dual variables associated with Constraints (4b) and (4c), respectively.

The Binary Knapsack Problem is (weakly) NP-hard, but extremely well solved in practice (see Pferschy et al. (2004))
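For instance, the machine-1 pricing problem of the first CG iteration shown later (profits 91, 96, 97, 90 for items of weight 2, 3, 3, 1 and capacity 5) is solved by the standard 0–1 knapsack DP; a sketch:

```python
def binary_knapsack(profits, weights, capacity):
    """Maximize sum(profits[i]*x[i]) s.t. sum(weights[i]*x[i]) <= capacity, x binary."""
    n = len(profits)
    dp = [0] * (capacity + 1)                     # best profit within each capacity
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for w in range(capacity, weights[i] - 1, -1):
            if dp[w - weights[i]] + profits[i] > dp[w]:
                dp[w] = dp[w - weights[i]] + profits[i]
                keep[i][w] = True
    x, w = [0] * n, capacity                      # recover the chosen items
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            x[i] = 1
            w -= weights[i]
    return dp[capacity], x

# Pricing for machine 1 at iteration 1: pi = (99,99,99,99), nu = 0, so
# minimizing sum((c_j - pi_j) x_j) is maximizing profits 91, 96, 97, 90.
best, x = binary_knapsack([91, 96, 97, 90], [2, 3, 3, 1], 5)
print(best, x)            # 188 [1, 0, 1, 0]  ->  reduced cost c̄ = -188
```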

Hausdorff School – Bonn 2022 Introduction to Column Generation 52 / 84


Example: Generalized Assignment Problem (GAP)

            cost (cj^k)        load (wj^k)       Wk
 jobs:       1   2   3   4      1   2   3   4
 machine 1:  8   3   2   9      2   3   3   1     5
 machine 2:  1   7   5   2      5   1   1   3     8

Machine 1 (P1):
  q:      1  2  3  4  5  6  7  8  9
  p_q1:   1  0  0  0  1  1  1  0  0
  p_q2:   0  1  0  0  1  0  0  1  0
  p_q3:   0  0  1  0  0  1  0  0  1
  p_q4:   0  0  0  1  0  0  1  1  1
  cost:   8  3  2  9 11 10 17 12 11
  var:    λ1^1 . . . λ9^1

Machine 2 (P2):
  q:      1  2  3  4  5  6  7  8  9 10 11
  p_q1:   1  0  0  0  1  1  1  0  0  0  1
  p_q2:   0  1  0  0  1  1  0  1  1  0  0
  p_q3:   0  0  1  0  0  1  1  1  1  1  0
  p_q4:   0  0  0  1  0  0  0  0  1  1  1
  cost:   1  7  5  2  8 13  6 12 14  7  3
  var:    λ1^2 . . . λ11^2

Hausdorff School – Bonn 2022 Introduction to Column Generation 53 / 84


Example: Generalized Assignment Problem (GAP)

cost (cjk ) load (wjk ) Wk


jobs 1 2 3 4 1 2 3 4
1 8 3 2 9 2 3 3 1 5
machines
2 1 7 5 2 5 1 1 3 8

Integer Master Problem


Min z = 8λ1^1 + 3λ2^1 + 2λ3^1 + 9λ4^1 + · · · + 13λ6^2 + 6λ7^2 + 12λ8^2 + 14λ9^2 + 7λ10^2
        λ1^1 + . . . + λ6^2 + λ7^2                              = 1
        λ2^1 + . . . + λ6^2 + λ8^2 + λ9^2                       = 1
        λ3^1 + . . . + λ6^2 + λ7^2 + λ8^2 + λ9^2 + λ10^2        = 1
        λ4^1 + . . . + λ9^2 + λ10^2                             = 1
        λ1^1 + λ2^1 + λ3^1 + λ4^1 + . . .                       ≤ 1
        . . . + λ6^2 + λ7^2 + λ8^2 + λ9^2 + λ10^2               ≤ 1
        λ ∈ {0, 1}

Hausdorff School – Bonn 2022 Introduction to Column Generation 53 / 84


Example: Generalized Assignment Problem (GAP)

cost (cjk ) load (wjk ) Wk


jobs 1 2 3 4 1 2 3 4
1 8 3 2 9 2 3 3 1 5
machines
2 1 7 5 2 5 1 1 3 8

Relaxing the integrality ⇒ (MP)


Min z = 8λ1^1 + 3λ2^1 + 2λ3^1 + 9λ4^1 + · · · + 13λ6^2 + 6λ7^2 + 12λ8^2 + 14λ9^2 + 7λ10^2
        λ1^1 + . . . + λ6^2 + λ7^2                              = 1
        λ2^1 + . . . + λ6^2 + λ8^2 + λ9^2                       = 1
        λ3^1 + . . . + λ6^2 + λ7^2 + λ8^2 + λ9^2 + λ10^2        = 1
        λ4^1 + . . . + λ9^2 + λ10^2                             = 1
        λ1^1 + λ2^1 + λ3^1 + λ4^1 + . . .                       ≤ 1
        . . . + λ6^2 + λ7^2 + λ8^2 + λ9^2 + λ10^2               ≤ 1
        λ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 53 / 84


Solving the (MP) by Column Generation
Iteration 1: (RMP) initialized with artificial variables
Restricted Master Problem
Min z = 99µ1 + 99µ2 + 99µ3 + 99µ4
µ1 =1 (π1 = 99)
µ2 =1 (π2 = 99)
µ3 =1 (π3 = 99)
µ4 =1 (π4 = 99)
≤1 (ν 1 = 0)
≤1 (ν 2 = 0)
λ≥0

z = 396; µ1 = 1, µ2 = 1, µ3 = 1, µ4 = 1

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 1:
Restricted Master Problem
Min z = 99µ1 + 99µ2 + 99µ3 + 99µ4
µ1 =1 (π1 = 99)
µ2 =1 (π2 = 99)
µ3 =1 (π3 = 99)
µ4 =1 (π4 = 99)
≤1 (ν 1 = 0)
≤1 (ν 2 = 0)
λ≥0

z = 396; µ1 = 1, µ2 = 1, µ3 = 1, µ4 = 1

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄ 1 = −91x11 − 96x21 − 97x31 − 90x41 Min c̄ 2 = −97x12 − 92x22 − 94x32 − 97x42
S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

S = (1, 0, 1, 0) and c̄ 1 = −188; S ⇔ λ16 S = (1, 1, 1, 0) and c̄ 2 = −284; S ⇔ λ26

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 2:
Restricted Master Problem
Min z = 99µ1 + 99µ2 + 99µ3 + 99µ4 + 10λ1 + 13λ2
µ1 + λ1 + λ2 =1 (π1 = 0)
µ2 + λ2 =1 (π2 = 3)
µ3 + λ1 + λ2 =1 (π3 = 10)
µ4 =1 (π4 = 99)
λ1 ≤1 (ν 1 = 0)
λ2 ≤1 (ν 2 = 0)
λ≥0

z = 112

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄ 1 = 8x11 − 8x31 − 90x41 Min c̄ 2 = x12 + 4x22 − 5x32 − 97x42


S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

S = (0, 0, 1, 1) and c̄ 1 = −98; S ⇔ λ19 S = (0, 0, 1, 1) and c̄ 2 = −102; S ⇔ λ210

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 3:
Restricted Master Problem
Min z = 99µ1 + 99µ2 + 99µ3 + 99µ4 + 10λ1 + 13λ2 + 11λ3 + 7λ4
µ1 + λ1 + λ2 =1 (π1 = 10)
µ2 + λ2 =1 (π2 = 7)
µ3 + λ1 + λ2 + λ3 + λ4 =1 (π3 = 0)
µ4 + λ3 + λ4 =1 (π4 = 11)
λ1 + λ3 ≤1 (ν 1 = 0)
λ2 + λ4 ≤1 (ν 2 = −4)
λ≥0

z = 24

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄ 1 = −2x11 − 4x21 + 2x31 − 2x41 Min c̄ 2 = −9x12 + 5x32 − 9x42 + 4


S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

S = (1, 1, 0, 0) and c̄ 1 = −6; S ⇔ λ15 S = (1, 0, 0, 1) and c̄ 2 = −14; S ⇔ λ211

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 4:
Restricted Master Problem
Min z = . . . 10λ1 + 13λ2 + 11λ3 + 7λ4 + 11λ5 + 3λ6
. . . λ1 + λ2 + λ5 + λ6 =1 (π1 = 2)
... + λ2 + λ5 =1 (π2 = 9)
. . . λ1 + λ2 + λ3 + λ4 =1 (π3 = 6)
... + λ3 + λ4 + λ6 =1 (π4 = 5)
. . . λ1 + λ3 + λ5 ≤1 (ν 1 = 0)
... λ2 + λ4 + λ6 ≤1 (ν 2 = −4)
λ≥0

z = 18

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄ 1 = 6x11 − 6x21 − 4x31 + 4x41 Min c̄ 2 = −x12 − 2x22 − x32 − 3x42 + 4
S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

S = (0, 1, 0, 0) and c̄ 1 = −6; S ⇔ λ12 S = (0, 1, 1, 1) and c̄ 2 = −2; S ⇔ λ29

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 5:
Restricted Master Problem
Min z = . . . 10λ1 + 13λ2 + 11λ3 + 7λ4 + 11λ5 + 3λ6 + 3λ7 + 14λ8
. . . λ1 + λ2 + λ5 + λ6 =1 (π1 = 5)
... + λ2 + λ5 + λ7 + λ8 =1 (π2 = 7)
. . . λ1 + λ2 + λ3 + λ4 + λ8 =1 (π3 = 9)
... + λ3 + λ4 + λ6 + λ8 =1 (π4 = 6)
. . . λ1 + λ3 + λ5 λ7 ≤1 (ν 1 = −4)
... λ2 + λ4 + λ6 + λ8 ≤1 (ν 2 = −8)
λ≥0

z = 15

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄ 1 = 3x11 − 4x21 − 7x31 + 3x41 + 4 Min c̄ 2 = −4x12 − 4x32 − 4x42 + 8


S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

S = (0, 0, 1, 0) and c̄ 1 = −3; S ⇔ λ13 c̄ 2 = 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Solving the (MP) by Column Generation
Iteration 6:
Restricted Master Problem
Min z = . . . 10λ1 + 13λ2 + 11λ3 + 7λ4 + 11λ5 + 3λ6 + 3λ7 + 14λ8 + 2λ9
. . . λ1 + λ2 + λ5 + λ6 =1 (π1 = 8)
... + λ2 + λ5 + λ7 + λ8 =1 (π2 = 10)
. . . λ1 + λ2 + λ3 + λ4 + λ8 + λ9 =1 (π3 = 9)
... + λ3 + λ4 + λ6 + λ8 =1 (π4 = 9)
. . . λ1 + λ3 + λ5 λ7 + λ9 ≤1 (ν 1 = −7)
... λ2 + λ4 + λ6 + λ8 ≤1 (ν 2 = −14)
λ≥0

z = 15

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − ν^2

Min c̄1 = −7x2^1 − 7x3^1 + 7                               Min c̄2 = −7x1^2 − 3x2^2 − 4x3^2 − 7x4^2 + 14


S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

c̄ 1 = 0 c̄ 2 = 0

c̄ ≥ 0 for all subproblems ⇒ Optimal (MP) Solution
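The column generation above stops with z = 15. As a check, one can also build the full (MP) with all 20 columns from the pattern table on the earlier slide and solve it directly; a sketch using scipy (the pattern and cost data are transcribed from that table):

```python
import numpy as np
from scipy.optimize import linprog

# (pattern over jobs 1..4, cost) for each machine
P1 = [([1,0,0,0], 8), ([0,1,0,0], 3), ([0,0,1,0], 2), ([0,0,0,1], 9),
      ([1,1,0,0],11), ([1,0,1,0],10), ([1,0,0,1],17), ([0,1,0,1],12), ([0,0,1,1],11)]
P2 = [([1,0,0,0], 1), ([0,1,0,0], 7), ([0,0,1,0], 5), ([0,0,0,1], 2),
      ([1,1,0,0], 8), ([1,1,1,0],13), ([1,0,1,0], 6), ([0,1,1,0],12),
      ([0,1,1,1],14), ([0,0,1,1], 7), ([1,0,0,1], 3)]

cols = P1 + P2
c = [cost for _, cost in cols]
A_eq = np.array([pat for pat, _ in cols]).T        # one partitioning row per job
b_eq = [1, 1, 1, 1]
A_ub = np.zeros((2, len(cols)))                    # one convexity row per machine
A_ub[0, :len(P1)] = 1
A_ub[1, len(P1):] = 1
b_ub = [1, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(cols), method="highs")
print(res.fun)        # expected: 15.0
```

The master bound of 15 is indeed much stronger than the 9.69 of the linear relaxation of (O); the integer optimum is 18.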

Hausdorff School – Bonn 2022 Introduction to Column Generation 54 / 84


Combining Column Generation with Cutting Planes:
Robust vs Non-Robust

Definition
Robust Cut, Robust Branch-Cut-and-Price Algorithm
(BCPA). A cutting plane in a BCPA is robust if it does not force
any change in the structure of the pricing subproblems in
subsequent calls to the Column Generation algorithm. A cutting
plane that does force changes in the pricing structure is non-robust.
A BCPA that only performs robust branchings and only separates
robust cuts is said to be robust, otherwise, it is non-robust.

Hausdorff School – Bonn 2022 Introduction to Column Generation 55 / 84


Robust Cuts

A fractional solution λ∗ to (MP) can be converted into a solution x∗ using:

x∗ = Σ_{j=1}^{Q} pj λj∗ .

We may separate a valid inequality αx ≥ α0 cutting that point. Then, it can be translated back to

Σ_{q=1}^{Q} ( Σ_{j=1}^{n} αj p_{qj} ) λq ≥ α0 .        (6)

The dual variable of the new cut is included in the π vector, its α
coefficients are included as an additional row in matrix A.
Everything happens as if αx ≥ α0 was part of the original IP.
So, there is no change in the pricing structure and the cut is robust.

Hausdorff School – Bonn 2022 Introduction to Column Generation 56 / 84


Example: Generalized Assignment Problem (GAP)

(RMP) Solution:

λ1 = λ^1_[1010] = 0.5
λ7 = λ^1_[0100] = 0.5          x1^1 = x1^2 = x2^1 = x2^2 = x3^1 = x3^2 = 0.5
λ6 = λ^2_[1001] = 0.5    ⇒     x4^2 = 1
λ8 = λ^2_[0111] = 0.5          z = 15

Hausdorff School – Bonn 2022 Introduction to Column Generation 57 / 84


Example: Generalized Assignment Problem (GAP)

(RMP) Solution:

λ1 = λ^1_[1010] = 0.5
λ7 = λ^1_[0100] = 0.5          x1^1 = x1^2 = x2^1 = x2^2 = x3^1 = x3^2 = 0.5
λ6 = λ^2_[1001] = 0.5    ⇒     x4^2 = 1
λ8 = λ^2_[0111] = 0.5          z = 15

Separate a robust cut:

x1^2 + 2x2^1 + 2x3^1 + x4^2 ≤ 3

Translating to λ variables:

2λ1 + λ2 + 2λ3 + λ4 + 2λ5 + 2λ6 + 2λ7 + λ8 + 2λ9 ≤ 3
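The λ-coefficients are simply α applied to each generated column; a small sketch (the nine columns are transcribed from the CG iterations above, in the order they were generated):

```python
# Coefficients of the cut  x1^2 + 2 x2^1 + 2 x3^1 + x4^2 <= 3  on the x variables
alpha = {1: [0, 2, 2, 0],      # coefficients on machine-1 variables x_j^1
         2: [1, 0, 0, 1]}      # coefficients on machine-2 variables x_j^2

# RMP columns lambda_1 ... lambda_9: (machine, pattern over jobs 1..4)
columns = [(1, [1,0,1,0]), (2, [1,1,1,0]), (1, [0,0,1,1]), (2, [0,0,1,1]),
           (1, [1,1,0,0]), (2, [1,0,0,1]), (1, [0,1,0,0]), (2, [0,1,1,1]),
           (1, [0,0,1,0])]

coefs = [sum(a * p for a, p in zip(alpha[k], pat)) for k, pat in columns]
print(coefs)    # [2, 1, 2, 1, 2, 2, 2, 1, 2]  ->  2λ1 + λ2 + 2λ3 + λ4 + ... <= 3
```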

Hausdorff School – Bonn 2022 Introduction to Column Generation 57 / 84


Example: Generalized Assignment Problem (GAP)
Restricted Master Problem with the robust cut
Min z = . . . 10λ1 + 13λ2 + 11λ3 + 7λ4 + 11λ5 + 3λ6 + 3λ7 + 14λ8 + 2λ9
. . . λ1 + λ2 + λ5 + λ6 =1 (π1 = 8)
... + λ2 + λ5 + λ7 + λ8 =1 (π2 = 8.6)
. . . λ1 + λ2 + λ3 + λ4 + λ8 + λ9 =1 (π3 = 7.6)
... + λ3 + λ4 + λ6 + λ8 =1 (π4 = 9)
. . . 2λ1 + λ2 + 2λ3 + λ4 + 2λ5 + 2λ6 + 2λ7 + λ8 + 2λ9 ≤ 3 (π5 = −2.8)
. . . λ1 + λ3 + λ5 λ7 + λ9 ≤1 (ν 1 = 0)
... λ2 + λ4 + λ6 + λ8 ≤1 (ν 2 = −8.4)
λ≥0

Hausdorff School – Bonn 2022 Introduction to Column Generation 58 / 84


Example: Generalized Assignment Problem (GAP)
Restricted Master Problem with the robust cut
Min z = . . . 10λ1 + 13λ2 + 11λ3 + 7λ4 + 11λ5 + 3λ6 + 3λ7 + 14λ8 + 2λ9
. . . λ1 + λ2 + λ5 + λ6 =1 (π1 = 8)
... + λ2 + λ5 + λ7 + λ8 =1 (π2 = 8.6)
. . . λ1 + λ2 + λ3 + λ4 + λ8 + λ9 =1 (π3 = 7.6)
... + λ3 + λ4 + λ6 + λ8 =1 (π4 = 9)
. . . 2λ1 + λ2 + 2λ3 + λ4 + 2λ5 + 2λ6 + 2λ7 + λ8 + 2λ9 ≤ 3 (π5 = −2.8)
. . . λ1 + λ3 + λ5 λ7 + λ9 ≤1 (ν 1 = 0)
... λ2 + λ4 + λ6 + λ8 ≤1 (ν 2 = −8.4)
λ≥0

z = 16.4

Subproblem 1: min Σ_{j∈J} (cj^1 − πj) xj^1 − 2π5 x2^1 − 2π5 x3^1 − ν^1          Subproblem 2: min Σ_{j∈J} (cj^2 − πj) xj^2 − π5 x1^2 − π5 x4^2 − ν^2

Min c̄ 1 = 0x11 + 0x21 + 0x31 + 0x41 Min c̄ 2 = −4.2x12 − 1.6x22 − 2.6x32 − 4.2x42 + 8.4
S.t. 2x11 + 3x21 + 3x31 + x41 ≤ 5 S.t. 5x12 + x22 + x32 + 3x42 ≤ 8
x ∈ {0, 1} x ∈ {0, 1}

c̄ 1 = 0 c̄ 2 = 0

c̄ ≥ 0 for all subproblems ⇒ Optimal (MP) Solution


Hausdorff School – Bonn 2022 Introduction to Column Generation 58 / 84
Example: Generalized Assignment Problem (GAP)

New (MP) Solution with the robust cut:

λ1 = λ^1_[1010] = 0.4          x1^1 = 0.6    x1^2 = 0.4
λ5 = λ^1_[1100] = 0.2          x2^1 = 0.4    x2^2 = 0.6
λ7 = λ^1_[0100] = 0.2    ⇒     x3^1 = 0.4    x3^2 = 0.6
λ6 = λ^2_[1001] = 0.4          x4^2 = 1
λ8 = λ^2_[0111] = 0.6          z = 16.4

Then, a robust branching over variable x31 (implemented by adding


cuts x31 ≤ 0 or x31 ≥ 1) finds the optimal integer solution with
z = 18 and solves the instance in 3 nodes.

Hausdorff School – Bonn 2022 Introduction to Column Generation 59 / 84


Non-robust Cuts

A fractional solution λ∗ can be cut directly by a cutting plane

Σ_{q=1}^{Q} α(pq) λq ≥ α0 ,

where the coefficients α(pq) are given by an arbitrary function. The new dual variable will be denoted by σ. Now, the new pricing subproblem is:

min c̄ = (c − πA)x − σα(x) − ν

subject to x ∈ P.

The non-robust cut introduces a non-linear term −σα(x) in the


objective function of the pricing, breaking its original structure and
forcing algorithmic adaptations that may make it much less
efficient.
Hausdorff School – Bonn 2022 Introduction to Column Generation 60 / 84
Non-robust Chvátal-Gomory Cuts (CGCs)

Given valid inequalities Ax ≤ b and a set of multipliers ρ ≥ 0, the following CGC is valid:

⌊ρA⌋ x ≤ ⌊ρb⌋.

Relaxing the set partitioning constraints in the GAP reformulation (to ≤) and applying the CGC procedure, we get:

Σ_{k∈K} Σ_{q∈Pk} ⌊ Σ_{j=1}^{|J|} ρj p_{qj}^k ⌋ λq^k ≤ ⌊ Σ_{j=1}^{|J|} ρj ⌋.        (8)

The cut is non-robust because the floor operator makes the


coefficients non-linear with respect to the points p.

Hausdorff School – Bonn 2022 Introduction to Column Generation 61 / 84


Example: Generalized Assignment Problem (GAP)

RMP Solution:
λ1 = λ^1_[1010] = 0.5
λ7 = λ^1_[0100] = 0.5
λ6 = λ^2_[1001] = 0.5
λ8 = λ^2_[0111] = 0.5
Using ρ = [2/3 1/3 1/3 1/3], we obtain the following violated cut:

λ1 + λ2 + λ5 + λ6 + λ8 ≤ 1

By also separating a second CGC with ρ = [1/3 1/3 1/3 2/3], the
instance is solved at the root node.

However, the pricing is not a pure binary knapsack problem


anymore, each added CGC makes it significantly harder to solve.
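A small sketch of how the coefficients of such a CGC are computed for ρ = [2/3, 1/3, 1/3, 1/3] (the nine columns are transcribed from the CG iterations above):

```python
from math import floor
from fractions import Fraction as F

rho = [F(2, 3), F(1, 3), F(1, 3), F(1, 3)]

# RMP columns lambda_1 ... lambda_9: (machine, pattern over jobs 1..4)
columns = [(1, [1,0,1,0]), (2, [1,1,1,0]), (1, [0,0,1,1]), (2, [0,0,1,1]),
           (1, [1,1,0,0]), (2, [1,0,0,1]), (1, [0,1,0,0]), (2, [0,1,1,1]),
           (1, [0,0,1,0])]

coefs = [floor(sum(r * p for r, p in zip(rho, pat))) for _, pat in columns]
rhs = floor(sum(rho))
print(coefs, rhs)        # [1, 1, 0, 0, 1, 1, 0, 1, 0] 1  ->  λ1+λ2+λ5+λ6+λ8 <= 1

# Violation check at the fractional point λ1 = λ6 = λ7 = λ8 = 0.5:
lam = [0.5, 0, 0, 0, 0, 0.5, 0.5, 0.5, 0]
print(sum(c * l for c, l in zip(coefs, lam)))     # 1.5 > 1, so the cut is violated
```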

Hausdorff School – Bonn 2022 Introduction to Column Generation 62 / 84


The identical subproblems case

min c 1 x 1 + c 2 x 2 + · · · + c K x K

subject to

A1 x1 + A2 x2 + · · · + AK xK = b
Dk xk = dk,        k = 1, . . . , K
xk ∈ Z+^{nk},      k = 1, . . . , K

Suppose that there are matrices A0 , D 0 , c 0 and d 0 such that


Ak = A0 , D k = D 0 , c k = c 0 and d k = d 0 , for k = 1, . . . , K . In that
case, the K pricing subproblems would be identical. Assume that
this set has Q solutions.

P 0 = {D 0 x 0 = d 0 , x 0 ≥ 0 and integer}

Hausdorff School – Bonn 2022 Introduction to Column Generation 63 / 84


The identical subproblems case

In that case, we can have a simpler Master LP Problem:

(MP) min z = Σ_{j=1}^{Q} (c^0 pj) λj

subject to

Σ_{j=1}^{Q} (A^0 pj) λj = b     (π)
Σ_{j=1}^{Q} λj = K              (ν)
λ ≥ 0
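A one-line justification of the K on the right-hand side (my own summary of the standard aggregation argument): since the subproblems are identical, columns generated for different k are interchangeable, so one can work with the aggregated variables

$$\lambda_j \;=\; \sum_{k=1}^{K} \lambda_j^k , \qquad j = 1,\dots,Q,$$

and summing the $K$ convexity constraints $\sum_{j=1}^{Q}\lambda_j^k = 1$ then gives $\sum_{j=1}^{Q}\lambda_j = K$.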

Hausdorff School – Bonn 2022 Introduction to Column Generation 64 / 84


The identical subproblems case

There is a single pricing subproblem:

(SP) min c̄ = (c^0 − πA^0) x^0 − ν

     s.t. D^0 x^0 = d^0
          x^0 ≥ 0 and integer

The restricted master LP is optimal when c̄ ≥ 0

Hausdorff School – Bonn 2022 Introduction to Column Generation 65 / 84


Cutting Stock Problem

1 Instance
Stocks of length W
Set J of items
Each item j ∈ J has length wj and a demand of bj copies
2 Problem
Obtain the demanded number of copies of each item by
cutting the minimum possible number of stocks

The particular case where all demands bj are unitary is known as


the Bin Packing problem

Hausdorff School – Bonn 2022 Introduction to Column Generation 66 / 84


IP Formulation
Assumes the existence of a heuristic upper bound M. Number the
potentially used stocks from 1 to M:
Variables
xij : determines how many items j are cut from stock i
yi : indicates whether stock i is used or not
min z = Σ_{i=1}^{M} yi                                        (9a)
S.t.  Σ_{i=1}^{M} xij = bj,             j ∈ J;                (9b)
      Σ_{j∈J} wj xij ≤ W yi,            i = 1, . . . , M;     (9c)
      xij ∈ Z+,                         i = 1, . . . , M, j ∈ J;   (9d)
      yi ∈ {0, 1},                      i = 1, . . . , M.     (9e)

Hausdorff School – Bonn 2022 Introduction to Column Generation 67 / 84


IP Formulation

Issues
A not-really compact formulation. The formulation size is
pseudo-polynomial
Even when M is not so large, branch-and-bound algorithms
perform poorly on it
its linear relaxation lower bound is equal to the trivial lower bound ( Σ_{j∈J} bj wj ) / W
it suffers from symmetry: the same fractional/integer solution can be represented in many different ways by just permuting the stock indices

Hausdorff School – Bonn 2022 Introduction to Column Generation 68 / 84


Example: Cutting Stock Problem

Stock length 100

Length Demand
40 4

35 5

31 5

13 8

Hausdorff School – Bonn 2022 Introduction to Column Generation 69 / 84


Example: Cutting Stock Problem

Stock length 100

Length Demand
40 4

35 5

31 5

13 8

Trivial lower bound = (4 × 40 + 5 × 35 + 5 × 31 + 8 × 13)/100 = 5.94


Rounding up =⇒ Lower Bound: 6 stocks
Hausdorff School – Bonn 2022 Introduction to Column Generation 69 / 84
Gilmore and Gomory (1961) Formulation

Based on the concept of cutting patterns


A cutting pattern is a possible way of cutting a stock; it is
defined by the number of copies of each item obtained from
that stock

Hausdorff School – Bonn 2022 Introduction to Column Generation 70 / 84


Example: Cutting Stock Problem

Some Possible Cutting Patterns

Hausdorff School – Bonn 2022 Introduction to Column Generation 71 / 84


Gilmore and Gomory (1961, 1963) Formulation

min z = Σ_{q=1}^{Q} λq                                   (10a)
S.t.  Σ_{q=1}^{Q} p_{qj} λq = bj,      j ∈ J    (π)      (10b)
      λq ∈ Z+,                         q = 1, . . . , Q  (10c)

Exponential number of λ variables, one for each cutting


pattern (numbered from 1 to Q)
pqj indicates how many copies of item j are obtained in the
q-th cutting pattern
Its linear relaxation can be efficiently solved by column
generation

Hausdorff School – Bonn 2022 Introduction to Column Generation 72 / 84


Gilmore and Gomory (1961, 1963) Formulation

Pricing subproblem
At each iteration, the following Integer Knapsack Problem is
solved:
min c̄ = 1 − Σ_{j∈J} πj xj
S.t.  Σ_{j∈J} wj xj ≤ W ,
      xj ∈ Z+ ,        ∀j ∈ J.

Each subproblem solution is a cutting pattern


The Integer Knapsack Problem is (weakly) NP-hard, but very
well solved in practice (see Pferschy et al. (2004))
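Putting the pieces together, the whole column generation loop for this problem can be sketched in a few dozen lines of Python: scipy's HiGHS backend for the restricted master LP (assuming it exposes the equality duals as res.eqlin.marginals, which recent scipy versions do) and a simple unbounded-knapsack DP for the pricing. This is an illustrative sketch on the example instance, not an official implementation:

```python
import numpy as np
from scipy.optimize import linprog

W = 100                          # stock length
lengths = [40, 35, 31, 13]       # item lengths
demand = [4, 5, 5, 8]
n = len(lengths)

def price(pi):
    """Unbounded knapsack: max sum(pi[j]*x[j]) s.t. sum(lengths[j]*x[j]) <= W."""
    best = [0.0] * (W + 1)
    item = [-1] * (W + 1)
    for cap in range(1, W + 1):
        for j, w in enumerate(lengths):
            if w <= cap and best[cap - w] + pi[j] > best[cap]:
                best[cap] = best[cap - w] + pi[j]
                item[cap] = j
    pattern, cap = [0] * n, W
    while item[cap] != -1:                    # recover the cutting pattern
        pattern[item[cap]] += 1
        cap -= lengths[item[cap]]
    return 1.0 - best[W], pattern             # reduced cost, pattern

# initial RMP: homogeneous patterns (as many copies of one item as fit)
patterns = [[W // w if j == i else 0 for j, w in enumerate(lengths)] for i in range(n)]

while True:
    A = np.array(patterns, dtype=float).T     # one demand row per item
    res = linprog(np.ones(len(patterns)), A_eq=A, b_eq=demand,
                  bounds=[(0, None)] * len(patterns), method="highs")
    pi = res.eqlin.marginals                  # duals of the demand rows
    rc, pattern = price(pi)
    if rc > -1e-9:                            # no negative reduced cost: (MP) solved
        break
    patterns.append(pattern)

print(round(res.fun, 2))    # expected: 6.19  ->  lower bound of 7 stocks
```

On this instance the loop reproduces the iterations shown on the next slides (up to ties in the knapsack solutions) and stops at z = 6.1875, i.e. a lower bound of 7 stocks.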

Hausdorff School – Bonn 2022 Introduction to Column Generation 73 / 84


Relation between formulations

Gilmore-Gomory Formulation can be obtained by a DW


decomposition of the symmetric formulation.

GG Formulation is stronger (its linear relaxation yields better


lower bounds) because knapsack constraints (9c) are
convexified

Hausdorff School – Bonn 2022 Introduction to Column Generation 74 / 84


Gilmore and Gomory (1961, 1963) Formulation

Remarkably strong linear relaxation bounds


It is very hard to find an instance where the (MP) solution
value rounded up is not equal to the value of an optimal
integer solution!
It is conjectured that the (MP) solution value rounded up is
at most one unit away from the value of an optimal integer
solution!!

Hausdorff School – Bonn 2022 Introduction to Column Generation 75 / 84


Example: Cutting Stock Problem
Iteration 1:
min z = λ1 + λ2 + λ3 + λ4
2λ1 = 4 (π1 = 0.50)
2λ2 = 5 (π2 = 0.50)
3λ3 = 5 (π3 = 0.33)
7λ4 = 8 (π4 = 0.14)
λ≥0

z = 7.31; λ1 = 2, λ2 = 2.5, λ3 = 1.67, λ4 = 1.14

Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84


Example: Cutting Stock Problem
Iteration 1:
min z = λ1 + λ2 + λ3 + λ4
2λ1 = 4 (π1 = 0.50)
2λ2 = 5 (π2 = 0.50)
3λ3 = 5 (π3 = 0.33)
7λ4 = 8 (π4 = 0.14)
λ≥0

z = 7.31; λ1 = 2, λ2 = 2.5, λ3 = 1.67, λ4 = 1.14


Subproblem: min c̄ = 1 − Σ_{j=1}^{n} πj xj

min c̄ = −0.5x1 − 0.5x2 − 0.33x3 − 0.14x4 + 1

S.t. 40x1 + 35x2 + 31x3 + 13x4 ≤ 100
     x ∈ Z+^4

S = (0, 2, 0, 2) and c̄ = −0.29

Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84


Example: Cutting Stock Problem
Iteration 2:
min z = λ1 + λ2 + λ3 + λ4 + λ5
2λ1 =4 (π1 = 0.50)
2λ2 + 2λ5 =5 (π2 = 0.36)
3λ3 =5 (π3 = 0.33)
7λ4 + 2λ5 =8 (π4 = 0.14)
λ≥0

z = 6.59; λ1 = 2, λ3 = 1.67, λ4 = 0.43, λ5 = 2.5


Subproblem: min c̄ = 1 − Σ_{j=1}^{n} πj xj

min c̄ = −0.5x1 − 0.36x2 − 0.33x3 − 0.14x4 + 1

S.t. 40x1 + 35x2 + 31x3 + 13x4 ≤ 100
     x ∈ Z+^4

S = (2, 0, 0, 1) and c̄ = −0.14

Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84


Example: Cutting Stock Problem
Iteration 3:
min z = λ1 + λ2 + λ3 + λ4 + λ5 + λ6
2λ1 + 2λ6 =4 (π1 = 0.43)
2λ2 + 2λ5 =5 (π2 = 0.36)
3λ3 =5 (π3 = 0.33)
7λ4 + 2λ5 + λ6 =8 (π4 = 0.14)
λ≥0

z = 6.31; λ3 = 1.67, λ4 = 0.14, λ5 = 2.5, λ6 = 2


Subproblem: min c̄ = 1 − Σ_{j=1}^{n} πj xj

min c̄ = −0.43x1 − 0.36x2 − 0.33x3 − 0.14x4 + 1

S.t. 40x1 + 35x2 + 31x3 + 13x4 ≤ 100
     x ∈ Z+^4

S = (0, 1, 0, 5) and c̄ = −0.07

Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84


Example: Cutting Stock Problem
Iteration 4:
min z = λ1 + λ2 + λ3 + λ4 + λ5 + λ6 + λ7
2λ1 + 2λ6 =4 (π1 = 0.44)
2λ2 + 2λ5 + λ7 =5 (π2 = 0.38)
3λ3 =5 (π3 = 0.33)
7λ4 + 2λ5 + λ6 + 5λ7 =8 (π4 = 0.12)
λ≥0

z = 6.29; λ3 = 1.67, λ5 = 2.38, λ6 = 2, λ7 = 0.25


Subproblem: min c̄ = 1 − Σ_{j=1}^{n} πj xj

min c̄ = −0.44x1 − 0.38x2 − 0.33x3 − 0.12x4 + 1

S.t. 40x1 + 35x2 + 31x3 + 13x4 ≤ 100
     x ∈ Z+^4

S = (0, 1, 2, 0) and c̄ = −0.04

Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84


Example: Cutting Stock Problem
Iteration 5:
min z = λ1 + λ2 + λ3 + λ4 + λ5 + λ6 + λ7 + λ8
2λ1 + 2λ6 =4 (π1 = 0.44)
2λ2 + 2λ5 + λ7 + λ8 =5 (π2 = 0.38)
3λ3 + 2λ8 =5 (π3 = 0.31)
7λ4 + 2λ5 + λ6 + 5λ7 =8 (π4 = 0.12)
λ≥0

z = 6.19; λ5 = 0.81, λ6 = 2, λ7 = 0.88, λ8 = 2.5


Subproblem: min c̄ = 1 − Σ_{j=1}^{n} πj xj

min c̄ = −0.44x1 − 0.38x2 − 0.31x3 − 0.12x4 + 1

S.t. 40x1 + 35x2 + 31x3 + 13x4 ≤ 100
     x ∈ Z+^4

S = (2, 0, 0, 1) and c̄ = 0
New lower bound: 7 stocks. But no integer solution. What to do?
Hausdorff School – Bonn 2022 Introduction to Column Generation 76 / 84
Getting an (heuristic) integer solution

Change constraints to ≥ and solve the restricted master as an IP:

min z = λ1 + λ2 + λ3 + λ4 + λ5 + λ6 + λ7 + λ8
2λ1 + 2λ6 ≥4
2λ2 + 2λ5 + λ7 + λ8 ≥5
3λ3 + 2λ8 ≥5
7λ4 + 2λ5 + λ6 + 5λ7 ≥8
λ ∈ Z+^8

λ5 = 1, λ6 = 2, λ7 = 1, λ8 = 3, z = 7
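Because this integer RMP is tiny, a brute-force check is enough to confirm the value 7 (a sketch; the eight patterns are transcribed from the iterations above, and λq is searched in 0..4, which is enough here):

```python
from itertools import product

patterns = [[2,0,0,0], [0,2,0,0], [0,0,3,0], [0,0,0,7],
            [0,2,0,2], [2,0,0,1], [0,1,0,5], [0,1,2,0]]
demand = [4, 5, 5, 8]

best = None
for lam in product(range(5), repeat=len(patterns)):      # 5^8 combinations
    if all(sum(p[j] * l for p, l in zip(patterns, lam)) >= demand[j] for j in range(4)):
        if best is None or sum(lam) < sum(best):
            best = lam

print(sum(best))   # 7 stocks; one optimal choice is λ5=1, λ6=2, λ7=1, λ8=3 as above
```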

Hausdorff School – Bonn 2022 Introduction to Column Generation 77 / 84


Example: Cutting Stock Problem
The obtained optimal solution with 7 stocks (after trimming the
surplus copies)


[figure: the seven stocks cut according to patterns λ5 (×1), λ6 (×2), λ7 (×1), λ8 (×3)]

Hausdorff School – Bonn 2022 Introduction to Column Generation 78 / 84


Getting an optimal integer solution for CSP

Somehow rounding the (RMP) solution, the idea originally proposed by Gilmore and Gomory, works reasonably well in practice, but it is not guaranteed to find an optimal integer solution
Of course, a solution obtained by any heuristic that matches the GG lower bound is proved to be optimal
Yet, only much later (mid-1990s) were Branch-and-Price algorithms for the CSP and the BPP created
The original symmetric variables are useless for branching or cutting. More complex non-robust schemes should be devised

Hausdorff School – Bonn 2022 Introduction to Column Generation 79 / 84


When should DW decomposition for IP be tried?

Some guidelines:
When DW decomp improves the linear relaxation significantly, but the subproblems are still tractable
Solving a (MP) by CG is time-consuming. This is usually only worthwhile if the resulting bounds are a lot better
This means that the subproblems should not be easy polynomial problems. The big gains are obtained exactly by convexifying constraints that define NP-hard problems
"Tractable" NP-hard problems often include those solvable in pseudo-polynomial time or those where some exact algorithm still performs well on instances of reasonable size
A delicate balance

Hausdorff School – Bonn 2022 Introduction to Column Generation 80 / 84


When should DW decomposition for IP be tried?

When DW decomp leads to many small subproblems, better if


many of them are identical
Gains in pricing time and also in CG convergence

When DW decomp removes the symmetry of a bad formulation


CSP and BPP are good examples

Hausdorff School – Bonn 2022 Introduction to Column Generation 81 / 84


Robust vs Non-Robust BCPs
Robust BCPAs can improve a lot over BP algorithms:
Robust BCP is easier to implement, if you already know
families of cuts for the problem and have their separation
algorithms
Yet, sometimes even families of facets are useless, because
they are already implied by the column generation!
No worries about destroying the pricing structure

Non-robust BCPAs can possibly do better:


In some problems all robust cuts are useless (due to a
symmetric original formulation)
Each master variable carries much more information
than an original variable. It is much easier to find strong
non-robust cuts over them!
Yet, non-robust cuts are indeed “non-robust”: a few dozen
bad cuts can make your pricing 1000x slower!
Hausdorff School – Bonn 2022 Introduction to Column Generation 82 / 84
Advanced Branch-Cut-and-Price Algorithms

The most advanced state-of-the-art BCPAs are exactly those where


the non-robust cuts and the pricing algorithm are jointly and
symbiotically designed, in such a way that the pricing can handle a
large number of very tailored non-robust cuts without becoming
too inefficient.

Hausdorff School – Bonn 2022 Introduction to Column Generation 83 / 84


Advanced Branch-Cut-and-Price Algorithms

The most advanced state-of-the-art BCPAs are exactly those where


the non-robust cuts and the pricing algorithm are jointly and
symbiotically designed, in such a way that the pricing can handle a
large number of very tailored non-robust cuts without becoming
too inefficient.

Spoiler: BCPs for Vehicle Routing Problems

Hausdorff School – Bonn 2022 Introduction to Column Generation 83 / 84


References I

Dantzig, G. B. and Wolfe, P. (1960). Decomposition principle for linear programs. Operations Research, 8(1):101–111.
Ford Jr, L. R. and Fulkerson, D. R. (1958). A suggested computation for maximal multi-commodity network flows. Management Science, 5(1):97–101.
Gilmore, P. C. and Gomory, R. E. (1961). A linear programming approach to the cutting-stock problem. Operations Research, 9(6):849–859.
Gilmore, P. C. and Gomory, R. E. (1963). A linear programming approach to the cutting stock problem — Part II. Operations Research, 11(6):863–888.
Pferschy, U., Kellerer, H., and Pisinger, D. (2004). Knapsack Problems. Springer.

Hausdorff School – Bonn 2022 Introduction to Column Generation 84 / 84
