Dual-Feasible Functions For Integer Programming and Combinatorial Optimization - Basics, Extensions and Applications
Cláudio Alves
François Clautiaux
José Valério de Carvalho
Jürgen Rietz
EURO Advanced Tutorials on Operational Research

Series editors
M. Grazia Speranza, Brescia, Italy
José Fernando Oliveira, Porto, Portugal
More information about this series at http://www.springer.com/series/13840
Cláudio Alves
Department of Production and Systems
University of Minho
Braga, Portugal

François Clautiaux
Institut de Mathématiques de Bordeaux
University of Bordeaux
Talence, France
The concept of dual-feasible function (DFF) has been used to improve the resolution
of several combinatorial optimization problems involving knapsack inequalities like
cutting and packing, scheduling, and vehicle routing problems. DFF were used for
the first time by Lueker (1983) to obtain lower bounds for the bin-packing problem.
Since then, the main application of DFF has been the computation of lower bounds,
even though other applications do exist, such as the generation of valid
inequalities for integer programs (Chvátal 1973).
For many years, DFF were seen merely as rounding functions that lead to
lower bounds for standard packing problems by changing the value of the input data.
In this tutorial, we bring a broader perspective to the subject by discussing it within
the general framework of duality. A review of the standard concepts, properties,
and instances is provided with illustrative examples. We show that many lower
bounds derived for packing problems can be expressed as DFF. We also explore
relevant extensions of standard DFF and their application to different combinatorial
optimization problems.
The link between DFF, column generation models, and the underlying Dantzig-
Wolfe decomposition is strong. The classical DFF rely on the dual perspective of the
well-known column generation model of Gilmore and Gomory for the cutting stock
problem. Many functions were proposed within this specific context. We explore
the general properties that identify the best DFF. Additionally, we describe the
general approaches that can be followed to derive new non-dominated functions. In
particular, it is shown how to derive high-quality DFF from superadditive functions
using symmetry. We show how to use them to derive concrete examples, and we
further illustrate these ideas through the analysis of some of the current DFF that
lead to the best results reported in the literature.
A first generalization of the classical DFF can be obtained by considering the general
formulation of a set covering model that is somewhat disconnected from the cutting stock
problem. Here, we analyze this generalization and explore how it affects
the general properties of classical DFF. Extending DFF to nonclassical
formulations is usually not straightforward. In this tutorial, we show how this
extension can be done for different cases including the existence of two-dimensional
rectangular items, conflicts, and general domains. Examples are provided to guide
the readers through the rationale behind the development of these extensions and to
provide the basis for future contributions.
Extensions and applications within the scope of integer and linear programming
will also be discussed. Recent developments extending DFF to the domain of
negative values are described. These generalized DFF can be used to derive valid
inequalities for integer programs from a single constraint or a set of generating
constraints.
This monograph was primarily written for graduate students and also advanced
undergraduate students in operations research/management science, computer sci-
ence, and mathematics. It requires a knowledge of linear and integer programming
and duality theory. Chapters 1 and 2 present the basic concepts and definitions
of DFF. Chapter 3 contains extensions of DFF to wider domains. Chapters 4
and 5 address applications in cutting and packing problems and in deriving valid
inequalities, respectively. Exercises are also proposed at the end of each chapter
covering the different parts of the material. Solutions or hints to a selected set of
exercises are provided at the end of the book.
The authors have been using DFF in research for many years, and these functions
have proved very useful for the efficient computation of lower bounds for many different
integer programs and combinatorial optimization problems, with good computational
results. The authors have also used them to derive valid inequalities. DFF are
problem dependent. We hope that the insight provided by this monograph may foster
research and a more widespread use of DFF in other problems. The future still holds
much to be discovered in this area.
We would like to thank the many colleagues and researchers who discussed these
issues with us and stimulated our interest and research in this area. The authors also would
like to thank Gleb Belov, Julia Bennell, Rita Macedo, and Daniel Porumbel, who
did a careful review of an earlier draft of this manuscript and helped to improve its
quality.
This work was supported by FEDER funding through the Programa Operacional
Factores de Competitividade (COMPETE) and by national funding through the
Portuguese Science and Technology Foundation (FCT) in the scope of the project
PTDC/EGE-GES/116676/2010 (Reference from COMPETE: FCOMP-01-0124-
FEDER-020430), and by FCT within the project scope UID/CEC/00319/2013.
Contents

2.4 Examples
    2.4.1 Applying Symmetry
    2.4.2 Using Rounding Functions and Applying Symmetry
    2.4.3 Improving a Function by Using Its Limiting Behaviour
    2.4.4 A Special Case: A Staircase Function with Infinitely Many Stairs
2.5 Related Literature
2.6 Exercises
3 General Dual-Feasible Functions
    3.1 Introduction
    3.2 Extension of Dual-Feasible Functions to General Domains
        3.2.1 Definition
        3.2.2 Maximality
        3.2.3 Extremality
    3.3 Applications
    3.4 Properties of Maximal General Dual-Feasible Functions
        3.4.1 Structure
        3.4.2 Behaviour at Given Points
        3.4.3 Limits of Possible Convexity
        3.4.4 Composition and Convex Combinations
    3.5 Examples
    3.6 Building Maximal General Dual-Feasible Functions
        3.6.1 Method I
        3.6.2 Method II
        3.6.3 Method III
        3.6.4 Examples
    3.7 Related Literature
    3.8 Exercises
4 Applications for Cutting and Packing Problems
    4.1 Introduction
    4.2 Set-Covering Dual-Feasible Functions
        4.2.1 Data-Dependent Dual-Feasible Functions
        4.2.2 Data-Independent Dual-Feasible Functions
        4.2.3 General Properties
    4.3 Vector Packing Dual-Feasible Functions
        4.3.1 Basic Definition
        4.3.2 General Properties of VP-MDFF
        4.3.3 General Classes of VP-MDFF
    4.4 Orthogonal Packing
        4.4.1 DFF for the Oriented Case (m-OPP-O-DFF)
        4.4.2 DFF for the Case with Rotation (m-OPP-R-DFF)
    4.5 Bin-Packing
References
Index
Acronyms
Chapter 1
Linear and Integer Programming
1.1 Introduction
Integer Programming (IP) is a modelling tool that has been widely applied over the last
decades to solve complex real-world problems, such as those that arise in cutting
and packing, location, routing and many other areas. IP models are of the form:
where P = {x^p} is the set of extreme points of X and R = {y^r} is the set of extreme
rays of X. Replacing the value of x in the original model, and rearranging the terms,
we obtain the following reformulation (or DW-model) of the problem:

    min z_DW := Σ_{p∈P} (c^T x^p) λ_p + Σ_{r∈R} (c^T y^r) μ_r
    s. to  Σ_{p∈P} λ_p (A x^p) + Σ_{r∈R} μ_r (A y^r) ≥ b
           Σ_{p∈P} λ_p = 1
           λ_p ≥ 0, ∀p ∈ P
           μ_r ≥ 0, ∀r ∈ R.

The decision variables of the reformulated problem are the variables λ_p and μ_r.
The elements c^T x^p and c^T y^r define the objective function coefficients, while
the columns Ax^p and Ay^r define the constraints of the reformulated problem.
Given that Conv{X_IP} ⊆ X_DWI ⊆ X_LP, it follows that z_IP ≥ z_DWI ≥ z_LP for
minimization problems. Note that, if the polyhedron X has the integrality property,
then Conv{x ∈ X and integer} = X, and X_DWI = {x ∈ X : Ax ≥ b, x ≥ o}. In this
case, the DWI-model is only as strong as the LP relaxation, and z_IP ≥ z_DWI = z_LP.
Example 1.1 The comparison of the sets X_LP and X_DWI in Fig. 1.1 illustrates how
DW-decomposition may provide a strong model. The domain of the IP problem is
the finite set of full dots that belong to the space of the LP relaxation, X_LP, shown in
Fig. 1.1a, delimited by the double line.
The set of constraints is decomposed into the first set of general constraints,
which is the single constraint A1 x ≤ b1, and the second set, which is the single
constraint A2 x ≤ b2 and the non-negativity constraints x ≥ o, which defines the
set X. The set X does not have the integrality property, because it has a fractional
extreme point on the x1 axis. The set Conv{x ∈ X and integer} is delimited by the
double line in Fig. 1.1b, and its extreme points are all integer.
The solution space of the reformulated model, X_DWI, is shown in Fig. 1.1c,
delimited by the double line. It is the set of points x ∈ Conv{A2 x ≤ b2, x ≥ o and integer}
that also obey the constraint in the first set. □
1.2 Dantzig-Wolfe Decomposition
Fig. 1.1 Getting a strong model with Dantzig-Wolfe decomposition. (a) The set X_LP =
{x : A1 x ≤ b1, A2 x ≤ b2, x ≥ o}. (b) The set Conv{A2 x ≤ b2, x ≥ o and integer}. (c) The set
X_DWI = {x : A1 x ≤ b1, x ∈ Conv{A2 x ≤ b2, x ≥ o and integer}}
are x^1 = (0, 0)^T, x^2 = (2, 0)^T and x^3 = (0, 4)^T, and there are no extreme
rays, because X is a bounded set. Therefore, [c^T x^1, c^T x^2, c^T x^3] = [0, 2, 4] and
[Ax^1, Ax^2, Ax^3] = [0, 2, 12]. The DWI-model resulting from this decomposition
is:

The (λ1, λ2, λ3) coordinates of the extreme points of X_DWI are (1, 0, 0), (0, 1, 0),
(1/2, 0, 1/2) and (0, 3/5, 2/5), respectively. The last extreme point, which is the optimal
solution, maps to the solution in the original space x = 0·x^1 + (3/5)·x^2 + (2/5)·x^3 =
(6/5, 8/5)^T. □
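The mapping from the λ-coordinates back to the original space can be checked directly; the following sketch (plain Python with exact rationals, variable names ours) reproduces the computation for the extreme points listed above:

```python
from fractions import Fraction as F

# Extreme points x^1, x^2, x^3 of X from the example above, and the
# lambda-coordinates of the optimal extreme point of X_DWI.
X = [(F(0), F(0)), (F(2), F(0)), (F(0), F(4))]
lam = (F(0), F(3, 5), F(2, 5))

# x = sum_p lambda_p * x^p maps the point back to the original space.
x = tuple(sum(l * xp[d] for l, xp in zip(lam, X)) for d in range(2))
print(x)   # x = (6/5, 8/5)
```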
Given a set of m ∈ ℕ different item lengths ℓ_i (i ∈ {1, …, m}) to be cut from stock
rolls of length L > 0, the 1-dimensional cutting stock problem (1D-CSP) consists
of finding how to cut these items such that the number of used rolls is minimized.
The Gilmore and Gomory model for the Cutting Stock Problem is as follows:

    min z := Σ_{j∈J} x_j    (1.5)
    s. to  Ax ≥ b,    (1.6)
           x_j ∈ ℕ, ∀j ∈ J,    (1.7)

where b ∈ ℕ^m describes the order demands of the items, and J is the index set of
all feasible patterns a^j, which form the matrix A. The quantity of the j-th pattern to
be used is x_j. A pattern a^j ∈ ℕ^m is feasible if and only if

    ℓ^T a^j ≤ L,    (1.8)

i.e., the total sum of the lengths of all the items to be cut in the corresponding
quantity does not exceed the length of the roll L.
The dual of the continuous relaxation of (1.5)–(1.7) is

    max z_D := b^T u    (1.9)
    s. to  (a^j)^T u ≤ 1, ∀j ∈ J,    (1.10)
           u ≥ o.    (1.11)

Both the primal and the dual problem are solvable if no item is longer than the
length of the stock rolls. Due to the strong duality theorem, the optimal objective
function values of the continuous relaxation of (1.5)–(1.7) and its dual (1.9)–(1.11)
are the same, and for all feasible solutions of both problems, one has z ≥ z_D. Note
that, because of the constraints (1.10) and (1.11), it follows that u_i ≤ 1 for all i.
The solutions of the Gilmore and Gomory model result from a non-negative
combination of columns. It can be shown that this structure results from a DW-
decomposition of an original arc-flow model, whose solutions are extreme rays
that can be associated with solutions of a knapsack problem (see Sects. 1.6 and 1.7).
Therefore, the convexity constraint is not needed.
Example 1.3 Consider a 1D-CSP instance with rolls of length 8 and items of
lengths 4, 3 and 2, with order demands of 5, 4 and 8, respectively. A model is as
follows:

                    Cutting patterns
    L = 8        x1   x2   x3   x4   x5   x6   Demand b_i
    ℓ_i = 4       2    1    1                     5
          3                 1    2    1           4
          2            2         1    2    4      8
    min           1    1    1    1    1    1
Each column describes the number of items of each length produced in a cutting
pattern. A feasible cutting plan is a combination of cutting patterns that satisfies the
demand. The objective is to minimize the number of rolls used.
The structure of the model is appealing. It is a covering model, in which one has
to select a set of columns that cover the demand. If the values of the demands were
all equal to one, clearly the feasible cutting patterns should only have one item of
each type. In this special case, the model would be a set covering model. □
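The six columns above are precisely the maximal feasible patterns for this instance. As a quick sanity check, they can be enumerated by brute force (a sketch in plain Python; the helper names are ours):

```python
from itertools import product

L = 8
lengths = [4, 3, 2]   # item lengths l_i

# Enumerate all feasible patterns a = (a_1, a_2, a_3) with sum a_i * l_i <= L.
feasible = [a for a in product(*(range(L // l + 1) for l in lengths))
            if sum(ai * li for ai, li in zip(a, lengths)) <= L]

def is_maximal(a):
    # Maximal: no further item of any length fits into the leftover capacity.
    used = sum(ai * li for ai, li in zip(a, lengths))
    return all(used + li > L for li in lengths)

maximal = sorted(a for a in feasible if is_maximal(a))
print(len(maximal), maximal)   # 6 patterns, matching the table columns
```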
Consider the problem of building a feasible schedule for a set of parallel non-
identical machines. A plan for a machine is feasible if the jobs assigned to the
machine can be executed within a given time slot. Each machine has its own set of
feasible plans, and exactly one plan must be selected for each machine. Each plan
for a given machine represents an extreme point of the solution set of that machine.
Therefore, a convexity constraint is needed to select just one plan for each machine.
The model that results from DW-decomposition is a set partitioning model (the
derivation is left as an exercise). The columns represent the feasible plans for the
machines. The model has a set of constraints for the jobs, which indicate that each
job is executed once, and a set of constraints for the machines, which are convexity
constraints.
Example 1.4 Consider an example with four jobs and two machines. Let yik be a
decision variable that represents a feasible plan, indexed by k; that assigns a set of
jobs to machine i: Each feasible plan for a machine has a cost, and the objective is
to minimize the total cost.
              y10 y11 y12 y13 y14 y15 y20 y21 y22 y23 y24 y25
Job 1          1 1 1 1 1 1                                     = 1
    2          1 1 1 1 1                                       = 1
    3          1 1 1 1                                         = 1
    4          1 1 1 1 1                                       = 1
Machine 1      1 1 1 1 1 1                                     = 1
        2                              1 1 1 1 1 1             = 1
min            0 27 25 24 22 21 0 12 16 14 10 14
Note that each machine has a schedule that is the null solution, meaning that
the machine is idle in the plan. Clearly, the machine constraints might be replaced
by ≤ constraints, because the idle machine columns are slack variables for those
constraints. □
The DWI-model may be stronger, but it comes at a price. The first issue is that
there is generally an exponential number of extreme points and extreme rays in the
set Conv{x ∈ X and integer}, and the second is that to find them it is necessary to
solve an integer optimization problem. To overcome the first issue, DWI-models are
solved in practice using column generation algorithms, which only pick attractive
variables.
The solution of DWI-models with column generation algorithms is not a central
topic in this monograph. Nevertheless, it is described succinctly as follows. The
DWI-model has a master problem, defined by the first set of constraints in the DW-
decomposition. The column generation algorithm starts with a restricted master
problem, a model that has a restricted set of variables, which is optimized. The
dual information from the restricted master problem is used in one or several
subproblems to find the most attractive column to be inserted in the restricted master
problem, which is then re-optimized. The iterative algorithm is repeated until no
more attractive columns are found, yielding a solution that is provably optimal.
In each iteration, the optimal solution of the subproblem, which is stated in terms
of the original variables x, is an extreme point or an extreme ray of the set Conv{x ∈
X and integer}. Therefore, the subproblem is an integer optimization problem. This
second issue may not be too hard to overcome. For instance, in the Cutting Stock
1.4 Duality and Bounds from Dual Feasible Solutions
The Weak Duality theorem states that, given a primal feasible solution x̂ and
a dual feasible solution û, then b^T û ≤ z_D ≤ z ≤ c^T x̂. It follows that any feasible
solution û to the dual problem provides a lower bound, b^T û, on the optimal solution
of the primal problem, z. On the other hand, the Strong Duality theorem states that
if the optima of the two problems are finite, then z_D = z.
Let us analyze what happens when we consider two primal models, one weak
and one strong. For instance, recall that the primal minimization DWI-model that is
stronger than the LP relaxation model yields a better optimum solution, i.e., z_LP ≤
z_DWI ≤ z_IP. According to the Strong Duality theorem, the optimal value of the dual
maximization problem of the DWI-model will also be greater than or equal to the
optimal value of the LP relaxation model. Therefore, one may expect to find feasible
solutions of the dual maximization problem of the DWI-model that have an objective
function value that is greater than or equal to z_LP.
Recall that we aim at finding feasible solutions of the dual models of strong
DWI-models without enumerating any of the (exponentially many) columns that
correspond to all the extreme solutions of the sets in the subproblem(s). Instead,
by analyzing the structure of the dual of the DWI-models, we aim at deriving
functions that provide dual feasible solutions that obey all the (exponentially many)
constraints.
1.5 Examples
Two examples are presented below, illustrating that feasible solutions of the dual of
a DWI-model can provide lower bounds that are better than trivial lower bounds.
In Sect. 2.1, for the cutting stock problem, and in Sect. 4.3.3, for the vector packing
problem, we will see how these dual feasible solutions are derived from DFF suited
to each problem.
A trivial lower bound for the 1D-CSP results from calculating the minimum number
of rolls of size L that are needed to place the sum of the sizes of all items:

    LB_T = ⌈ Σ_{i=1}^{m} b_i ℓ_i / L ⌉.
The dual polytope of the Gilmore and Gomory model may have dual feasible
solutions that provide better lower bounds. Recall that the dual polytope has all the
(exponentially many) constraints that correspond to all feasible cutting patterns. As
we will see, only maximal patterns (in which there is no room for any other item)
are needed to define the dual polytope.
Example 1.6 A company has to deliver ten items with weight 0.4 and 40 items
with weight 0.3. Each vehicle can carry a weight of 1. The lower bound LB_T =
⌈(10 · 0.4 + 40 · 0.3)/1⌉ = 16, meaning that the sum of all weights is 16, and so,
at least, 16 vehicles are needed. However, the optimal solution requires 17 vehicles.
One optimal solution is having ten vehicles carrying items of sizes 0.4, 0.3 and 0.3,
six vehicles carrying three items of size 0.3, and one vehicle carrying two items of
size 0.3.
The set of all maximal feasible cutting patterns is J_M = {(a1, a2) ∈ ℕ² : 0.4 a1 +
0.3 a2 ≤ 1} = {(2, 0), (1, 2), (0, 3)} ⊆ J. The pair of primal-dual problems is as
follows:
[Figure: the dual feasible region in the (u1, u2) plane, with labelled points O, A, B, C and D, and tick marks at 1/3 and 1/2 on the axes.]
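Since this dual lives in only two dimensions, the best dual bound can be found by brute-forcing the vertices of the dual polytope. Below is a sketch with exact rational arithmetic (the constraint encoding and helper names are ours): the demands give the dual objective 10 u1 + 40 u2, and each maximal pattern contributes one constraint.

```python
from fractions import Fraction as F
from itertools import combinations

# One constraint a1*u1 + a2*u2 <= 1 per maximal pattern (2,0), (1,2), (0,3),
# plus nonnegativity written as -u_i <= 0.
cons = [((F(2), F(0)), F(1)),
        ((F(1), F(2)), F(1)),
        ((F(0), F(3)), F(1)),
        ((F(-1), F(0)), F(0)),
        ((F(0), F(-1)), F(0))]

def feasible(u):
    return all(a[0] * u[0] + a[1] * u[1] <= b for a, b in cons)

# A vertex of a 2-D polytope is the intersection of two tight constraints.
best = F(0)
for (a1, b1), (a2, b2) in combinations(cons, 2):
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if det == 0:
        continue
    u = ((b1 * a2[1] - b2 * a1[1]) / det, (a1[0] * b2 - a2[0] * b1) / det)
    if feasible(u):
        best = max(best, 10 * u[0] + 40 * u[1])

print(best)   # 50/3
```

The bound 50/3 ≈ 16.67 rounds up to 17, matching the optimal number of vehicles, whereas LB_T only gives 16.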
In the m-dimensional vector packing problem (mD-VPP), with m ∈ ℕ∖{0, 1}, items
with m independent dimensions (for instance, volume and weight in 2-dimensional
problems) have to be packed into a minimum number of larger objects, which are
m-dimensional bins. There are m capacity constraints, one for each dimension of
the problem, i.e., the sum of the lengths of all packed items must not exceed the
bin size in any of the m directions. The m-dimensional bins are all equal and have
lengths L_d, d = 1, …, m, and there are n ∈ ℕ different items with lengths ℓ_id
(i = 1, …, n; d = 1, …, m).
A trivial lower bound for the m-dimensional Vector Packing Problem, which
amounts to applying the trivial lower bound for the 1D-CSP to all the dimensions
and then taking the best value, is:
    L_VPP = max_{d=1,…,m} ⌈ Σ_{i=1}^{n} ℓ_id / L_d ⌉.    (1.15)
A DWI-model for the mD-VPP is similar to the Gilmore and Gomory model for
the 1D-CSP (1.5)–(1.7), but the packing a^j ∈ ℕ^n is feasible if and only if

    Σ_{i=1}^{n} a_i^j ℓ_id ≤ L_d,  d = 1, …, m,    (1.16)
i.e., for each dimension, the sum of the lengths of the items packed in a bin does not
exceed the length of the bin. Again, the dual polytope has dual feasible solutions
that provide better lower bounds.
Example 1.7 A company has vehicles that can carry a volume of 4 and a weight of
5, and has to deliver four items with the following volumes and weights:
Item 1 2 3 4 Vehicle
Volume 2 3 1 2 4
Weight 3 2 4 1 5
    Packings
    (v, w)   item    x1   x2   x3
    (2, 3)    1       1
    (3, 2)    2            1
    (1, 4)    3                 1
    (2, 1)    4       1         1
    (4, 5)

    min z := x1 + x2 + x3            max z_D := u1 + u2 + u3 + u4
    s. to  x1 ≥ 1                    s. to  u1 + u4 ≤ 1
           x2 ≥ 1                           u2 ≤ 1
           x3 ≥ 1                           u3 + u4 ≤ 1
           x1 + x3 ≥ 1                      u1, u2, u3, u4 ≥ 0.
           x1, x2, x3 ≥ 0,
    (Primal)                         (Dual)
Some feasible solutions of the dual space are û^1 = (1/2, 1, 0, 1/2)^T, û^2 =
(1, 1, 0, 0)^T, û^3 = (1, 1, 1, 0)^T and û^4 = (1, 0, 1, 0)^T. The dual solution û^3 is the
one that provides the best lower bound, equal to 3, for this 2D-VPP instance. □
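Both the trivial bound (1.15) and the dual bounds above are easy to verify computationally; a small sketch (plain Python, variable names ours):

```python
import math

# Example 1.7 data: per-item (volume, weight) and bin capacities (4, 5).
items = [(2, 3), (3, 2), (1, 4), (2, 1)]
bins = (4, 5)

# Trivial bound (1.15): best single-dimension rounding bound.
L_VPP = max(math.ceil(sum(it[d] for it in items) / bins[d]) for d in range(2))

# Candidate dual solutions; with b = (1, 1, 1, 1) the bound is sum(u).
duals = [(0.5, 1, 0, 0.5), (1, 1, 0, 0), (1, 1, 1, 0), (1, 0, 1, 0)]
# Dual constraints, one per packing: u1+u4 <= 1, u2 <= 1, u3+u4 <= 1.
ok = [u for u in duals if u[0] + u[3] <= 1 and u[1] <= 1 and u[2] + u[3] <= 1]
best = max(sum(u) for u in ok)

print(L_VPP, best)   # the dual bound 3 beats the trivial bound 2
```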
1.7 Exercises
represents the subproblem is shown in the figure (some arcs can be discarded
and are not represented). Build the corresponding Gilmore and Gomory
model.
[Figure: arc-flow network over the nodes 0, 1, …, 8.]
5. Given rolls of the same size L, each treated separately and indexed by k, k ∈ K,
where K is a set of rolls that are sufficient to pack all the items, and clients with
demands of d_i items of sizes ℓ_i, 0 < ℓ_i ≤ L, i ∈ I, the cutting stock problem can be
modelled using integer decision variables x_ik, which represent the number of times
item i is cut from roll k, and binary decision variables y_k, with y_k = 1, if roll k is
used, and 0, otherwise. The model is as follows:
    min z := Σ_{k∈K} y_k    (1.26)
    s. to  Σ_{k∈K} x_ik ≥ d_i, ∀i ∈ I,    (1.27)
           Σ_{i∈I} ℓ_i x_ik ≤ L y_k, ∀k ∈ K,    (1.28)
The objective is to cut the minimum number of rolls to satisfy demand, and the
first set of constraints enforces that the demand is satisfied, while the second imposes
that the sum of the lengths of the items placed in one roll cannot exceed a function
that takes the value of the length of the roll when the roll is used, and the value 0,
otherwise.
(a) Apply a DW-decomposition to this model with a block angular structure,
treating each roll as a separate entity, keeping (1.27) in the master problem,
and each knapsack constraint of the set (1.28), together with (1.29)–(1.30),
as a separate subproblem.
(b) What is the meaning of the convexity constraint for each roll k in the
    reformulated model?
(c) Noting that all the rolls have equal size and their cutting stock patterns
are identical, simplify the resulting DW-model, dropping the convexity
constraints, to obtain the Gilmore and Gomory model.
Chapter 2
Classical Dual-Feasible Functions
2.1 Introduction
Dual-feasible functions (DFF) have been used to improve the resolution of different
combinatorial optimization problems with knapsack inequalities, including cutting
and packing, scheduling and network routing problems. They were used mainly to
compute algorithmic lower bounds, but also to generate valid inequalities for integer
programs. For a long time, the literature concerning these two applications
of DFF was somewhat disconnected. Functions defined for lower bounding were
often referred to as dual-feasible, whereas the functions used to strengthen integer
programming models were referred to as superadditive and nondecreasing. The
relationship between these two families of functions is that the latter is a dominant
family of DFF. Other designations are also used, such as "redundant function"
in the context of scheduling problems. These functions are in fact discrete DFF.
A dual-feasible function is defined formally as follows.
Definition 2.1 A function f: [0, 1] → [0, 1] is a dual-feasible function, if for any
finite index set I of nonnegative real numbers x_i ∈ ℝ₊, i ∈ I, it holds that

    Σ_{i∈I} x_i ≤ 1  ⟹  Σ_{i∈I} f(x_i) ≤ 1.

This implies immediately f(0) = 0 and f(x) ≤ 1/⌊1/x⌋ for all x ∈ (0, 1].
Example 2.1 Figure 2.1 shows a parameter-dependent staircase function f: [0, 1]
→ [0, 1], defined as

    f(x) := ⌊Cx⌋ / ⌊C⌋,    (2.1)

for several parameter values C ≥ 1.

[Fig. 2.1 Dual-feasible function (2.1) for parameter values C ∈ {12/11, 12/7, 36/7, 60/11}]

Black filled circles mean that the point belongs
to the graph of the function, while white circles exclude that point. For instance,
f(⌊C⌋/C) = 1, but f(x) < 1 for all x ∈ [0, ⌊C⌋/C). For the sake of conciseness,
parameters are omitted wherever this is appropriate.
The function f is dual-feasible for any real parameter C ≥ 1, as one can see as
follows. For any finite set I and numbers x_i ∈ [0, 1] with Σ_{i∈I} x_i ≤ 1, one gets the
estimation

    s := ⌊C⌋ Σ_{i∈I} f(x_i) = Σ_{i∈I} ⌊C x_i⌋ ≤ Σ_{i∈I} C x_i ≤ C.

Since s is a sum of integers, it follows that s ≤ ⌊C⌋, and hence Σ_{i∈I} f(x_i) ≤ 1.
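The argument above can also be checked numerically; the sketch below (exact rational arithmetic, our own helper names) tests the defining implication of Definition 2.1 for the staircase function (2.1) at the parameter values of Fig. 2.1:

```python
import math
import random
from fractions import Fraction as F

def f(x, C):
    # Staircase function (2.1): f(x) = floor(C*x) / floor(C), with C >= 1.
    return F(math.floor(C * x), math.floor(C))

random.seed(0)
for C in (F(12, 11), F(12, 7), F(36, 7), F(60, 11)):
    assert f(F(math.floor(C)) / C, C) == 1      # f(floor(C)/C) = 1
    for _ in range(500):
        xs = [F(random.randint(0, 100), 100) for _ in range(random.randint(1, 6))]
        if sum(xs) <= 1:                        # premise of Definition 2.1
            assert sum(f(x, C) for x in xs) <= 1
print("the DFF property of (2.1) holds on all sampled sets")
```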
    f(x) := d0 · g(x/d).

This function is a DFF for any k ∈ ℕ with k > 1. Hence, d0 := k − 1 yields the
discrete DFF

    f(x) := max{0, ((d0 + 1)/d) · x − 1}.
The term dual-feasible comes from the alternative definition of these functions,
which relies on the dual formulation of the well-known Gilmore and Gomory model
for the cutting stock problem. In this context, a function f is said to be dual-feasible
if it maps each x (the size of an item) to its corresponding value in a valid dual
solution of the cutting stock problem. If f is a DFF, then assigning u_i := f(ℓ_i/L)
for all i ∈ {1, …, m} yields a feasible solution for the dual problem (1.9)–(1.11),
because (1.8) and the definition of a DFF ensure the validity of (1.10) and (1.11).
Example 2.3 Recall Example 1.6, p. 12, with items of sizes 0.4 and 0.3. Using the
DFF g defined in (2.2), and shown in Fig. 2.3 for different values of the parameter k,
the sizes of the items are mapped into the values indicated in the following
table:

[Fig. 2.3: the functions g_2, g_3 and g_4]

Note that the DFF g_2, g_3 and g_4 provide the dual feasible solutions

    O = (0, 0),  D = (1/2, 0)  and  B = (1/3, 1/3),

for k > 1. □
Hence, dual-feasible functions lead to valid lower bounds for the cutting stock
problem, and there is always one such function whose corresponding bound is as
strong as the continuous bound achieved with the Gilmore and Gomory model (1.5)–
(1.7) presented in Sect. 1.3.1, because there is a DFF f, such that b^T u = Σ_{i=1}^{m} b_i
f(ℓ_i/L) equals the optimal objective function value of the continuous relaxation
of (1.5)–(1.7). To see this, assume without loss of generality that ℓ_i ≠ ℓ_j for all
i ≠ j, and let û be an optimal solution of (1.9)–(1.11). A suitable DFF f is obtained
by setting f(x) := û_i for x = ℓ_i/L (i = 1, …, m) and f(x) := 0 for all the remaining
points, i.e., f(x) := 0 for all x ∈ [0, 1] ∖ {ℓ_i/L | i ∈ {1, …, m}}.
2.2 Properties
2.2.1 Maximality
Despite the large number of dual-feasible functions that may be defined, only those
that are non-dominated are interesting since they yield the best lower bounds and
for any finite index set I of nonnegative real numbers x_i with Σ_{i∈I} x_i ≤ 1. □
where frac(·) denotes the non-integer part of its argument, i.e. frac(C) ≡ C − ⌊C⌋.
Function f_BJ,1 is continuous and piecewise linear for all C, as illustrated in Fig. 2.4.
If C ∈ ℕ, then f_BJ,1 becomes the identity function. Otherwise, there are ⌈C⌉
intervals without a slope, and the slope in the other intervals increases with frac(C).
Note that, without the max-expression, the non-maximal staircase DFF f:
[0, 1] → [0, 1], defined in formula (2.1), would be obtained. In the open intervals
[Fig. 2.4 Maximal dual-feasible function f_BJ,1 for parameter values C ∈ {12/11, 12/7, 36/7, 60/11}]
where f_BJ,1 is strictly monotone, one has f(x) < f_BJ,1(x). Only outside these intervals
does f(x) = f_BJ,1(x) hold. □
Several properties characterize MDFFs. They have to be nondecreasing, superadditive, and also symmetric, as stated formally in the following theorem.

Theorem 2.1 A function $f: [0,1] \to [0,1]$ is a MDFF if and only if the following conditions hold:
1. $f$ is monotonely increasing, i.e. $f(x) \le f(y)$ if $x \le y$;
2. $f$ is superadditive;
3. $f$ is symmetric in the sense $f(x) + f(1-x) = 1$, $\forall x \in [0,1]$;
4. $f(0) = 0$.
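Before attempting a formal proof, the four conditions can be spot-checked numerically on a rational grid. The sketch below does this with exact arithmetic; `f_bj1` restates the Burdett–Johnson function as we read formula (2.4), so its exact form should be treated as an assumption:

```python
from fractions import Fraction as F
from math import floor

def f_bj1(x, C):
    """Burdett-Johnson function f_BJ,1 (form assumed from (2.4))."""
    if C.denominator == 1:                 # C integer: identity function
        return x
    fr = lambda t: t - floor(t)            # frac(.) = non-integer part
    bump = max(F(0), (fr(C * x) - fr(C)) / (1 - fr(C)))
    return (floor(C * x) + bump) / floor(C)

def is_mdff_on_grid(f, n=60):
    """Check the four conditions of Theorem 2.1 on the grid {0, 1/n, ..., 1}."""
    xs = [F(i, n) for i in range(n + 1)]
    return (f(F(0)) == 0
            and all(f(a) <= f(b) for a, b in zip(xs, xs[1:]))       # monotone
            and all(f(x) + f(1 - x) == 1 for x in xs)               # symmetric
            and all(f(x) + f(y) <= f(x + y)                         # superadditive
                    for x in xs for y in xs if x + y <= 1))

assert is_mdff_on_grid(lambda x: x)                     # identity is a MDFF
assert is_mdff_on_grid(lambda x: f_bj1(x, F(12, 7)))
assert not is_mdff_on_grid(lambda x: x / 2)             # fails symmetry
```

Passing such a grid check is of course necessary but not sufficient; Theorem 2.2 below is what reduces the actual proof burden.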
Some of these conditions imply others. Therefore, to prove that a given function is
a MDFF, much weaker prerequisites are in fact sufficient.
Theorem 2.2 A function $f: [0,1] \to \mathbb{R}_+$ fulfilling the following conditions is a MDFF:

$$f(0) = 0; \qquad (2.5)$$

$$f(x) + f(1-x) = 1, \quad \forall x \in [0, 1/2]; \qquad (2.6)$$

$$f(x_1 + x_2) \ge f(x_1) + f(x_2), \quad \forall x_1, x_2 \text{ with } 0 < x_1 \le x_2 < 1/2 \text{ and } x_1 + x_2 \le 2/3. \qquad (2.7)$$
Note that to prove that a real function $f$ is a MDFF using this theorem, it is not necessary to prove that $f(x) \le 1$ for all $x \in [0,1]$. Indeed, this follows from $f(x) \ge 0$ and the symmetry condition (2.6). Therefore, the range of $f$ is only required to be part of $\mathbb{R}_+$.
When verifying whether a given function $f$ is a MDFF, the main difficulty in the application of Theorems 2.1 and 2.2 is usually related to the test of its superadditivity. An approach for this test consists in resorting to the following function $g: (0, 1/2)^2 \to \mathbb{R}$:

$$g(x_1, x_2) := f(x_1 + x_2) - f(x_1) - f(x_2). \qquad (2.8)$$

The function $f$ obeys the superadditivity condition (2.7) if and only if $g(x_1, x_2) \ge 0$ for all $x_1, x_2$ according to (2.7). To check this, the extreme points of $g$ can be sought. If $g$ is differentiable, then only the critical points, i.e. those with $\nabla g(x_1, x_2) = o$, may be extreme points. If $f$ is differentiable only in some smaller intervals inside $(0,1)$, then analyzing the function (2.8) also requires checking separately the points where $f'(x_1)$, $f'(x_2)$ or $f'(x_1 + x_2)$ does not exist.

Lemma 2.1 If the function $f: [0,1] \to [0,1]$ fulfils the symmetry condition (2.6), and if it is differentiable in the interval $(0,1)$, then the point $\left(\frac{1}{3}, \frac{1}{3}\right)$ is a critical point for $g(x_1, x_2)$ in (2.8).
The superadditivity of these functions (not necessarily bounded to domain and range $[0,1]$) is established through the following lemma.

Lemma 2.2 Let $b > 0$ be a constant. If a function $f: [0, b] \to \mathbb{R}$ is convex on $[0, b]$ and if $f(0) \le 0$, then $f$ is superadditive.

As a corollary, we obtain the following result that allows for the simple identification of many MDFFs.

Lemma 2.3 If $f: [0,1] \to [0,1]$ with $f(0) = 0$ fulfills the symmetry condition (2.6), and if it is convex on $[0, 1/2]$, then $f$ is a MDFF.

Using Lemma 2.3, it is easy to see for $C \le 2$ that the function $f_{BJ,1}(x; C)$ in (2.4) is a MDFF.
2.2.3 Extremality
To quickly get the strongest bounds and inequalities from dual-feasible functions, maximality is not enough. Consider for example the case of the 1D-CSP as defined in Sect. 1.3.1. If we are given three MDFF $f, g, h: [0,1] \to [0,1]$ such that $2f(x) = g(x) + h(x)$ for all $x \in [0,1]$ (with $x_i = \ell_i/L$, $i = 1, \dots, m$), then the lower bound obtained with these dual-feasible functions equals

$$\left( \sum_{i=1}^m b_i\, g(x_i) + \sum_{i=1}^m b_i\, h(x_i) \right) \Big/\, 2,$$
and hence

$$\sum_{i=1}^m b_i\, f(x_i) \le \max\left\{ \sum_{i=1}^m b_i\, g(x_i),\; \sum_{i=1}^m b_i\, h(x_i) \right\}.$$
In that case, either $g$ or $h$ leads to a bound that is not worse than the one obtained with $f$. Therefore, in order to avoid unnecessary calculations, the use of $f$ is superfluous, and it is important in practice to know, apart from maximality, whether the solutions obtained with a dual-feasible function are always dominated or achieved by another DFF.

Given a convex set $S \neq \emptyset$, a point $e \in S$ is an extreme point if $2e = s_1 + s_2$ with $s_1, s_2 \in S$ implies $e = s_1 = s_2$. A similar definition can be adopted for dual-feasible functions.
Definition 2.6 A MDFF $f$ is an extreme maximal dual-feasible function (EMDFF) if for any MDFF $g, h$ with $2f = g + h$, it follows that $f \equiv g$.

For any non-extreme MDFF $f$, and any finite set $I$ and $x_i \in [0,1]$ with $i \in I$, there is another MDFF $g$ such that

$$\sum_{i \in I} g(x_i) \ge \sum_{i \in I} f(x_i),$$

and either both summands equal $\sum_{i \in I} f(x_i)$, or one is larger. Clearly, it makes no sense to analyze the case of non-maximal DFFs, since they are dominated by at least one MDFF by definition.
Example 2.5 Consider the following MDFF $f_{CCM,1}(x; C)$ illustrated in Fig. 2.5:

$$f_{CCM,1}(x; C) := \begin{cases} \lfloor Cx \rfloor / \lfloor C \rfloor, & \text{if } 0 \le x < 1/2, \\ 1/2, & \text{if } x = 1/2, \\ 1 - f_{CCM,1}(1-x; C), & \text{if } 1/2 < x \le 1. \end{cases} \qquad (2.9)$$
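Formula (2.9) transcribes directly into exact rational arithmetic, which makes symmetry and superadditivity easy to spot-check on a grid (a numeric illustration, not a proof):

```python
from fractions import Fraction as F
from math import floor

def f_ccm1(x, C):
    """f_CCM,1 from (2.9); x and C given as exact Fractions."""
    if x < F(1, 2):
        return F(floor(C * x), floor(C))
    if x == F(1, 2):
        return F(1, 2)
    return 1 - f_ccm1(1 - x, C)

C = F(5, 2)
xs = [F(i, 20) for i in range(21)]
assert f_ccm1(F(0), C) == 0
assert all(f_ccm1(x, C) + f_ccm1(1 - x, C) == 1 for x in xs)      # symmetry
assert all(f_ccm1(x, C) + f_ccm1(y, C) <= f_ccm1(x + y, C)
           for x in xs for y in xs if x + y <= 1)                  # superadditivity
```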
This function is extreme for all the feasible values of its parameter. As an example, we provide the proof for $1 \le C < 3$. Let $g, h: [0,1] \to [0,1]$ be MDFF
Fig. 2.5 MDFF $f_{CCM,1}$ for parameter values $1 \le C \le 2$, $C = 36/7$ and $C = 60/11$
with $2f_{CCM,1} = g + h$. Because of the symmetry condition (2.6), we only need to show that $g(x) = h(x)$ for all $x \in (0, 1/2)$: if $Cx < 1$, then $f_{CCM,1}(x) = 0$, and hence $g(x) = h(x) = 0$ due to the range of $g$ and $h$; if $2 < C < 3$ and $1/C \le x < 1/2$, then $f_{CCM,1}(x) = 1/2$. Since $g(x), h(x) \le 1/2$ for $x \le 1/2$, it follows that $g(x) = h(x) = 1/2$ for $1/C \le x < 1/2$. □
As discussed above, a non-extreme MDFF should not be used to obtain bounds, because it would yield only a convex combination. However, even an extreme MDFF need not yield corners of the dual polyhedron.
Example 2.6 The identity function is an EMDFF. Consider the one-dimensional cutting stock instance in which $b \in \mathbb{N}$ items of length 5 have to be cut from initial material of length 9. The identity function yields the bound $\frac{5}{9}b$, corresponding to the dual variable $u := 5/9$. However, any $u \in [0,1]$ would have been a feasible dual value, and $5/9$ lies in the interior of this region. □
that $g, h$ remain convex on $[0, 1/2]$. We will have to choose a sufficiently small $\epsilon > 0$. Define the functions $l: \mathbb{R} \to [0, 2]$ and $g, h: [0,1] \to [0,1]$ according to

$$l(x) := \begin{cases} 1 - \cos(2\pi x), & \text{if } 0 < x < 1, \\ 0, & \text{otherwise;} \end{cases}$$

$$g(x) := \begin{cases} f(x) + \epsilon\, l\!\left(\tfrac{x-a}{b-a}\right), & \text{if } 0 \le x \le 1/2, \\ 1 - g(1-x), & \text{otherwise;} \end{cases} \qquad h(x) := 2f(x) - g(x).$$

The function $l$ is once continuously differentiable. It follows that $f(x) = g(x) = h(x)$ for all $x \in [0,1] \setminus (a,b)$, and that all these functions are twice continuously differentiable in $(a,b)$ and once in a neighbourhood $U \subseteq (0, 1/2)$ of the closed interval $[a,b]$. Since $g$ and $h$ are symmetric, it remains to show, according to Lemma 2.3, that $g'$ and $h'$ are monotonely increasing on $[a,b]$ and that $f \not\equiv g$. One obtains inside the interval $(a,b)$ the following derivatives:

$$g'(x) = f'(x) + \frac{2\pi\epsilon}{b-a}\, \sin\!\left(\frac{2\pi(x-a)}{b-a}\right);$$

$$g''(x) = f''(x) + \epsilon\left(\frac{2\pi}{b-a}\right)^{\!2} \cos\!\left(\frac{2\pi(x-a)}{b-a}\right); \qquad h''(x) = 2f''(x) - g''(x)$$
Proof
(a) Let $a_0 := a$ and $b_0 := b$. For arbitrary $\xi \in [a,b]$ it will be shown that $f(\xi) = \frac{f(b)-f(a)}{b-a}(\xi - a) + f(a)$. Clearly, this holds for $\xi \in \{a, b\}$. Assume that for some $x, y \in [a,b]$ a similar equation holds. Since $f(x+y) = 2\,f\!\left(\frac{x+y}{2}\right)$ due to the prerequisites, it follows that

$$f\!\left(\frac{x+y}{2}\right) = \frac{f(x)+f(y)}{2} = \frac{f(b)-f(a)}{2(b-a)}\,(x - a + y - a) + f(a) = \frac{f(b)-f(a)}{b-a}\left(\frac{x+y}{2} - a\right) + f(a).$$

Hence, the proposition is also true for $(x+y)/2$. To prove it for the given $\xi$, a nested interval construction is used. For $n = 0, 1, 2, \dots$ it is built as follows: if $2\xi > a_n + b_n$, then let $a_{n+1} := (a_n + b_n)/2$ and $b_{n+1} := b_n$; otherwise $a_{n+1} := a_n$ and $b_{n+1} := (a_n + b_n)/2$. That yields

$$a_0 \le a_n \le b_n \le b_0$$

and $\lim_{n\to\infty}(b_n - a_n) = 0$, where the desired equation holds for all $a_n$ and $b_n$. Every maximal dual-feasible function is monotonely increasing. Therefore,

$$\frac{f(b)-f(a)}{b-a}\,(a_n - a) \le f(\xi) - f(a) \le \frac{f(b)-f(a)}{b-a}\,(b_n - a)$$

for all $n \in \mathbb{N}$. The difference between the right and left parts of the inequality tends to zero. Therefore, $f$ is continuous at $\xi$, and proposition (a) is true.
(b) If for certain numbers $x, y \in [a,b]$ the equations $g(x) = f(x)$ and $g(y) = f(y)$ are valid, then the superadditivity condition yields

$$g(x+y) \ge g(x) + g(y) = f(x) + f(y) \quad \text{and} \quad h(x+y) \ge h(x) + h(y) = f(x) + f(y),$$

and consequently, $g(x+y) = f(x+y) = 2\,f\!\left(\frac{x+y}{2}\right)$. Therefore, $g\!\left(\frac{x+y}{2}\right) \le f\!\left(\frac{x+y}{2}\right)$ and $h\!\left(\frac{x+y}{2}\right) \le f\!\left(\frac{x+y}{2}\right)$. That implies $g\!\left(\frac{x+y}{2}\right) = f\!\left(\frac{x+y}{2}\right)$. Choose any $\xi \in [a,b]$. The proposition $g(\xi) = f(\xi)$ can be shown analogously to part (a). The monotone sequences $(a_n)$ and $(b_n)$ are defined as above. Since $g(a_n) = f(a_n)$ and $g(b_n) = f(b_n)$ for all $n \in \mathbb{N}$, it follows that $g(\xi) = f(\xi)$, because $g$, being a MDFF, is monotone, and $f$ is continuous in $[a,b]$. □
The following example shows in a simplified way how part (b) of Lemma 2.4 can be used to prove that a given maximal dual-feasible function is extreme. Moreover, the example also demonstrates the difficulty in the analysis of a parameter-dependent maximal dual-feasible function that is extreme for some parameter values and non-extreme for others.
Example 2.7 The function $f_{BJ,1}(x; C)$ defined in (2.4) is extreme for $C \in \mathbb{N}$ and for $C \ge 2$, but not for $1 < C < 2$. Only part of the proof is provided; the remaining part is left as an exercise.

For $C \in \mathbb{N}$, the assertion follows almost immediately from Lemma 2.4, because $f_{BJ,1}$ becomes the identity function. Suppose $g, h: [0,1] \to [0,1]$ are MDFF with $2f_{BJ,1} \equiv g + h$. Then we have $g(0) = h(0) = 0$ and $g(1/2) = h(1/2) = 1/2$. Setting $a := 0$ and $b := 1/2$ in Lemma 2.4(b) completes the proof.
For $1 < C < 2$, the function $f_{BJ,1}(x; C)$ is a convex combination of $f_{BJ,1}(x; \tilde C)$ and another continuous MDFF $g: [0,1] \to [0,1]$, where $\tilde C = 2C$ if $1 < C < 4/3$, and $\tilde C = \frac{4C}{C+2}$ if $4/3 \le C < 2$. If $1 < C < 4/3$, then

$$g(x) = \begin{cases} 0, & \text{if } 0 \le x \le 1 - \frac{1}{C}, \\[2pt] \frac{(Cx + 1 - C)(4 - 3C)}{(3 - 2C)(2 - C)}, & \text{if } 1 - \frac{1}{C} \le x \le \frac{1}{2C}, \\[2pt] \frac{4Cx + 2 - 3C}{4 - 2C}, & \text{if } \frac{1}{2C} \le x \le 1 - \frac{1}{2C}, \\[2pt] 1 - g(1-x), & \text{if } 1 - \frac{1}{2C} \le x \le 1. \end{cases} \qquad \square$$
2.3 Generating One-Dimensional Dual-Feasible Functions
In this section, we show how to build non-trivial dual-feasible functions from simple
superadditive functions. We address first the simple case of linear combination, and
then, we explore the properties of composed dual-feasible functions. We explain
how to define a maximal dual-feasible function from a non-maximal function, while
alternative approaches are explored at the end.
That implies

$$h(x+y) = \alpha\, f(x+y) + \beta\, g(x+y) \ge \alpha\, f(x) + \alpha\, f(y) + \beta\, g(x) + \beta\, g(y) = h(x) + h(y). \qquad \square$$
Note that, when functions with domain and range $[0,1]$ are considered, $\alpha f$ makes little sense, and $\alpha f + \beta g$ (with $\alpha, \beta \in \mathbb{R}_+$) should be replaced by a convex combination of $f$ and $g$. In practice, Proposition 2.2 is meaningful mainly for discrete functions.

Preserving superadditivity does not imply that maximality is preserved too. For instance, while $\lfloor f \rfloor$ and $\min\{f, g\}$ remain superadditive, the function $\min\{f, g\}$ is not maximal unless $f = g$, nor is $\lfloor f \rfloor$ ($x \mapsto \lfloor x \rfloor$ is not even a MDFF).
Extremality was only introduced for maximal dual-feasible functions with domain and range $[0,1]$. For these functions, and by definition, if one has two different maximal dual-feasible functions $f, g: [0,1] \to [0,1]$, then $h := \frac{\alpha}{\alpha+\beta}\, f + \frac{\beta}{\alpha+\beta}\, g$ with $\alpha, \beta > 0$ cannot yield an extreme maximal dual-feasible function. Only the trivial combination with $\beta = 0$ and $f$ being extreme, or $\alpha = 0$ and $g$ being extreme, yields an extreme maximal dual-feasible function.
2.3.2 Composition
$$f(g(0)) = f(0) = 0, \qquad f(g(1-x)) = f(1 - g(x)) = 1 - f(g(x)), \qquad g(x+y) \ge g(x) + g(y),$$

and hence $f(g(x+y)) \ge f(g(x) + g(y)) \ge f(g(x)) + f(g(y))$.
with $k = 2$, and $f_{BJ,1}(x; C)$ defined in (2.4) with $C = 9/2$, and let $f := f_{FS,1}(f_{BJ,1}(\cdot))$. Both $f_{FS,1}$ and $f_{BJ,1}$ are extreme. One has $f_{BJ,1}(10/27) = 1/3$, and $f_{BJ,1}(x) \neq 1/3$ for $x \in [0,1] \setminus \{10/27\}$. The function $f_{FS,1}$ is illustrated in Fig. 2.6 for several parameter values. Because $f_{FS,1}(x) = 0$ for $0 \le x < 1/3$, $f_{FS,1}(1/3) = 1/3$ and $f_{FS,1}(x) = 1/2$ for $1/3 < x < 2/3$, it follows that
$$f(x) = \begin{cases} 0, & \text{if } 0 \le x < 10/27, \\ 1/3, & \text{if } x = 10/27, \\ 1/2, & \text{if } 10/27 < x < 17/27, \\ 1 - f(1-x), & \text{if } 17/27 \le x \le 1. \end{cases}$$
Then, $f(x) = \frac{1}{3}\, g(x) + \frac{2}{3}\, h(x)$ for all $x \in [0,1]$, and hence $f$ cannot be extreme. □
2.3.3 Symmetry
Let $\bar f_{x^*}(x) = f(x)$ if $x \neq x^*$, and $\bar f_{x^*}(x^*) = f(x^*) + \varepsilon$, with $\varepsilon$ being a sufficiently small positive real value. Note that if $f$ is a dual-feasible function, $\bar f_{x^*}$ may or may not be a dual-feasible function. Furthermore, for a given dual-feasible function $f$, let $I_1$ be the set of values $x^*$ whose images can be increased such that $\bar f_{x^*}$ is also a dual-feasible function. Below, we assume that $I_1$ is a discrete set.
Theorem 2.5 Let $f: [0,1] \to [0,1]$ be a superadditive function. Let $I_1$ be the set of values $x \in [0,1]$ for which a positive $\varepsilon$ exists such that $\bar f_x$ is a dual-feasible function. Assume that $f$ is continuous from the right in the entire set $I_2 := [0,1] \setminus I_1$ of the remaining values, and that the function $g: [0,1] \to [0,1]$ is such that the following holds:
1. $f(x) \le g(x) \le \lim_{y \downarrow x} f(y)$, for any $x$ in $I_1$;
2. $g(x) + g(y) \le g(x+y)$, if $x, y, x+y \in I_1$;
3. $g(x) + f(y) \le g(x+y)$, if $x, x+y \in I_1$ and $y \in I_2$.
Then the function $h: [0,1] \to [0,1]$ with

$$h(x) := \begin{cases} g(x), & \text{if } x \in I_1, \\ f(x), & \text{if } x \in I_2, \end{cases}$$

is superadditive.
Fig. 2.7 Increasing the values of a function at its discontinuities
Example 2.8 The function $f_{FS,1}(\cdot; k): [0,1] \to [0,1]$, with $k \in \mathbb{N} \setminus \{0\}$, discussed in Sect. 2.3.2 is a MDFF. Hence, the restriction of $f_{FS,1}$ to the domain $[0, C]$ with $\frac{1}{k+1} \le C < 1$ is superadditive and non-decreasing. Normalizing this function to domain and range $[0,1]$ yields the following function $f$, which depends on $k$ and $C$ and which is not necessarily symmetric:

$$f(x) := \begin{cases} Cx / f_{FS,1}(C; k), & \text{if } (k+1)\,Cx \in \mathbb{N}, \\ \lfloor (k+1)\,Cx \rfloor / (k\, f_{FS,1}(C; k)), & \text{otherwise.} \end{cases}$$
This function is continuous on $[0,1]$ except at the points $\frac{1}{(k+1)C}, \frac{2}{(k+1)C}, \dots$ Let for example $k := 5$ and $C := 5/8$, for which $f_{FS,1}(C; k) = 3/5$. Then,

$$f(x) = \begin{cases} 25x/24, & \text{if } 15x/4 \in \mathbb{N}, \\ \lfloor 15x/4 \rfloor / 3, & \text{otherwise.} \end{cases}$$
If

$$f(x+1) - f(x) \ge v$$

for all $x \in [0, C-1]$, then the function $h: [0, C] \to [0, f(\lfloor C \rfloor) + 1]$ defined by $h(x) := f(\lfloor x \rfloor) + g(\mathrm{frac}(x))$ is superadditive. □
On the other hand, although the ceiling function is not superadditive, it can lead to superadditive functions if it is decreased by a suitable value. We now generalize several results that use this kind of method.

Lemma 2.6 Let $f: \mathbb{R}_+ \to \mathbb{R}_+$ be a superadditive function. If $\beta \ge 1$, then $g(x) := \max\{0, \lceil f(x) \rceil - \beta\}$ is superadditive.
Proof Since $f$ is a superadditive function with domain and range $\mathbb{R}_+$, it is nondecreasing, and hence $g$ is also nondecreasing. Choose any $x, y \ge 0$ with $x \le y$. If $\lceil f(x) \rceil \le \beta$, then $g(x) = 0$, such that the superadditivity of $g$ follows from its monotonicity. If $\lceil f(x) \rceil > \beta$, then

$$g(x) + g(y) = \lceil f(x) \rceil + \lceil f(y) \rceil - 2\beta \le \lceil f(x) + f(y) \rceil + 1 - 2\beta \le \lceil f(x+y) \rceil - \beta \le g(x+y),$$

since $f$ is superadditive. □
2.4 Examples
The function $f_{CCM,1}$ discussed in Example 2.5 (p. 28) can also be obtained by applying Theorem 2.4 to the function $x \mapsto \lfloor Cx \rfloor$, $C \ge 1$:

$$f_{CCM,1}(x; C) := \begin{cases} \lfloor Cx \rfloor / \lfloor C \rfloor, & \text{if } 0 \le x < 1/2, \\ 1/2, & \text{if } x = 1/2, \\ 1 - f_{CCM,1}(1-x; C), & \text{if } 1/2 < x \le 1. \end{cases}$$

Rounding down violates the symmetry, such that the original function $x \mapsto \lfloor Cx \rfloor$ is not maximal.
Another example of how enforcing symmetry may yield a maximal dual-feasible function is described next. The function $g(x)$ described in Sect. 2.1 is a monotone and superadditive dual-feasible function for every $k \in \mathbb{N} \setminus \{0, 1\}$. However, it is not maximal (see for example Fig. 2.2). By forcing symmetry as described in Theorem 2.4, we get the maximal dual-feasible function $f_{VB,2}: [0,1] \to [0,1]$ with

$$f_{VB,2}(x; k) := \begin{cases} \lceil \max\{0, kx - 1\} \rceil / (k-1), & \text{if } 0 \le x < 1/2, \\ 1/2, & \text{if } x = 1/2, \\ 1 - f_{VB,2}(1-x; k), & \text{if } 1/2 < x \le 1. \end{cases} \qquad (2.11)$$
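A direct transcription of (2.11); the assertions spot-check $f(0) = 0$, the symmetry condition and grid superadditivity for a few values of $k$ (exact arithmetic, no proof implied):

```python
from fractions import Fraction as F
from math import ceil

def f_vb2(x, k):
    """f_VB,2 from (2.11); x as an exact Fraction, k an integer >= 2."""
    if x < F(1, 2):
        return F(ceil(max(0, k * x - 1)), k - 1)
    if x == F(1, 2):
        return F(1, 2)
    return 1 - f_vb2(1 - x, k)

xs = [F(i, 24) for i in range(25)]
for k in (2, 3, 5):
    assert f_vb2(F(0), k) == 0
    assert all(f_vb2(x, k) + f_vb2(1 - x, k) == 1 for x in xs)     # symmetry
    assert all(f_vb2(x, k) + f_vb2(y, k) <= f_vb2(x + y, k)
               for x in xs for y in xs if x + y <= 1)              # superadditivity
```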
The following function $f_{LL,1}$ is based on Lemma 2.5:
$$f_{LL,1}(x; C, k) := \frac{\lfloor Cx \rfloor + \max\left\{0, \left\lceil \frac{\mathrm{frac}(Cx) - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil \right\} \Big/ k}{\lfloor C \rfloor}. \qquad (2.12)$$
The superadditivity of this function is due to Lemma 2.6. In the following proof, we use the fact that $\lceil x + y \rceil = x + \lceil y \rceil$ if $x$ is integer, and that $\lceil x \rceil + \lceil y \rceil \le \lceil x + y \rceil + 1$ for any $x$ and $y$. Without the maximum expression, the function $f_{LL,1}$ would be the function (2.1). The function $f_{LL,1}$ is derived from another superadditive function $h$ by a linear transformation, namely $f_{LL,1}(x) = h(Cx)/\lfloor C \rfloor$, where the structure of $h$ is of the kind $f(\lfloor x \rfloor) + g(\mathrm{frac}(x))$ as in Lemma 2.5.
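Formula (2.12) can be transcribed as follows (exact rationals; a sketch for experimentation, with the superadditivity established below spot-checked on a grid):

```python
from fractions import Fraction as F
from math import floor, ceil

def f_ll1(x, C, k):
    """f_LL,1 from (2.12); x and C as exact Fractions, k an integer."""
    fr = lambda t: t - floor(t)                    # frac(.)
    bump = max(F(0), F(ceil((fr(C * x) - fr(C)) / (1 - fr(C)) * (k - 1)), k))
    return (floor(C * x) + bump) / floor(C)

C, k = F(33, 10), 4                # C = 3.3, k = ceil(1/frac(C)) = 4
assert f_ll1(F(0), C, k) == 0
assert f_ll1(F(1, 2), C, k) == F(1, 2)
xs = [F(i, 30) for i in range(31)]
assert all(f_ll1(x, C, k) + f_ll1(y, C, k) <= f_ll1(x + y, C, k)
           for x in xs for y in xs if x + y <= 1)
```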
Proposition 2.5 Function $f_{LL,1}$ is superadditive.

Proof To show the superadditivity of $f_{LL,1}$, Lemmas 2.5 and 2.6 are used. Define the function $g: [0,1] \to [0,1]$ as

$$g(x) := \max\left\{0, \left\lceil \frac{x - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil \right\} \Big/ k.$$
The range of $g$ is indeed part of $[0,1]$, because $x \in [0,1]$ implies $\frac{x - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)} \le 1$. In the following, the superadditivity of $g$ is proved. The function $x \mapsto \frac{x}{1 - \mathrm{frac}(C)}\,(k-1)$ is linear, and hence it is obviously superadditive. The constant $\frac{\mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1)$ is at least one, because

$$k - 1 \ge \frac{1}{\mathrm{frac}(C)} - 1 = \frac{1 - \mathrm{frac}(C)}{\mathrm{frac}(C)}.$$
The identity function is superadditive. To apply Lemma 2.5 with the above defined function $g$, we must show that $v \ge 1$. Choose any $y, z \in (0, 1]$ with $y + z > 1$. Since $g(y)$, $g(z)$ and $g(y+z-1) \in [0,1]$, the inequality

$$g(y) + g(z) \le 1 + g(y + z - 1)$$

is obviously fulfilled for $g(y)\,g(z) = 0$. Assume $g(y)$ and $g(z) > 0$. Hence, we have $y$ and $z > \mathrm{frac}(C)$. One gets
$$\begin{aligned} k\, g(y+z-1) &\ge \left\lceil \frac{y + z - 1 - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil = \left\lceil \left( \frac{y - \mathrm{frac}(C) + z - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)} - 1 \right)(k-1) \right\rceil \\ &\ge \left\lceil \frac{y - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil + \left\lceil \frac{z - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil - k = k\,(g(y) + g(z) - 1), \end{aligned}$$

as needed, because $k \in \mathbb{N}$.
Finally, $f_{LL,1}(x) = h(Cx)/\lfloor C \rfloor$ with the function $h$ according to Lemma 2.5. The linear transformations do not affect the superadditivity. Since $h$ is superadditive, so is $f_{LL,1}$. □
The function $f_{LL,1}$ is not maximal, since there are cases where it is not symmetric. An improved version of this function can be obtained by applying Theorem 2.4.

Proposition 2.6 The following function $f_{LL,2}(\cdot; C, k): [0,1] \to [0,1]$ with $C > 1$, $C \notin \mathbb{N}$ and $k \in \mathbb{N}$, $k \ge \lceil 1/\mathrm{frac}(C) \rceil$, is a maximal dual-feasible function, and it dominates $f_{LL,1}$.
$$f_{LL,2}(x; C, k) := \begin{cases} \dfrac{\lfloor Cx \rfloor + \max\left\{0, \left\lceil \frac{\mathrm{frac}(Cx) - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}\,(k-1) \right\rceil \right\} \big/ k}{\lfloor C \rfloor}, & \text{if } 0 \le x < 1/2, \\[4pt] 1/2, & \text{if } x = 1/2, \\[2pt] 1 - f_{LL,2}(1-x; C, k), & \text{if } 1/2 < x \le 1. \end{cases} \qquad (2.13)$$
The graphs of the functions (2.12) and (2.13) are drawn in Fig. 2.9 for $C \in \{3.3, 3.4\}$ and $k = \lceil 1/\mathrm{frac}(C) \rceil$.
Fig. 2.9 The functions $f_{LL,1}(\cdot; 3.3, 4)$, $f_{LL,1}(\cdot; 3.4, 3)$, $f_{LL,2}(\cdot; 3.3, 4)$ and $f_{LL,2}(\cdot; 3.4, 3)$
dual-feasible functions:

$$u_A(x; p) := \begin{cases} 0, & \text{if } 0 \le x \le \frac{1}{p+1}, \\ \frac{1}{p}, & \text{if } \frac{1}{p+1} < x \le 1, \end{cases} \qquad p \in \mathbb{N} \setminus \{0\};$$

$$u_B(x; p, a) := \begin{cases} 0, & \text{if } 0 \le x \le \frac{a\,p\,(p+1) - (p-1)}{p+1}, \\[2pt] \frac{1}{p+1} + \left(x - \frac{1}{p+1}\right) \big/ \left(p - ap - ap^2\right), & \text{if } \frac{a\,p\,(p+1) - (p-1)}{p+1} < x < \beta, \\[2pt] \frac{1}{p}, & \text{if } \beta \le x \le 1, \end{cases}$$
$$p \in \mathbb{N} \setminus \{0, 1\}, \quad a \in \left( \frac{p-1}{p^2+p},\; \frac{1}{p+1} \right);$$

$$u_C(x; p) := \max\{0,\, \lceil (p+1)x - 1 \rceil / p\}, \qquad p \in \mathbb{N} \setminus \{0\};$$

$$u_D(x; p, a) := \left\lfloor \frac{x}{\beta} \right\rfloor \Big/ p + u_B\!\left(x - \beta \left\lfloor \frac{x}{\beta} \right\rfloor; p, a\right), \qquad p \in \mathbb{N} \setminus \{0, 1, 2\}, \quad a \in \left( \frac{p-1}{p^2+p},\; \frac{1}{p+1} \right),$$
where $\beta := \frac{2}{p+1} - a$ for the functions $u_B$ and $u_D$. The function $u_C$ is equivalent to the function (2.2) with $k = p+1$. Furthermore, we have $u_A(x) = u_C(x)$ for all $x \in [0, \frac{2}{p+1}]$. Similarly, $u_B(x) = u_D(x)$ holds for all $x \in [0, \beta]$. For larger $x$ inside the interval $[0,1]$, one gets $u_C(x) > u_A(x)$ or $u_D(x) > u_B(x)$, respectively. The function $u_A$ is a simple staircase function. The dependence of $u_B$ and $u_D$ on the parameter $a$ is shown in Fig. 2.10 for $p = 3$. Since $u_D$ is continuous and piecewise linear, it looks like the function (2.4) (p. 25) for $C := 1/\beta$. Checking this observation is left as an exercise.
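The claimed relationship between $u_A$ and $u_C$ can be verified mechanically; the sketch below checks, in exact arithmetic, that the two functions agree on $[0, \frac{2}{p+1}]$ and that $u_C$ is never smaller elsewhere ($u_B$ and $u_D$ are omitted here):

```python
from fractions import Fraction as F
from math import ceil

def u_A(x, p):
    """Lueker's simple staircase function u_A."""
    return F(0) if x <= F(1, p + 1) else F(1, p)

def u_C(x, p):
    """u_C(x; p) = max{0, ceil((p+1)x - 1)/p}."""
    return max(F(0), F(ceil((p + 1) * x - 1), p))

xs = [F(i, 60) for i in range(61)]
for p in (1, 2, 3):
    assert all(u_C(x, p) == u_A(x, p) for x in xs if x <= F(2, p + 1))
    assert all(u_C(x, p) >= u_A(x, p) for x in xs)
```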
The function $f_{FS,1}: [0,1] \to [0,1]$ depending on the parameter $k \in \mathbb{N} \setminus \{0\}$, discussed in Sect. 2.3.2, is given by

$$f_{FS,1}(x; k) := \begin{cases} x, & \text{if } (k+1)\,x \in \mathbb{N}, \\ \lfloor (k+1)\,x \rfloor / k, & \text{otherwise.} \end{cases}$$
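In exact arithmetic, the case distinction in $f_{FS,1}$ reduces to a denominator test; the assertions below reproduce the values of $f_{FS,1}(\cdot; 2)$ quoted in Sect. 2.3.2:

```python
from fractions import Fraction as F
from math import floor

def f_fs1(x, k):
    """f_FS,1 (Fekete and Schepers); x as an exact Fraction."""
    t = (k + 1) * x
    return x if t.denominator == 1 else F(floor(t), k)

assert f_fs1(F(1, 4), 2) == 0            # zero below 1/3
assert f_fs1(F(1, 3), 2) == F(1, 3)      # fixed point at 1/3
assert f_fs1(F(2, 5), 2) == F(1, 2)      # equal to 1/2 on (1/3, 2/3)
assert all(f_fs1(x, 2) + f_fs1(1 - x, 2) == 1
           for x in [F(i, 30) for i in range(31)])       # symmetry
```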
Fig. 2.10 Dual-feasible functions $u_B$ and $u_D$ for $p = 3$ and $a \in \{1/5,\, 13/72,\, 5/24\}$
The following function $f_{DG,1}$ is also built on Theorem 2.5. Since the proof of superadditivity is complex and long, it is omitted.

Proposition 2.7 For every $C \in \mathbb{R} \setminus \mathbb{N}$ with $C > 1$, and any $k \in \mathbb{N}$ with $k \ge \lceil \frac{1}{\mathrm{frac}(C)} \rceil$, the following function $f_{DG,1}(\cdot; C, k): [0,1] \to [0,1]$ is a MDFF:

$$f_{DG,1}(x) = \frac{\lfloor Cx \rfloor}{\lfloor C \rfloor} + \frac{1}{\lfloor C \rfloor} \cdot \begin{cases} \frac{\mathrm{frac}(Cx) - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)}, & \text{if } (k-1)\,\frac{\mathrm{frac}(Cx) - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)} \in \mathbb{N}, \\[4pt] \max\left\{0, \left\lceil (k-1)\,\frac{\mathrm{frac}(Cx) - \mathrm{frac}(C)}{1 - \mathrm{frac}(C)} \right\rceil \right\} \big/ k, & \text{otherwise.} \end{cases} \qquad (2.14)$$

This function also dominates $f_{LL,1}$, if $f_{LL,1}$ is not symmetric. In this case, the functions (2.12), (2.13) and (2.14) differ at some isolated points. The graph of the latter function is presented for $C \in \{3.3, 3.4\}$ and $k = \lceil 1/\mathrm{frac}(C) \rceil$ in Fig. 2.11.
All maximal dual-feasible functions that were presented until now have a relatively simple structure. For instance, the function $f_{BJ,1}$ is Lipschitz-continuous, while all the other discussed dual-feasible functions have a finite number of discontinuities.
f .x/ f .x0 / 1 .x x0 / D 2
b 1C 2 c aCb
aCb 1
1 1
D
ba C bc a C b
1 1
D
a aCb
b
D 2 :
a C ab
Hence, the fraction is nonnegative. When $x$ tends to $1/2$, then $a \to \infty$, and therefore $f(x) - f(x_0) - 1 \cdot (x - x_0) = O\big((x - x_0)^2\big)$. This means that the derivative at $x_0 = 1/2$ exists and has the value 1. □

Figure 2.12 shows a simplified version of this staircase function. Around the point $(1/2, 1/2)$, there are infinitely many stairs, which cannot be drawn exactly. Moreover, the usual black and white marks at discontinuities become meaningless, and therefore they are omitted.
and Wolsey (1998)]. Lueker (1983) introduced the designation. He used dual-feasible functions to derive lower bounds for bin-packing problems. The functions $u_A(\cdot; p)$, $u_B(\cdot; p, a)$, $u_C(\cdot; p)$ and $u_D(\cdot; p, a)$ described in Sect. 2.4.3 are due to this author. The notion of maximality was proposed and described initially in Carlier and Néron (2007a), while the conditions for maximality were presented by these authors in Carlier and Néron (2007b). In the latter, the authors also defined the discrete version of a dual-feasible function, which they call redundant functions. They used these functions to solve scheduling problems. The conditions for maximality were later reviewed by Rietz et al. (2010), yielding the sufficient conditions described in Theorem 2.2. The proof of Theorem 2.4 can be found in Clautiaux et al. (2010) for the case where the function is defined as a discrete dual-feasible function. The extremality of dual-feasible functions was analyzed first by Rietz et al. (2012a). The proofs of many of the results concerning extremality can be found in this reference.
Within this chapter, different dual-feasible functions were used as examples.
Most of them were taken from the literature where they were stated either explicitly
or implicitly. The letters used in the index of these functions identify the authors
that proposed them originally. We provide the original source next. The function $f_{BJ,1}(\cdot; C)$ is based on the work of Burdett and Johnson (1977). The function $f_{CCM,1}(\cdot; C)$ was proposed by Carlier et al. (2007), improving slightly a function proposed before by Boschetti and Mingozzi (2003). The function $f_{FS,1}(\cdot; k)$ is due to Fekete and Schepers (2001). The function $f_{VB,2}(\cdot; k)$ is based on a non-maximal dual-feasible function by Vanderbeck (2000); it is maximal, and it was described first by Clautiaux et al. (2010). The function $f_{DG,1}(\cdot; k)$ was defined by Dash and Günlük (2006). The function $f_{LL,1}(\cdot; C, k)$ was used implicitly by Letchford and Lodi (2002) to strengthen Chvátal–Gomory cuts (Chvátal 1973) and Gomory fractional cuts (Gomory 1958) in linear programs. As shown in Sect. 2.4.2, this function is superadditive but not maximal. Again, the corresponding maximal dual-feasible function was defined by Clautiaux et al. (2010).
2.6 Exercises
$f_3(x) := \lfloor x^2 \rfloor$;
$f_4(x) := \lfloor e^x \rfloor$.

x: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
$g_0(x)$: 0 0 0 0 0 2 3 4 5 0 0 0 0 0 0 0 0 0 0 10
3.1 Introduction
Classical dual-feasible functions are defined only for nonnegative arguments, which limits their applicability. In this chapter, we explore the extension of dual-feasible functions to more general domains, with a focus on real numbers. Other attempts at generalizing the concept of dual-feasible function will be made later in the book. In Chap. 4, we will discuss for instance an extension to multidimensional domains yielding the so-called vector packing dual-feasible functions, which may be used to compute bounds for vector packing problems.
Extending the principles of dual-feasible functions to the domain of real numbers is not trivial. The properties that apply to dual-feasible functions, and which were reviewed in the previous chapter, are affected by this extension, and some of them are even lost. This makes the task of deriving good non-dominated functions much more difficult. In the sequel, we will explore in depth the new properties of general dual-feasible functions. Different examples will be brought into the discussion to illustrate the main new ideas behind these functions.
Given the difficulty of deriving dual-feasible functions that apply to the domain of real numbers, we will devote the second part of the chapter to the presentation of general construction principles that lead to specific instances of general dual-feasible functions. The defining characteristics of these principles will be described first, followed by the analysis of specific examples for each case.
3.2.1 Definition
where $\mathrm{frac}(\cdot)$ denotes the non-integer part of an expression, i.e. $\mathrm{frac}(C) = C - \lfloor C \rfloor$. This function can be extended to a general dual-feasible function using the same formula. Note however that this does not work for many other functions. Figure 3.1 illustrates this function $f_{BJ,1}$ for $C \in \{6/5, 8/5\}$. □
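Evaluating the same formula with exact rationals illustrates the behaviour on negative arguments (the form of (2.4) is restated here as we read it, so treat it as an assumption):

```python
from fractions import Fraction as F
from math import floor

def f_bj1(x, C):
    """f_BJ,1 evaluated by the same formula on all of R."""
    fr = lambda t: t - floor(t)
    bump = max(F(0), (fr(C * x) - fr(C)) / (1 - fr(C)))
    return (floor(C * x) + bump) / floor(C)

C = F(6, 5)
assert f_bj1(F(-1), C) == F(-5, 4)                  # negative arguments allowed
assert f_bj1(F(-1), C) + f_bj1(F(2), C) == 1        # symmetry f(x) + f(1-x) = 1
assert f_bj1(F(-1), C) + f_bj1(F(1), C) + f_bj1(F(1), C) <= 1   # -1 + 1 + 1 <= 1
```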
Further conditions for a function $f: \mathbb{R} \to \mathbb{R}$ to be a general dual-feasible function have already been identified. The following proposition describes two of them. The proof of their validity is left as an exercise (see Exercise 1 at the end of the chapter).

Proposition 3.1 Let $f: \mathbb{R} \to \mathbb{R}$ be any function. If $f$ is a general dual-feasible function, then it has the following two properties.
Fig. 3.1 The functions $f_{BJ,1}(\cdot\,; 6/5)$ and $f_{BJ,1}(\cdot\,; 8/5)$
The property (1.) is obviously fulfilled. The same happens with (2.), because $f(x) \le cx$ for all $x \in \mathbb{R}$. However, we have that $f(-1) + f(2) = c > 1$, in contradiction to $-1 + 2 \le 1$, and thus a violation of the defining condition (3.1). □
On the other hand, the composition of general dual-feasible functions leads to functions that are still general dual-feasible functions.

Lemma 3.1 The composition of general dual-feasible functions $f, g: \mathbb{R} \to \mathbb{R}$ is a general dual-feasible function.

Proof Let $I$ be any finite index set of real numbers $x_i$ with $\sum_{i \in I} x_i \le 1$. One gets $\sum_{i \in I} g(x_i) \le 1$, because $g$ is a general dual-feasible function. Therefore, Definition 3.1 yields

$$\sum_{i \in I} f(g(x_i)) \le 1,$$
3.2.2 Maximality
$$\sum_{i=1}^n f(x_i) = c \sum_{i=1}^n x_i \le c.$$

is no longer a necessary condition for the function to be maximal. The conditions for maximality are restated as follows.
Theorem 3.1 Let $f: \mathbb{R} \to \mathbb{R}$ be a given function.
(a) If $f$ satisfies the following conditions, then $f$ is a general MDFF:
1. $f(0) = 0$;
2. $f$ is superadditive, i.e. for all $x, y \in \mathbb{R}$, it holds that $f(x+y) \ge f(x) + f(y)$; (3.4)
3. there is an $\varepsilon > 0$, such that $f(x) \ge 0$ for all $x \in (0, \varepsilon)$;
4. $f$ obeys the symmetry rule (3.3).
(b) If $f$ is a general MDFF, then the above properties (1.)–(3.) hold for $f$, but not necessarily (4.).
(c) If $f$ satisfies the above conditions (1.)–(3.), then $f$ is monotonely increasing.
(d) If the symmetry rule (3.3) holds and $f$ obeys the inequality (3.4) for all $x, y \in \mathbb{R}$ with $x \le y \le \frac{1-x}{2}$, then $f$ is superadditive.
Unlike for classical dual-feasible functions, whose range was $[0,1]$, here the nonnegativity of the function values for nonnegative arguments must be explicitly demanded.
Example 3.3 The following example of a continuous and piecewise linear function $f: \mathbb{R} \to \mathbb{R}$ (Fig. 3.2) shows that Theorem 3.1 would become false if the prerequisite (3.) in part (a) was not considered.

$$f(x) := \begin{cases} 5x, & \text{if } x \le 0, \\ -x, & \text{if } 0 \le x \le 1/3, \\ 5x - 2, & \text{if } 1/3 \le x \le 2/3, \\ 2 - x, & \text{if } 2/3 \le x \le 1, \\ 5x - 4, & \text{if } x \ge 1. \end{cases}$$
The conditions (1.) and (4.) in Theorem 3.1 can easily be checked. To verify the superadditivity, choose any $x, y \in \mathbb{R}$ with

$$x \le y \le \frac{1-x}{2}, \quad \text{so that} \quad x + y \le \frac{1+x}{2} \le \frac{2}{3}.$$
1. $x > 0$ yields $f(x) = -x$ and $0 < y < 1/2$. If $y \le 1/3$, then $f(y) = -y$ and $f(x+y) \ge -x - y$. If $y > 1/3$ and $x + y \le 2/3$, then $f(x+y) = 5x + 5y - 2$, and $d := f(x+y) - f(y) \ge 5x \ge 0$.
3.2.3 Extremality
The notion of extremality remains essentially the same as for classical dual-feasible
functions.
$f(x) := cx$

$$g(x) + h(x) = 2x$$

for all $x \in \mathbb{R}$. The defining condition for dual-feasible functions implies that $g(1/q) = 1/q$ and further $g(p/q) = p/q$ for all $p, q \in \mathbb{N} \setminus \{0\}$, and hence $g(x) = x$ for all $x \in \mathbb{Q}_+$. The monotonicity implies $g(x) = x$ for all $x \in \mathbb{R}_+$. If there were an $x < 0$ with $g(x) > x$, then $g$ would not be a general dual-feasible function. Therefore, $g(x) \le x$, and finally

$$g(x) = x = f(x)$$

for all $x \in \mathbb{R}$. □
The set of maximal dual-feasible functions is convex in the bounded case of domain and range $[0,1]$. In the generalized case, since the symmetry need not hold, the set of maximal general dual-feasible functions is not convex, as counterexamples show. Until now, the properties of this set have not been explored in depth, and issues such as whether it is at least connected, path-connected or even star-shaped remain to be determined. A first result concerning the set of general dual-feasible functions is stated in the following proposition.
Proposition 3.3 The set of general dual-feasible functions $f: \mathbb{R} \to \mathbb{R}$ is closed, i.e. any converging sequence of general dual-feasible functions converges to a general dual-feasible function.

Proof Let $I$ be any finite index set, and $(f_n)$ be a converging sequence of general dual-feasible functions, i.e. for each $x \in \mathbb{R}$, the limit $f(x) := \lim_{n\to\infty} f_n(x)$ exists. For any $x_i \in \mathbb{R}$ ($i \in I$) with $\sum_{i \in I} x_i \le 1$, we have

$$\sum_{i \in I} f(x_i) = \sum_{i \in I} \lim_{n\to\infty} f_n(x_i) = \lim_{n\to\infty} \sum_{i \in I} f_n(x_i) \le 1. \qquad \square$$
3.3 Applications
Column vectors $a^j \in \mathbb{Z}_+^p$ represent configurations, i.e. processes scheduled for a single processor. The balance constraint can be written as $c^\top a - d^\top a \le T$, where $T > 0$ is a threshold above which the machine is not considered well balanced. Setting $l := \frac{1}{T}(c - d)$, the constraint becomes $l^\top a \le 1$, where the elements of $c - d$ can be positive or negative.
This variant can be modelled similarly to the cutting-stock problem. Let $b \in \mathbb{R}_+^p$ and $l \in \mathbb{R}^p$ be some fixed vectors denoting respectively the demands and the sizes of a set of $p$ items. Each configuration may be represented by a column vector $a^j \in \mathbb{Z}_+^p$ and is feasible only if it obeys the capacity constraint $l^\top a^j \le 1$. The patterns form the matrix $A$. This variant of the cutting stock problem is given by

$$\min \sum_{j=1}^n x_j \quad \text{s.to} \quad Ax = b, \; x \in \mathbb{Z}_+^n. \qquad (3.5)$$

If one item has a negative size, overproduction would make a trivial solution with only one bin possible. Therefore, the demand constraints must be satisfied as equalities, and the dual of the continuous relaxation of (3.5) has variables unrestricted in sign:

$$u_i := f(\ell_i), \quad i = 1, \dots, p.$$
Example 3.5 Suppose that $C = D = 21$, and that seven processes with the following resource consumption have to be scheduled:

$c_i$: 13 9 7 6 5 4 3
$d_i$: 1 5 7 8 9 10 11

Let $T := 4$. Besides the constraints that the processes do not overload the machines, the balance constraint yields

$$(\,12 \quad 4 \quad 0 \quad {-2} \quad {-4} \quad {-6} \quad {-8}\,)\, a \le 4.$$
Therefore, the first process must be combined with the last one, while the single balance constraint does not restrict the other processes. One may get a solution with four machines as illustrated in Fig. 3.3. The horizontal direction shows the CPU usage, the vertical direction the memory consumption on the machines. Here, one has a multi-dimensional vector packing problem. In the continuous relaxation, one could use the following patterns, each with value 1/2:
$$a^1 = (1, 0, 0, 0, 0, 2, 0)^\top, \quad a^2 = (1, 0, 0, 0, 1, 0, 1)^\top, \quad a^3 = (0, 1, 0, 2, 0, 0, 0)^\top, \quad a^4 = (0, 1, 1, 0, 1, 0, 0)^\top, \quad a^5 = (0, 0, 1, 0, 0, 0, 1)^\top,$$
where only the last pattern does not occupy the whole machine. Therefore, the continuous relaxation would yield a lower bound of two and a half machines. For this problem, lower bounds may be obtained by general dual-feasible functions. □
3.4.1 Structure
$5 + (-4) \le 1$, such that the defining condition on general dual-feasible functions can be applied, but
$$f_3'(x) = 1 + \ln\frac{x+1}{2} > 0$$

for $x > 0$, such that $f_3$ is nondecreasing. The monotonicity of $f_3'$ shows that $f_3$ is strictly convex for positive arguments, and therefore superadditive on $\mathbb{R}_+$. It is easy to see that the superadditivity of $f_3$ holds on the entire $\mathbb{R}$. Therefore, $f_3$ is a general dual-feasible function fulfilling all the necessary conditions of Theorem 3.1, but it is not maximal, as the next proposition states.
Proposition 3.4 Let $f: \mathbb{R} \to \mathbb{R}$ be a maximal general dual-feasible function and $t := \sup\{f(x)/x : x > 0\}$. Then, we have

$$\lim_{x \to \infty} \frac{f(x)}{x} = t \ge f(1),$$

and for any $x \in \mathbb{R}$, it holds that

$$tx \ge f_{BJ,1}(x) \ge tx - t\,\frac{\mathrm{frac}(C)}{C} = tx - \frac{\mathrm{frac}(C)}{\lfloor C \rfloor}.$$
[Figure: the line $y = tx$ and the function $f_{BJ,1}(\cdot\,; 6/5)$ on $[-1, 1]$]
and

$$x \le y \le \frac{1-x}{2},$$

as it would be necessary according to part (d) of Theorem 3.1. The following example shows that the prerequisite on superadditivity must not be dropped entirely.
Example 3.7 Let $g$ be the identity function and

$$f(x) := \begin{cases} x^3, & \text{if } x \le 1, \\ x, & \text{if } 1 \le x \le 2, \\ (x-1)^3 + 1, & \text{if } x \ge 2. \end{cases}$$
The functions f and g fulfill all prerequisites except the superadditivity condition.
One has f .1/ D 1, f .x/ g.x/ for all x 2, both f and g are symmetric, and g is a
Even if former proofs for the latter condition cannot be used for maximal general
dual-feasible functions, the two conditions hold for many maximal general dual-
feasible functions. Furthermore, they are not only necessary conditions, but they
also immediately yield the lower bound

f(x) ≥ ⌊2x⌋/2

for x > 1 for these functions, due to the superadditivity. An insight into the
reasons why not all approaches for classical maximal dual-feasible functions work
is given in Exercise 3.
Proposition 3.5 If f W R ! R is a maximal general dual-feasible function and not
of the kind x ↦ t·x with 0 ≤ t < 1, then f(1) = 1 and f(1/2) = 1/2.
Proof Let t := sup{f(x)/x : x > 0} ≥ 0. If t < 1, then Proposition 3.4 implies
f(x) = t·x for all x ∈ R, such that the proof is complete. Otherwise t ≥ 1 and
f(x) ≥ t·x + 1 − t for all x ∈ R, and f(1) ≤ 1. Therefore, we have f(1) = 1. Since f
is a maximal general dual-feasible function, it is nondecreasing and superadditive.
Suppose f(1/2) < 1/2 (clearly, f(1/2) > 1/2 is impossible). Define g: R → R by

g(x) :=  f(x),  if x ≠ 1/2;
         1/2,   otherwise.
Due to the assumption, g cannot be a general dual-feasible function, i.e. there are
values n ∈ N∖{0} and x1, …, xn ∈ R with Σ_{i=1}^n x_i ≤ 1, but Σ_{i=1}^n g(x_i) > 1. Without
loss of generality, assume x_i = 1/2 for i ≤ k and x_i ≠ 1/2 for i > k with a certain
k ∈ N, k ≤ n. That yields

Σ_{i=k+1}^n x_i ≤ 1 − k/2
and

1 < Σ_{i=1}^n g(x_i) = k/2 + Σ_{i=k+1}^n f(x_i)
  ≤ k/2 + f(Σ_{i=k+1}^n x_i)
  ≤ k/2 + f(1 − k/2),

where empty sums equal zero. Since f(1) = 1, one gets further
lim_{x↑0} f(x) = inf_{y∈R} ( lim_{x↑y} f(x) − lim_{x↓y} f(x) ).
The idea is that if f has a gap at a point y, then for negative x near 0 the
superadditivity requires that f(x) is small enough. Meanwhile, f(x) must not be too
small either, since otherwise f would not be maximal.
Example 3.8 Define the function f: R → R with a constant b ≥ 1 by

f(x) :=  b·⌊2x⌋,          if x < 1/2;
         1/2,             if x = 1/2;        (3.7)
         1 − b·⌊2 − 2x⌋,  if x > 1/2.
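As a numerical illustration, the staircase function (3.7) can be sketched in code and its symmetry f(x) + f(1 − x) = 1 checked directly; the parameter name b and the assumption b ≥ 1 are taken from the surrounding discussion, so this is an illustration rather than the book's own implementation:

```python
import math

def f37(x, b=1.0):
    """The staircase function (3.7); b >= 1 is the parameter (assumed from context)."""
    if x < 0.5:
        return b * math.floor(2 * x)
    if x == 0.5:
        return 0.5
    return 1 - b * math.floor(2 - 2 * x)

# symmetry f(x) + f(1 - x) = 1 on a sample grid (exact for quarter-integers)
for i in range(-8, 9):
    x = i / 4
    assert f37(x) + f37(1 - x) == 1
```

Note that the parameter b only affects the values outside [0, 1]: for x ∈ (0, 1/2) the floor term vanishes, and for x ∈ (1/2, 1) it equals zero as well.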
The following proposition demonstrates the limits of possible convexity for maxi-
mal general dual-feasible functions.
Proposition 3.7 If f: R → R is a maximal general dual-feasible function, then
it cannot be strictly convex in a neighbourhood of the point zero, and it cannot be
concave (except linear) in an interval [0, b] or [−b, 0] with b > 0.
Example 3.9 Let b := 1/2 and

f(x) := 4x³

for x ∈ [−b, b]. Then, f is strictly concave in the interval [−b, 0], and hence it is not
superadditive. For instance, we have that

f(−1/4) = −1/16,

and also
□
f(g(x)) =  b·c·⌊2x⌋,          if x < 1/2;
           1/2,               if x = 1/2;
           1 − b·c·⌊2 − 2x⌋,  if x > 1/2,

and for x ≠ 1/2 the inequality is strict. The latter function is of the same type as
the function (3.7) with parameter b·c ≥ 1, and hence it is a maximal general dual-
feasible function. The function f(g(·)) can also be seen as a convex combination of
the constant zero-function f ≡ 0, which is a maximal general dual-feasible function
according to Proposition 3.2, with factor 1 − c, and g with factor c. □
3.5 Examples
This function was a maximal dual-feasible function for the domain [0, 1]. However,
for the domain R, it is not even a general dual-feasible function. To see this, let

n := ⌈1/(1 − frac(C))⌉ + 1.

One obtains

f_CCM,1(1 + ε) = 1 + 1/⌊C⌋

and

f_CCM,1((1 − n − n·⌊C⌋)/C) = (1 − n)/⌊C⌋ − n.

Hence,

n·f_CCM,1(1 + ε) + f_CCM,1((1 − n − n·⌊C⌋)/C) = 1/⌊C⌋ > 0.
Since

n·(1 + ε) + (1 − n − n·⌊C⌋)/C = n·ε + (C·n + 1 − n − n·⌊C⌋)/C
  = n·ε + (1 − n·(1 − frac(C)))/C
  ≤ n·ε + (frac(C) − 1)/C
  < 0

for sufficiently small ε, a contradiction to point (2.) of Proposition 3.1 arises. □
On the other hand, we explore in the sequel a different case where the defining
formulation of a classical maximal dual-feasible function leads without any change
to a maximal general dual-feasible function. That happens in particular with the
function fDG;1 discussed in Chap. 2 (see (2.14) at page 43).
Proposition 3.9 For any C ∈ R∖N, C > 1, and k ∈ N with k ≥ 1/frac(C), the
following function f_DG,1: R → R is a maximal general dual-feasible function:

f_DG,1(x) := ⌊Cx⌋/⌊C⌋ + (1/⌊C⌋) ·
    (frac(Cx) − frac(C))/(1 − frac(C)),                    if (k − 1)·(frac(Cx) − frac(C))/(1 − frac(C)) ∈ N;
    max{0, ⌈(k − 1)·(frac(Cx) − frac(C))/(1 − frac(C))⌉/k},  otherwise.

Define h: R → R by h(y) := y if (k − 1)·y ∈ N, and h(y) := max{0, ⌈(k − 1)·y⌉/k}
otherwise. Then, we have

f_DG,1(x) = (⌊Cx⌋ + h((frac(Cx) − frac(C))/(1 − frac(C)))) / ⌊C⌋.
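The compact form above can be checked numerically. The following sketch uses exact rational arithmetic; the reading of the case condition as (k − 1)·y ∈ N (the nonnegative integers) is an assumption taken from the reconstruction above:

```python
from fractions import Fraction
from math import floor, ceil

def fr(q):
    """Fractional part."""
    return q - floor(q)

def f_dg1(x, C, k):
    """f_DG,1(.; C, k) via the compact form (floor(Cx) + h(y)) / floor(C) -- a sketch."""
    y = (fr(C * x) - fr(C)) / (1 - fr(C))
    t = (k - 1) * y
    if t >= 0 and t == int(t):                 # (k - 1) y in N  ->  h(y) = y
        h = y
    else:                                      # h(y) = max(0, ceil((k - 1) y) / k)
        h = max(Fraction(0), Fraction(ceil(t), k))
    return (floor(C * x) + h) / floor(C)

C, k = Fraction(12, 5), 3                      # frac(C) = 2/5, k >= 1 / frac(C)
assert f_dg1(Fraction(1, 2), C, k) == Fraction(1, 2)
for i in range(21):                            # symmetry f(x) + f(1 - x) = 1
    x = Fraction(i, 20)
    assert f_dg1(x, C, k) + f_dg1(1 - x, C, k) == 1
```

The symmetry loop retraces exactly the condition verified in the proof below.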
First, some properties of h are derived, which will be used later to prove the
sufficient conditions of Theorem 3.1. Clearly, one obtains h(x) = 0 for x ≤ 0
and h(1) = 1. Moreover, h rises monotonically in the closed interval [0, 1], because h
is piecewise constant and one gets for any p ∈ {1, …, k − 1} the estimations

lim_{x↑p/(k−1)} h(x) = p/k < p/(k − 1) = h(p/(k − 1)) ≤ (p + 1)/k = lim_{x↓p/(k−1)} h(x).
Additionally, we have h(x) ≥ 0 and h(x) ≤ 1 for all x ∈ [0, 1]
due to the definition of h, because either h(x) = ⌈(k − 1)·x⌉/k or h(x) = x. In the
latter case, it holds that ⌈(k − 1)·x⌉ = (k − 1)·x, such that both inequalities become
equivalent to x ≥ 0 or x ≤ 1, respectively. The function h is also symmetric inside
the interval [0, 1], i.e.

h(x) + h(1 − x) = 1 for all x ∈ [0, 1],
because

(k − 1)·x ∈ N ⟺ (k − 1)·(1 − x) ∈ N,

and if (k − 1)·x ∉ N, then
Clearly, the function f_DG,1 fulfills the conditions (1.) and (3.) of part (a) of
Theorem 3.1. To show the symmetry, choose any x ∈ R. It must be verified that

⌊Cx⌋ + ⌊C − Cx⌋ + h((frac(Cx) − frac(C))/(1 − frac(C))) + h((frac(C − Cx) − frac(C))/(1 − frac(C))) = ⌊C⌋,

and hence

f_DG,1(x) + f_DG,1(1 − x)
  = (⌊Cx⌋ + ⌊C − Cx⌋ + h((frac(Cx) − frac(C))/(1 − frac(C))) + h((frac(C − Cx) − frac(C))/(1 − frac(C))))/⌊C⌋
  = 1.
and

h((frac(C − Cx) − frac(C))/(1 − frac(C))) = h((frac(C) − frac(Cx) + 1 − frac(C))/(1 − frac(C)))
  = h(1 − (frac(Cx) − frac(C))/(1 − frac(C)))
  = 1 − h((frac(Cx) − frac(C))/(1 − frac(C))),
3. The case frac(Cx) > frac(C) and frac(Cx) + frac(Cy) < 1 can only happen if
frac(C) < 1/2. One gets

d = h((frac(Cx) + frac(Cy) − frac(C))/(1 − frac(C))) − h((frac(Cx) − frac(C))/(1 − frac(C)))
    − h((frac(Cy) − frac(C))/(1 − frac(C))),

due to the monotonicity of h inside the interval [0, 1], and hence, we have again
d ≥ 0.
If (frac(Cx) − frac(C))/(1 − frac(C)) + (frac(Cy) − frac(C))/(1 − frac(C)) ≤ 1, then the monotonicity of h inside the
interval [0, 1] and (3.10) yield

h((frac(Cx) − frac(C))/(1 − frac(C))) + h((frac(Cy) − frac(C))/(1 − frac(C))) ≤ 1,

such that

d ≥ h((frac(Cx) + frac(Cy) − 1 − frac(C))/(1 − frac(C)))
  ≥ 0.
One has

d = 1 + h((frac(Cx) + frac(Cy) − 2·frac(C))/(1 − frac(C)) − 1) − h((frac(Cx) − frac(C))/(1 − frac(C)))
    − h((frac(Cy) − frac(C))/(1 − frac(C)))
  ≥ (k − 1)/(1 − frac(C)) · (frac(Cx) + frac(Cy) − 2·frac(C) − frac(Cx) + frac(C) − frac(Cy) + frac(C)) − 1
  = −1,

and the right-hand side in (3.12) is integer. If it equals −1, then the rounding
brackets do not change anything, and hence x, y ∈ I1. In this case, one gets

h((frac(Cx) − frac(C))/(1 − frac(C))) + h((frac(Cy) − frac(C))/(1 − frac(C))) − 1
  = (frac(Cx) − frac(C))/(1 − frac(C)) + (frac(Cy) − frac(C))/(1 − frac(C)) − 1
  = (frac(Cx) + frac(Cy) − 2·frac(C))/(1 − frac(C)) − 1
  = h((frac(Cx) + frac(Cy) − 2·frac(C))/(1 − frac(C)) − 1),
3.6 Building Maximal General Dual-Feasible Functions

In this section, we explore three different methods to build maximal general dual-
feasible functions by extending a given classical dual-feasible function to domain
and range R. The second and third approaches simply use affine-linear expressions
outside the interval [0, 1], where the extended function has the form

x ↦ t·x + c,

with t and c being constants, and t a sufficiently large value. The additive constants
are generally different for x < 0 and x > 1.
3.6.1 Method I
applied to the non-integer part of the given real argument, while a suitable multiple
of the integer part of the argument is added. These ideas are stated formally in the
following proposition.
Proposition 3.10 Let g: [0, 1] → [0, 1] be a maximal dual-feasible function,
and
and
1 ≤ b0 ≤ 2.
Fig. 3.6 Extending f_FS,1(·; k) to a general MDFF f for k ∈ {1, 2} by Proposition 3.10
3.6.2 Method II
– every Hölder-continuous function f is uniformly continuous, i.e., for all ε > 0,
there is a δ > 0 depending on ε only, such that |x − y| < δ implies |f(x) − f(y)| < ε;
– every uniformly continuous function is also continuous in the usual sense, where δ
may depend on x and y. Here, the converse holds under an additional prerequisite:
every continuous function on a compact set is uniformly continuous according to
Heine's theorem.
Examples of Lipschitz-continuous functions are x ↦ |x| and x ↦ e^|x| on a bounded
set, while the function x ↦ √(max{0, x}), for example, is not Lipschitz-continuous.
Proposition 3.11 Let p, t ∈ R and g: [0, 1] → [0, 1] be a maximal dual-feasible
function with

|g(x) − g(y)| ≤ t·|x − y|
for all x, y ∈ [0, 1], i.e. the function g is Lipschitz-continuous with L := t. Then, we
have t ≥ 1, and, for 1 ≤ p ≤ t, the following function f: R → R is a maximal
general dual-feasible function:

f(x) :=  t·x + 1 − p,  if x < 0;
         t·x + p − t,  if x > 1;
         g(x),         otherwise.
To calculate the smallest valid Lipschitz constant for g, the largest slope is needed.
Since g is differentiable in (0, 1), the supremum of the derivative is sought. One gets

g'(x) = 4x

for x ≤ 1/2 and

g'(1 − x) = g'(x),
Example 3.13 Let us again use the function (3.14) as an example. One gets for p ∈ N
and p ≤ k that

lim_{x↓p/(k+1)} f_FS,1(x) = p/k

and

t = (k + 1)/k,
because if (k + 1)·x ∈ N for a certain x ∈ [0, 1], then f_FS,1(x) = x. With these
choices, one obtains the following maximal general dual-feasible function f: R → R:

f(x) :=  t·x + 1 − t,      if x < 0;
         t·x,              if x > 1;
         x,                if (k + 1)·x ∈ {0, 1, …, k + 1};
         ⌊(k + 1)·x⌋/k,    otherwise.
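This extended function is easy to check mechanically. The following sketch (with k = 2, hence t = 3/2) verifies the symmetry f(x) + f(1 − x) = 1 on a grid; it is an illustration of the formula above, not code from the book:

```python
from fractions import Fraction
from math import floor

def f_ext(x, k):
    """Extension of f_FS,1(.; k) to R with t = (k + 1)/k (sketch of the formula above)."""
    t = Fraction(k + 1, k)
    if x < 0:
        return t * x + 1 - t
    if x > 1:
        return t * x
    m = (k + 1) * x
    if m == int(m) and 0 <= int(m) <= k + 1:   # (k + 1) x in {0, ..., k + 1}
        return x
    return Fraction(floor(m), k)

k = 2                                          # then t = 3/2
for i in range(-6, 13):                        # symmetry f(x) + f(1 - x) = 1
    x = Fraction(i, 6)
    assert f_ext(x, k) + f_ext(1 - x, k) == 1
```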
Fig. 3.8 Extending f_FS,1(·; k) to a general MDFF f for k ∈ {1, 2} by Proposition 3.12
3.6.4 Examples
3.6.4.1 Based on Method I
x + y = 2·frac(C)/C = 2·frac(C)/(⌊C⌋ + frac(C)) < 1,
the value d = 1 is not an overestimation. It yields b0 = 1 + 1/⌊C⌋ for the case
frac(C) ≥ 1/2.
Assume in the following that frac(C) < 1/2. In that case, we have k ≥ 3. We
want to explore the maximal possible value of d under this condition. The following
further cases arise:
1. frac(Cx) ≤ frac(C) < frac(Cy) yields two subcases.
If frac(Cx) + frac(Cy) < 1, then

d = h((frac(Cx + Cy) − frac(C))/(1 − frac(C))) − h((frac(Cy) − frac(C))/(1 − frac(C)))
  ≤ h(frac(Cy)/(1 − frac(C))) − h((frac(Cy) − frac(C))/(1 − frac(C)))
  ≤ ((k − 1)·frac(Cy)/(1 − frac(C)) + 1 − (k − 1)·(frac(Cy) − frac(C))/(1 − frac(C)))/k
  = (1 + (k − 1)·frac(C)/(1 − frac(C)))/k
  < 1,

and hence

h((frac(Cx) + frac(Cy) − 1 − frac(C))/(1 − frac(C))) = 0.
2. The case where frac(Cx) > frac(C) and frac(Cx) + frac(Cy) < 1 yields

d = h((frac(Cx) + frac(Cy) − frac(C))/(1 − frac(C))) − h((frac(Cx) − frac(C))/(1 − frac(C)))
    − h((frac(Cy) − frac(C))/(1 − frac(C))).

Hence,

d·k ≤ (k − 1)·(frac(Cx) + frac(Cy) − frac(C))/(1 − frac(C)) + 1
      − (k − 1)·(frac(Cx) − frac(C))/(1 − frac(C)) − (k − 1)·(frac(Cy) − frac(C))/(1 − frac(C))
  = (k − 1)/(1 − frac(C)) · (frac(Cx) + frac(Cy) − frac(C) − frac(Cx) + frac(C) − frac(Cy) + frac(C)) + 1
  = (k − 1)·frac(C)/(1 − frac(C)) + 1,

and hence d ≤ 2/k.
If frac(Cx) + frac(Cy) ≤ 1 + frac(C), then

h((frac(Cx) + frac(Cy) − 1 − frac(C))/(1 − frac(C))) = 0,

and hence

d = 1 − h((frac(Cx) − frac(C))/(1 − frac(C))) − h((frac(Cy) − frac(C))/(1 − frac(C)))
  ≤ (k − 2)/k.
For the sake of brevity, further details are omitted. The final result,

b0 = 1 + 1/⌊C⌋

for f_DG,1(·; C, k) with frac(C) ≥ 1/2, and a smaller b0 in the case 0 < frac(C) < 1/2,
can be compared with the corresponding b0 for the function f_BJ,1(·; C), which was
defined as a classical maximal dual-feasible function in Chap. 2 (see (2.4) at page 25).
The latter is also a general maximal dual-feasible function, as described in Example 3.1
on page 52. One gets similarly for this function the result

b0 = 1 + min{1, frac(C)/(1 − frac(C))}/⌊C⌋,

which in the case frac(C) ≥ 1/2 is the same value as for f_DG,1.
t1 := C/(⌊C⌋·(1 − frac(C)))

is the smallest valid Lipschitz constant. That can be seen from the term
frac(Cx)/(⌊C⌋·(1 − frac(C))) in the definition of f_BJ,1 in the intervals with slope,
i.e. where frac(Cx) > frac(C). The additional additive terms in the definition of the
function do not influence the
Fig. 3.9 Function f obtained from f_BJ,1 according to Proposition 3.11 for p = 1 and p = t
slope. Hence, for any t ≥ t1 and p ∈ [1, t], a general maximal dual-feasible function
f: R → R can be obtained by Proposition 3.11, namely

f(x) :=  t·x + 1 − p,  if x < 0;
         t·x + p − t,  if x > 1;
         (⌊Cx⌋ + max{0, (frac(Cx) − frac(C))/(1 − frac(C))})/⌊C⌋,  otherwise.
Example 3.14 Figure 3.9 shows the resulting functions when Proposition 3.11 is
used with f_BJ,1(·; 6/5), t := t1 = 3/2 and p ∈ {1, t}.
In the left case, one has for all x > 0 the strict inequality f(x) < t·x with

t = sup_{x>0} f(x)/x = lim_{x→∞} f(x)/x,
t < f .1/
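The construction of Example 3.14 can be reproduced as follows; f_bj1 and extend are illustrative names, and the code merely instantiates Proposition 3.11 with C = 6/5 and t = t1 = 3/2:

```python
from fractions import Fraction
from math import floor

def f_bj1(x, C):
    """Classical f_BJ,1(.; C) on [0, 1] (cf. (2.4)); exact arithmetic."""
    fC = C - floor(C)
    fCx = C * x - floor(C * x)
    return (floor(C * x) + max(Fraction(0), (fCx - fC) / (1 - fC))) / floor(C)

def extend(g, t, p):
    """Proposition 3.11: affine-linear continuation of g outside [0, 1]."""
    return lambda x: t * x + 1 - p if x < 0 else (t * x + p - t if x > 1 else g(x))

C, t = Fraction(6, 5), Fraction(3, 2)          # t = t1 = C / (floor(C) (1 - frac(C)))
for p in (Fraction(1), t):                     # the two cases shown in Fig. 3.9
    f = extend(lambda x: f_bj1(x, C), t, p)
    assert f(Fraction(-1)) + f(Fraction(2)) == 1   # symmetry outside [0, 1]
    assert f(Fraction(1, 2)) == Fraction(1, 2)
```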
Proposition 3.12 can be used for every maximal dual-feasible function g W Œ0; 1 !
Œ0; 1 due to the weaker prerequisites. It is only necessary to choose
The value of t0 is calculated in the sequel for f_DG,1(·; C, k) with C ∈ R∖N, C > 1,
and k ∈ N with k ≥ 1/frac(C), such that k ≥ 2.
Since f_DG,1 is a staircase function, only the right limits

lim_{y↓x} f_DG,1(y)/y

need to be considered, namely at the points x where

(k − 1)·(frac(Cx) − frac(C))/(1 − frac(C)) ∈ N.

Let

b := (k − 1)·(frac(Cx) − frac(C))/(1 − frac(C)).
such that

t ≥ C/⌊C⌋.  (3.15)

Furthermore, with a := ⌊Cx⌋, one has

b = (k − 1)/(1 − frac(C)) · (Cx − a − frac(C)).
Hence, we have

x = (b·(1 − frac(C))/(k − 1) + a + frac(C))/C

and

t ≥ ((a + (b + 1)/k)/⌊C⌋) / ((b·(1 − frac(C))/(k − 1) + a + frac(C))/C).  (3.16)
Between the two lower bounds (3.15) and (3.16) for t, we keep the largest one. Since
frac(C)·k ≥ 1 and b ≤ k − 2, the bound (3.15) turns out to be at least as large as
(3.16), such that the result is

t0 = C/⌊C⌋.
The extension of dual-feasible functions to more general domains is recent. The first
contributions in this field are due to Rietz et al. (2012b), Rietz et al. (2014) and Rietz
et al. (2015). In Rietz et al. (2012b), the authors discussed the first results on the
extension of dual-feasible functions to the domain of real numbers. In Rietz et al.
(2014), they explored the properties of these general dual-feasible functions. The
proof of Lemma 3.2 can be found in this paper, while Lemma 3.3 is an extension
of Lemma 3 of Rietz et al. (2010) to domain and range R. The proofs of the other
Fig. 3.10 Applying Proposition 3.12 to f_DG,1(·; 12/5, 3) and f_BJ,1(·; 12/5) for t = 6/5
properties described in Sect. 3.4 can also be found in Rietz et al. (2014). The methods
for constructing maximal general dual-feasible functions were first discussed in
Rietz et al. (2015).
3.8 Exercises
Suppose there is an x1 > 1/2 with f(x1) + f(1 − x1) < 1. Then, h must violate the
defining condition on dual-feasible functions, since f is a maximal dual-feasible
function and h(x) ≥ f(x), for all x. That requires that there are n ∈ N and
x2, …, xn ∈ R with

Σ_{i=1}^n x_i ≤ 1 and Σ_{i=1}^n h(x_i) > 1,

i.e.

Σ_{i=2}^n f(x_i) > f(1 − x1),

but the superadditivity and monotonicity of f yield

Σ_{i=2}^n f(x_i) ≤ f(Σ_{i=2}^n x_i) ≤ f(1 − x1).

That contradiction proves that the assumed x1 > 1/2 with f(x1) + f(1 − x1) < 1
does not exist. Similar considerations yield f(1/2) = 1/2.
4. Consider an instance of the one-dimensional cutting stock problem with a stock
length L equal to 122, and six different item lengths l = (62, 61, 50, 30, 20, 12)ᵀ
with corresponding demands equal to b = (2, 1, 1, 1, 2, 4)ᵀ.
(a) Calculate the material bound z_M := lᵀ·b/L.
(b) Provide a feasible integer solution such that the items are cut from no more
than ⌈z_M⌉ + 1 pieces of the initial material.
(c) Suppose that we may once use one unit of length more in the initial material,
i.e. one pattern may use the length 123 instead of 122. Provide an optimal
integer solution for this case.
(d) Calculate lower bounds for the given instance (both without and with the
extra length according to task (c)) using the function f_BJ,1 for all parameter
values C ∈ {L/ℓ_i, L/(L − ℓ_i)}. What is the maximum among these bounds for the
two cases?
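Task (a) amounts to one exact division; a minimal sketch with the instance data above (illustrative, not a full solution of the exercise):

```python
from fractions import Fraction
from math import ceil

L = 122
l = [62, 61, 50, 30, 20, 12]
b = [2, 1, 1, 1, 2, 4]

zM = Fraction(sum(li * bi for li, bi in zip(l, b)), L)   # z_M = l^T b / L
print(zM, ceil(zM))                                      # prints: 353/122 3
```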
5. Which of the following statements is true?
(a) Any general dual-feasible function is the sum of a linear and a bounded
function.
(b) Any general dual-feasible function is bounded from above by a linear
function.
6. Let F be the set of all functions f: R → R with the following two properties:
– For any finite set {x_i ∈ R : i ∈ I} of real numbers, the implication (3.2) holds,
i.e., Σ_{i∈I} x_i ≤ 0 ⟹ Σ_{i∈I} f(x_i) ≤ 0.
– Any function g: R → R with g(x) ≥ f(x) for all x ∈ R, which obeys a similar
implication as (3.2), is necessarily identical to f, such that f is maximal in this
sense.
Let L be the set of linear functions with nonnegative slope, i.e. L is the set of
functions f: R → R with f(x) = c·x, where c ≥ 0 is a constant. Show that
F = L.
Which of these functions are general dual-feasible functions? Which are maximal
general dual-feasible functions? Which properties of part (a) of Theorem 3.1 do these
functions possess?
8. Show using the following function g: [0, 1] → [0, 1] that Hölder-continuity
instead of Lipschitz-continuity is not enough for the construction method II
described in Sect. 3.6.2:

g(x) :=  (1 − √(1 − 2x))/2,  if x ≤ 1/2;
         (1 + √(2x − 1))/2,  otherwise.
Chapter 4
Applications for Cutting and Packing Problems
4.1 Introduction
Dual-feasible functions have been designed specifically for the cutting-stock prob-
lem. As shown in Chap. 1, they arise naturally from the dual of the classical
formulation of Gilmore and Gomory for this problem. Since many problems can
be modeled using a similar formulation, it makes sense to explore the concept of
dual-feasible function within a more general class of applications. A first approach
is to consider multi-dimensional dual-feasible functions, which can be used to derive
lower bounds for the vector packing problem. Here, we also consider different
packing problems with more complicated subproblems such as multi-dimensional
orthogonal packing and packing with conflicts. Dual-feasible functions can still be
derived in these cases.
From a linear programming point of view, it means that the function must
produce the values related to a dual solution that is valid for any cutting-stock
instance. Clearly, when one particular instance is considered, one may want to relax
this strong constraint, and generate a dual solution that is valid for this particular
instance.
The data dependency can be introduced by choosing a finite ground set I0, fixing
the numbers x_i ∈ [0, 1] for all i ∈ I0, and demanding the implication (4.1) only for
subsets I ⊆ I0. In this way, the order demands of bin-packing instances may also be
taken into account in the function f, as illustrated in the following example.
Example 4.1 Consider an instance of the 1-dimensional bin-packing problem with
four item lengths ℓ1 = 10, ℓ2 = 9, ℓ3 = 6 and ℓ4 = 4 and the respective order
demands b := (1, 1, 2, 1)ᵀ. The length 6 is demanded twice, while the other lengths
are needed only once. The items have to be packed into the minimal number of bins
of length L = 18.
One wants to define the function f such that

Σ_{i=1}^4 b_i·f(ℓ_i/L)
is a valid lower bound for the optimal objective function value. If f should be a
classical dual-feasible function, then among others
I0 := {1, …, 5},
and
is feasible for the data-dependent function, as can be checked easily. That yields
the lower bound 7/3. This bound could not be found by a classical dual-feasible
function, because the continuous relaxation of the given instance allows the use of
the patterns
(1, 0, 0, 2)ᵀ,
(0, 2, 0, 0)ᵀ,
(0, 0, 3, 0)ᵀ,
(1, 0, 1, 0)ᵀ,
each in the quantity 1/2, such that the bound becomes 2. Only the last pattern was
proper, and even this pattern did not use the given length L completely. □
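The fractional solution just described can be verified mechanically (a sketch with the instance data of Example 4.1):

```python
from fractions import Fraction

L_bin = 18
lengths = [10, 9, 6, 4]
demand = [1, 1, 2, 1]
patterns = [(1, 0, 0, 2), (0, 2, 0, 0), (0, 0, 3, 0), (1, 0, 1, 0)]

# every pattern fits into the bin length
assert all(sum(a * s for a, s in zip(p, lengths)) <= L_bin for p in patterns)

# taking each pattern 1/2 times covers the demand at total cost 2
lam = Fraction(1, 2)
cover = [sum(p[i] for p in patterns) * lam for i in range(4)]
assert all(c >= d for c, d in zip(cover, demand))
assert len(patterns) * lam == 2
```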
In the next section, we define formally this concept of data-dependent set-covering
dual-feasible function, i.e. functions that are designed for a specific instance.
4.2 Set-Covering Dual-Feasible Functions

min Σ_{c∈C(D_P)} v_c·λ_c  (4.2)

s.t. Σ_{c∈C(D_P)} a_{ic}·λ_c ≥ b_i,  ∀ i ∈ I  (4.3)

λ_c ≥ 0,  ∀ c ∈ C(D_P)  (4.4)
For two different instances of the same problem, a SC-DDFF can be valid for
one and not for the other. Unlike classical dual-feasible functions, set-covering dual-
feasible functions apply to indices i instead of the sizes `i . This is due to the fact
that in the cutting-stock problem, the size `i of an item is sufficient to characterize
the element, whereas in a more general context, elements may be more complex
(several dimensions, a vertex in a graph, for example). In this formalism, geometric
constraints of packing applications are modeled as a set of feasible patterns (the set
If f is a SC-DDFF dependent on instance D_P, then Σ_{i∈I} b_i·f(i) is a valid lower bound for
model (4.2)–(4.4) applied to instance D_P.
Once a set-covering dual-feasible function is designed, it is possible to generate
a large number of other set-covering dual-feasible functions by applying a
cutting-stock dual-feasible function to the values obtained.
Proposition 4.3 (Composition of SC-DFF and CS-DFF) The composition of a
SC-DFF g and a CS-DFF f is a SC-DFF, i.e. f(g(·)) is a SC-DFF.
In the remainder of this chapter we give several examples of dual-feasible functions
for different hard combinatorial problems.
4.3 Vector Packing Dual-Feasible Functions

Lᵀ·a ≤ w.
Fig. 4.1 Optimal solution of a 2-dimensional vector packing problem instance (Example 4.2)
and demands b = (1, 2, 1)ᵀ, such that the second item is demanded twice. The only
optimal solution consists in using the two feasible patterns
each in the quantity 1. This solution is illustrated in Fig. 4.1. The remaining space
in the two bins is
such that one additional instance of the third item would have fit into the first bin.
This example also demonstrates the following difference between the vector
packing problem and the 1-dimensional cutting-stock problem. In the latter, the
material bound is generally weak, but it is always above half of the optimal objective
function value, and it can be used for some theoretical estimations. However, in the
vector packing problem, the material bound becomes even weaker, because it is only
subadditive and no longer additive when more items are added. In this example, the
material bound would yield 15/16 usage of the first bin in the first dimension, and 3/4
for the second bin in the second dimension, while the total material bound is only
23/16 < 15/16 + 3/4. □
A straightforward way of computing lower bounds for the mD-VPP (m > 1)
would be to consider each dimension of the multi-dimensional problem indepen-
dently and to compute a lower bound for each of the related m instances of the
1-dimensional bin-packing problem separately, for example by applying a dual-
feasible function to the obtained data. However, this may lead to arbitrarily bad
results. Consider as an example an instance, where each item i has a size equal to 1
on dimension i and " on the other dimensions, " > 0 sufficiently small. Any bound
based on that decomposition into m independent 1-dimensional problems yields the
optimal value 2 for all the m problems, such that finally at most two is obtained as
lower bound, although one needs m bins to pack all the items. By increasing the
value of m, the ratio between the optimal objective function value and the obtained
lower bound can become arbitrarily bad.
Σ_{i=1}^n a_i·f(l_i) ≤ 1.
A bound of 8 could be achieved if we had e.g. a VP-DFF f: [0, 1]² → [0, 1]
with

f(7/8, 3/8) = 1,  f(1/4, 3/8) = 1/2,  f(1/16, 5/16) = 0,

because

b1·1 + b2·(1/2) + b3·0 = 8.

Note that such a function f can be constructed according to Proposition 4.9 (p. 106),
which will be introduced later as a general construction principle, with u := (1/4, 0)ᵀ
and with g being the function (4.8) (p. 100) with parameter C = 8/3. □
We now show that some properties for the 1-dimensional case can be generalized
to the multidimensional case, and we give a complete characterization of maximal
functions for the m-dimensional case. Finally, we show how to build such maximal
functions from non-maximal superadditive ones by forcing symmetry.
The necessary conditions from the 1-dimensional case for a function to be
maximal are still valid for the higher-dimensional case. However, it has to be
checked how the higher-dimensional case can be described and if stronger sufficient
conditions are needed. These ideas led to the following theorems.
Theorem 4.1 Any VP-MDFF f: [0, 1]^m → [0, 1] necessarily has the following
properties:
1. f is superadditive, i.e. for all x, y ∈ [0, 1]^m with x + y ≤ w, it holds that

f(x) + f(y) ≤ f(x + y);  (4.5)

2. f is non-decreasing:

f(x) ≤ f(y), if o ≤ x ≤ y ≤ w.

Sufficient conditions involving the symmetry condition

f(x) + f(w − x) = 1  (4.6)

can be derived too, as stated in Theorem 4.2. This theorem may help to simplify the
proofs of maximality in the next sections. Before introducing these new sufficient
conditions, first in Lemma 4.1 an additional assertion is described that is useful to
prove the maximality of a VP-DFF.
Lemma 4.1 If a VP-DFF f W Œ0; 1m ! Œ0; 1 satisfies the symmetry condition (4.6),
then f is a VP-MDFF.
This is obvious since for such a symmetric VP-DFF f , if there is a VP-DFF g
such that g.x/ > f .x/ for a given x, then g.w x/ < f .w x/ must hold (otherwise
g.x/ C g.w x/ > 1).
Example 4.4 Consider the 2-dimensional case, and let f: [0, 1]² → [0, 1] be the
following function:

f(x) :=  0,    if x ≤ (1/2)·w and x ≠ (1/2)·w;
         1,    if x ≥ (1/2)·w and x ≠ (1/2)·w;   (4.7)
         1/2,  otherwise.
This function is a VP-DFF, because for any finite set of vectors x1, …, xn ∈ [0, 1]²
with Σ_{i=1}^n x_i ≤ w, one has Σ_{i=1}^n f(x_i) ≤ 1, as can be checked easily. Additionally,
since the symmetry condition holds, f is maximal, and hence it is a VP-MDFF. □
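A sketch of the function (4.7) for m = 2; the componentwise reading of the comparisons with (1/2)·w is an assumption taken from the reconstruction above:

```python
def f47(x, w=(1.0, 1.0)):
    """The function (4.7) for m = 2 (sketch): 0 strictly below w/2, 1 strictly above, 1/2 otherwise."""
    half = tuple(wi / 2 for wi in w)
    if all(xi == hi for xi, hi in zip(x, half)):
        return 0.5
    if all(xi <= hi for xi, hi in zip(x, half)):
        return 0.0
    if all(xi >= hi for xi, hi in zip(x, half)):
        return 1.0
    return 0.5                      # incomparable with w/2

# symmetry f(x) + f(w - x) = 1 on a grid
for i in range(5):
    for j in range(5):
        x = (i / 4, j / 4)
        wx = (1 - x[0], 1 - x[1])
        assert f47(x) + f47(wx) == 1
```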
The following theorem restricts the sufficient conditions needed for maximality
proofs. The idea came from the 1-dimensional case; here, a certain dimension can
be chosen twice, where the weaker sufficient conditions of the 1-dimensional
cutting stock problem are applied.
Theorem 4.2 Given two constants r, s ∈ {1, …, m} and a function f: [0, 1]^m →
[0, 1], the following conditions are sufficient for f to be a VP-MDFF:
1. Equation (4.6) is true for all x ∈ [0, 1]^m with x_r ≥ 1/2;
2. Inequality (4.5) holds for all x, y ∈ [0, 1]^m with x + y ≤ w, x_s ≤ y_s ≤ 1/2
and x_s + y_s ≤ 2/3.
The following propositions state that the functions resulting from the convex
combination of VP-MDFF or from the composition of a VP-MDFF with a maximal
dual-feasible function remain maximal.
Proposition 4.4 Any convex combination of VP-MDFF is a VP-MDFF.
Example 4.5 Consider again the 2-dimensional case. The following function g:
[0, 1]² → [0, 1] is a VP-MDFF:

g(x) := (x1 + x2)/2.
The convex combination of g with the function (4.7), with the weights λ ∈ [0, 1] and
1 − λ, respectively, yields another VP-MDFF h: [0, 1]² → [0, 1], namely

h(x) =  (λ/2)·(x1 + x2),              if x ≤ (1/2)·w and x ≠ (1/2)·w;
        (λ/2)·(x1 + x2) + 1 − λ,       if x ≥ (1/2)·w and x ≠ (1/2)·w;
        (λ/2)·(x1 + x2) + (1 − λ)/2,   otherwise. □
Proposition 4.5 The composition of a VP-MDFF f with a MDFF g, i.e. g(f(·)), is
a VP-MDFF.
Example 4.6 Let f: [0, 1]^m → [0, 1] be simply

f(x) := ‖x‖₁/m = (x1 + ⋯ + xm)/m. □
We now describe how a VP-MDFF can be built from a superadditive m-
dimensional vector function by forcing symmetry. This result generalizes Theo-
rem 2.4.
Proposition 4.6 Let f: [0, 1]^m → [0, 1] be a superadditive function, and M be any
subset of [0, 1]^m ∖ {(1/2)·w} such that:
1. for all x ∈ [0, 1]^m ∖ {(1/2)·w}, the following equivalence holds:

x ∈ M ⟺ w − x ∉ M;

2. for all x, y ∈ M, it holds that

x + y ≰ w.  (4.9)
Example 4.7 There are various ways to choose the set M in Proposition 4.6, for
instance as the union of m parts according to

M := [0, 1] × [0, 1] × ⋯ × [0, 1] × (1/2, 1]
   ∪ [0, 1] × ⋯ × [0, 1] × (1/2, 1] × {1/2}
   ∪ ⋯
   ∪ (1/2, 1] × {1/2} × ⋯ × {1/2}.

For m = 2, this set becomes ([0, 1] × (1/2, 1]) ∪ ((1/2, 1] × {1/2}), i.e. the upper half of the
unit square, where the border belongs only partially to M. □
Example 4.8 Additionally, M could be chosen for example as follows, where the
interior of M is a triangle:

M := {x ∈ [0, 1]² : x1 + x2 > 1} ∪ {x ∈ (1/2, 1] × [0, 1] : x1 + x2 = 1}.

The resulting function obtained by applying Proposition 4.6 to a random superaddi-
tive VP-DFF is depicted in Fig. 4.2. □
Fig. 4.2 Using Proposition 4.6 to build a VP-DFF. For the set M, the dashed line and the point
(1/2, 1/2) do not belong to M, while the solid line does
In this section, several general classes of VP-MDFF are described. When general
schemes for generating VP-MDFF are proposed, some specific functions that can
be obtained from these schemes are described and analyzed.
The first set of VP-MDFF is built from the projection of the m-dimensional data
into 1-dimensional domains. A formal definition of these VP-MDFF is given in
Proposition 4.7.
Proposition 4.7 Let g: [0, 1] → [0, 1] be a MDFF and u ∈ R₊^m with uᵀ·w = 1.
The function

f_I(x; g, u) := g(uᵀ·x)

is a VP-MDFF.
Using the MDFF fFS;1 in Proposition 4.7 yields the function described in
Corollary 4.1.
Corollary 4.1 Let v ∈ R₊^m be such that vᵀ·w ∈ N∖{0, 1}. The following function
is a VP-MDFF:

f_I,FS,1(x; v) :=  vᵀ·x/(vᵀ·w),        if vᵀ·x ∈ N;
                   ⌊vᵀ·x⌋/(vᵀ·w − 1),  otherwise.
Applying Proposition 4.7 with the MDFF f_BJ,1 leads to the following function.
Corollary 4.2 Let v ∈ R₊^m be any vector with vᵀ·w ≥ 1. Then, the function

f_I,BJ,1(x; v) := (⌊vᵀ·x⌋ + max{0, (frac(vᵀ·x) − frac(vᵀ·w))/(1 − frac(vᵀ·w))})/⌊vᵀ·w⌋

is a VP-MDFF.
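A sketch of f_I,BJ,1 for w = (1, 1), using exact rationals; the vector v = (3/4, 1/2)ᵀ is an arbitrary choice with vᵀ·w ≥ 1, picked only for illustration:

```python
from fractions import Fraction as F
from math import floor

def f_i_bj1(x, v):
    """Corollary 4.2 (sketch), for w = (1, ..., 1): f_BJ,1 applied to v^T x."""
    s = sum(vi * xi for vi, xi in zip(v, x))   # v^T x
    sw = sum(v)                                 # v^T w for the all-ones w
    fw, fs = sw - floor(sw), s - floor(s)
    return (floor(s) + max(F(0), (fs - fw) / (1 - fw))) / floor(sw)

v = (F(3, 4), F(1, 2))                          # v^T w = 5/4 >= 1
for i in range(5):                              # symmetry f(x) + f(w - x) = 1
    for j in range(5):
        x = (F(i, 4), F(j, 4))
        wx = (1 - x[0], 1 - x[1])
        assert f_i_bj1(x, v) + f_i_bj1(wx, v) == 1
```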
Note that the function f_I,BJ,1(·; v) of Corollary 4.2 is only a convex combination of
the projections f1, …, fm if vᵀ·w ∈ N.
Example 4.9 To demonstrate how a function f of Class I can be built, consider
the simple function f_MT,0(·; 1/2), defined in Formula (2.16), p. 48, applied to each
dimension, with different vectors u. We restrict the example to two dimensions.
If u = (1, 0)ᵀ, then

f(x) = 0, if x1 < 1/2;  f(x) = 1/2, if x1 = 1/2;  f(x) = 1, if x1 > 1/2.

If u = (3/4, 1/4)ᵀ, then

f(x) = 0, if (3/4)·x1 + (1/4)·x2 < 1/2;  f(x) = 1/2, if (3/4)·x1 + (1/4)·x2 = 1/2;
f(x) = 1, if (3/4)·x1 + (1/4)·x2 > 1/2.

If u = (1/2, 1/2)ᵀ, then

f(x) = 0, if x1 + x2 < 1;  f(x) = 1/2, if x1 + x2 = 1;  f(x) = 1, if x1 + x2 > 1.
The behaviour of these functions, and other examples of parameters, is illustrated
in Fig. 4.3. □
Fig. 4.3 Behaviour of class I functions for u = (1, 0)ᵀ, (3/4, 1/4)ᵀ, (1/2, 1/2)ᵀ and
(0, 1)ᵀ, respectively, when the function f_MT,0(·; 1/2) is used
Example 4.10 Recall Example 1.7, p. 14. After scaling the values of the weights
and volumes, the 2-dimensional items have the following sizes:

Item           1     2     3     4     Vehicle
Volume (x1)    0.5   0.75  0.25  0.5   1
Weight (x2)    0.6   0.4   0.8   0.2   1
Using the function f_MT,0(·; 1/2) for different values of u, the sizes of the items are
mapped into the values indicated in the following table.
Note that the several DFF in the family provide the dual feasible solutions û1 =
(1/2, 1, 0, 1/2)ᵀ, û2 = (1, 1, 0, 0)ᵀ, û3 = (1, 1, 1, 0)ᵀ and û4 = (1, 0, 1, 0)ᵀ indicated
in Example 1.7.
As all the demands are equal to 1, the value of the lower bound is equal to the
sum of the elements of û. The function with u = (1/2, 1/2)ᵀ that provides the dual
feasible solution û3 = (1, 1, 1, 0)ᵀ is the one that yields the best lower bound, equal
to 3. □
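The computation of Example 4.10 can be retraced in a few lines (a sketch; the item data are those of the table above, written as exact rationals):

```python
from fractions import Fraction as F

def f_mt0(y):
    """f_MT,0(.; 1/2): 0 below 1/2, 1/2 at 1/2, 1 above (sketch)."""
    if y < F(1, 2):
        return F(0)
    return F(1, 2) if y == F(1, 2) else F(1)

items = [(F(1, 2), F(3, 5)), (F(3, 4), F(2, 5)),
         (F(1, 4), F(4, 5)), (F(1, 2), F(1, 5))]

def class_I(u, x):                 # f_I(x; g, u) = g(u^T x)
    return f_mt0(u[0] * x[0] + u[1] * x[1])

u = (F(1, 2), F(1, 2))
u_hat3 = [class_I(u, x) for x in items]
assert u_hat3 == [1, 1, 1, 0]      # the dual solution u^3 from the text
assert sum(u_hat3) == 3            # lower bound 3 (all demands are 1)
```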
4.3.3.2 Class II
Some of the ideas of the 1-dimensional MDFF can be adapted for the m-dimensional
vector packing problem. For instance, the function which maps small items to zero
and large ones to 1, while the other items remain unchanged, can be generalized
in the following way. Note that the difficulty in this generalization lies in finding a
suitable definition of small and large items when vectors are involved.
Proposition 4.8 Let h: [0, 1]^m → R be non-decreasing with h(x) + h(w − x) > 0
for all x ∈ [0, 1]^m, and let g: [0, 1]^m → [0, 1] be a VP-MDFF. The following
functions f_II1, f_II2: [0, 1]^m → [0, 1] are VP-MDFF:

f_II1(x) :=  0, if h(x) ≤ 0;  1, if h(w − x) ≤ 0;  g(x), otherwise;
f_II2(x) :=  0, if h(x) < 0;  1, if h(w − x) < 0;  g(x), otherwise.
Corollary 4.3 Let u ∈ [0, 1/2]^m, and let g: [0, 1]^m → [0, 1] be a VP-MDFF. The
following functions f_II3(·; g, u), f_II4(·; g, u): [0, 1]^m → [0, 1] are also VP-MDFF:

f_II3(x; g, u) :=  0, if x ≤ u and x ≠ (1/2)·w;
                   1, if x ≥ w − u and x ≠ (1/2)·w;
                   g(x), otherwise;

f_II4(x; g, u) :=  0, if x < u;
                   1, if x > w − u;
                   g(x), otherwise.
Corollary 4.4 Let g: [0,1]^m → [0,1] be a VP-MDFF and ε ∈ (0, ‖w‖_p/2). The following function f_{II5}: [0,1]^m → [0,1] is a VP-MDFF:

f_{II5}(x) :=
  0,    if ‖x‖_p ≤ ε,
  1,    if ‖w − x‖_p ≤ ε,
  g(x), otherwise.
Corollary 4.5 Let g: [0,1]^m → R be any non-decreasing function, and let r ∈ {1, …, m}. The following function f_{II6}: [0,1]^m → [0,1] is a VP-MDFF:

f_{II6}(x) :=
  0,   if 2w^T x < m and g(x) < 0,
  1,   if 2w^T x > m and g(w − x) < 0,
  x_r, otherwise.
Using u := (1/3, 1/3)^T results in the function f_{II3}(·; g, u) depicted in the right part of Fig. 4.4. □
Fig. 4.4 Applying Corollary 4.3 to the left-hand VP-MDFF, using u = (1/3, 1/3)^T
4.3.3.3 Class III
The following proposition describes another VP-MDFF. First the general function for the m-dimensional case is defined, then a special case for m = 2 is given in Corollary 4.6. The rationale behind this function consists in assigning the value 0 (or 1) to items that are very small (respectively very large) in some dimension, unless they are very large (respectively very small) in another dimension. The remaining items are mapped via any other VP-MDFF of appropriate dimension.
Proposition 4.9 Let g: [0,1]^m → [0,1] be a VP-MDFF and u ∈ [0, 1/2]^m. The following function f_{III1}: [0,1]^m → [0,1]

f_{III1}(x) :=
  0,    if ∃ i ∈ {1, …, m}: x_i < u_i and ∄ j ∈ {1, …, i} with x_j > 1 − u_j,
  1,    if ∃ i ∈ {1, …, m}: x_i > 1 − u_i and ∄ j ∈ {1, …, i} with x_j < u_j,
  g(x), otherwise,

is a VP-MDFF.
Corollary 4.6 The function f_{III2}(·; u1, u2, r, q): [0,1]^2 → [0,1] with 0 ≤ u1, u2 ≤ 1/2 and r, q ∈ {1, 2}, which is defined as

f_{III2}(x; u1, u2, r, q) :=
  1,   if x_q > 1 − u1 or (x_q ≥ u1 and x_{3−q} > 1 − u2),
  x_r, if 1 − u1 ≥ x_q ≥ u1 and 1 − u2 ≥ x_{3−q} ≥ u2,
  0,   otherwise,

is a VP-MDFF.
Example 4.12 Consider an example in dimension 2 and choose r = 1, q = 2, u1 = 1/4 and u2 = 1/3:

f_{III2}(x; 1/4, 1/3, 1, 2) :=
  1,   if x2 > 3/4 or (x2 ≥ 1/4 and x1 > 2/3),
  x1,  if 3/4 ≥ x2 ≥ 1/4 and 2/3 ≥ x1 ≥ 1/3,
  0,   otherwise.
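As a sketch (our own transcription, not the book's code), the function of Example 4.12 can be written directly from its piecewise definition with u1 = 1/4, u2 = 1/3, r = 1, q = 2:

```python
def f_iii2(x1, x2):
    # f_III2(x; 1/4, 1/3, 1, 2): 1 for items large in x2 (or large in x1
    # while not tiny in x2), x1 in the central region, 0 otherwise
    if x2 > 0.75 or (x2 >= 0.25 and x1 > 2 / 3):
        return 1.0
    if 0.25 <= x2 <= 0.75 and 1 / 3 <= x1 <= 2 / 3:
        return x1
    return 0.0

# central region returns x1; very large / very small items map to 1 / 0
print(f_iii2(0.5, 0.5), f_iii2(0.1, 0.9), f_iii2(0.1, 0.1))
```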
4.3.3.4 Class IV
In this subsection, another class of VP-MDFF in its general form and a special-
ization for the 2-dimensional case are described. This VP-MDFF depends on a
non-decreasing function g whose properties are stated in the next proposition. How
g should be chosen is explained at the end of the section.
Proposition 4.10 Let m ∈ N \ {0}, k ∈ (1/3, 1/2], and let g: [0,1]^m → [0,1] be a non-decreasing function with

The following VP-MDFF is similar, but it differs for x1 ∈ {0, 1}, i.e. in these cases it may take other function values.

Proposition 4.11 Let g be a non-decreasing function defined from [0,1] to [0,1], such that
is a VP-MDFF.
In the following proposition, we show how the function g in Proposition 4.11 should be defined so as to get the best lower bound that can be obtained from the corresponding VP-MDFF f_{IV3}(·; g, k) for the 2D-VPP.

Proposition 4.12 Given an instance of the 2D-VPP, the best lower bound based on the function f_{IV3}(·; g, k) of Proposition 4.11 can be found with the following function g: [0,1] → {0, 1/2, 1} that depends on the parameters s, t ∈ R with 0 ≤ s ≤ 1/2 and s ≤ t ≤ 1 − s:

g(x) :=
  0,   if x < s,
  1/2, if s ≤ x ≤ t,
  1,   if x > t.
Example 4.13 For a 2-dimensional example, let us choose s = 1/4, t = 1/2 and k = 2/5. The obtained function is depicted in Fig. 4.6. □
4.3.3.5 Class V
Let s, t ∈ (0,1]^m be two constant vectors. Let u1, u2 be feasible (but not necessarily optimal) dual values for an instance of the m-dimensional vector packing problem with two items of sizes s and t, each demanded at least once. The following proposition describes a new family of superadditive VP-DFF. Recall that these functions can be transformed into VP-MDFF by enforcing symmetry via Proposition 4.6, p. 100.

Proposition 4.13 The function f_V(·; s, t, u1, u2): [0,1]^m → [0,1] is a superadditive VP-DFF:
max_{i∈{1,…,m}} s_i    max_{i∈{1,…,m}} t_i

Since one has only two items s, t > o, the function value can be easily calculated by trying all possible numbers a2 ∈ N, and setting

a1 := min_{i∈{1,…,m}} (x_i − a2 t_i)/s_i.

These calculations require the same effort as dynamic programming with exactly two different items. Hence, the complexity of computing the appropriate a1 and a2 is pseudo-polynomial.
Example 4.14 Consider the 2-dimensional example with s = (0.5, 0.2)^T and t = (0.5, 0.6)^T. Pick the feasible dual values u1 = 0.5 and u2 = 0.5.
The resulting superadditive but not maximal function is depicted in Fig. 4.7. It can be formulated as follows:

f(x) =
  1,   if x1 = 1 and x2 ≥ 0.4,
  0,   if x1 < 0.5 or x2 < 0.2,
  1/2, otherwise. □
g(x) := ∏_{j=1}^{m} f_j(x_j / w_j)
Example 4.15 Let m = 2, w = (10, 10) and consider four items of size (6, 6). The trivial lower bound is equal to ⌈4 · (36/100)⌉ = 2. By taking f1 = f_{MT,0}(·; 1/2) and f2 = f_{MT,0}(·; 1/2), one obtains a bound equal to ⌈4 · (1 · 1)⌉ = 4. □
If the identity function is used for f_j, j = 1, …, m, the classical bound based on the surface/volume of the bins is obtained. No actual m-dimensional dual-feasible functions have been derived in the literature (i.e. dual-feasible functions that would not consider the problem dimension by dimension). This can be explained by the fact that characterizing the set of feasible patterns is hard. Even verifying that a pattern is feasible is NP-complete.
The first result derives from a simple fact. For a set of CS-DFF f1, …, fm, if a lower bound for the oriented case based on these functions is run for all possible orientations of the items, and the minimum is recorded, a valid lower bound is obtained. Of course, computing this bound would need exponential time, since it would take (m!)^n lower bound computations. Nevertheless a lower bound can be computed by considering the following relaxation: for each item i, keep the smallest image that it can have over its possible orientations. This leads to n · m! values to compute, which can lead to a practical method for lower bounding. Let S be the set of the m! permutations σ = (σ(1), …, σ(m)) representing all possible orientations in m dimensions.
Proposition 4.15 Let f_j, j = 1, …, m, be CS-DFF. The following function φ1: R^m → [0,1] is an m-OPP-R-DFF:

φ1(x) := min_{σ∈S: x_{σ(j)} ≤ w_j, j=1,…,m}  ∏_{j=1}^{m} f_j(x_{σ(j)} / w_j).
A better m-OPP-R-DFF, which dominates the previous one if the f_j are increasing and superadditive, and if the container has equal size in each dimension, is now described.
Proposition 4.16 Let f_j, j = 1, …, m, be m CS-DFF. If the instance of m-OPP-R is such that w1 = w2 = … = wm, then the following function φ2: R^m → [0,1] is an m-OPP-R-DFF:

φ2(x) := (1/m!) ∑_{σ∈S} ∏_{j=1}^{m} f_j(x_{σ(j)} / w_j).
The result is not intuitive, but it becomes obvious when the following relaxation is considered. From an m-OPP-R instance, construct an m-OPP-O instance I′ of size m! · n where each item is repeated once for each of its orientations. Clearly, the value of an optimal solution for this new problem cannot be more than m! times the value of an optimal solution for the original m-OPP-R instance (see Fig. 4.8).
Example 4.16 Take m = 2, w = (10, 10) and four identical items (8, 3), and choose the identity function for f1 and f2 = f_{MT,0}(·; 0.25), where the latter function was defined previously.

Fig. 4.8 An example of the relaxation of the OPP-R for m = 2. Note that for any solution for the OPP-R using z bins, there is always a solution of the duplicated OPP-O problem using 2z bins

φ1((8, 3)) = min{0.8 · f_{MT,0}(0.3; 0.25), 0.3 · f_{MT,0}(0.8; 0.25)} = min{0.8 · 0.3, 0.3 · 1} = 0.24. The bound obtained is equal to ⌈4 · 24/100⌉ = 1.
φ2((8, 3)) = (0.8 · f_{MT,0}(0.3; 0.25) + 0.3 · f_{MT,0}(0.8; 0.25))/2 = (0.24 + 0.3)/2 = 0.27. The bound obtained is equal to ⌈4 · 0.27⌉ = 2. □
4.5 Bin-Packing
Similarly to Cutting-Stock DFF (CS-DFF), which lead to lower bounds for the cutting-stock problem, one can define the notion of Bin-Packing DDFF (BP-DDFF) by specifying the subproblem of Definition 4.1 as a binary knapsack problem. The difference between the two classes of functions is that the polyhedral subproblem used in the definition is not the same (an unbounded knapsack problem for the cutting-stock problem, and a binary knapsack problem for bin-packing). Actually, any CS-DFF is a BP-DDFF, but the converse is generally not true.
Proposition 4.17 Let D = (I, l) be a BP instance, J ⊆ I a set of pairwise incompatible items (ℓ_i + ℓ_j > 1, ∀i, j ∈ J), and let α ∈ R^n_+. The following function g1: I → [0,1] is a BP-DDFF defined for D:

g1(i) :=
  1,                                                      if i ∈ J and KP-01(1, I \ J, l, α) = 0,
  1 − KP-01(1 − ℓ_i, I \ J, l, α) / KP-01(1, I \ J, l, α), if i ∈ J and KP-01(1, I \ J, l, α) ≠ 0,
  0,                                                      if i ∈ I \ J and α_i = 0,
  α_i / KP-01(1, I \ J, l, α),                            if i ∈ I \ J and α_i ≠ 0.
The percentage of the bin taken by each small item i is equal to α_i, and the sizes of the large items are computed by solving the knapsack problem described above. Note that in some degenerate cases, the image of an item in J may be smaller than the image of an item in I \ J.
Example 4.17 Consider a BP instance (I, l), with I = {1, …, 10},

l = (0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.3, 0.3, 0.6, 0.6),

J = {9, 10} and α = (0, 0, 0, 1, 1, 1, 1, 1) on I \ J. One obtains

KP-01(1, {1, …, 8}, l, α) = 3,

and therefore

g1(1) = g1(2) = g1(3) = 0,
g1(4) = g1(5) = g1(6) = g1(7) = g1(8) = 1/3,
g1(9) = g1(10) = 1 − 1/3 = 2/3.
Note that the obtained values cannot be computed by means of a dual-feasible function, since g1(4) = 1/3, which would not be possible because ℓ_4 = 0.2 and, for any dual-feasible function g, g(0.2) ≤ 1/5 (otherwise the dual-feasibility condition would not hold for five items of size 0.2). □
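A sketch of Example 4.17's computation (our code, with the assumption J = {9, 10} and α = 1 on items 4–8, which is consistent with the values above; sizes are scaled by 10 so the 0-1 knapsack DP runs on integers):

```python
from fractions import Fraction

def kp01(cap, items):
    # 0-1 knapsack by DP: items are (integer size, alpha-profit) pairs
    dp = [0] * (cap + 1)
    for size, profit in items:
        for c in range(cap, size - 1, -1):
            dp[c] = max(dp[c], dp[c - size] + profit)
    return dp[cap]

sizes = [1, 1, 1, 2, 3, 3, 3, 3, 6, 6]   # l * 10 for items 1..10
alpha = [0, 0, 0, 1, 1, 1, 1, 1]         # coefficients on I \ J = {1..8}
free = list(zip(sizes[:8], alpha))
total = kp01(10, free)                   # KP-01(1, I \ J, l, alpha) = 3

def g1(i):                               # item index i is 1-based
    if i in (9, 10):                     # i in J
        return 1 - Fraction(kp01(10 - sizes[i - 1], free), total)
    return Fraction(alpha[i - 1], total) if alpha[i - 1] else Fraction(0)

print([g1(i) for i in range(1, 11)])     # 0,0,0, five times 1/3, then 2/3, 2/3
```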
The knapsack problems involved are NP-hard in the general case. However, they can be solved in pseudo-polynomial time using dynamic programming. When the size of the bin is large, this may entail a large computing time. In this case, the set of parameters α should be chosen in such a way that the knapsack problem can be solved in polynomial time (for example α_i = 1, ∀i ∈ I \ J).
4.6 Bin-Packing Problem with Conflicts

The knapsack problem with conflicts (KPC) is defined as

KPC(W, J, l, α, E) = max{ ∑_{i∈J} α_i x_i : ∑_{i∈J} ℓ_i x_i ≤ W; x_i + x_j ≤ 1, ∀(i, j) ∉ E; x_i ∈ {0,1}, ∀i ∈ J }.
In the following proposition, and in the remainder of the book, for a vertex i, N(i) = {i} ∪ {j : (i, j) ∈ E} is the neighbourhood of i.

Proposition 4.18 Let D = (I, l, E) be a BPC instance, J a set of pairwise incompatible items, and {α_i ∈ R_+, i ∈ I \ J} a list of coefficients. The function g1(·; J, α): I → [0,1] is a BPC-DDFF:

g1(i) :=
  1 − KPC(1 − ℓ_i, N(i), l, α) / KPC(1, I \ J, l, α), if i ∈ J,
  α_i / KPC(1, I \ J, l, α),                           if i ∈ I \ J.
and KPC(W, J, l, α) = 2:

g1(3; J, α) = g1(4; J, α) = 1/2,
g1(1; J, α) = 1 − 0 = 1,
g1(2; J, α) = 1 − 1/2 = 1/2.
One way of solving this problem is to compute all maximal cliques of the conflict graph, and then to solve for each clique the associated knapsack problem. The maximum value obtained over all cliques is the optimal value for the knapsack problem with conflicts. This approach is tractable only if the cliques are small in number and can be computed with a small complexity. Neither of the two conditions is fulfilled when a random graph is considered. For this method to be tractable, the problem can be relaxed by adding edges to the compatibility graph in such a way that it becomes triangulated. A graph G is triangulated if for every cycle of length k > 3, there is a chord joining two non-consecutive vertices. Any triangulated graph G has at most n maximal cliques. In addition, they can be computed in linear time. Finding the minimum set of edges to add in order to obtain a triangulated graph is an NP-hard problem, so a heuristic should be used.
Suppose the set I of items can be decomposed into two sets I1 and I2 of pairwise incompatible items. In this case, two different dual-feasible functions f and g can be applied to I1 and I2, since the instance can be decomposed into two distinct sub-instances. Now, if there is a third set I3 where each item is compatible with some items of I1 and I2, each item of I3 will be packed either with items of I1, with items of I2, or with neither of these items, but not with both. This leads to the following BPC-DDFF, which depends on two CS-DFF f and g.
Proposition 4.19 Let D = (I, l, E) be an instance of BPC, and let (I1, I2, I3) be a partition of I such that E ∩ {(i1, i2) : i1 ∈ I1, i2 ∈ I2} = ∅. Let also f and g be two CS-DFF. The function h(·; f, g, I1, I2): I → [0,1] defined as follows is a BPC-DDFF:

i ↦
  f(ℓ_i),              if i ∈ I1,
  g(ℓ_i),              if i ∈ I2,
  min{f(ℓ_i), g(ℓ_i)}, otherwise.
i ↦ min_{s∈S_i} f_s(ℓ_i)
and

E = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4), (3,5), (4,5), (4,6), (5,7), (6,7)}.

Fig. 4.9 Compatibility graph for Example 4.19 and a possible tree-decomposition for this graph
Let f1 = id and f2 = f3 = f_{MT,0}(·; 0.25). The values obtained are reported in Table 4.1.
The obtained bound is equal to
4.7 Related Literature

Dual-feasible functions for vector packing problems were proposed by Alves et al. (2014). Applying dual-feasible functions to the orthogonal packing problem was done for the first time by Fekete and Schepers (2004). It was also done implicitly by Boschetti and Mingozzi (2003). In Carlier et al. (2007), the functions implicitly used by Boschetti and Mingozzi (2003) were described and slightly improved. Caprara et al. (2005) show that applying a dual-feasible function on each dimension of a 2-dimensional orthogonal packing problem often leads to a bound of excellent quality. The notion of data-dependent functions was proposed by Carlier et al. (2007). Clautiaux et al. (2007) show that dual-feasible functions can be used to produce bounds for the orthogonal packing problem with rotation. Dual-feasible functions for the case with conflicts were introduced by Khanafer et al. (2010). Everything the reader needs to know about treewidth and graph triangulation is available in Rose et al. (1976) and Robertson and Seymour (1986), respectively. The general framework for set-covering dual-feasible functions was proposed by Clautiaux (2010).
4.8 Exercises
1. For each of the two following functions f1 and f2 defined from [0,1]^2 to [0,1], indicate whether or not the function is a VP-DFF, and if it is a VP-MDFF. If the function is a VP-DFF and not a VP-MDFF, propose a VP-MDFF that dominates it.

f1(x) := 1, if x1 > 1/2 and x2 > 1/2; 0, otherwise.

f2(x) := 1, if x1 > 1/2 or x2 > 1/2; 0, otherwise.
2. Is function f3 a 2-OPP-O-DFF?

f3(x) :=
  1 − ⌊(w2 − x2)/4⌋,  if x1 > 2w1/3 and x2 > w2/2,
  1/2,                 if x1 > 2w1/3 and x2 = w2/2,
  ⌊x2/4⌋,              if x1 > 2w1/3 and x2 < w2/2,
  1 − ⌊(w2 − x2)/4⌋,  if 2w1/3 ≥ x1 ≥ w1/3 and x2 > w2/2,
  x1/2,                if 2w1/3 ≥ x1 ≥ w1/3 and x2 = w2/2,
  x1 ⌊x2/4⌋,           if 2w1/3 ≥ x1 ≥ w1/3 and x2 < w2/2,
  0,                   if x1 < w1/3.
L = ((5, 4), (5, 4), (6, 4), (6, 4), (8, 8), (2, 2)).
L = ((7, 7), (8, 8), (4, 7), (4, 7), (7, 3), (7, 3)).
5. Consider the 2-OPP-R. Let f and g be two CS-MDFF. For w > h, let
i      1     2     3     4     5     6    7
ℓ_i    0.2   0.2   0.3   0.3   0.3   0.6  0.7
f1(i)  0.3   0.3   0.2   0.2   0.2   0.4  0.7
f2(i)  0.2   0.3   0.25  0.25  0.25  0.5  0.7
f3(i)  0.2   0.3   0.2   0.25  0.25  0.5  0.7

i      1     2     3     4     5     6    7
ℓ_i    0.3   0.3   0.6   0.6   0.8   0.8  0.9
f(i)   0.33  0.33  0.5   0.5   1     1    1
9. Show that increasing the size of an item may decrease the value of the bound obtained using the BP-DDFF defined in Proposition 4.17. Can this happen with a CS-MDFF?
10. Apply Proposition 4.18 to obtain a BPC-DDFF for the following instance of BPC: W = 10, I = {1, …, 14},

l = (0.8, 0.8, 0.8, 0.8, 0.2, 0.7, 0.3, 0.6, 0.3, 0.6, 0.3, 0.5, 0.5, 0.4),
E = I × I \ {(8,14), (9,14), (10,14), (11,14), (12,13), (12,14), (13,14)}.

Choose your coefficients α_i in such a way that the lower bound produced is optimal.
11. Apply Proposition 4.20 to obtain a BPC-DDFF for the following instance of BPC: W = 1, I = {1, …, 12},

l = (0.4, 0.3, 0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7, 0.3, 0.3),
E = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4), (3,5), (3,8), (2,10), (1,7), (1,9), (9,10), (5,6), (6,11), (11,12), (4,12), (7,8)}.

Choose your CS-DFF in such a way that the lower bound produced is optimal.
12. Consider an instance of BPC such that ∑_{i∈I} ℓ_i ≤ 1, and G = (I, E) is co-interval. Propose a BPC-DDFF for this problem that always leads to an optimal solution.
Hint: the problem is a well-known polynomial case of one of the most famous hard combinatorial problems.
Chapter 5
Other Applications in General Integer
Programming
When good dual-feasible functions are sought, they are frequently characterized by superadditivity and monotonicity. For the sake of clarity, these properties are briefly recalled in the sequel for general domains. Given X ⊆ R^m, a function F: X → R is superadditive if for all x, y ∈ X with x + y ∈ X, it holds that F(x) + F(y) ≤ F(x + y), and F is non-decreasing if

x ≤ y ⟹ F(x) ≤ F(y).
where a_j is the j-th column of the matrix A, and o denotes as usual the zero vector.
As mentioned before, the relationship between the two problems is given by the strong duality theorem. If the problem (5.1)–(5.3) is solvable, i.e. a feasible x exists and the objective function value is bounded from above, then the optimal objective function values of the primal and the dual problem are equal. If F(b) is unbounded from below (−∞), then there is no x ∈ Z^n_+ fulfilling (5.2), and if (5.1)–(5.3) yields an unbounded objective function value (+∞), then F does not exist. Furthermore, if x̂ is an optimal solution of (5.1)–(5.3), then any optimal F in the dual problem necessarily obeys the equation

∑_{j=1}^{n} F(a_j) x̂_j = F(b).
General dual-feasible functions can be used not only to compute fast lower bounds, but also to generate valid inequalities for integer problems defined over sets of the kind {x ∈ Z^n_+ : Ax ≤ b}, as stated formally in the sequel.
5.3 Examples

In this section, we show through different examples how to apply several generalized dual-feasible functions to derive valid inequalities, which may be better than the well-known Chvátal-Gomory cuts.
Given the inequality system Ax ≤ b, x ∈ N^n, any nonnegative linear combination of the inequalities may be used to derive cuts. Choosing u ≥ o yields u^T Ax ≤ u^T b, hence the following scalar inequality

d^T x ≤ r,    (5.7)

from which one obtains the Chvátal-Gomory cut

∑_{j=1}^{n} ⌊d_j⌋ x_j ≤ ⌊r⌋.    (5.8)
If r > 0, then dividing the inequality (5.7) by r and applying a maximal general dual-feasible function f: R → R with f(1) = 1 leads to the valid inequality

∑_{j=1}^{n} f(d_j / r) x_j ≤ 1.    (5.9)

In some situations it may happen that the inequality (5.9) is stronger than (5.8), but not if 0 < r < 1, as the right-hand sides show immediately. The following example demonstrates the strength of a maximal general dual-feasible function for the construction of valid inequalities, even in the case r = 1.
Example 5.1 We use the function (3.7), p. 65, with the parameter b := 1, hence

f(d) =
  ⌊2d⌋,      if d < 1/2,
  1/2,       if d = 1/2,
  ⌈2d⌉ − 1,  if d > 1/2.    (5.10)

1.6x1 − 0.4x2 ≤ 1.
and is again illustrated in Fig. 5.1. Next, we show the result of applying the function f_{FS,1} with parameters k ∈ {1, 2, 3} to a given knapsack inequality (after dividing it by the right-hand side): □

Fig. 5.1 MDFF f_{FS,1}(·; k) for k ∈ {1, …, 4}
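The maximal general DFF (5.10) from Example 5.1 can be checked numerically. A sketch (ours, not the book's code) using exact rationals to verify f(1) = 1 and the symmetry f(d) + f(1 − d) = 1 on a grid:

```python
import math
from fractions import Fraction

def f(d):
    # the function (5.10): floor(2d) below 1/2, ceil(2d) - 1 above
    d = Fraction(d)
    if d < Fraction(1, 2):
        return Fraction(math.floor(2 * d))
    if d == Fraction(1, 2):
        return Fraction(1, 2)
    return Fraction(math.ceil(2 * d) - 1)

assert f(1) == 1
assert all(f(Fraction(k, 20)) + f(1 - Fraction(k, 20)) == 1
           for k in range(21))
print("ok")
```

The symmetry property is exactly the maximality condition used when such a function strengthens a cut.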
Given multiple knapsack constraints like in the vector packing problem, one may use a VP-MDFF to get cuts.
Example 5.3 Neglecting the integrality constraints in

and

However, taking e.g. the VP-MDFF (4.9), p. 106, with the parameter u := (1/2, 1/2)^T, which considers all constraints simultaneously, yields the valid inequality

x1 + x2 + 0.5x3 ≤ 1. □
The following example illustrates the generation of valid inequalities with dual-feasible functions obtained from bin packing problems with conflicts.
Example 5.4 Consider the following system of inequalities:

The first inequality does not restrict anything. However, a BPC-DFF yields the valid inequality

The DFF used is the one defined in Proposition 4.17, with J = {2, 3}, α2 = 0.1 and α3 = 0.1. □
5.4 Related Literature

In Nemhauser and Wolsey (1998), the relationship between the integer linear optimization problem (5.1)–(5.3) and its dual (5.4)–(5.6) and a revision of superadditive valid inequalities are provided, together with the basic function underlying the Chvátal-Gomory procedure. Previous results on the (explicit or implicit) use of dual-feasible functions to generate valid inequalities for integer programs were reported by Vanderbeck (2000), Alves (2005), Rietz et al. (2014), Letchford and Lodi (2002), and Dash and Günlük (2006).
5.5 Exercises

1. Let be given the feasible region ∑_{j=1}^{n} a_{ij} x_j ≤ b_i, i = 1, …, m, x ∈ N^n, for an integer linear optimization problem. Which of the following assertions are true? Justify your answer.

(a) ∑_{j=1}^{n} f(a_{ij}) x_j ≤ f(b_i) is a valid inequality for every general DFF f: R → R.

(b) ∑_{j=1}^{n} f(a_{ij}) x_j ≤ f(b_i) is a valid inequality for every superadditive function f: R → R.
2. Consider the set of all ordered triplets (x1, x2, x3) ∈ Z^3_+, for which

2x1 + 2x2 + x3 ≤ 4,

using the VP-MDFF f_{I,FS,1} of Corollary 4.1, p. 102, with the parameter choice v := (3, 2)^T.
3. Let w := (1, …, 1)^T ∈ R^m. Given the inequality system

∑_{j=1}^{n} a_j x_j ≤ w,  x ≥ o,

where a_j ∈ [0,1]^m for all j, suppose that a VP-MDFF f: [0,1]^m → [0,1] yields f(a_{j0}) = 1 for a certain j0 ∈ {1, …, n}. Why does this imply f(a_j) = 0 for all those j for which a_{j0} + a_j ≤ w? What is the conclusion for the possible usage of the functions of Classes II, III and IV?
Appendix A
Hints and Solutions to Selected Exercises
Chapter 1
1.1. As stated in the text, the matrix A is just the vector (1, 3) and the vector c = (1, 1)^T.
(a) The extreme points of X are x1 = (0, 0)^T, x2 = (5/2, 0)^T, x3 = (0, 4)^T, with X = {x ∈ R^2 : x = λ1 (0,0)^T + λ2 (5/2,0)^T + λ3 (0,4)^T, λ1 + λ2 + λ3 = 1, λ1, λ2, λ3 ≥ 0}. Therefore, [c^T x1, c^T x2, c^T x3] = [0, 5/2, 4] and [Ax1, Ax2, Ax3] = [0, 5/2, 12]. The DW-model is: z_DW := max{0λ1 + (5/2)λ2 + 4λ3 : 0λ1 + (5/2)λ2 + 12λ3 ≤ 6, λ1 + λ2 + λ3 = 1, λ1, λ2, λ3 ≥ 0}. The optimal solution is (λ1, λ2, λ3) = (0, 12/19, 7/19), which maps to the solution in the original space x = (30/19, 28/19)^T.
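The arithmetic of this solution can be double-checked in a few lines (a sketch, ours):

```python
from fractions import Fraction

lam = (Fraction(0), Fraction(12, 19), Fraction(7, 19))
pts = ((Fraction(0), Fraction(0)),
       (Fraction(5, 2), Fraction(0)),
       (Fraction(0), Fraction(4)))

# map the multipliers back to the original space
x = tuple(sum(l * p[j] for l, p in zip(lam, pts)) for j in range(2))
assert sum(lam) == 1
assert x == (Fraction(30, 19), Fraction(28, 19))
assert x[0] + 3 * x[1] == 6            # the coupling row Ax <= 6 is tight
print(x, x[0] + x[1])                  # objective value 58/19
```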
1.2. Check Fig. A.1 to identify the extreme points of the sets X^1 and X^2, respectively: x^1_1 = (0,0)^T, x^1_2 = (2,0)^T, x^1_3 = (2,1)^T, x^1_4 = (0,2)^T, and x^2_1 = (0,0)^T, x^2_2 = (1,0)^T, x^2_3 = (0,3)^T. The matrices and vectors in the reformulated model are as follows:

c^{1T} X^1 = c^{1T} [x^1_1 x^1_2 x^1_3 x^1_4] = (3 5) [0 2 2 0; 0 0 1 2] = (0 6 11 10)

A^1 X^1 = [1 2; 3 2] [0 2 2 0; 0 0 1 2] = [0 2 4 4; 0 6 8 4]

c^{2T} X^2 = c^{2T} [x^2_1 x^2_2 x^2_3] = (1 2) [0 1 0; 0 0 3] = (0 1 6)

A^2 X^2 = [2 1; 1 1] [0 1 0; 0 0 3] = [0 2 3; 0 1 3].
The original model with a block angular structure is reformulated into the following DW-model:

The optimal solution is (λ^1_1, λ^1_2, λ^1_3, λ^1_4, λ^2_1, λ^2_2, λ^2_3) = (0, 0, 1/2, 1/2, 1/3, 0, 2/3), and z_DW = 29/2. In terms of the original variables, the optimal solution is (x^1_1, x^1_2, x^2_1, x^2_2) = (1, 3/2, 0, 2), and z = 29/2.
1.3. First, we address the decomposition with the constraints (1.18) in the subproblem.
(a) Let X^i = {x^{i1}, …, x^{ik_i}} be the set of all feasible assignments of jobs to machine i, ∀i ∈ I. A feasible assignment is a vector x^{ik} = (x^{ik}_1, x^{ik}_2, …, x^{ik}_{|J|}) ∈ B^{|J|}, whose elements indicate whether each job is in the plan. The variables y_{ik}, k ∈ K_i, ∀i ∈ I, in the reformulated model are defined as:

y_{ik} = 1, if feasible assignment x^{ik} is used on machine i; 0, otherwise.

s. to  ∑_{i∈I, k∈K_i} x^{ik}_j y_{ik} = 1, j ∈ J
       ∑_{k∈K_i} y_{ik} ≤ 1, i ∈ I
∑_{p∈P} λ^p_k ∈ {0, 1}, ∀k ∈ K    (A.5)

λ^p_k ≥ 0, ∀p ∈ P, ∀k ∈ K,        (A.6)

where the decision variables λ^p_k denote the number of times pattern p is cut in roll k.
Constraints (A.2) guarantee that the demand for each item is satisfied by the patterns selected for the |K| rolls. The convexity constraint for roll k (A.3) enforces that the solution is a convex combination of the extreme points of the knapsack polytope. The null solution is also an integer extreme point. As the corresponding column has the structure of a slack column, the convexity constraint is of the type ≤. This is consistent with the model, because some rolls may not be cut. When applying the decomposition, the integrality requirements on the y_k, ∀k ∈ K, in the original model, which are translated into constraints (A.5), cannot be dropped, as otherwise the integrality requirements (A.4) may not be sufficient to provide integer solutions to the cutting stock problem [see Valério de Carvalho (2002)].
The integrality requirements ensure that we pick a feasible cutting pattern for each roll k, and, as the rolls have equal length, the index k can be dropped and the solutions described by a vector (x^p_1, …, x^p_i, …, x^p_{|I|})^T, ∀p ∈ P, leading to the Gilmore and Gomory model [see Vance (1998) for further details].
Chapter 2
(c) f6(x) =
  0,             if 0 ≤ x < 3/19,
  (19x − 3)/10,  if 3/19 ≤ x ≤ 8/19,
  1/2,           if 8/19 < x < 11/19,
  (19x − 6)/10,  if 11/19 ≤ x ≤ 16/19,
  1,             if 16/19 < x ≤ 1;

(d) f7(x) =
  0,             if 0 ≤ x < 4/19,
  ⌊19x − 3⌋/10,  if 4/19 ≤ x ≤ 8/19,
  1/2,           if 8/19 < x < 11/19,
  ⌈19x − 6⌉/10,  if 11/19 ≤ x ≤ 16/19,
  1,             if 16/19 < x ≤ 1.
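The symmetry of f6 from part (c), i.e. f6(x) + f6(1 − x) = 1, can be verified with exact rational arithmetic (an illustrative sketch, ours):

```python
from fractions import Fraction

def f6(x):
    x = Fraction(x)
    if x < Fraction(3, 19):
        return Fraction(0)
    if x <= Fraction(8, 19):
        return (19 * x - 3) / 10
    if x < Fraction(11, 19):
        return Fraction(1, 2)
    if x <= Fraction(16, 19):
        return (19 * x - 6) / 10
    return Fraction(1)

# check the symmetry condition of a maximal DFF on a fine grid
assert all(f6(Fraction(k, 190)) + f6(1 - Fraction(k, 190)) == 1
           for k in range(191))
print("ok")
```

The grid with denominator 190 hits every breakpoint of the form p/19, so the check covers all pieces of the definition.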
2.5.
(a) False.
There are many counter-examples. For instance, if f ≡ f_{FS,1}(·; 2) and ℓ_i < L/2 for i = 1, …, m and b > o, then ∑_{i=1}^{m} b_i f(ℓ_i/L) = 0, which is less than the continuous bound ∑_{i=1}^{m} b_i ℓ_i/L, which is obtained with the identity function for g.
(b) True.
Setting C := (k+1)kL/(kL+1) leads to f_{BJ,1}(n/L; C) = f_{FS,1}(n/L; k) for n = 0, 1, …, L.
Proof Both functions f_{FS,1} and f_{BJ,1} are maximal dual-feasible functions. Hence, the assertion needs to be verified for 0 < n/L < 1/2 only. Since L, n ∈ N \ {0}, that allows the assumption L ≥ 3.
Let p ∈ N with p ≤ k. Since C = k + (kL − k)/(kL + 1) ∈ (k, k+1), we obtain

f_{BJ,1}(n/L) = (⌊(k+1)kn/(kL+1)⌋ + max{0, ((kL+1) frac((k+1)kn/(kL+1)) − kL + k)/(1+k)})/k.

1. n = Lp/(k+1):

(k+1)kn/(kL+1) = kLp/(kL+1) = p − 1 + (kL+1−p)/(kL+1) ∈ (p−1, p),

and hence

f_{BJ,1}(n/L) = (p − 1 + max{0, ((kL+1−p) − (kL−k))/(k+1)})/k
             = (p − 1 + (k+1−p)/(k+1))/k
             = (kp − k + p − 1 + k + 1 − p)/((k+1)k)
             = p/(k+1)
             = f_{FS,1}(p/(k+1))
             = f_{FS,1}(n/L).
2. n < Lp/(k+1): Since k, L, n, p are integers, it follows that n ≤ (Lp−1)/(k+1). We get

C(Lp−1)/((k+1)L) = k(Lp−1)/(kL+1) = p − 1 + (kL+1−k−p)/(kL+1) ∈ (p−1, p),

because k(L−1) + 1 − p ≥ 2k + 1 − p > 0. Therefore,

f_{BJ,1}(n/L) ≤ f_{BJ,1}((Lp−1)/((k+1)L))
             = (p − 1 + max{0, ((kL+1−k−p) − (kL−k))/(k+1)})/k
             = (p − 1 + max{0, (1−p)/(k+1)})/k
             = (p−1)/k
             = f_{FS,1}((Lp−1)/((k+1)L)).
3. n > Lp/(k+1): we get n ≥ (Lp+1)/(k+1) and

C(Lp+1)/((k+1)L) = k(Lp+1)/(kL+1) = p + (k−p)/(kL+1) ∈ [p, p+1),

and hence

f_{BJ,1}(n/L) ≥ f_{BJ,1}((Lp+1)/((k+1)L))
             = (p + max{0, ((k−p) − (kL−k))/(k+1)})/k
             = (p + max{0, (k(2−L) − p)/(k+1)})/k
             = p/k
             = f_{FS,1}((Lp+1)/((k+1)L)).

For Lp/(k+1) < n < L(p+1)/(k+1), the combination of the second and third case gives p/k ≤ f_{BJ,1}(n/L) ≤ ((p+1)−1)/k, and hence f_{BJ,1}(n/L) = p/k, and analogously p/k = f_{FS,1}(n/L) because of the monotonicity. □
(c) True.
For any k ∈ N with k ≥ 2 and any L ∈ N \ {0}, the use of C := k leads to f_{VB,2}(n/L; k) = f_{CCM,1}(n/L; C) for all n ∈ N with n ≤ L.
Proof Since f_{VB,2} and f_{CCM,1} are maximal dual-feasible functions, it is enough to verify the proposition for 0 < n < L/2. Then, it follows that

f_{VB,2}(n/L) = (⌈kn/L⌉ − 1)/(k − 1)

and

f_{CCM,1}(n/L) = ⌊Cn/L⌋/(k − 1),

and

⌈Cn/L⌉ − ⌊kn/L⌋ = 1,

i.e.,

f_{CCM,1}(n/L) = (⌈kn/L⌉ − 1)/(k − 1) = f_{VB,2}(n/L).
and

f_{VB,2}(·; 3) = f_{BJ,1}(·; C),

i.e. 1 ≤ C < 3, and frac(C):

f_{BJ,1}((p+1)/(3p); C) = ⌊(p+1)C/(3p)⌋ + max{0, (frac((p+1)C/(3p)) − C + 1)/(2 − C)}
                        = 0 + max{0, ((p+1)C/(3p) − C + 1)/(2 − C)}.

or one has frac(Cx) > frac(C), and consequently frac(C − Cx) = frac(C) + 1 − frac(Cx) > frac(C) and

If frac(Cx) ≤ frac(C), then d ≥ ⌊C⌋ − 1 ≥ 0, because (frac(Cy) − frac(C))/(1 − frac(C)) < (1 − frac(C))/(1 − frac(C)) = 1. If frac(Cx) > frac(C), then □
2.7. The function f is a maximal dual-feasible function, but not extreme, because it is symmetric and strictly convex on [0, 1/2] and also repeatedly differentiable in (0, 1/2).
2.8. The proof is given next.
Proof Let g, h: [0,1] → [0,1] be maximal dual-feasible functions with 2f_{FS,1}(x) = g(x) + h(x) for all x ∈ [0,1]. One has to show that f_{FS,1}(x) = g(x) for all x ∈ (0, 1/2) with 0 < f_{FS,1}(x) < 1/2. Therefore assume k > 1. Since g, h are dual-feasible functions, it follows that

(k+1) g(1/(k+1)) ≤ 1

and

h(1/(k+1)) ≥ 1/(k+1).

Since f_{FS,1}(1/(k+1)) = 1/(k+1), one gets g(1/(k+1)) = 1/(k+1), implying g(p/(k+1)) ≥ p/(k+1) for any p ∈ {1, 2, …, k} due to the monotonicity of g. Analogously h(p/(k+1)) ≥ p/(k+1), and hence g(p/(k+1)) = f(p/(k+1)). It remains to show g(x) = f(x) if (k+1)x ∉ N. There is a p ∈ N with

p/(k+1) < x < (p+1)/(k+1)

and p < k/2, such that f_{FS,1}(x) = p/k. Let x0 := p/k. Because of 0 < p < k/2 < k, it holds that

p/(k+1) < x0 < (p+1)/(k+1),

i.e. f_{FS,1}(x0) = x0. Similarly to the other case, g(1/k) = 1/k = f_{FS,1}(1/k) and g(x0) = x0 follows. Since x0 lies in the interior of an open interval on which f_{FS,1} is constant, g must be constant on the same interval. Otherwise the monotonicity of g and h would yield a contradiction. This implies
f2(g(·)) are different functions. Moreover, for all x ∈ [0,1], it holds that 2f(g(x)) − f1(g(x)) − f2(g(x)) = 0.
(b) True.
If f is continuous, then the image of the interval [0,1], which is a connected set, must be connected, and hence an interval. Since f(0) = 0 and f(1) = 1, this image is the interval [0,1], such that f is surjective. The opposite direction consists in showing the continuity of f, provided that f is surjective. The function f is continuous at the point zero, because for all x ∈ (0,1], it holds that

0 ≤ f(x) ≤ 1/⌊1/x⌋.

Hence, lim_{x↓0} f(x) = 0. The symmetry of f implies also the continuity at 1. Let x̄ ∈ (0,1) be an arbitrary constant. Let (x_n) and (y_n) be any sequences with

0 ≤ x1 ≤ x2 ≤ … ≤ x̄ ≤ … ≤ y2 ≤ y1 ≤ 1

and

lim_{n→∞} x_n = lim_{n→∞} y_n = x̄.

Every monotone and bounded sequence converges, hence the left and right limits of f at x̄ exist, and it holds that

Since f is surjective, it cannot happen that lim_{x↑x̄} f(x) < f(x̄) or lim_{x↓x̄} f(x) > f(x̄).
(c) False.
The function f_{FS,1}(·; 1) defined in (2.10) (p. 35) is a counter-example. This function is an extreme maximal dual-feasible function and convex on [0, 1/2], but not continuous.
2.10.
(a) The function f_{MT,0} is obviously symmetric, monotone and nonnegative. According to Theorem 2.2, only the superadditivity remains to be proved. Choose any x, y ∈ (0, 1/2) with x ≤ y. If x < λ, then f_{MT,0}(x) = 0, such that the
(b) Let g, h: [0,1] → [0,1] be any maximal dual-feasible functions with 2f_{MT,0} = g + h. We show g(x) = h(x) for all x. If x < λ, then f_{MT,0}(x) = 0 implies immediately g(x) = h(x) = 0 due to the nonnegativity of g and h. It holds for all x, y ∈ [λ, 1/2] that

Since f_{MT,0}(1/4) = 1/4 and f_{MT,0}(1/3) = 1/3, it follows that g(1/4) = h(1/4) = 1/4 and g(1/3) = h(1/3) = 1/3, because larger function values would violate the definition of a dual-feasible function. Now Lemma 2.4 can be applied with a := 1/4 and b := 1/3, yielding g(x) = h(x) = x for all x ∈ [1/4, 1/3]. The superadditivity of g and h implies g(2x) ≥ 2g(x) = 2x and h(2x) ≥ 2x for all these x, and hence g(2x) = h(2x) = 2x. The symmetry yields g(x) = h(x) = x for all x ∈ [1/3, 1/2] ∪ [2/3, 3/4] too, and hence for all x ∈ [1/4, 3/4]. Moreover, for x ∈ [3/4, 1], one obtains g(x/2) = h(x/2) = x/2 and again, due to the superadditivity, g(x) = h(x) = x. Finally the symmetry yields g ≡ h. □
2.11. The proof is given next.
Proof Assume without loss of generality α ≤ β. If α = β, then, according to Definition 2.6, the different maximal dual-feasible functions f, g immediately yield that h is not extreme. Otherwise define the function h1: [0,1] → [0,1] by

h1(x) := 2h(x) − g(x) = (2α/(α+β)) f(x) + ((β−α)/(α+β)) g(x).

(2α/(α+β)) f(x0) = (2α/(α+β)) g(x0)
and
\[ g(x) := \begin{cases} 2/5, & \text{if } x = 1/2, \\ 2/3, & \text{if } 1/2 < x \le 1, \\ 0, & \text{otherwise.} \end{cases} \]
Both $f$ and $g$ are dual-feasible functions, but neither symmetric nor superadditive. Their composition yields
\[ f(g(x)) = f_{FS,1}(x;1) = \begin{cases} 0, & \text{if } 0 \le x < 1/2, \\ 1/2, & \text{if } x = 1/2, \\ 1, & \text{if } 1/2 < x \le 1. \end{cases} \]
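The step function displayed above is simple enough to check mechanically. The following sketch verifies in exact rational arithmetic that it is symmetric and superadditive on a grid (the grid size is an arbitrary choice):

```python
from fractions import Fraction as F

def f_step(x):
    # The composition from the solution: 0 below 1/2, 1/2 at 1/2, 1 above 1/2.
    half = F(1, 2)
    if x < half:
        return F(0)
    if x == half:
        return half
    return F(1)

grid = [F(i, 60) for i in range(61)]
# Symmetry: f(x) + f(1 - x) = 1 for all x in [0, 1].
assert all(f_step(x) + f_step(1 - x) == 1 for x in grid)
# Superadditivity: f(x + y) >= f(x) + f(y) whenever x + y <= 1.
assert all(f_step(x + y) >= f_step(x) + f_step(y)
           for x in grid for y in grid if x + y <= 1)
```

The exact comparison at $x = 1/2$ is the delicate point; with floating-point arithmetic the jump at $1/2$ would make the grid check unreliable.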
\[ f_{LL,1}(x) + f_{LL,1}(1-x) = \frac{\lfloor Cx \rfloor + \lfloor C - Cx \rfloor}{\lfloor C \rfloor} = \frac{\lfloor Cx \rfloor + \lfloor C \rfloor - \lfloor Cx \rfloor}{\lfloor C \rfloor} = 1. \]
If $k > 2$, then let $x := \bigl(\operatorname{frac}(C) + \frac{1-\operatorname{frac}(C)}{k-1}\bigr)/C$. Then $x > 0$ is obvious. It holds also that $\frac{1-\operatorname{frac}(C)}{k-1} < 1$, and hence $x < (\operatorname{frac}(C) + 1)/C \le 1$. We show
\[ Cx = \operatorname{frac}(C) + \frac{1-\operatorname{frac}(C)}{k-1} \le \operatorname{frac}(C) + \frac{1-\operatorname{frac}(C)}{2} = \frac{1 + \operatorname{frac}(C)}{2} < 1, \]
\[ 0 < \operatorname{frac}(C - Cx) - \operatorname{frac}(C) = 1 - \operatorname{frac}(Cx) = 1 - \operatorname{frac}(C) - \frac{1-\operatorname{frac}(C)}{k-1} = (k-2)\,\frac{1-\operatorname{frac}(C)}{k-1} \]
and
\[ \begin{aligned}
f_{LL,1}(x) + f_{LL,1}(1-x) &= \Bigl( \lfloor Cx \rfloor + \lfloor C - Cx \rfloor + (k-1)\,\frac{\operatorname{frac}(Cx) - \operatorname{frac}(C)}{1-\operatorname{frac}(C)} \Big/ k \\
&\qquad + (k-1)\,\frac{\operatorname{frac}(C - Cx) - \operatorname{frac}(C)}{1-\operatorname{frac}(C)} \Big/ k \Bigr) \Big/ \lfloor C \rfloor \\
&= \Bigl( \lfloor Cx \rfloor + \lfloor C \rfloor - 1 - \lfloor Cx \rfloor + \frac{1}{k} + \frac{k-2}{k} \Bigr) \Big/ \lfloor C \rfloor \\
&= 1 - \frac{1}{k \lfloor C \rfloor} < 1. \qquad \square
\end{aligned} \]
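The fractional-part identities in this derivation are easy to get wrong, so they can be checked numerically for sample values. The following sketch uses exact rationals with the illustrative choices $C = 3.7$ and $k = 5$ (these values are not from the exercise):

```python
from fractions import Fraction as F
from math import floor

def frac(z):
    # Fractional part of a rational number.
    return z - floor(z)

# Illustrative data: C with frac(C) = 7/10, and k > 2.
C, k = F(37, 10), 5
x = (frac(C) + (1 - frac(C)) / (k - 1)) / C

assert 0 < x < 1
# Cx = frac(C) + (1 - frac(C))/(k - 1) <= (1 + frac(C))/2 < 1:
assert C * x == frac(C) + (1 - frac(C)) / (k - 1)
assert C * x <= (1 + frac(C)) / 2 < 1
# frac(C - Cx) - frac(C) = 1 - frac(Cx) = (k - 2)(1 - frac(C))/(k - 1) > 0:
lhs = frac(C - C * x) - frac(C)
assert lhs == 1 - frac(C * x) == (k - 2) * (1 - frac(C)) / (k - 1) > 0
```

For these sample values, $Cx = 31/40$ and both sides of the last identity evaluate to $9/40$, as the assertions confirm.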
2.14. The proof is given next.
Proof Let $x := 1/k$. Then, $x \in [0,1]$ and $f_{VB,1}(x) = 0$, and hence $f_{VB,1}(1-x) = \frac{k-2}{k-1} < 1$. $\square$
2.15.
(a) False.
Let for example $C := 1.9$, $x := 0.3$ and $g$ be the maximal dual-feasible function (2.10) with $k = 2$. Then, $f(x) = g(x) = 1$ and $f(2x) = 1 < 2 f(x)$.
(b) False.
\[ \sum_{i=1}^{Cn} x_i = C \sum_{i=1}^{n} x_i \le 1 \]
\[ \sum_{i=1}^{Cn} f(x_i) = C \sum_{i=1}^{n} x_i \le 1. \]
Chapter 3
\[ x_1 := x_2 := \dots := x_n := x \]
yields
\[ \sum_{i=1}^{n} x_i = nx \le 1 \]
and
\[ \sum_{i=1}^{n} f(x_i) = n f(x) \le 1 \]
\[ \sum_{i=1}^{n} f(x_i) = \varepsilon > 0, \]
\[ x_{n+1} := x_1, \; \dots, \; x_{np} := x_n, \]
one gets
\[ \sum_{i=1}^{np} x_i = p \sum_{i=1}^{n} x_i \le 0, \]
but
\[ \sum_{i=1}^{np} f(x_i) = p \varepsilon > 1 \]
\[ \sum_{i=2}^{n} f(x_i) > f(1 - x_1), \]
one gets
\[ \begin{aligned}
1 &< \sum_{i=1}^{n} h(x_i) \\
&= k\, h(x_1) + \sum_{i=k+1}^{n} h(x_i) \\
&= k - k\, f(1 - x_1) + \sum_{i=k+1}^{n} f(x_i),
\end{aligned} \]
3.4.
(a) 353/122.
(b) Consider for instance the following four patterns that are cut once each:
\[ (1, 0, 1, 0, 0, 0)^\top, \quad (1, 0, 0, 1, 0, 2)^\top, \quad (0, 1, 0, 0, 2, 1)^\top, \quad (0, 0, 0, 0, 0, 1)^\top. \]
\[ (1, 1, 0, 0, 0, 0)^\top \]
needs the length 123. Use this and for example the following two patterns once:
\[ x \mapsto e^{|x|}. \]
(b) True.
Let $f : \mathbb{R} \to \mathbb{R}$ be any general dual-feasible function and $g : \mathbb{R} \to \mathbb{R}$ a maximal general dual-feasible function dominating $f$. (If $f$ is already maximal,
\[ f(x) \le g(x) \le tx \]
such that $f$ fulfills the first condition. Assume that $f$ is dominated by a real function $g$, i.e. $f(x) \le g(x)$ for all $x \in \mathbb{R}$ and there is a $y \in \mathbb{R}$ with $g(y) > f(y)$. Then,
\[ g(y) > f(y) \]
and
\[ f(x) \ge 0 \ge f(-x) \]
for all $x \in \mathbb{R}_+$, and also, analogously to part (b) of Theorem 3.1, that $f$ is superadditive. Moreover,
\[ c := \lim_{x \to \infty} \frac{f(x)}{x} \ge \frac{f(x)}{x} \]
for all $x > 0$. Of course, we have $c \ge 0$. It follows that $f(x) \le cx$ for all $x > 0$, otherwise the superadditivity of $f$ would contradict the definition of $c$. It can also not happen that $f(x) > cx$ for a certain $x < 0$. A detailed proof could be done like in Proposition 3.4. Therefore, the linear function $x \mapsto cx$ dominates $f$, but $f$ is maximal in the sense of the condition (2.), hence $f(x) = cx$ for all $x \in \mathbb{R}$. $\square$
3.7. The functions $f_0, \dots, f_3$ obviously fulfill the conditions (1.) and (3.) of Theorem 3.1. Moreover, $f_0$ is a superadditive general dual-feasible function, but not symmetric, and $f_0$ is not a maximal general dual-feasible function. The reasons are the following:
\[ f_0(1) = 1, \]
which rises strictly monotonically. Therefore, $f_0$ is strictly convex for $x > 0$ and hence superadditive for positive arguments. The superadditivity holds without restriction, because if $x < 0 \le y$, then
and if $x, y < 0$, then $f_0(x + y) = f_0(x) + f_0(y)$. The strict convexity implies also $f_0(x) < x$ for $0 < x < 1$, such that $f_0$ cannot be symmetric and is dominated by $g : \mathbb{R} \to \mathbb{R}$ with
\[ g(x) := \begin{cases} (1 + \tanh 1)\, x, & \text{if } x \le 0, \\ x, & \text{if } 0 \le x \le 1, \\ (1 + \tanh 1)\, x - \tanh 1, & \text{if } x \ge 1, \end{cases} \]
but
and
\[ f_2(1 + \varepsilon) = 1 + \frac{1}{k}, \]
yielding
\[ f_2\Bigl(-\frac{1}{k+1}\Bigr) + f_2(1 + \varepsilon) > 1 \]
in spite of
\[ -\frac{1}{k+1} + 1 + \varepsilon < 1. \]
The functions $f_2$ and $f_3$ are symmetric, but not $f_1$. One has for example $f_1(2) = 2$, but $f_1(-1) = -1 - 1/k$, and hence
Regarding $f_2$, if $(k+1)\, x \notin \mathbb{Z}$, then
We show that $f_1$ is a general dual-feasible function. Let any finite index set $I$ of real numbers $x_i$ ($i \in I$) with $\sum_{i \in I} x_i \le 1$ be given. If all $x_i$ are non-positive, then $\sum_{i \in I} f_1(x_i) \le 0$ is immediately clear. Otherwise, one obtains
\[ \sum_{i \in I} f_1(x_i) < \frac{k+1}{k} \sum_{i \in I} x_i \le 1 + \frac{1}{k} \]
and also
\[ k \sum_{i \in I} f_1(x_i) \le \sum_{i \in I} \lfloor (k+1)\, x_i \rfloor. \]
it follows that
\[ k \sum_{i \in I} f_1(x_i) \le k. \]
Since $f_1$ is a general dual-feasible function, and $f_3(x) = \lim_{y \uparrow x} f_1(y)$ for any $x \in \mathbb{R}$, the function $f_3$ is also a general dual-feasible function. Since $f_3$ dominates $f_1$, the function $f_1$ is not a maximal general dual-feasible function. $f_3$ is symmetric and can therefore not be dominated by another general dual-feasible function. Hence, $f_3$ is a maximal general dual-feasible function.
3.8. The given function $g$ is a Hölder continuous classical maximal dual-feasible function, because there is a constant $c > 0$ such that it holds for every $x, y \in [0,1]$ that
\[ |g(x) - g(y)| \le c\, \sqrt{|x - y|}, \]
and $g$ is strictly convex on $[0, 1/2]$, symmetric and therefore superadditive, and $g(0) = 0$. Let $p := 1$, $y := 1/2$ and $x < 0$. Then, $g(y) = 1/2$, $f(x) = tx$ and $g(x + y) = (1 - \sqrt{-2x})/2$. Hence, we have
\[ f(x + y) - f(x) - f(y) = -tx - \sqrt{-2x}/2 < 0 \]
for $x > -\frac{1}{2t^2}$.
Chapter 4
4.1. Check the different classes of VP-MDFF, or show that there are two elements $x$ and $y$ such that $x + y \le w$ and $f(x) + f(y) > 1$.
4.2. Check the different classes of VP-MDFF.
4.3. Immediate.
4.4. Immediate.
4.5. First, note that applying the function to each item is equivalent to cutting the pieces into squares and applying two maximal dual-feasible functions on the resulting 2-OPP-O instance.
Chapter 5
5.1.
(a) False.
Without superadditivity, counterexamples exist like the following one: $\frac{1}{2}\, x \le 1$ is given, and hence $x := 2$ is feasible. Suppose $f(1/2) = 1/2$ and $f(1) = 0$. That yields the contradiction $f(1/2) \cdot 2 \le f(1)$, or $1 \le 0$.
(b) False.
A superadditive function $f$ with $f(0) < 0$ may also yield contradictions. For instance, applying such a function to the inequality $x \le 0$, which allows $x := 0$, could yield the false conclusion $x < 0$.
5.2. This VP-MDFF maps the vectors $\bigl(\tfrac{5}{11}, \tfrac{3}{8}\bigr)^\top$ and $\bigl(\tfrac{4}{11}, \tfrac{1}{2}\bigr)^\top$ to $1/2$ and the vector $\bigl(\tfrac{3}{11}, \tfrac{1}{4}\bigr)^\top$ to $1/4$. That yields the valid inequality
\[ \frac{x_1}{2} + \frac{x_2}{2} + \frac{x_3}{4} \le 1, \]
which is equivalent to the demanded one.
(Remark: The demanded inequality could also be obtained by adding the two inequalities, dividing by 4 and applying the rounding procedure due to Chvátal and Gomory.)
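The rounding procedure mentioned in the remark can be illustrated on made-up data: for a superadditive, nondecreasing $f$ with $f(0) = 0$, the inequality $\sum_j f(a_j)\, x_j \le f(b)$ is valid for all nonnegative integer solutions of $\sum_j a_j x_j \le b$. A minimal sketch with $f(z) = \lfloor z/4 \rfloor$ and an invented constraint (none of these numbers come from the exercise):

```python
from itertools import product
from math import floor

def f(z):
    # Chvatal-Gomory-style rounding with multiplier 1/4: superadditive,
    # nondecreasing, and f(0) = 0.
    return floor(z / 4)

# Invented knapsack constraint 5 x1 + 4 x2 + 3 x3 <= 11 over nonnegative ints.
a, b = (5, 4, 3), 11
fa, fb = tuple(f(v) for v in a), f(b)

# Brute-force validity: every nonnegative integer point satisfying a.x <= b
# must also satisfy f(a).x <= f(b), i.e. x1 + x2 <= 2 here.
for x in product(range(b + 1), repeat=3):
    if sum(ai * xi for ai, xi in zip(a, x)) <= b:
        assert sum(fi * xi for fi, xi in zip(fa, x)) <= fb
```

The brute-force loop plays the role of a certificate: any superadditive nondecreasing choice of $f$ could be substituted without changing the validity argument.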
5.3. The definition of the VP-DFF implies $f(a_{j_0}) + f(a_j) \le 1$, and hence $f(a_j) \le 0$. Because of the range of $f$, it follows that $f(a_j) = 0$. That implies also that if a column vector $a_{j_0}$ gets the special mapping to 1 due to the property "large argument vector", then every "small argument vector" $a_j$ with $a_{j_0} + a_j \le w$ will get the special mapping to 0, and the other VP-MDFF in the considered construction principles will not be applied to these vectors. Moreover, the special mapping to 1 can be applied only to vectors $a_j$ with $2a_j \not\le w$.
References
Fekete S, Schepers J (2004) A general framework for bounds for higher-dimensional orthogonal
packing problems. Math Meth Oper Res 60:311–329
Geoffrion A (1974) Lagrangian relaxation and its uses in integer programming. Math Program
Study 2:82–114
Gilmore P, Gomory R (1961) A linear programming approach to the cutting stock problem (part I).
Oper Res 9:849–859
Gomory R (1958) Outline of an algorithm for integer solutions to linear programs. Bull Am Math
Soc 64:275–278
Johnson D (1973) Near optimal bin packing algorithms. Dissertation, Massachusetts Institute of
Technology, Cambridge, MA
Khanafer A, Clautiaux F, Talbi E (2010) New lower bounds for bin packing problems with
conflicts. Eur J Oper Res 206:281–288
Letchford A, Lodi A (2002) Strengthening Chvátal-Gomory cuts and Gomory fractional cuts. Oper
Res Lett 30:74–82
Lueker G (1983) Bin packing with items uniformly distributed over intervals [a,b]. In: Proceedings
of the 24th annual symposium on foundations of computer science (FOCS 83). IEEE Computer
Society, Silver Spring, MD, pp 289–297
Martello S, Toth P (1990) Knapsack problems - algorithms and computer implementation. Wiley,
Chichester
Nemhauser G, Wolsey L (1998) Integer and combinatorial optimization. Wiley, New York
Rietz J, Alves C, Valério de Carvalho J (2010) Theoretical investigations on maximal dual feasible
functions. Oper Res Lett 38:174–178
Rietz J, Alves C, Valério de Carvalho J (2012a) On the extremality of maximal dual feasible
functions. Oper Res Lett 40:25–30
Rietz J, Alves C, Valério de Carvalho J, Clautiaux F (2012b) Computing valid inequalities for
general integer programs using an extension of maximal dual-feasible functions to negative
arguments. In: Proceedings of the 1st international conference on operations research and
enterprise systems (ICORES 2012)
Rietz J, Alves C, Valério de Carvalho J, Clautiaux F (2014) On the properties of general
dual-feasible functions. In: Murgante B, Misra S, Rocha AMAC, Torre C, Rocha JG, Falcão MI,
Taniar D, Apduhan BO, Gervasi O (eds) Computational science and its applications – ICCSA
2014. Lecture notes in computer science, vol 8580. Springer, pp 180–194. doi:10.1007/978-3-
319-09129-7_14
Rietz J, Alves C, Valério de Carvalho J, Clautiaux F (2015) Constructing general dual-feasible
functions. Oper Res Lett 43:427–431
Robertson N, Seymour P (1986) Graph minors. II. Algorithmic aspects of tree-width. J Algorithms
7:309–322
Rose D, Tarjan E, Lueker G (1976) Algorithmic aspects of vertex elimination on graphs. SIAM J
Comput 5:146–160
Spieksma F (1994) A branch-and-bound algorithm for the two-dimensional vector packing
problem. Comput Oper Res 21:19–25
Valério de Carvalho J (1999) Exact solution of bin packing problems using column generation and
branch-and-bound. Ann Oper Res 86:629–659
Valério de Carvalho J (2002) A note on branch-and-price algorithms for the one-dimensional
cutting stock problems. Comput Optim Appl 21:339–340
Vance P (1998) Branch-and-Price algorithms for the one-dimensional cutting stock problem.
Comput Optim Appl 9:211–228
Vanderbeck F (2000) Exact algorithm for minimizing the number of setups in the one-dimensional
cutting stock problem. Oper Res 46(6):915–926
Index

Extremality, 28, 56
General dual-feasible function, 52
Gilmore and Gomory model, 7, 23
Integrality constraints, 1
Tree-decomposition (of a graph), 117
Triangulated (graph), 117
Vector packing (mD-VPP), 95
Vector packing dual-feasible function (VP-DFF), 97