Applications of Operations Research
Study Set 1: Lectures 1 and 2
1. Consider the 0-1 knapsack problem where $a_i = 1$ for all $i = 1, \dots, n$:
   \begin{align*}
   \max \ z &= \sum_{i=1}^{n} c_i x_i \\
   \text{s.t.} \quad &\sum_{i=1}^{n} x_i \le b \\
   &x_i \in \{0,1\}, \quad i = 1, \dots, n
   \end{align*}
Prove that the greedy heuristic as discussed in the first lecture always finds an optimal
solution.
Solution: We give two example proofs, both of which are correct (other correct proofs
exist as well). Note that the second proof even establishes a stronger statement: the
greedy solution is optimal whenever $a_i = d$ for all $i = 1, \dots, n$ for any positive
integer $d$, not only for $d = 1$.
Proof 1:
As every item has weight 1, we know that $b$ is the maximum number of items that
fit into the knapsack, and neither an optimal solution nor the greedy heuristic will
ever take fewer items. The greedy heuristic takes the $b$ items with the highest utility.
Due to the identical weights, these items also have the highest ratios $c_i/a_i$ and would
therefore also be taken in the LP relaxation. Thus, the value of the LP relaxation (an
upper bound) is equal to the value of the greedy solution. As we have found a feasible
solution whose value matches the dual bound from the LP relaxation, the greedy
solution must be optimal.
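For concreteness, the unit-weight greedy can be sketched in a few lines of Python (the function name and the small instance are our own illustrations, not lecture code):

```python
# Greedy for the unit-weight 0-1 knapsack: take the b items of highest
# utility. A minimal illustrative sketch, not the lecture implementation.
def greedy_unit_knapsack(c, b):
    order = sorted(range(len(c)), key=lambda i: c[i], reverse=True)
    chosen = order[:min(b, len(c))]
    return sorted(chosen), sum(c[i] for i in chosen)

# Example: utilities (4, 7, 1, 5) and capacity b = 2 -> items 1 and 3.
print(greedy_unit_knapsack([4, 7, 1, 5], b=2))  # ([1, 3], 12)
```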
Proof 2 (by contradiction):
Let $S_{ALG}$ be the set of items chosen by the greedy heuristic and $S_{OPT}$ the set of items
in an optimal solution. Assume now that the greedy heuristic did not find an optimal
solution, so $S_{ALG} \neq S_{OPT}$ and the solution value of $S_{OPT}$ is strictly larger.
Because all items have the same weight, and as both $S_{ALG}$ and $S_{OPT}$ contain as many
items as possible (nothing else fits), we know that the two sets have equal size, so
$|S_{ALG}| = |S_{OPT}|$. But since $S_{ALG}$ and $S_{OPT}$ are different sets with different solution
values, there must be items $i \in S_{ALG} \setminus S_{OPT}$ and $j \in S_{OPT} \setminus S_{ALG}$ such that $c_i \neq c_j$.
As $S_{ALG}$ consists of the items with the highest utilities, we know that $c_i > c_j$. However,
that means we can increase the value of $S_{OPT}$ by removing item $j$ and including
item $i$. Hence $S_{OPT}$ is not optimal, and this contradiction shows that the
assumption that the greedy heuristic did not find an optimal solution was wrong.
2. Consider the 0-1 knapsack problem where $c_i = 1$ for all $i = 1, \dots, n$:
   \begin{align*}
   \max \ z &= \sum_{i=1}^{n} x_i \\
   \text{s.t.} \quad &\sum_{i=1}^{n} a_i x_i \le b \\
   &x_i \in \{0,1\}, \quad i = 1, \dots, n
   \end{align*}
Prove that the rounding heuristic as discussed in the first lecture always finds an
optimal solution. Hint: The optimal value is an integer.
Solution: Let $z^R$ be the value of the rounding heuristic. As the rounding heuristic
solution differs from the LP relaxation solution $x^{LP}$ only in the critical item $s$, which
has utility $c_s = 1$, we can state:
\[
z^R \le z^{LP} = z^R + \underbrace{c_s \cdot x_s^{LP}}_{<1} < z^R + 1.
\]
Since the optimal value $z^*$ is an integer with $z^R \le z^* \le z^{LP} < z^R + 1$, it follows
that $z^R = \lfloor z^{LP} \rfloor = z^*$, i.e., $z^R$ is the largest integer smaller than or equal to
the LP bound, and the rounding heuristic is optimal.
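As a sanity check, the rounding heuristic for this special case can be sketched as follows (an illustrative sketch, exploiting that sorting by the ratio $c_i/a_i = 1/a_i$ amounts to sorting by weight):

```python
# Rounding heuristic for the 0-1 knapsack with c_i = 1: sort by weight
# (equivalently by ratio 1/a_i, descending), fill greedily, and drop the
# fractional critical item. Illustrative sketch, not lecture code.
def rounding_unit_utility(a, b):
    z_r, z_lp, cap = 0, 0.0, b
    for w in sorted(a):
        if w <= cap:
            cap -= w
            z_r += 1            # item fits entirely
            z_lp += 1.0
        else:
            z_lp += cap / w     # fractional critical item (LP only)
            break
    return z_r, z_lp

# Example: weights (4, 2, 3), b = 6 -> z_R = 2 = floor(z_LP) = floor(2.25).
print(rounding_unit_utility([4, 2, 3], b=6))  # (2, 2.25)
```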
3. For an instance of the 0-1 knapsack problem, you are given a feasible solution with
value 14 and an upper bound of value 31.5.
Which of the following holds? Multiple answers may be correct. Explain your answers.
(a) If we find another feasible solution with value 28, then it must be optimal.
(b) 31 is an upper bound to the problem.
(c) If the feasible solution was computed by using greedy 2.0 as seen in class, 30 is
an upper bound to the problem as well.
(d) If the feasible solution was computed by using greedy 2.0 as seen in class, a
solution with value 26 can’t be optimal.
(e) If 31.5 was the optimal value of the LP relaxation, we cannot find any better
upper bounds.
Solution:
(a) False. As long as we do not know anything about how the feasible solution
was computed, the optimal value could still exceed 28. However, if
the feasible solution was computed by using greedy 2.0 as seen in class, then the
statement would be correct, since the performance guarantee of 2 means that
$2 \cdot 14 = 28$ is an upper bound; so a solution that attains this bound
must be optimal.
(b) Correct. Since we know that the optimal solution value is integer, an upper bound
of 31.5 implies that 31 is also an upper bound.
(c) Correct. As stated in (a), 28 is actually an upper bound in that case. This implies
that 30 is also a (weaker) upper bound.
(d) False. Even though we have the theoretical upper bound of 28, it does not mean
that a solution with value 28 or 27 exists. Therefore, a solution with value 26
could be optimal.
(e) False. 31 is an upper bound as well. Moreover, through other techniques (or even
LP relaxations of other correct formulations of the problem), we could still find
better upper bounds.
4. Consider the following knapsack problem.
   \begin{align*}
   \max \ z = \ &16x_1 + 22x_2 + 12x_3 + 8x_4 \\
   \text{s.t.} \quad &5x_1 + 7x_2 + 4x_3 + 3x_4 \le 14 \\
   &x_1, x_2, x_3, x_4 \in \{0,1\}
   \end{align*}
a) Use a greedy heuristic to find an initial feasible solution where each time we select
the item with the highest ratio of utility divided by weight until no further items
can be added to the knapsack.
b) Use Lagrangian relaxation to obtain an upper bound on the optimal value.
c) Compare the upper bound obtained from the Lagrangian dual to the upper bound
provided by the LP relaxation.
d) Compute the gap of your heuristic solution relative to the Lagrange upper bound.
e) Find the optimal integer solution to the problem. What is the optimality gap of
your heuristic solution relative to the optimal value?
Solution:
a) The ratios $c_i/a_i$ are $3.2,\ 3.14,\ 3,\ 2.67$, so the greedy heuristic packs items 1 and 2 (total weight 12); items 3 and 4 no longer fit. The heuristic solution is thus $x_1 = x_2 = 1$, $x_3 = x_4 = 0$ with $z = 38$.
b) For $\lambda \ge 0$, the Lagrangian relaxation is
\begin{align*}
z(\lambda) = \max \ &16x_1 + 22x_2 + 12x_3 + 8x_4 + \lambda\,(14 - 5x_1 - 7x_2 - 4x_3 - 3x_4) \\
\text{s.t.} \quad &x_1, x_2, x_3, x_4 \in \{0,1\}
\end{align*}
which can be written as
\begin{align*}
z(\lambda) = \max \ &(16 - 5\lambda)x_1 + (22 - 7\lambda)x_2 + (12 - 4\lambda)x_3 + (8 - 3\lambda)x_4 + 14\lambda \\
\text{s.t.} \quad &x_1, x_2, x_3, x_4 \in \{0,1\}
\end{align*}
In our Lagrangian subproblem, we only set a variable equal to one in the optimal
solution if its objective function coefficient is positive. So
\[
z(\lambda) = \max\{16 - 5\lambda, 0\} + \max\{22 - 7\lambda, 0\} + \max\{12 - 4\lambda, 0\} + \max\{8 - 3\lambda, 0\} + 14\lambda,
\]
or equivalently
\[
z(\lambda) = \begin{cases}
58 - 5\lambda & \text{if } 0 \le \lambda \le \frac{8}{3} \\
50 - 2\lambda & \text{if } \frac{8}{3} \le \lambda \le \frac{12}{4} \\
38 + 2\lambda & \text{if } \frac{12}{4} \le \lambda \le \frac{22}{7} \\
16 + 9\lambda & \text{if } \frac{22}{7} \le \lambda \le \frac{16}{5} \\
14\lambda & \text{if } \frac{16}{5} \le \lambda
\end{cases}
\]
Then $z^{LD} = \min_{\lambda \ge 0} z(\lambda) = \min\{44.6666, 44, 44, 44.2857, 44.8\} = 44$, evaluating
$z(\lambda)$ at the breakpoints. Thus 44 is an upper bound on the optimal value.
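This dual bound is easy to check numerically. Below is an illustrative sketch: since $z(\lambda)$ is piecewise linear and convex, its minimum is attained at one of the breakpoints $c_i/a_i$.

```python
# Lagrangian relaxation value of the instance above: x_i is set to 1
# exactly when its modified coefficient c_i - lambda * a_i is positive.
def z_lagrange(lam, c=(16, 22, 12, 8), a=(5, 7, 4, 3), b=14):
    return sum(max(ci - lam * ai, 0.0) for ci, ai in zip(c, a)) + lam * b

breakpoints = [16 / 5, 22 / 7, 12 / 4, 8 / 3]
print(min(z_lagrange(lam) for lam in breakpoints))  # 44.0, at lambda = 3
```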
c) The optimal solution to the LP relaxation is found by ordering the items from most
efficient (utility divided by weight) to least efficient and adding items (possibly
a fraction) until the knapsack is full. This gives the solution $x_1 = x_2 = 1$, $x_3 = 0.5$,
$x_4 = 0$ with $z = 44$. The upper bound provided by the LP relaxation is thus
the same as the one provided by the Lagrangian dual.
d) The optimality gap of the heuristic solution relative to the upper bound is
$\frac{44 - 38}{44} \approx 13.6\%$.
e) The optimal integer solution is given by $x_1 = 0$, $x_2 = x_3 = x_4 = 1$ with $z = 42$.
The optimality gap of the heuristic solution relative to the optimal value is
$\frac{42 - 38}{42} \approx 9.52\%$.
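With only four binary variables, the optimum in part e) can be verified by brute force (an illustrative sketch):

```python
# Enumerate all 2^4 solutions, keep the feasible one with the best value.
from itertools import product

c, a, b = (16, 22, 12, 8), (5, 7, 4, 3), 14
best = max(
    (x for x in product((0, 1), repeat=4)
     if sum(ai * xi for ai, xi in zip(a, x)) <= b),
    key=lambda x: sum(ci * xi for ci, xi in zip(c, x)),
)
print(best, sum(ci * xi for ci, xi in zip(c, best)))  # (0, 1, 1, 1) 42
```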
5. Consider the following algorithm for the 0-1 knapsack problem:
   Sort items in non-increasing order of $c_i/a_i$
   $z := +\infty$
   for $k = 0$ to $n - 1$ do
       $s := \sum_{j=1}^{k} c_j + \frac{c_{k+1}}{a_{k+1}} \left(b - \sum_{j=1}^{k} a_j\right)$
       if $s < z$ then
           $z := s$
       end if
   end for
   return $z$
a) Apply the algorithm to the following instance: $b = 5$, utilities $c = (5, 3, 3)$,
weights $a = (4, 2, 3)$.
b) Describe what the algorithm does. Does the return value correspond to something
seen in class?
c) Suppose that, for the instance from question a), some heuristic yields a solution
with utility 6. Based on your answer in question b), what can you say about the
quality of this solution?
Solution:
a) The algorithm proceeds as follows:
   New ordering (by non-increasing $c_i/a_i$): $c = (3, 5, 3)$, $a = (2, 4, 3)$.
   $z := \infty$
   $k := 0$: $s := \frac{3}{2} \cdot 5 = 7.5$, so $z := 7.5$
   $k := 1$: $s := 3 + \frac{5}{4} \cdot (5 - 2) = 6.75$, so $z := 6.75$
   $k := 2$: $s := (3 + 5) + \frac{3}{3} \cdot (5 - (2 + 4)) = 8 + 1 \cdot (-1) = 7$, so $z$ is unchanged
   return 6.75
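The algorithm transcribes directly into Python and reproduces the hand computation above (a sketch; the function name is ours):

```python
# Minimum over k of the "take the first k items fully, item k+1
# fractionally" bounds, after sorting by non-increasing ratio c_i / a_i.
def min_fractional_bound(c, a, b):
    items = sorted(zip(c, a), key=lambda ca: ca[0] / ca[1], reverse=True)
    z = float("inf")
    used_c = used_a = 0
    for ck, ak in items:                       # item k+1 of the sorted list
        s = used_c + (ck / ak) * (b - used_a)  # first k items + fraction
        z = min(z, s)
        used_c += ck
        used_a += ak
    return z

print(min_fractional_bound(c=[5, 3, 3], a=[4, 2, 3], b=5))  # 6.75
```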
b) For each $k$, the value $s$ equals the utility of taking the first $k$ items entirely and
filling the remaining capacity with the (fractional) item $k + 1$. As the items are
sorted by efficiency, we start with the most efficient items, so $s$ is always an upper
bound. The bound decreases with increasing $k$ until $k + 1$ is the critical item; for
that $k$, the value $s$ corresponds exactly to the LP relaxation. Increasing $k$ further
corresponds to “overpacking” the knapsack with the first $k$ items and subtracting
the utility of the fractional (less efficient) item $k + 1$ from the “overpacked” utility,
which is also an upper bound (but not as good as the LP bound).
The return value $z$ is eventually the LP relaxation value (as the minimum over
these upper bounds).
c) The algorithm returned the upper bound 6.75. Since all utilities are integer, the
optimal value is an integer as well, so it is at most $\lfloor 6.75 \rfloor = 6$. The given
solution with value 6 attains this bound and must therefore be optimal.
6. Consider the following knapsack problem.
   \begin{align*}
   \max \ &30x_1 + 11x_2 + 6x_3 + 12x_4 + 10x_5 \\
   \text{s.t.} \quad &10x_1 + 4x_2 + 4x_3 + 4x_4 + 4x_5 \le 12 \\
   &x_1, x_2, x_3, x_4, x_5 \in \{0,1\}
   \end{align*}
Show that 37 is an upper bound on the optimal value of the problem.
Solution: The optimal solution of the LP relaxation is $(0.8, 0, 0, 1, 0)$ with value
$0.8 \cdot 30 + 12 = 36$. Thus, the LP relaxation gives an upper bound of 36, which
means in particular that 37 is an upper bound for the problem as well.
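The same fractional-fill computation used in the previous exercises confirms the bound numerically (an illustrative sketch; note that with this tie-break the fractional solution comes out as $(1, 0, 0, 0.5, 0)$ rather than $(0.8, 0, 0, 1, 0)$, but both are LP optima with the same bound value 36):

```python
# LP-relaxation bound: fill by non-increasing ratio c_i / a_i,
# taking the critical item fractionally.
c, a, b = [30, 11, 6, 12, 10], [10, 4, 4, 4, 4], 12
z, cap = 0.0, float(b)
for ci, ai in sorted(zip(c, a), key=lambda t: t[0] / t[1], reverse=True):
    take = min(1.0, cap / ai)   # fraction of the current item that fits
    z += take * ci
    cap -= take * ai
    if cap == 0:
        break
print(z)  # 36.0
```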
7. Consider the following variant of the knapsack problem:
You have $n$ packs of different types of rice and other grains, and a capacity of $b$ units.
Pack $i = 1, \dots, n$ has weight $a_i$ and utility $c_i$. Besides taking an entire pack, you can
also decide to open a pack of grain¹ and take an arbitrary partial quantity of that
grain. Once its pack has been opened, the utility of grain $i$ drops to a strictly smaller
utility $c'_i < c_i$. By taking a partial quantity, the utility decreases proportionally from
this base $c'_i$. For example, taking half a pack of grain $i$ yields a utility of $0.5c'_i$. We
would like to choose which grains to select (either a full pack or a partial one) so that
their weight does not exceed the capacity and their utility is maximum.
a) Model the problem.
b) How does this problem compare to the standard 0-1 knapsack with the same items
(without unpacking)? Does the optimal value of one bound the other?
c) Consider the problem for the following instance: $b = 5$, $c = (5, 6, 3)$,
$a = (4, 5, 4)$, and $c'_i = c_i - 1$ for $i = 1, 2, 3$.
Can you solve it optimally? Could you use a technique we have seen before?
¹ Note that the packaging itself has no weight.
d) Consider now this instance: $b = 5$, $c = (3, 1)$, $a = (4, 2)$, and $c'_i = c_i - 1$
for $i = 1, 2$.
What is the difference here?
Solution: In the following, we refer to the new variant as ExKP (Extended Knapsack
Problem) and to the standard knapsack problem as KP (with the same weights $a_i$ and
utilities $c_i$ as packed items).
a) Let $N = \{1, \dots, n\}$ be the set of all grains. We use a binary variable $x_i$ with
$x_i = 1$ if the entire pack $i$ is taken and $x_i = 0$ otherwise, and a fractional variable
$y_i$ which corresponds to the fractional/unpacked quantity taken of grain $i$. We
obtain the following extended knapsack model:
\begin{align*}
\max \ &\sum_{i \in N} (c_i x_i + c'_i y_i) \\
\text{s.t.} \quad &x_i + y_i \le 1 && i \in N \\
&\sum_{i \in N} a_i (x_i + y_i) \le b \\
&x_i \in \{0,1\} && i \in N \\
&y_i \ge 0 && i \in N
\end{align*}
b) Each feasible solution of KP is also a feasible solution of ExKP when setting
$y_i = 0$ for all $i \in N$. As this holds in particular for the optimal solution of KP,
the optimal value $z^{OPT}_{ExKP}$ of ExKP is not smaller than the optimal value
$z^{OPT}_{KP}$ of KP. Therefore, it is an upper bound for KP.
Note as well that the LP relaxation of ExKP has the same value as that of KP,
because the $y$-variables would not be used in the LP relaxation due to their
smaller utility.
In total, we obtain:
\[
z^{OPT}_{KP} \le z^{OPT}_{ExKP} \le z^{LP}_{KP} = z^{LP}_{ExKP}
\]
c) By using the rounding heuristic and taking the critical item partially (after
unpacking), we obtain the solution $x = (1, 0, 0)$, $y = (0, 0.2, 0)$ with value
$5 + 0.2 \cdot (6 - 1) = 6$.
Due to the higher utility of entire packs, we know that an optimal solution of
ExKP takes entire packs and at most one unpacked grain, which would then be
one of the remaining ones with the largest ratio $c'_i/a_i$ (similar to the LP
relaxation of KP)². Thus, we see that the computed solution is optimal by comparing it
with the other candidates $x^2 = (0, 1, 0)$, $y^2 = (0, 0, 0)$ (also optimal with value 6),
and $x^3 = (0, 0, 1)$, $y^3 = (0.25, 0, 0)$ or $x^4 = (0, 0, 1)$, $y^4 = (0, 0.2, 0)$ (both with
value 4).
² If there are multiple remaining grains with the same maximum ratio $c'_i/a_i$, the quantity could
be split arbitrarily among them; however, it is always sufficient to pick only one.
d) In this instance, we see that grain 2 has no value anymore after unpacking
($c'_2 = 0$).
However, the rounding heuristic still produces the optimal solution: $x = (1, 0)$,
$y = (0, 0.5)$ with value $3 + 0.5 \cdot (1 - 1) = 3$, which is better than the other
candidate $x = (0, 1)$, $y = (0.75, 0)$ with value $1 + 0.75 \cdot (3 - 1) = 2.5$.
Remark: There was a mistake in this instance; we actually wanted to show an
instance where the item that is taken entirely differs from the one taken by the
rounding heuristic. For that, consider the instance $b = 6$, $c = (1, 2)$, $a = (2, 5)$,
and $c'_i = c_i - 1$ for $i = 1, 2$.
Here, the rounding heuristic would suggest the solution $x = (1, 0)$, $y = (0, 0.8)$
with value $1 + 0.8 \cdot (2 - 1) = 1.8$, whereas the solution $x = (0, 1)$, $y = (0.5, 0)$ with
value $2 + 0.5 \cdot (1 - 1) = 2$ does not take the first item, but is better.
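Both instances can be double-checked by enumerating the packs taken whole and filling the leftover capacity with a fractional knapsack over the opened-pack utilities $c'_i$ (an illustrative verification sketch; the function name is ours):

```python
from itertools import product

def solve_exkp(c, cp, a, b):
    """Exact for ExKP: for each choice of whole packs x, the best y is a
    fractional knapsack over the leftover capacity (greedy by c'_i/a_i)."""
    best = (-1.0, None)
    for x in product((0, 1), repeat=len(c)):
        cap = b - sum(ai * xi for ai, xi in zip(a, x))
        if cap < 0:
            continue                      # whole packs already exceed b
        val = float(sum(ci * xi for ci, xi in zip(c, x)))
        # fill the rest with opened packs, most efficient ratio first
        for i in sorted(range(len(c)), key=lambda i: cp[i] / a[i], reverse=True):
            if not x[i]:
                frac = min(1.0, cap / a[i])
                val += frac * cp[i]
                cap -= frac * a[i]
        best = max(best, (val, x))
    return best

print(solve_exkp(c=[5, 6, 3], cp=[4, 5, 2], a=[4, 5, 4], b=5))  # (6.0, (1, 0, 0))
print(solve_exkp(c=[1, 2], cp=[0, 1], a=[2, 5], b=6))           # (2.0, (0, 1))
```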
Exercises seen in class:
8. (One-dimensional Cutting Stock Problem) Materials such as lumber, pipe, or cable
are supplied in large blocks, called master pieces, of a standard length C. There
is a demand of n (smaller) pieces to be cut from the master pieces, where demand
i ∈ {1, . . . , n} asks for a piece with a given length li not exceeding C. The goal is to
minimize the number of standard length master pieces needed to satisfy the demand.
a) Model the problem.
b) Develop a heuristic method for producing a good solution for this problem.
c) Give an example of an instance which leads the heuristic to a provably non-optimal
solution.
d) Can you find a heuristic with performance guarantee 2 (i.e., one that uses at most
twice as many master pieces as the minimum amount)? [Hint: have a look at
the waste in each master piece, that is, how much of each master piece we are not
going to use.]
e) Apply your heuristic on the numerical problem in which C = 100, and the de-
mands have the following lengths: 84, 63, 14, 28, 71, 94, 54, 39, 56, 41, 37.
Solution:
a) In the worst case we use $n$ master pieces. We define $y_j$ to be 1 if the $j$-th master
piece is used and 0 otherwise, and $x_{ij}$ to be 1 if demand $i$ is cut from master piece $j$
and 0 otherwise.
\begin{align*}
\min \ &\sum_{j=1}^{n} y_j \\
\text{s.t.} \quad &\sum_{j=1}^{n} x_{ij} = 1 && i = 1, \dots, n \\
&\sum_{i=1}^{n} l_i x_{ij} \le C y_j && j = 1, \dots, n \\
&x_{ij} \in \{0,1\} && i, j = 1, \dots, n \\
&y_j \in \{0,1\} && j = 1, \dots, n
\end{align*}
You can make this model stronger by adding the inequalities $x_{ij} \le y_j$ for all
$i, j = 1, \dots, n$.
b) A very simple heuristic for this problem is FIRST-FIT. In an arbitrary order,
process the next-in-line demand piece on the first available master piece which
has enough length left. If there is no such master piece, use a new one.
c) Take the following demand pieces (in length):
   piece:  a b c d e f g h i
   length: 4 3 4 3 2 2 2 2 2
with a master piece length of 8. By processing the demand pieces in this exact
order, we need 4 master pieces, while we could fit the demands in just 3
(e.g., $\{a, c\}$, $\{b, d, e\}$, $\{f, g, h, i\}$, each of total length 8).
d) FIRST-FIT never uses more than twice the minimum number of master pieces
required. Indeed, let $m$ be the number of master pieces used by FIRST-FIT and
consider the total net amount of length needed, i.e., $\sum_{i=1}^{n} l_i$. Notice that there
is at most one master piece which is used for less than one half of its length: if
not, there would be two different master pieces each used for less than one half
of their length, and FIRST-FIT would have put their demand pieces on a single
master piece. Hence, at least $m - 1$ master pieces are used for more than one half
of their length. In other words, more than one half of the length of at least $m - 1$
master pieces is used, and therefore we can say that $\sum_{i=1}^{n} l_i > C\,\frac{m-1}{2}$. Now
denote the optimal objective value by OPT. We know that
$\mathrm{OPT} \ge \frac{1}{C}\sum_{i=1}^{n} l_i$ and we get
\[
\mathrm{OPT} \ge \frac{\sum_{i=1}^{n} l_i}{C} > \frac{C\,\frac{m-1}{2}}{C} = \frac{m-1}{2},
\]
that is, $2\,\mathrm{OPT} > m - 1$ and, since both sides are integers, $2\,\mathrm{OPT} \ge m$.
Alternative (longer) proof. Consider the relaxation where it is allowed to
satisfy a demand by cutting a piece fractionally from multiple master pieces. This
corresponds to relaxing the constraint $x_{ij} \in \{0,1\}$ for $i, j = 1, \dots, n$ in the model
of Question a). Consider the variant of FIRST-FIT for this relaxation where, in
an arbitrary order, we again process the next-in-line demand piece on the first
available master piece that has enough remaining length. Whenever there is no
such master piece, however, we now use the remaining length of the master piece
to satisfy the demand of a piece fractionally, and we use a new master piece to
satisfy the other fraction of the piece's demand.
The following figure illustrates this for the instance of Question c), where piece c
is split fractionally over master pieces 1 and 2.
[Figure: master pieces of length 8; master piece 1 holds a, b and a fraction of c;
master piece 2 holds the rest of c, then d and e; master piece 3 holds f, g, h, i.]
Observe that we need exactly $\lceil \frac{1}{C} \sum_{i=1}^{n} l_i \rceil$ master pieces in this
(fractional) solution.
For every demand that has been split over two master pieces, we now introduce
a new master piece that only satisfies the demand of the corresponding fractional
piece. For the instance above, this gives:
[Figure: master piece 1 holds a, b; master piece 2 holds c on its own;
master piece 3 holds d, e; master piece 4 holds f, g, h, i.]
Since there are at most $\lfloor \frac{1}{C} \sum_{i=1}^{n} l_i \rfloor$ pieces whose demand has been split over
two master pieces (i.e., at most one per completely filled master piece), we thus
obtain a feasible solution for the original problem using at most
$2 \lceil \frac{1}{C} \sum_{i=1}^{n} l_i \rceil$ master pieces.
Now observe that FIRST-FIT yields a solution that is at least as good as the one
obtained above. Indeed, the FIRST-FIT solution results from going through the
list of pieces in the solution obtained above and putting each piece on the lowest-
indexed master piece with sufficient remaining length. Clearly, doing so will never
increase the number of needed master pieces.
Since $\mathrm{OPT} \ge \lceil \frac{1}{C} \sum_{i=1}^{n} l_i \rceil$, where OPT again denotes the optimal
objective function value of the original problem, this proves the factor-two performance
guarantee of FIRST-FIT.
e) FIRST-FIT will work as follows:
put 84 in master piece 1
put 63 in master piece 2
put 14 in master piece 1
put 28 in master piece 2
put 71 in master piece 3
put 94 in master piece 4
put 54 in master piece 5
put 39 in master piece 5
put 56 in master piece 6
put 41 in master piece 6
put 37 in master piece 7
This solution uses seven master pieces in total.
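FIRST-FIT transcribes directly into Python and reproduces this assignment, as well as the 4-piece outcome on the instance from part c) (an illustrative sketch):

```python
# FIRST-FIT: put each demand on the lowest-indexed master piece with
# enough remaining length, opening a new master piece when none fits.
def first_fit(lengths, C):
    remaining = []                # leftover length of each open master piece
    assignment = []
    for l in lengths:
        for j, cap in enumerate(remaining):
            if l <= cap:
                remaining[j] -= l
                assignment.append(j + 1)
                break
        else:                     # no open master piece fits
            remaining.append(C - l)
            assignment.append(len(remaining))
    return assignment, len(remaining)

print(first_fit([84, 63, 14, 28, 71, 94, 54, 39, 56, 41, 37], C=100))
# ([1, 2, 1, 2, 3, 4, 5, 5, 6, 6, 7], 7)
print(first_fit([4, 3, 4, 3, 2, 2, 2, 2, 2], C=8)[1])  # 4 (optimum is 3)
```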
9. (Multi-dimensional Knapsack Problem, from A. Volgenant and J. A. Zoon, Oct. 1990)
Suppose you have a finite set of objects you want to take with you. Each object has
a value, and you would like to take the maximum total value with you. But each
object also has its own mass, and you can safely carry only a total mass below a
given threshold. This is the very famous Knapsack Problem.
Now suppose that you also have limitations on the total volume of the objects you can
carry: analogously to the mass, we have another constraint to deal with. This is the
2-dimensional Knapsack Problem.
We can in general add an arbitrary (finite) number of constraints and we get what is
referred to as the multi-dimensional Knapsack Problem.
a) Can you give a heuristic algorithm for the multi-dimensional Knapsack Problem?
b) Can you explicitly give an instance where your heuristic fails to reach the optimal
solution, and explain why?
c) Apply your heuristic to the following instance:
   \begin{align*}
   \max \ &4x_1 + 3x_2 + x_3 + 6x_4 + 5x_5 \\
   \text{s.t.} \quad &x_1 + 3x_2 + 4x_3 + 3x_4 + 2x_5 \le 8 \\
   &8x_1 + x_2 + 9x_3 + x_5 \le 10 \\
   &x_1, x_2, x_3, x_4, x_5 \in \{0,1\}
   \end{align*}
Solution:
a) Multiply both the left-hand and right-hand sides of the constraints so that all
constraints have the same right-hand side. Compute for every object $i$ and every
dimension $j$ the ratio $c_i/a_{ij}$, where $c_i$ is the objective coefficient and $a_{ij}$ is the
(scaled) coefficient of object $i$ in dimension $j$. If $a_{ij} = 0$, then set the ratio to some
arbitrarily large positive value $M$. Now, for each object $i$, take the average of its
ratios over all dimensions. Sort the objects by non-increasing averages (break ties
randomly). Then, for every next-in-line object, check whether taking the object is
feasible for all constraints (dimensions). If so, put the object in the knapsack;
otherwise discard the current object. Go to the next-in-line object and repeat.
Once there are no objects left, stop. (A code sketch of this heuristic follows the
solution of part c) below.)
b) Consider the problem
\begin{align*}
\max \ &x_1 + 3x_2 + x_3 \\
\text{s.t.} \quad &9x_1 + 2x_2 \le 10 \\
&2x_2 + 9x_3 \le 10 \\
&x_1, x_2, x_3 \in \{0,1\}.
\end{align*}
Since $x_3$ and $x_1$ do not appear in the first and second constraints, respectively,
both get an average ratio involving $M$, so the heuristic above would select the
solution $(1, 0, 1)$ with value 2, while the optimal solution is easily seen to be
$(0, 1, 0)$ with value 3.
c) First we transform the first constraint into an equivalent one with right-hand side
10:
\[
1.25x_1 + 3.75x_2 + 5x_3 + 3.75x_4 + 2.5x_5 \le 10.
\]
The first constraint now gives the ratios $3.2,\ 0.8,\ 0.2,\ 1.6,\ 2$, and the second
constraint gives $0.5,\ 3,\ 0.111,\ M,\ 5$.
The averages are thus $1.85,\ 1.9,\ 0.156,\ 0.5M + 0.8,\ 3.5$, so we try to include the
variables in the solution in the order $x_4, x_5, x_2, x_1, x_3$: $x_4$, $x_5$ and $x_2$ fit,
after which neither $x_1$ nor $x_3$ is feasible. We finally end up with the solution
$(0, 1, 0, 1, 1)$ with value 14.
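Here is the promised sketch of the heuristic from part a) applied to this instance (variable names and the choice of common right-hand side are ours):

```python
# Average-ratio heuristic for the multi-dimensional knapsack (sketch).
c = [4, 3, 1, 6, 5]
A = [[1, 3, 4, 3, 2],     # dimension 1, right-hand side 8
     [8, 1, 9, 0, 1]]     # dimension 2, right-hand side 10
b = [8, 10]
M = 1e9                   # stands in for the arbitrarily large ratio

R = max(b)                # common right-hand side after scaling
avg = [sum((c[i] / (A[j][i] * R / b[j]) if A[j][i] else M)
           for j in range(len(b))) / len(b)
       for i in range(len(c))]

x, used = [0] * len(c), [0] * len(b)
for i in sorted(range(len(c)), key=lambda i: avg[i], reverse=True):
    if all(used[j] + A[j][i] <= b[j] for j in range(len(b))):
        for j in range(len(b)):
            used[j] += A[j][i]  # object fits in every dimension: take it
        x[i] = 1

print(x)  # [0, 1, 0, 1, 1]
```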