
Discrete Applied Mathematics 98 (1999) 121–130

Pattern minimisation in cutting stock problems


Colin McDiarmid
Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, UK
Received 21 October 1997; accepted 8 February 1999

Abstract
In the cutting stock pattern minimisation problem, we wish to satisfy demand for various customer reels by cutting as few as possible jumbo reels, and further to minimise the number of distinct cutting patterns used. We focus on the special case in which any two customer reels fit into a jumbo, but no three do: this case is of interest partly because it is the simplest case that is not trivial, and partly because it may arise in practice when one attempts to improve a solution iteratively.
We find that the pattern minimisation problem is strongly NP-hard even in this special case, when the basic problem of finding a minimum waste solution is trivial. Our analysis focusses on balanced subsets, and suggests an approach for heuristic methods involving searching for balanced subsets. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Cutting stock; Cutting patterns; Partition; NP-hard; Dynamic programming

1. Introduction
Some materials such as paper may be manufactured in wide jumbo rolls, which are later cut into much narrower rolls to satisfy customer demands. To minimise waste, cutting patterns should be chosen so as to use as few jumbos as possible (see [4,7,8]). Thus the basic cutting stock problem has input a positive integer J, distinct positive integers r_1, …, r_n, and positive integers d_1, …, d_n; and the required task is to use as few as possible jumbos of width J to satisfy the demand for d_i customer reels of width r_i, for each i = 1, …, n. This is one of the classical OR problems. It contains the strongly NP-complete problem 3-PARTITION: thus it is NP-hard even if the jumbo size J is bounded by a polynomial in n, and each customer reel size r_i satisfies J/4 < r_i < J/2 (see [6], p. 224). Thus we cannot expect always to be able to find optimal solutions to such problems within a reasonable time.

Fax: +44-1865-272-595.


E-mail address: [email protected] (C. McDiarmid)


Each time a different pattern of customer reels is to be cut, the knives on the cutting machine need to be re-set. A problem presented by C.N. Goulimis and investigated at the 29th European Study Group with Industry in March 1996 concerned how to find solutions to the above cutting stock problem that further minimise the number of distinct cutting patterns used (see [1,2,9]). This is of course going to be hard in general, since the basic problem is hard. In order to investigate the added difficulty of the extended problem, we consider here a special case in which the basic problem of minimising the number of jumbos (minimising waste) is trivial.
PATTERN MINIMISATION
Input: positive integers d_1, …, d_n.
Task: in the cutting stock problem where the demand is for d_i reels of type i, and any two reels fit into a jumbo but no three do, find a minimum waste solution which further minimises the number of distinct patterns used.
This very restricted special case is of interest partly because it seems to be the simplest case that is not completely trivial, and partly because it may arise in practice when one attempts to improve a solution iteratively. For example, if a collection of some of the currently used patterns agree on the large reels and differ only on small reels, and any two small reels fit in the width left by the large reels, then when we attempt to re-allocate the small reels we face exactly this special case [16]. We investigate whether the pattern minimisation problem remains hard in this special case, and briefly consider approaches to finding good solutions.
It is clear that the least number of jumbos needed is ⌈∑_i d_i / 2⌉, the round-up of half the total demand; and it is trivial to find a corresponding minimum waste solution. But how easy is it to find, amongst the minimum waste solutions, one which minimises the number of patterns used? For the variant of the problem when no three customer reels fit into a jumbo but also some pairs may not, it was shown in [1] that the problem is strongly NP-hard. The theorem below strengthens this negative result.
Theorem 1. The problem PATTERN MINIMISATION is strongly NP-hard.
The key to understanding the above problem is the concept of a balanced subset. Given a family d = (d_1, …, d_n) of non-negative integers, denote by χ(d) the minimum number of patterns used in any minimum waste solution. Also, call a non-empty subset of {1, …, n} balanced if it can be partitioned into two sets A and B such that ∑_{i∈A} d_i = ∑_{i∈B} d_i. Thus if some d_i = 0 then the singleton set {i} is balanced. Let β(d) be the maximum number of pairwise disjoint balanced subsets.
Lemma 1. If ∑_i d_i is even, then χ(d) = n − β(d).

[If ∑_i d_i is odd, then χ(d) = χ(d′), where d′ is obtained from d by adding an extra co-ordinate d_{n+1} = 1.] For example, if d = (2, 3, 5, 4, 4) then {1, 2, 3} and {4, 5} are disjoint balanced subsets, and since every balanced subset here has at least two elements we have β(d) = 2; thus χ(d) = 5 − 2 = 3: cut the pattern {1, 3} twice, {2, 3} three times and {4, 5} four times, using the minimum of 9 jumbos. We shall prove this lemma in the next section.

When faced with a pattern minimisation problem, we are led by Theorem 1 and Lemma 1 above to consider heuristic approaches to finding good packings of balanced subsets. Unfortunately it is NP-complete even to test if a given family a_1, …, a_n of positive integers has a balanced subset. This is the problem called WEAK PARTITION in David Johnson's NP-completeness column [10], where three independent proofs of its NP-completeness are cited, the earliest being in [13].
We wish to find a good packing of balanced subsets, but we know that it is very hard to find a best packing, and indeed it is hard to find any balanced subset. A natural heuristic approach is repeatedly to seek and delete a balanced subset, preferably a small one. One method for seeking a balanced subset is to use differencing, where we repeatedly replace two numbers by the absolute value of their difference (see [5,12,15,17]); a small sketch of this idea is given below. This approach is currently being investigated in the context of pattern minimisation [16]. Another method is to use a tolerably fast algorithm that is guaranteed to find a balanced subset or a smallest balanced subset: we shall see that we can use a straightforward dynamic programming method to test if there is a balanced subset, and find a smallest balanced subset if there is one, in pseudo-polynomial time. Heuristic approaches for more general cases of pattern minimisation in cutting stock problems are considered in [1,2,9,11].
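To make the differencing idea concrete, here is a minimal Python sketch (ours, not taken from the paper; the function name and the convention of returning the two equal-sum index sets are assumptions). It repeatedly merges the two largest remaining values, remembering which original indices lie on each side of every merge; the first time a difference of zero appears, the indices merged at that point split into two sides of equal total demand, that is, they form a balanced subset.

import heapq
from itertools import count

def differencing_balanced_subset(demands):
    """Heuristic search for a balanced subset by differencing.

    Returns a pair (A, B) of disjoint index sets with equal total demand
    (so A u B is a balanced subset), or None if no difference ever reaches
    zero.  This is only a heuristic: it can fail even when a balanced
    subset exists, and it need not find a smallest one."""
    tie = count()                      # tie-breaker so the heap never compares sets
    # max-heap entries (value negated): (-value, tie, indices on side A, indices on side B)
    heap = [(-d, next(tie), {i}, set()) for i, d in enumerate(demands)]
    heapq.heapify(heap)
    while len(heap) >= 2:
        v1, _, a1, b1 = heapq.heappop(heap)    # largest remaining value
        v2, _, a2, b2 = heapq.heappop(heap)    # second largest
        diff = (-v1) - (-v2)                   # non-negative difference
        side_a, side_b = a1 | b2, b1 | a2      # the smaller item joins the opposite side
        if diff == 0:
            return side_a, side_b              # equal sums: a balanced subset
        heapq.heappush(heap, (-diff, next(tie), side_a, side_b))
    return None

# Example: differencing_balanced_subset([2, 3, 5, 4, 4]) returns, e.g., ({2, 4}, {0, 1, 3}).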
The plan of the rest of the paper is as follows. In the next section we establish the relationship between numbers of patterns and packings of balanced subsets. Next we prove our main result, that the problem PATTERN MINIMISATION is strongly NP-hard. After that, we consider briefly how to search for balanced subsets, and finally we make a few concluding remarks.

2. Patterns, degrees and balanced sets


In this section we shall prove Lemma 1, which relates numbers of patterns to packings of balanced subsets. The problem PATTERN MINIMISATION can be rephrased in terms of graphs. A pattern involving reel type i and reel type j will correspond to an edge between vertex v_i and vertex v_j. We shall allow our graphs to contain a loop at any vertex but not to contain multiple edges.

Given a graph G = (V, E) on the vertex set V = {v_1, …, v_n}, together with a family w of non-negative integer weights w_e on the edges e, the weighted degree of vertex v_i is the sum over the edges e incident with v_i of the weight w_e, with any loop counting twice. Given a vector d = (d_1, …, d_n) of positive integers, we call the pair (G, w) a representing network for d if each vertex v_i has weighted degree d_i. Consider the following problem.
DEGREES
Input: positive integers d_1, …, d_n with ∑_i d_i even.
Task: find a representing network with as few edges as possible.

Given a family d = (d_1, …, d_n) of positive integers with ∑_i d_i even, there is a natural correspondence between the solutions in DEGREES and those in PATTERN MINIMISATION, and in particular the minimum number of edges possible in the former equals χ(d).
Proof of Lemma 1. Suppose that ∑_i d_i is even. Let (G, w) be any representing network, and consider a component of G with set K of k nodes. It must of course have at least k − 1 edges, and if it has this number and thus is a tree on K, then the two-colouring of the vertices shows that K is balanced. Thus the number of edges in G is at least n minus the number of components on balanced sets. Hence χ(d) ≥ n − β(d).

To prove the reverse inequality, consider any partition of {v_1, …, v_n} into sets (K_i : i ∈ I) of which at most one is not balanced. We will show that there is a representing network (G, w) such that the graph G has components (G_i : i ∈ I), where G_i has vertex set K_i, and these components are such that, if K_i is balanced then G_i is a tree and if not then G_i is a tree with one added loop. This will complete the proof of the lemma.

Consider a balanced set K, with partition A ∪ B such that ∑_{i∈A} d_i = ∑_{i∈B} d_i. We must show that there is a tree T on K and non-negative weights w_e on the edges e of T such that, for each node v ∈ K, the sum of the weights of the incident edges equals d_v (where loops count double). We use induction on |K|. If either A or B is empty, the result is trivial, since we must have d_v = 0 for each v ∈ K. Suppose then that both A and B are non-empty. Pick any a ∈ A and b ∈ B, and without loss of generality suppose that d_a ≥ d_b. Reduce d_a by d_b. Now K \ {b} is balanced, and inductively we can find an appropriate weighted tree. Then add the edge ab with weight d_b.

Finally, consider a set K which is not balanced, but is such that the corresponding sum of demands is even. As above, we can always replace two demands by their difference at the cost of using one edge. Thus we can satisfy all but one demand by using edges forming a tree on K, and then one added loop completes the component.
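The induction in the last two paragraphs is effectively an algorithm: given a balanced set together with its two equal-sum parts, it produces the edges, that is, the patterns, directly. Here is a minimal Python sketch of it (ours; the function name and the (i, j, w) output convention, meaning "cut the pattern containing one reel of type i and one of type j exactly w times", are assumptions, and zero-weight edges are simply omitted).

def patterns_for_balanced_set(demands, A, B):
    """Turn a balanced set into patterns, following the inductive argument above.

    demands: list of demands d_i.  A, B: lists of indices with equal total
    demand.  Returns triples (i, j, w): cut the pattern {i, j} w times.
    Each index then receives exactly its demand, using at most
    |A| + |B| - 1 patterns."""
    d = list(demands)
    A, B = list(A), list(B)
    assert sum(d[i] for i in A) == sum(d[i] for i in B), "the set must be balanced"
    edges = []
    while A and B:
        a, b = A[-1], B[-1]
        if d[a] < d[b]:                 # make a the endpoint with the larger demand
            a, b = b, a
            A, B = B, A
        if d[b] > 0:
            edges.append((a, b, d[b]))  # all of b's demand is met by this one pattern
        d[a] -= d[b]                    # a's leftover demand is handled in later steps
        B.pop()                         # b is finished; the remaining set is still balanced
    assert all(d[i] == 0 for i in A + B)   # equal sums force the leftovers to zero
    return edges

# Example: patterns_for_balanced_set([2, 3, 5], [0, 1], [2]) returns [(2, 1, 3), (2, 0, 2)].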

3. PATTERN MINIMISATION is strongly NP-hard


In this section we prove Theorem 1, that the problem PATTERN MINIMISATION is strongly NP-hard. A summing triple (or Schur triple) is a set of three distinct integers such that the sum of two equals the third. The following problem could be more fully described as partition of distinct integers into summing triples.

SUMMING TRIPLES
Input: distinct positive integers s_1, …, s_{3n}.
Question: can the input be partitioned into summing triples?

This problem is similar to NUMERICAL MATCHING WITH TARGET SUMS in Garey and Johnson [6], p. 224, but with the extra (surprisingly troublesome) condition that the numbers involved must be distinct.
Lemma 2. The problem SUMMING TRIPLES is strongly NP-complete.
Most of this section will be devoted to proving the above lemma, but first let us see that it will yield Theorem 1.
Proof of Theorem 1 (assuming Lemma 2). We give a straightforward polynomial time reduction of SUMMING TRIPLES to DEGREES. Consider an instance of SUMMING TRIPLES as above. Take d = (2s_1, …, 2s_{3n}) as an instance of DEGREES. Since the s_i are distinct positive integers, there are no balanced sets of size less than 3. Thus β(d) ≤ n. Hence by Lemma 1, χ(d) ≥ 2n, and χ(d) = 2n if and only if s_1, …, s_{3n} can be partitioned into summing triples.
Now consider the problem SUMMING TRIPLES, which is clearly in NP. We shall show that it is strongly NP-complete by giving a reduction from the NP-complete problem RESTRICTED X3C, described below, to SUMMING TRIPLES with each integer s_i in O(n³).

RESTRICTED X3C
Input: a set X of 3q elements and a collection C of triples contained in X, such that each element of X is in exactly 3 triples.
Question: can X be partitioned into triples in C?
Lemma 3. The problem RESTRICTED X3C is NP-complete.
Proof. It is known that the problem is NP-complete if each element is constrained to be in at most 3 triples rather than exactly 3 (see Garey and Johnson [6], p. 221). It is easy to tidy up an instance X, C so that each element is in exactly 3 triples. Clearly we can insist that each element is in either 2 or 3 triples. We can partition the elements which are in exactly two triples into blocks of size three (their number is divisible by three, as counting element-triple incidences shows). For each block {x, y, z}, add three new elements x′, y′, z′ and four new triples {x, y′, z′}, {x′, y, z′}, {x′, y′, z}, {x′, y′, z′}. Call the new instance X′, C′. Clearly, each element of X′ is in exactly 3 triples in C′; and X can be partitioned into triples in C if and only if X′ can be partitioned into triples in C′.
Proof of Lemma 2. Consider an instance X, C of RESTRICTED X3C, where |X| = |C| = 3q = n. Let Y = X × {1, …, 7}. We shall construct an expanded collection D of triples contained in Y, such that X can be partitioned into triples in C if and only if Y can be partitioned into triples in D; and then we shall construct an instance (s(y) : y ∈ Y) of SUMMING TRIPLES, where each size s(y) is O(n³), such that the summing triples correspond precisely to the triples in D.

Form a bipartite graph G = (C, X; E) with vertex parts C and X and with vertices T ∈ C and x ∈ X adjacent (that is, edge Tx ∈ E) exactly when x ∈ T. Since each vertex degree in G is three, we may find in polynomial time a proper 3-edge-colouring φ : E → {1, 2, 3}. We now split each element x ∈ X into three copies (x, 1), (x, 2) and (x, 3). Given a triple T = {x, y, z} ∈ C, let T′ be the triple

T′ = {(x, φ(Tx)), (y, φ(Ty)), (z, φ(Tz))}.

Observe that the elements in the triple T′ have distinct first co-ordinates (in X), and distinct second co-ordinates (which must be 1, 2, 3); and that the family C′ = (T′ : T ∈ C) partitions the set X × {1, 2, 3}.

Next, for each x ∈ X, let F_x be the collection consisting of the four triples {(x, 1), (x, 4), (x, 5)}, {(x, 3), (x, 4), (x, 5)}, {(x, 2), (x, 6), (x, 7)} and {(x, 3), (x, 6), (x, 7)}. Let D be the collection consisting of C′ together with all the triples in the collections F_x for x ∈ X. Thus D contains |C| + 4|X| = 5n triples.

If some subcollection D′ of D is a partition of Y, then for each x ∈ X, exactly one of the elements (x, 1), (x, 2), (x, 3) is not covered by triples in D′ ∩ F_x and so must be covered by triples in D′ ∩ C′. It follows easily that X may be partitioned into triples in C if and only if Y may be partitioned into triples in D. This completes the first part of the construction.
Next we shall see how to assign a size s(y) to each element y ∈ Y so that the summing triples of elements in Y are precisely the triples in D. We shall use a family of almost k-wise independent random variables defined on a small sample space.

Let l = ⌈3 log₂ n⌉ + 10, let t = 2nl, let k = 9l, and let ε = 2^{−l}. There is a subset Ω of {0, 1}^t, of size 2^{(1+o(1))k}, with the following property: if a point ω = (ω_1, …, ω_t) is picked uniformly at random from Ω, then (ω_1, …, ω_t) is ε-away from k-wise independent (see [3]). Further, such a set Ω can be (explicitly) constructed in time bounded by a polynomial in n.

Given a point ω ∈ Ω, for each i = 1, …, 2n, let S_i = S_i(ω) be the non-negative integer with binary expansion ω_{(i−1)l+1} ⋯ ω_{il}. When a point ω = (ω_1, …, ω_t) is picked uniformly at random from Ω, then S_1, …, S_{2n} are random variables, taking values in {0, 1, …, N − 1}, where N = 2^l, and they have the following property. For any I ⊆ {1, …, 2n} with 0 < |I| ≤ 9, and for any set E ⊆ {0, 1, …, N − 1}^I, we have

|P((S_i : i ∈ I) ∈ E) − |E|/N^{|I|}| ≤ ε.
We can now define the element sizes s(y) for our instance of SUMMING TRIPLES. For clarity we shall write s(x, 1) rather than s((x, 1)), and so on. Enumerate the elements of X as x_1, x_2, …, x_n. Given a sample point ω ∈ Ω, for each i = 1, …, n we let s(x_i, 1) = 2S_{2i−1}(ω) + 2N and s(x_i, 2) = 2S_{2i}(ω) + 2N. Let x ∈ X. Suppose that (x, 3) is in the triple T′ ∈ C′, where T′ also contains (x′, 1) and (x″, 2). Then we let s(x, 3) = s(x′, 1) + s(x″, 2) (≥ 4N). This defines s(x, i) for each x ∈ X and i ∈ {1, 2, 3}. Now, for each x ∈ X, let s(x, 4) = (s(x, 3) − s(x, 1))/2, s(x, 5) = (s(x, 3) + s(x, 1))/2, s(x, 6) = (s(x, 3) − s(x, 2))/2, and s(x, 7) = (s(x, 3) + s(x, 2))/2. We have now defined a positive integer size s(y) for each y ∈ Y, and each triple in D is always summing.

Let B be the bad event that either the values s(y) for y ∈ Y fail to be distinct, or some triple other than those in D is summing. We shall show that P(B) < 1. It will follow that there is a sample point ω ∈ Ω \ B, and we can find such a point in polynomial time by exhaustive search. This will then complete the proof of the lemma.

To prove that P(B) < 1, it suffices for us to suppose that the random variables S_1, …, S_{2n} are precisely 9-wise independent, with each uniformly distributed on {0, 1, …, N − 1}, and then to prove that P(B) ≤ 1/2: for B is a union of fewer than (7n)³ events, each determined by at most 9 of the S_i, so the ε-away property changes the total probability by less than (7n)³ ε < 1/2. Observe that, from the definition of s(y), for each y there is a vector a(y) ∈ {−1, 0, 1, 2}^{2n}, with support of size at most 3, such that s(y) − ∑_i a(y)_i S_i is a constant (that is, does not depend on ω).
Let y_1 and y_2 be distinct elements of Y. Let a = a(y_1) − a(y_2). It is easy to see that a is a non-zero integer vector with support of size at most 6, and there is a constant c such that s(y_1) = s(y_2) if and only if ∑_i a_i S_i = c. But the probability of this last event is at most 1/N. Thus the probability that the values s(y) for y ∈ Y are not all distinct is at most (|Y| choose 2) · (1/N) ≤ (7n)²/(2N).
We may argue similarly for the unwanted triples. Let y_1, y_2 and y_3 be distinct elements of Y which form a triple T which is not in D. Consider the event that s(y_1) + s(y_2) = s(y_3). Let a = a(y_1) + a(y_2) − a(y_3). Then a has support of size at most 9, and there is a constant c such that s(y_1) + s(y_2) = s(y_3) if and only if ∑_i a_i S_i = c. We claim that the vector a is non-zero. It will then follow that the probability that some triple not in D is summing is at most (|Y| choose 3) · 3 · (1/N) ≤ (7n)²(8n)/(2N); and hence, since N ≥ 2^{10} n³, we have P(B) ≤ 1/2, as required.
It remains only to establish the above claim. For each element y = (x, i) ∈ Y, let π_1(y) = x and π_2(y) = i. Also, denote ∑_i a(y)_i by σ(y). Assume that a = 0: we must obtain a contradiction. Note first that if π_2(y) = 3 then σ(y) = 4. Hence neither π_2(y_1) nor π_2(y_2) can equal 3, since we would then have ∑_i a_i ≥ 4 + 1 − σ(y_3) > 0.

Now suppose that π_2(y_1) ∈ {1, 2}, that is σ(y_1) = 2. Then σ(y_2) must be 1 or 2. We consider these two cases.

(i) Suppose first that σ(y_2) = 1. Then σ(y_3) = 3. Now a(y_1) has only one non-zero co-ordinate 2, a(y_2) has 1, 1, −1 and a(y_3) has 1, 1, 1. Then π_1(y_1) = π_1(y_2) = π_1(y_3) and the triple T = (y_1, y_2, y_3) is in D, a contradiction.

(ii) Now suppose that σ(y_2) = 2. Then σ(y_3) = 4. Now a(y_1) has one 2, a(y_2) has one 2 and a(y_3) has 2, 2. Again the triple T is in D, a contradiction.

We have now shown that π_2(y_1) ∉ {1, 2, 3}, and similarly for y_2. Thus both π_2(y_1) and π_2(y_2) are in {4, 5, 6, 7}. So σ(y_1) and σ(y_2) are 1 or 3, and σ(y_3) is 2 or 4.

Suppose first that σ(y_1) = σ(y_2) = 1, and so σ(y_3) = 2. Then both a(y_1) and a(y_2) have non-zero co-ordinates 1, 1, −1 and a(y_3) has one 2. But then a(y_1) and a(y_2) must have the same support. It follows that π_1(y_1) = π_1(y_2), and this is not possible.

Without loss of generality, we may now assume that σ(y_1) = 1 and σ(y_2) = 3. Then σ(y_3) = 4; and a(y_1) has non-zero co-ordinates 1, 1, −1; a(y_2) has 1, 1, 1; and a(y_3) has 2, 2. Then again we find that a(y_1) and a(y_2) must have the same support, and so π_1(y_1) = π_1(y_2) = π_1(y_3). But now the triple T is in D, a contradiction.

4. Finding balanced subsets


We noted earlier that it is NP-complete to test if a given family a_1, …, a_n of positive integers has a balanced subset, but that we may nevertheless wish to search for balanced subsets in certain heuristic approaches to pattern minimisation. In this section we see that a straightforward approach based on dynamic programming will solve such problems in pseudo-polynomial time.

We first see how to test if there is a balanced subset, and if so to find one (with smallest corresponding sum), in O(n ∑_i a_i) steps. After that we shall see how to find a smallest balanced subset, in O(n² ∑_i a_i) steps. It is not obvious which of these algorithms would be better within a heuristic method for pattern minimisation.
Let s_0 = ⌊∑_i a_i / 2⌋. For each s = 0, 1, …, s_0, and each j = 0, 1, …, n, let f(s, j) be T if there is a subset of {1, …, j} with corresponding sum s, and let f(s, j) equal F otherwise. Then f(0, j) = T for each j = 0, 1, …, n, and f(s, 0) = F for each s = 1, …, s_0. We can calculate all the values f(s, j) in turn, in O(1) steps per value, as follows.

for s = 1, …, s_0
  for j = 1, …, n
    f(s, j) ← f(s, j − 1) ∨ f(s − a_j, j − 1).

(If s < 0 we interpret f(s, j) as F.)
If, in the recurrence above, it is never the case that both terms on the right are T, then there is no balanced subset. But suppose that this case does occur, and the first time we meet it is at s = s* and j = j*. Then there are two distinct subsets A and B with corresponding sum s* (one containing j* and one not). Further, A and B must be disjoint, by the minimality of s*. Clearly we can find such sets A and B quickly, and their union is the desired balanced set.
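A Python sketch of this first dynamic programme follows (ours; the function names and the choice to return the two index sets are assumptions). It fills the table f in the order above, stops at the first place where both terms of the recurrence are T, and backtracks to recover the two disjoint subsets.

def find_balanced_subset_dp(a):
    """Test for a balanced subset in O(n * sum(a)) time and space.

    a[0..n-1] are positive integers.  Returns disjoint index sets (A, B)
    with equal sums, so that A u B is a balanced subset of smallest possible
    common sum, or None if no balanced subset exists."""
    n, s_max = len(a), sum(a) // 2
    # f[s][j] is True iff some subset of a[0..j-1] has sum s
    f = [[s == 0 for _ in range(n + 1)] for s in range(s_max + 1)]
    for s in range(1, s_max + 1):
        for j in range(1, n + 1):
            with_item = s >= a[j - 1] and f[s - a[j - 1]][j - 1]
            f[s][j] = f[s][j - 1] or with_item
            if f[s][j - 1] and with_item:
                # Two distinct subsets of a[0..j-1] sum to s, one using item
                # j-1 and one not; by the minimality of s they are disjoint.
                A = _backtrack(f, a, s, j - 1)
                B = _backtrack(f, a, s - a[j - 1], j - 1) + [j - 1]
                return A, B
    return None

def _backtrack(f, a, s, j):
    """Recover one subset of indices < j with sum s (f[s][j] must be True)."""
    subset = []
    for jj in range(j, 0, -1):
        if not f[s][jj - 1]:           # sum s is unreachable without item jj-1
            subset.append(jj - 1)
            s -= a[jj - 1]
    assert s == 0
    return subset

# Example: find_balanced_subset_dp([3, 5, 2]) returns ([1], [0, 2]), since 5 = 3 + 2.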
Now suppose that we wish to find a smallest balanced subset if there is one. We describe a method based on dynamic programming which takes O(n² ∑_i a_i) steps. As before, let s_0 = ⌊∑_i a_i / 2⌋. For each s = 0, 1, …, s_0, and each j, k = 0, 1, …, n with k ≤ j, let f(s, j, k) be T if there is a subset of {1, …, j} of size at most k with corresponding sum s, and let f(s, j, k) equal F otherwise. Then the recurrence

f(s, j, k) = f(s, j − 1, k) ∨ f(s − a_j, j − 1, k − 1),

together with appropriate boundary conditions, allows us to determine all the values f(s, j, k) in O(1) steps per value.

For each s = 1, …, s_0 let a_s be the smallest size of a subset of {1, …, n} with corresponding sum s if there is such a subset, and let a_s = 0 if not; and let b_s be the second smallest size of a subset of {1, …, n} with corresponding sum s if there are at least two such subsets, and let b_s = 0 if not. We have seen that we can calculate all the values f(s, j, k) in O(n² s_0) steps. From these values f(s, j, k), we can calculate all the values a_s and b_s within the same time bound, as follows.

To calculate a_s, note that a_s = 0 if f(s, n, n) = F, and if not then a_s is the smallest k such that f(s, n, k) = T. Now suppose that a_s ≠ 0, and consider how to calculate b_s. Note that, if f(s, j, k) = T then we can find a subset of {1, …, j} of size at most k with corresponding sum s, by backtracking through the recurrence. We can also tell if there is more than one such subset by checking if the right side in the recurrence ever has both terms T. If corresponding to f(s, n, n) (which we know is T) there is a unique solution, then b_s = 0. Otherwise, b_s is the least k such that corresponding to f(s, n, k) there is more than one solution.
If b_s = 0 for each s then there can be no balanced subset. Suppose now that there is at least one non-zero value b_s, and let t be a value of s which minimises a_s + b_s over all s with b_s ≠ 0. Let A_t and B_t be distinct sets each with corresponding sum t and such that |A_t| = a_t and |B_t| = b_t. (We can find such sets quickly.) The following claim will complete our proof.

Claim. The sets A_t and B_t are disjoint; and C_t = A_t ∪ B_t is a smallest balanced set.

Proof of claim. Suppose that the distinct sets A_t and B_t meet. Denote the sum for A_t \ B_t by u. Then this is also the sum for B_t \ A_t. But now a_u + b_u < a_t + b_t, contradicting our choice of t. Thus A_t and B_t are disjoint as claimed, and C_t is balanced.

Now consider any balanced set C, with parts the disjoint sets A and B, each summing to s. Then A and B are distinct sets each summing to s. Hence |C| = |A| + |B| ≥ a_s + b_s ≥ a_t + b_t = |C_t|.
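For illustration, the bookkeeping above can be organised around subset counts capped at two, which is enough to read off a_s and b_s. The Python sketch below (ours; the counting formulation and names are assumptions) returns only the size and common sum of a smallest balanced subset, the two halves themselves being recoverable by backtracking exactly as before; it runs in the stated O(n² ∑_i a_i) time.

def smallest_balanced_subset_size(a):
    """Size of a smallest balanced subset, via the second dynamic programme.

    cnt[s][k] is the number of subsets of a with sum s and exactly k
    elements, capped at 2.  From it we obtain, for each s, the smallest and
    second-smallest subset sizes a_s and b_s, and minimise a_s + b_s.
    Returns (a_t + b_t, t) for the minimising sum t, or None if there is
    no balanced subset."""
    n, s_max = len(a), sum(a) // 2
    cnt = [[0] * (n + 1) for _ in range(s_max + 1)]
    cnt[0][0] = 1
    for ai in a:
        for s in range(s_max, ai - 1, -1):     # descending sums: each item used at most once
            for k in range(1, n + 1):
                if cnt[s - ai][k - 1]:
                    cnt[s][k] = min(2, cnt[s][k] + cnt[s - ai][k - 1])
    best = None
    for s in range(1, s_max + 1):
        sizes = []                             # the (at most two) smallest sizes, i.e. a_s, b_s
        for k in range(1, n + 1):
            sizes += [k] * cnt[s][k]
            if len(sizes) >= 2:
                if best is None or sizes[0] + sizes[1] < best[0]:
                    best = (sizes[0] + sizes[1], s)
                break
    return best

# Example: smallest_balanced_subset_size([3, 5, 2]) returns (3, 5): the smallest
# balanced subset is {3, 5, 2} itself, split into {3, 2} and {5}.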

5. Concluding remarks
We have seen that, even for a very restricted case of the cutting stock problem,
it is strongly NP-hard to minimise the number of distinct patterns used, and thus we
cannot expect to be able to solve such problems even in pseudo-polynomial time. The
key notion was that of a balanced subset, and we were led to consider heuristics for
packing balanced subsets, and thus to consider the NP-hard problem of seeking such
subsets.

6. For Further Reading


The following reference is also of interest to the reader: [14].

Acknowledgements
I would like to acknowledge helpful and enjoyable discussions with the other members of the Study Group listed in the first reference below.

References
[1] C. Aldridge, J. Chapman, R. Gower, R. Leese, C. McDiarmid, M. Shepherd, H. Tuenter, H. Wilson, A. Zinober, Pattern Reduction in Paper Cutting, Report of the 29th European Study Group with Industry, University of Oxford, March 1996.
[2] J.M. Allwood, C.N. Goulimis, Reducing the number of patterns in the 1-dimensional cutting stock problem, Internal Report, Control Section, Electrical Engineering Department, Imperial College, 1988.
[3] N. Alon, O. Goldreich, J. Håstad, R. Peralta, Simple constructions of almost k-wise independent random variables, Random Structures and Algorithms 3 (1992) 289–304.
[4] V. Chvátal, Linear Programming, Freeman, San Francisco, 1983, pp. 195–212.
[5] E.G. Coffman, G.S. Lueker, Probabilistic Analysis of Packing and Partitioning Algorithms, Wiley, New York, 1991.
[6] M.R. Garey, D.S. Johnson, Computers and Intractability, Freeman, San Francisco, 1979.
[7] P.C. Gilmore, R.E. Gomory, A linear programming approach to the cutting-stock problem, Oper. Res. 9 (1961) 849–859.
[8] P.C. Gilmore, R.E. Gomory, A linear programming approach to the cutting-stock problem, Part II, Oper. Res. 11 (1963) 863–888.
[9] C.N. Goulimis, Optimal solutions for the cutting stock problem, European J. Oper. Res. 44 (1990) 197–208.
[10] D. Johnson, The NP-completeness column: an ongoing guide, J. Algorithms 3 (1982) 182–195.
[11] R.E. Johnston, Rounding algorithms for cutting stock problems, J. Asian-Pacific Oper. Res. Soc. 3 (1986) 166–171.
[12] N. Karmarkar, R.M. Karp, The differencing method of set partitioning, Technical Report UCB/CSD 82/113, Computer Science Division (EECS), University of California, Berkeley, 1982.
[13] A. Shamir, On the cryptocomplexity of knapsack systems, Proc. 11th Ann. ACM Symp. on Theory of Computing, 1979, pp. 118–129.
[14] P.E. Sweeney, E.R. Paternoster, Cutting and packing problems: a categorized, application-orientated research bibliography, J. Oper. Res. Soc. 43 (1992) 691–706.
[15] L.-H. Tsai, The modified differencing method for the set partitioning problem with cardinality conditions, Discrete Appl. Math. 63 (1995) 175–180.
[16] H. Tuenter, Personal communication, 1996.
[17] B. Yakir, The differencing algorithm LDM for partitioning: a proof of a conjecture of Karmarkar and Karp, Math. Oper. Res. 21 (1996) 85–99.
