Pattern Minimisation in Cutting Stock Problems
Colin McDiarmid
Abstract
In the cutting stock pattern minimisation problem, we wish to satisfy demand for various
customer reels by cutting as few as possible jumbo reels, and further to minimise the number
of distinct cutting patterns used. We focus on the special case in which any two customer reels
fit into a jumbo, but no three do: this case is of interest partly because it is the simplest case
that is not trivial, and partly because it may arise in practice when one attempts to improve a
solution iteratively.
We find that the pattern minimisation problem is strongly NP-hard even in this special case,
when the basic problem of finding a minimum waste solution is trivial. Our analysis focusses
on balanced subsets, and suggests an approach for heuristic methods involving searching for
balanced subsets. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Cutting stock; Cutting patterns; Partition; NP-hard; Dynamic programming
1. Introduction
Some materials such as paper may be manufactured in wide jumbo rolls, which
are later cut into much narrower rolls to satisfy customer demands. To minimise waste,
cutting patterns should be chosen so as to use as few jumbos as possible (see [4,7,8]).
Thus the basic cutting stock problem has input a positive integer J, distinct positive
integers r1, …, rn, and positive integers d1, …, dn; and the required task is to use as few
as possible jumbos of width J to satisfy the demand for di customer reels of width ri,
for each i = 1, …, n. This is one of the classical OR problems. It contains the strongly
NP-complete problem 3-PARTITION: thus it is NP-hard even if the jumbo size J is
bounded by a polynomial in n, and each customer reel size ri satisfies J/4 < ri < J/2;
see [6], p. 224. Thus we cannot expect always to be able to find optimal solutions
to such problems within a reasonable time.
Each time a different pattern of customer reels is to be cut, the knives on the cutting
machine need to be re-set. A problem presented by C.N. Goulimis and investigated
at the 29th European Study Group with Industry in March 1996 concerned how to
find solutions to the above cutting stock problem that further minimise the number
of distinct cutting patterns used; see [1,2,9]. This is of course going to be hard in
general, since the basic problem is hard. In order to investigate the added difficulty of
the extended problem, we consider here a special case in which the basic problem of
minimising the number of jumbos (minimising waste) is trivial.
PATTERN MINIMISATION
Input: positive integers d1, …, dn.
Task: in the cutting stock problem where the demand is for di reels of type i, and
any two reels fit into a jumbo but no three do, find a minimum waste solution which
further minimises the number of distinct patterns used.
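To fix ideas, the easy part of this problem can be sketched in code (a hypothetical illustration, not from the paper): since any two reels fit in a jumbo, a minimum waste solution simply pairs reels up, for instance by always cutting the pattern formed by the two types with the largest remaining demand.

```python
import heapq

def greedy_min_waste(d):
    """Sketch: build one minimum waste solution in the two-per-jumbo case.

    Repeatedly cut the pattern pairing the two types with the largest
    remaining demand.  This uses ceil(sum(d)/2) jumbos, which is optimal
    here, but it need not minimise the number of distinct patterns.
    """
    heap = [(-v, i) for i, v in enumerate(d) if v > 0]
    heapq.heapify(heap)
    patterns = []          # entries (type_a, type_b or None, multiplicity)
    while heap:
        v1, i = heapq.heappop(heap)
        v1 = -v1
        if not heap:       # a single type remains: pair it with itself
            if v1 >= 2:
                patterns.append((i, i, v1 // 2))
            if v1 % 2:     # one leftover reel is cut alone (some waste)
                patterns.append((i, None, 1))
            break
        v2, j = heapq.heappop(heap)
        v2 = -v2           # v2 <= v1
        patterns.append((i, j, v2))
        if v1 > v2:
            heapq.heappush(heap, (-(v1 - v2), i))
    return patterns
```

For d = (3, 5, 2) this cuts ⌈10/2⌉ = 5 jumbos using the patterns {2, 1}×3 and {2, 3}×2; the hard question treated in this paper is how few distinct patterns can be achieved over all such minimum waste solutions.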
This very restricted special case is of interest partly because it seems to be the
simplest case that is not completely trivial, and partly because it may arise in practice
when one attempts to improve a solution iteratively. For example, if a collection of
some of the currently used patterns agree on the large reels and differ only on small
reels, and any two small reels fit in the width left by the large reels, then when
we attempt to re-allocate the small reels we face exactly this special case [16]. We
investigate whether the pattern minimisation problem remains hard in this special case,
and briefly consider approaches to finding good solutions.
It is clear that the least number of jumbos needed is ⌈∑i di / 2⌉, the round-up of half
the total demand; and it is trivial to find a corresponding minimum waste solution. But
how easy is it to find, amongst the minimum waste solutions, one which minimises the
number of patterns used? For the variant of the problem when no three customer reels
fit into a jumbo but also some pairs may not, it was shown in [1] that the problem is
strongly NP-hard. The theorem below strengthens this negative result.
Theorem 1. The problem PATTERN MINIMISATION is strongly NP-hard.
The key to understanding the above problem is the concept of a balanced subset.
Given a family d = (d1, …, dn) of non-negative integers, denote by p(d) the minimum
number of patterns used in any minimum waste solution. Also, call a non-empty subset
of {1, …, n} balanced if it can be partitioned into two sets A and B such that ∑_{i∈A} di =
∑_{i∈B} di. Thus if some di = 0 then the singleton set {i} is balanced. Let b(d) be the
maximum number of pairwise disjoint balanced subsets.
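As a direct (exponential time) illustration of the definition, and assuming nothing beyond it: a subset is balanced exactly when some choice of signs makes the signed sum of its demands zero, with A taking the plus signs and B the minus signs.

```python
from itertools import product

def is_balanced(subset, d):
    """Check the definition directly: can `subset` be split into A and B
    with equal sums under d?  Equivalently, some assignment of signs
    +1/-1 makes the signed sum of demands zero.  Brute force over all
    sign patterns, for illustration only.
    """
    idx = list(subset)
    if not idx:
        return False   # balanced subsets are non-empty by definition
    return any(sum(s * d[i] for s, i in zip(signs, idx)) == 0
               for signs in product((1, -1), repeat=len(idx)))
```

For d = (1, 2, 3, 7), the subset {0, 1, 2} is balanced (1 + 2 = 3) while {0, 1, 3} is not; and a zero demand makes its singleton balanced, as noted above.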
Lemma 1. If ∑i di is even, then p(d) = n − b(d).
[If ∑i di is odd, then p(d) = p(d′), where d′ is obtained from d by adding an extra
co-ordinate dn+1 = 1.] We shall prove this lemma in the next section.
When faced with a pattern minimisation problem, we are led by Theorem 1 and
Lemma 1 above to consider heuristic approaches to finding good packings of balanced
subsets. Unfortunately it is NP-complete even to test if a given family a1, …, an of
positive integers has a balanced subset. This is the problem called WEAK PARTITION
in David Johnson's NP-completeness column [10], where three independent proofs of
its NP-completeness are cited, the earliest being in [13].
We wish to find a good packing of balanced subsets, but we know that it is very
hard to find a best packing, and indeed it is hard to find any balanced subset. A natural
heuristic approach is repeatedly to seek and delete a balanced subset, preferably a small
one. One method for seeking a balanced subset is to use differencing, where we repeatedly replace two numbers by the absolute value of their difference; see [5,12,15,17].
This approach is currently being investigated in the context of pattern minimisation [16].
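A minimal sketch of the differencing idea (the function name and bookkeeping are mine, not from [12] or [16]): tag each working number with the signed set of original indices it represents, so that a difference of zero exposes a balanced subset directly.

```python
import heapq
from itertools import count

def differencing_balanced_subset(d):
    """Heuristic: repeatedly replace the two largest values by their
    difference; if a difference of 0 ever appears, the indices combined
    on the two sides form a balanced subset.  Differencing can fail to
    find a balanced subset even when one exists -- it is only a heuristic.
    """
    tick = count()                    # tie-breaker for the heap
    heap = [(-v, next(tick), frozenset([i]), frozenset())
            for i, v in enumerate(d)]
    heapq.heapify(heap)
    while len(heap) >= 2:
        v1, _, p1, m1 = heapq.heappop(heap)   # largest value
        v2, _, p2, m2 = heapq.heappop(heap)   # second largest
        diff = (-v1) - (-v2)
        plus, minus = p1 | m2, m1 | p2        # larger side minus smaller
        if diff == 0:
            return sorted(plus), sorted(minus)
        heapq.heappush(heap, (-diff, next(tick), plus, minus))
    return None
```

Each heap entry maintains the invariant that its value equals the sum over its plus-set minus the sum over its minus-set, so a zero difference yields two disjoint index sets with equal demand sums. For d = (4, 7, 3) it returns the split {0, 2} versus {1}, since 4 + 3 = 7.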
Another method is to use a tolerably fast algorithm that is guaranteed to find a balanced
subset or a smallest balanced subset: we shall see that we can use a straightforward
dynamic programming method to test if there is a balanced subset, and find a smallest
balanced subset if there is one, in pseudo-polynomial time. Heuristic approaches for
more general cases of pattern minimisation in cutting stock problems are considered in
[1,2,9,11].
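As a concrete sketch of such a dynamic programme (hypothetical code, not from the paper), one can encode a balanced subset as a non-empty signed subset with signed sum zero, and keep, for each achievable signed sum, a smallest signed subset achieving it:

```python
def smallest_balanced_subset(d):
    """Pseudo-polynomial DP for a smallest balanced subset.

    State: dp[s] = (size, items) for a smallest non-empty signed subset
    of the items processed so far whose signed sum is s, where items is
    a tuple of (index, sign).  A balanced subset is exactly a non-empty
    signed subset with signed sum 0 (A takes the + signs, B the -).
    Signed sums lie in [-sum(d), sum(d)], so there are O(n * sum(d))
    state updates; the stored item tuples make this a readable sketch
    rather than a memory-tight implementation.
    """
    dp = {}                              # signed sum -> (size, items)
    for i, v in enumerate(d):
        new = dict(dp)
        def offer(s, entry):
            if s not in new or entry[0] < new[s][0]:
                new[s] = entry
        for sign in (1, -1):
            offer(sign * v, (1, ((i, sign),)))
            for s, (sz, items) in dp.items():
                offer(s + sign * v, (sz + 1, items + ((i, sign),)))
        dp = new
    if 0 not in dp:
        return None                      # no balanced subset exists
    _, items = dp[0]
    A = sorted(i for i, sign in items if sign == 1)
    B = sorted(i for i, sign in items if sign == -1)
    return A, B
```

Because each item is offered only as an extension of states built from earlier items, every signed subset is considered exactly once, and keeping the minimum size per signed sum is exact for the smallest-subset question.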
The plan of the rest of the paper is as follows. In the next section we establish
the relationship between numbers of patterns and packings of balanced subsets. Next
we prove our main result, that the problem PATTERN MINIMISATION is strongly
NP-hard. After that, we consider briefly how to search for balanced subsets, and finally
we make a few concluding remarks.
Given a family d = (d1, …, dn) of positive integers with ∑i di even, there is a natural
correspondence between the solutions in DEGREES and those in PATTERN MINIMISATION, and in particular the minimum number of edges possible in the former equals
p(d).
P
Proof of Lemma 1. Suppose that
i di is even. Let G; w be any representing network, and consider a component of G with set K of k nodes. It must of course
have at least k 1 edges, and if it has this number and thus is a tree on K, then
the two-colouring of the vertices show that K is balanced. Thus the number of edges
in G is at least n minus the number of components on balanced sets. Hence
(d)n (d).
To prove the reverse inequality, consider any partition of {v1, …, vn} into sets
(Ki : i ∈ I) of which at most one is not balanced. We will show that there is a representing network G, w such that the graph G has components (Gi : i ∈ I) where Gi
has vertex set Ki, and these components are such that, if Ki is balanced then Gi is a
tree and if not then Gi is a tree with one added loop. This will complete the proof of
the lemma.
Consider a balanced set K, with partition A ∪ B such that ∑_{i∈A} di = ∑_{i∈B} di. We
must show that there is a tree T on K and non-negative weights we on the edges e
of T such that, for each node v ∈ K, the sum of the weights of the incident edges
equals dv (where loops count double). We use induction on |K|. If either A or B
is empty, the result is trivial, since we must have dv = 0 for each v ∈ K. Suppose
then that both A and B are non-empty. Pick any a ∈ A and b ∈ B, and without
loss of generality suppose that da ≥ db. Reduce da by db. Now K \ {b} is balanced,
and inductively we can find an appropriate weighted tree. Then add the edge ab with
weight db.
Finally, consider a set K which is not balanced, but is such that the corresponding
sum of demands is even. As above, we can always replace two demands by their
difference at the cost of using one edge. Thus we can satisfy all but one demand by
using edges forming a tree on K, and then one added loop completes the component.
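The induction above is effective, and can be sketched in code (a hypothetical illustration; the function name and list bookkeeping are mine), assuming the bipartition A, B with equal sums is given. Leftover zero-demand nodes are joined by weight-0 edges, which is the trivial base case of the proof:

```python
def balanced_tree(A, B, d):
    """Build the weighted tree from the induction in the proof above.

    A, B: disjoint lists of nodes with equal total demand under d (a
    dict node -> non-negative int).  Returns edges (u, v, w) of a tree
    on the nodes of A + B in which, at every node, the incident edge
    weights sum to that node's demand.
    """
    d = dict(d)                      # demands are consumed as we go
    A, B = list(A), list(B)
    edges = []
    while A and B:
        a, b = A[-1], B[-1]
        if d[a] < d[b]:              # w.l.o.g. arrange d[a] >= d[b]
            A, B, a, b = B, A, b, a
        edges.append((a, b, d[b]))   # edge ab carries weight d[b]
        d[a] -= d[b]                 # b's demand is now fully met
        B.pop()
    # any remaining nodes have demand 0: weight-0 edges keep every
    # incident sum correct while joining the pieces into one tree
    rest = A + B
    for u, v in zip(rest, rest[1:]):
        edges.append((u, v, 0))
    return edges
```

Each pass adds one edge and retires one node, so the result has exactly |K| − 1 edges, and the two-colouring of the resulting tree recovers the bipartition A, B.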
Form a bipartite graph G = (C, X; E) with vertex parts C and X and with vertices
T ∈ C and x ∈ X adjacent (that is, edge Tx ∈ E) exactly when x ∈ T. Since each
vertex degree in G is three, we may find in polynomial time a proper 3-edge-colouring
χ: E → {1, 2, 3}. We now split each element x ∈ X into three copies (x, 1), (x, 2) and
(x, 3). Given a triple T = {x, y, z} ∈ C, let T′ be the triple
T′ = {(x, χ(Tx)), (y, χ(Ty)), (z, χ(Tz))}.
Observe that the elements in the triple T′ have distinct first co-ordinates (in X), and
distinct second co-ordinates (which must be 1, 2, 3); and that the family C′ = (T′ : T ∈ C)
partitions the set X × {1, 2, 3}.
Next, for each x ∈ X, let Fx be the collection consisting of the four triples {(x, 1),
(x, 4), (x, 5)}, {(x, 3), (x, 4), (x, 5)}, {(x, 2), (x, 6), (x, 7)} and {(x, 3), (x, 6), (x, 7)}. Let D
be the collection consisting of C′ together with all the triples in the collections Fx for
x ∈ X. Thus D contains |C| + 4|X| = 5n triples.
If some subcollection D′ of D is a partition of Y, then for each x ∈ X, exactly one
of the elements (x, 1), (x, 2), (x, 3) is not covered by triples in D′ ∩ Fx and so must be
covered by triples in D′ ∩ C′. It follows easily that X may be partitioned into triples in
C if and only if Y may be partitioned into triples in D. This completes the first part
of the construction.
Next we shall see how to assign a size s(y) to each element y ∈ Y so that the
summing triples of elements in Y are precisely the triples in D. We shall use a family
of almost k-wise independent random variables defined on a small sample space.
Let l = ⌈3 log2 n⌉ + 10, let t = 2nl, let k = 9l, and let ε = 1/2^l. There is a subset Ω
of {0, 1}^t, of size 2^{(1+o(1))k}, with the following property: if a point ω = (ω1, …, ωt) is picked
uniformly at random from Ω, then (ω1, …, ωt) is ε-away from k-wise independent;
see [3]. Further, such a set Ω can be (explicitly) constructed in time bounded by a
polynomial in n.
Given a point ω ∈ Ω, for each i = 1, …, 2n, let Si = Si(ω) be the non-negative integer with binary expansion ω_{(i−1)l+1} … ω_{il}. When a point ω = (ω1, …, ωt) is picked
uniformly at random from Ω, then S1, …, S2n are random variables, taking values
in {0, 1, …, N − 1}, where N = 2^l, and they have the following property. For any
I ⊆ {1, …, 2n} with 0 < |I| ≤ 9, and for any set E ⊆ {0, 1, …, N − 1}^I, we have
|P((Si : i ∈ I) ∈ E) − |E|/N^{|I|}| ≤ ε.
We can now define the element sizes s(y) for our instance of SUMMING TRIPLES.
For clarity we shall write s(x, 1) rather than s((x, 1)) and so on. Enumerate the elements
of X as x1, x2, …, xn. Given a sample point ω ∈ Ω, for each i = 1, …, n we let s(xi, 1) =
2S_{2i−1}(ω) + 2N and s(xi, 2) = 2S_{2i}(ω) + 2N. Let x ∈ X. Suppose that (x, 3) is in the
triple T′ ∈ C′, where T′ also contains (x′, 1) and (x″, 2). Then we let s(x, 3) = s(x′, 1)
+ s(x″, 2) (≥ 4N). This defines s(x, i) for each x ∈ X and i ∈ {1, 2, 3}. Now, for each
x ∈ X, let s(x, 4) = (s(x, 1) + s(x, 3))/2, s(x, 5) = (s(x, 3) − s(x, 1))/2, s(x, 6) = (s(x, 2)
+ s(x, 3))/2, and s(x, 7) = (s(x, 3) − s(x, 2))/2. We have now defined a positive integer
size s(y) for each y ∈ Y, and each triple in D is always summing.
Let B be the bad event that either the values s(y) for y ∈ Y fail to be distinct,
or some triple other than those in D is summing. We shall show that P(B) < 1. It
will follow that there is a sample point ω ∈ Ω \ B, and we can find such a point in
polynomial time by exhaustive search. This will then complete the proof of the lemma.
To prove that P(B) < 1, it suffices for us to suppose that the random variables
S1, …, S2n are precisely 9-wise independent, with each uniformly distributed on
{0, 1, …, N − 1}, and then to prove that P(B) < 1/2. Observe that, from the definition
of s(y), for each y there is a vector a(y) ∈ {−1, 0, 1, 2}^{2n}, with support of size at most
3, such that s(y) − ∑i a(y)i Si is a constant (that is, does not depend on ω).
Let y1 and y2 be distinct elements of Y. Let a = a(y1) − a(y2). It is easy to see
that a is a non-zero integer vector with support of size at most 6, and there is a
constant c such that s(y1) = s(y2) if and only if ∑i ai Si = c. But the probability of
this last event is at most 1/N. Thus the probability that the values s(y) for y ∈ Y are
not all distinct is at most |Y|(|Y| − 1)/2 · 1/N ≤ (7n)²/2N.
We may argue similarly for the unwanted triples. Let y1, y2 and y3 be distinct
elements of Y, which form a triple T which is not in D. Consider the event that
s(y1) + s(y2) = s(y3). Let a = a(y1) + a(y2) − a(y3). Then a has support of size at
most 9, and there is a constant c such that s(y1) + s(y2) = s(y3) if and only if ∑i ai Si = c.
We claim that the vector a is non-zero. It will then follow that the probability that some
triple not in D is summing is at most 3 · |Y|(|Y| − 1)(|Y| − 2)/6 · 1/N ≤ (7n)³/2N; and hence,
since N ≥ 2^{10} n³, P(B) ≤ (8n)³/2N ≤ 1/2,
as required.
It remains only to establish the above claim. For each element y = (x, i) ∈ Y, let
π1(y) = x and π2(y) = i. Also, denote ∑i a(y)i by σ(y). Assume that a = 0: we must
obtain a contradiction. Note first that if π2(y) = 3 then σ(y) = 4. Hence neither π2(y1)
nor π2(y2) can equal 3, since we would then have ∑i ai = σ(y1) + σ(y2) − σ(y3) ≥ 5 − 4 > 0.
Now suppose that π2(y1) ∈ {1, 2}, that is σ(y1) = 2. Then σ(y2) must be 1 or 2.
We consider these two cases.
(i) Suppose first that σ(y2) = 1. Then σ(y3) = 3. Now a(y1) has only one non-zero
co-ordinate 2, a(y2) has 1, 1, −1 and a(y3) has 1, 1, 1. Then π1(y1) = π1(y2) = π1(y3)
and the triple T = (y1, y2, y3) is in D, a contradiction.
(ii) Now suppose that σ(y2) = 2. Then σ(y3) = 4. Now a(y1) has one 2, a(y2) has
one 2 and a(y3) has 2, 2. Again the triple T is in D, a contradiction.
We have now shown that π2(y1) ∉ {1, 2, 3}, and similarly for y2. Thus both π2(y1)
and π2(y2) are in {4, 5, 6, 7}. So σ(y1) and σ(y2) are 1 or 3, and σ(y3) is 2 or 4.
Suppose first that σ(y1) = σ(y2) = 1, and so σ(y3) = 2. Then both a(y1) and a(y2)
have non-zero co-ordinates 1, 1, −1 and a(y3) has one 2. But then a(y1) and a(y2)
must have the same support. It follows that π1(y1) = π1(y2), and this is not possible.
Without loss of generality, we may now assume that σ(y1) = 1 and σ(y2) = 3.
Then σ(y3) = 4; and a(y1) has non-zero co-ordinates 1, 1, −1, a(y2) has 1, 1, 1, and
a(y3) has 2, 2. Then again we find that a(y1) and a(y2) must have the same support,
and so π1(y1) = π1(y2) = π1(y3). But now the triple T is in D, a contradiction.
5. Concluding remarks
We have seen that, even for a very restricted case of the cutting stock problem,
it is strongly NP-hard to minimise the number of distinct patterns used, and thus we
cannot expect to be able to solve such problems even in pseudo-polynomial time. The
key notion was that of a balanced subset, and we were led to consider heuristics for
packing balanced subsets, and thus to consider the NP-hard problem of seeking such
subsets.
Acknowledgements
I would like to acknowledge helpful and enjoyable discussions with the other members of the Study Group listed in the first reference below.
References
[1] C. Aldridge, J. Chapman, R. Gower, R. Leese, C. McDiarmid, M. Shepherd, H. Tuenter, H. Wilson, A.
Zinober, Pattern Reduction in Paper Cutting, Report of the 29th European Study Group with Industry,
University of Oxford, March 1996.
[2] J.M. Allwood, C.N. Goulimis, Reducing the number of patterns in the 1-dimensional cutting stock
problem, Internal Report of Control Section, Electrical Engineering Department, Imperial College, 1988.
[3] N. Alon, O. Goldreich, J. Håstad, R. Peralta, Simple constructions of almost k-wise independent random
variables, Random Structures and Algorithms 3 (1992) 289–304.
[4] V. Chvátal, Linear Programming, Freeman, San Francisco, 1983, pp. 195–212.
[5] E.G. Coffman, G.S. Lueker, Probabilistic Analysis of Packing and Partitioning Algorithms, Wiley, New
York, 1991.
[6] M.R. Garey, D.S. Johnson, Computers and Intractability, Freeman, San Francisco, 1979.
[7] P.C. Gilmore, R.E. Gomory, A linear programming approach to the cutting-stock problem, Oper. Res.
9 (1961) 849–859.
[8] P.C. Gilmore, R.E. Gomory, A linear programming approach to the cutting-stock problem, Part II,
Oper. Res. 11 (1963) 863–888.
[9] C.N. Goulimis, Optimal solutions for the cutting stock problem, European J. Oper. Res. 44 (1990)
197–208.
[10] D. Johnson, The NP-completeness column: an ongoing guide, J. Algorithms 3 (1982) 182–195.
[11] R.E. Johnston, Rounding algorithms for cutting stock problems, J. Asian-Pacific Oper. Res. Soc. 3
(1986) 166–171.
[12] N. Karmarkar, R.M. Karp, The differencing method of set partitioning, Technical Report UCB/CSD
82/113, Computer Science Division (EECS), University of California, Berkeley, 1982.
[13] A. Shamir, On the cryptocomplexity of knapsack systems, Proc. 11th Ann. ACM Symp. on Theory of
Computing, 1979, pp. 118–129.
[14] P.E. Sweeney, E.R. Paternoster, Cutting and packing problems: a categorized, application-orientated
research bibliography, J. Oper. Res. Soc. 43 (1992) 691–706.
[15] L.-H. Tsai, The modified differencing method for the set partitioning problem with cardinality conditions,
Discrete Appl. Math. 63 (1995) 175–180.
[16] H. Tuenter, Personal communication, 1996.
[17] B. Yakir, The differencing algorithm LDM for partitioning: a proof of a conjecture of Karmarkar and
Karp, Math. Oper. Res. 21 (1996) 85–99.