Rotational Placement of Irregular Polygons Over Containers With Fixed Dimensions Using Simulated Annealing and No-Fit Polygons

Marcos S. G. Tsuzuki
Senior Member, ABCM
[email protected]
University of Sao Paulo - USP
Escola Politécnica
Dept. of Mechatronics and Mech. Syst. Eng.
05508-900 São Paulo, SP, Brazil

This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular bi-dimensional small items inside a bi-dimensional large object. This problem is approached with a heuristic based on simulated annealing. Traditional "external penalization" techniques are avoided through the application of the no-fit polygon, which determines the collision-free region for each small item before its placement. The simulated annealing controls two parameters: the rotation applied to each small item and its placement. For each non-placed small item, a limited-depth binary search is performed to find a scale factor that, when applied to the small item, would allow it to be fitted in the large object. Three possibilities to define the sequence in which the small items are placed are studied: larger-first, random permutation and weight sorted. The proposed algorithm is suited for non-convex small items and large objects.

Keywords: knapsack problem, cutting and packing, optimization

Paper accepted May, 2008. Technical Editor: Glauco A. de P. Caurin.
Introduction

The knapsack problem arises in industry whenever one must place multiple small items inside a large object such that there is no collision between the small items, while either minimizing the size of the large object or maximizing the volume occupied by the small items. High material utilization is of particular interest to mass production industries, since small improvements of the layout can result in large savings of material and considerably reduce production cost.

The knapsack problem belongs to the more general class of combinatorial problems known as cutting and packing problems. According to Dyckhoff (1990), cutting and packing problems are mainly characterized by the number of relevant dimensions, the regularity or irregularity of the shapes of the small items and large objects, and the assignment type. Considering assignment, it is possible to identify two situations: output maximization and input minimization. In input minimization, the set of large objects is sufficient to accommodate all small items, and there is no selection regarding the small items. In the case of output maximization, the set of large objects is not sufficient to accommodate all the small items, and a subset of the small items has to be assigned to the given set of large objects. The survey made by Wascher et al. (2005) improved the typology of cutting and packing problems proposed by Dyckhoff (1990). According to Wascher et al. (2005), the problem studied in this work can be defined as: "... the knapsack problem represents a problem category which is characterized by a strongly heterogeneous assortment of small items which have to be allocated to a given set of large objects. Again, the availability of the large objects is limited such that not all small items can be accommodated. The value of the accommodated small items is to be maximized". In this survey, Wascher et al. (2005) identified 294 papers containing material relevant to cutting and packing. Only 5 papers thereof were classified as dealing with two-dimensional irregular single knapsack problems. This fact shows that the literature related to this kind of problem is scarce.

It can be shown that even restricted versions of this problem (for instance, limiting the small item shapes to rectangles only) are NP-complete, which means that all algorithms currently known for finding optimal solutions require a number of computational steps that grows exponentially with the problem size rather than according to a polynomial function (Fowler et al., 1981). It is not worthwhile to search for an exact (optimal) algorithm, since it does not appear that any efficient optimal algorithm is possible. Alternative approaches that are not guaranteed to find an optimal solution are considered instead. Thus, by giving up solution quality, computational efficiency can be gained. Probabilistic optimization heuristics follow this pattern: while a stipulated stop criterion is not satisfied, at each step the function to be optimized is evaluated at a set of points and a set of rules is applied to determine the set of points to be evaluated at the next step. The algorithm stops when a satisfactory solution of the problem is reached.

Jakobs (1996) studied orthogonal packing, where the small items and large objects are rectangular. He used a bottom-left strategy to position the small items; the placement sequence is determined using a genetic algorithm. He extended the algorithm to process irregular small items. The main idea is the determination of embedding rectangles with minimum area for all small items; the rotation of the small items is determined in this local search. The large object is big enough for all small items to be easily placed. The algorithm considers all small items as rectangles and ensures that no overlap exists among them. This bounding rectangle algorithm is not a good approach, as the bounding rectangle usually contains wasted material.

Hifi and Hallah (2003) proposed a hybrid algorithm to solve the irregular problem. The hybrid algorithm searches for an optimal ordering of the small items using a genetic algorithm and identifies the best packing using a constructive approach, which consists of sequentially positioning a set of ordered small items. Each small item is tested for a set of potential positions defined with respect to already positioned small items. They studied the exclusively translational problem, where the large object is big enough that all small items can be easily placed. The algorithm considers all small items as rectangles for positioning; then a translation is applied to pack the configuration while simultaneously ensuring that no overlap exists among them.

Recently, researchers have used the no-fit polygon concept to ensure feasible layouts, i.e. layouts where the small items do not overlap and fit inside the large object. This concept was first introduced by Art (1966). Given two small items, A and B, the no-fit polygon can be found by tracing one shape around the boundary of the other. One of the small items remains fixed in space and the other slides in contact with the fixed small item's boundary whilst ensuring that the small items always touch but never intersect.
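For convex shapes this construction admits a direct computation: the no-fit polygon of B relative to a fixed A (the locus of reference points of B at which the two shapes touch) is the boundary of the Minkowski sum of A with -B. The sketch below is a minimal illustration of this fact, not the authors' implementation; it assumes both polygons are convex with vertices listed counter-clockwise (non-convex items, as discussed later, would first be decomposed into convex parts).

    def minkowski_sum(P, Q):
        """Minkowski sum of two convex polygons given as CCW (x, y) lists.
        The edges of the sum are the edges of P and Q merged by polar angle."""
        def bottom_first(poly):
            # Rotate the vertex list to start at the lowest (then leftmost) vertex.
            i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
            return poly[i:] + poly[:i]

        P, Q = bottom_first(P), bottom_first(Q)
        P, Q = P + P[:2], Q + Q[:2]  # sentinels for the wrap-around edges
        out, i, j = [], 0, 0
        while i < len(P) - 2 or j < len(Q) - 2:
            out.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
            # Advance along whichever polygon's current edge comes first in angle
            # (decided by the cross product of the two candidate edges).
            cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                     - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
            if cross >= 0 and i < len(P) - 2:
                i += 1
            if cross <= 0 and j < len(Q) - 2:
                j += 1
        return out

    def no_fit_polygon(A, B):
        """NFP of B sliding around a fixed A: Minkowski sum A (+) (-B).
        Reference points of B strictly outside this polygon give no overlap."""
        neg_B = [(-x, -y) for (x, y) in B]  # point reflection keeps CCW order
        return minkowski_sum(A, neg_B)

    # Example: NFP of a unit right triangle around a unit square.
    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    print(no_fit_polygon(square, triangle))

For convex pieces the sum has at most |A| + |B| vertices, so collision queries against many placed items stay cheap, which is what makes the no-fit polygon attractive inside an iterative optimization loop.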
Dowsland et al. (2002) used the no-fit polygon and a bottom-left strategy as a deterministic heuristic. A small item is placed adjacent to the boundary of the no-fit polygon according to the bottom-left strategy. The positioning sequence is algorithmically defined by a sorting criterion: decreasing area, decreasing length, decreasing width and others. They studied an exclusively translational problem, and the large object is big enough that all small items can be easily placed without overlap.

Gomes and Oliveira (2006) used the no-fit polygon concept to eliminate the overlap in the definition of the initial solution. Simulated annealing is used to define which small items exchange position and with which orientation. After exchanging the position of two small items in the layout, overlap usually occurs, which is removed by applying a separation model, i.e. a set of translations applied to the overlapping small items. Afterwards, the layout is compacted by a set of translations applied to the placed small items, achieving layouts that are local minima. However, it is possible that the separation model fails to achieve a feasible layout. In this case, the proposed algorithm ignores the failed swap operation and attempts exchanging two different small items.

Probabilistic heuristics explore the domain space in an effective way; however, the space delimited by the non-overlap restriction is very complex. Usually, when confronted with such complex spaces, probabilistic heuristics "relax" the original constraints of the problem, allowing the search to go through points outside the space of valid solutions and applying a penalization to their cost. This technique is known as external penalization. Several authors suggest that such an approach, which allows but penalizes configurations with overlapping small items in the solution space, is more efficient (Heckmann and Lengauer, 1995; Bennel and Dowsland, 2001). However, depending on the severity of the penalty, this relaxation results in a tendency to converge toward infeasible solutions. A further problem is that feasible solutions may be forced to be computed at the expense of overall quality.

This work shows the application of a probabilistic heuristic, the so-called simulated annealing, to define the placement of small items without the use of external penalization. The knapsack problem has an additional difficulty: the large object is bounded. In this kind of problem, the cost function is usually the value of the unfilled area. A limitation of this approach is its inability to differentiate between two different packing arrangements of the same set of small items. In this paper, this difficulty is solved by scaling unplaced small items and, based thereon, defining a measure of compactness of the remaining free space in a packing arrangement.

External Penalization

The usual approach to simulated annealing applied to the kind of complex spaces discussed above is external penalization. While at first this technique greatly simplifies the algorithm, it also introduces the additional problem of determining the adequate amount of penalization to be applied to external points. This problem turns out to be surprisingly difficult for the knapsack problem. The most adopted penalization heuristic for external solutions of the knapsack problem (that is, solutions with overlapping small items) is to apply a penalization based on the overlapping area of the colliding small items. While this heuristic leads to very computationally efficient iterations of the optimization process, it has some significant shortcomings, discussed in the following.

For illustration purposes, a very simple instance of a packing problem is considered: the task consists of packing two small items, a rectangle and a triangle, inside a variable-length bin. The only considered variable is the position x of the triangle (see Fig. 1(a)); the cost function f(x) is the length of the smallest bin containing the two small items plus an amount proportional to the overlapping area:

    f(x) = \begin{cases} x, & x > b + h \\ x + (h + b - x)^2 \dfrac{h}{2d}, & x \le b + h \end{cases}    (1)

Figure 1. (a) A relaxed packing problem with a non-valid optimal solution. The overlapped area is given by (h+b-x)^2 h/2d; the cost function is given by f(x) = x + (h+b-x)^2 h/2d when there is overlap, and by f(x) = x when no overlap exists. (b) The graph of the cost function (h = 5, b = 2, d = 3); it is possible to observe that the minimum occurs when some overlap is present.

For this problem, a penalization based on the overlapping area can lead to an invalid optimal solution, as shown in Fig. 1(b), where the minimum corresponds to a situation with overlap.
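This behavior can be checked numerically. The short script below (an illustration only, using the h = 5, b = 2, d = 3 values of Fig. 1) evaluates Eq. (1) on a fine grid and locates its minimizer below the overlap-free threshold x = b + h:

    def f(x, h=5.0, b=2.0, d=3.0):
        """Eq. (1): bin length plus a penalty proportional to the overlap area."""
        if x > b + h:  # no overlap
            return x
        return x + (h + b - x) ** 2 * h / (2.0 * d)

    xs = [i / 1000.0 for i in range(12001)]  # sample x in [0, 12]
    x_min = min(xs, key=f)
    print(x_min, f(x_min))  # ~6.4, 6.7: the penalized minimum has overlap,
    print(7.0, f(7.0))      # while the best overlap-free solution (x = 7) costs 7.0

Any optimization run driven by this penalized cost is therefore pulled toward the infeasible x = 6.4 rather than the feasible optimum x = b + h = 7.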
External penalization based on the overlapping length (informally defined as the length of the shortest translation that, when applied to an overlapped polygon, eliminates its overlap) would in theory eliminate this problem (Heckmann and Lengauer, 1995). However, the calculation of the overlapping length turns out to be computationally expensive for non-convex polygons. Another drawback is that such heuristics make the implicit assumption that the only class of modification that may be applied to a small item is a translation (the overlapping length being the estimate of the length of the translation that removes the overlap). This makes such heuristics perform poorly on rotational problems where, in several overlapping instances, a large translation may be necessary to remove an overlap that could be eliminated with a small rotation.

A good discussion of external penalization techniques can be found in the work of Heckmann and Lengauer (1995), where problems very similar to those studied here are approached with simulated annealing. One characteristic of the presented solution is that the optimization process may result in solutions including collisions between small items, thus requiring a post-processing step of the obtained data.

The approach adopted here avoids the pitfalls of external penalization by the continuous mapping (i.e. it is updated at each step of the process) of the complex space of valid solutions onto a simplified space. Although this additional mapping step increases the complexity of the algorithm, it confers to the algorithm a more universal character, as there is one less empirical parameter to be defined.

Actually, the proposed algorithm does not explore the whole space of possible solutions, focusing instead on a reduced space that contains at least one optimal solution: the space of placements in which the small items and the large object are connected, produced by the constructive approach and the no-fit polygon. One can observe that it is possible to construct a connected solution for any, and from any, non-connected solution of a placement problem that allows free translation, without increase in cost. This reduces the search through irrelevant points and enhances the performance of the algorithm.
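As a concrete illustration of this constructive step (a simplified sketch under stated assumptions, not the authors' exact collision-free region computation): candidate reference points for the next small item can be sampled on the boundary of its no-fit polygon with one already placed item, and kept only when they do not fall strictly inside the no-fit polygon of any other placed item. Every surviving candidate is a touching, overlap-free placement. Here `nfps` would hold one convex no-fit polygon per placed item, e.g. built with the `no_fit_polygon` sketch from the Introduction.

    def strictly_inside(p, poly):
        """True if p lies strictly inside the CCW convex polygon poly.
        A reference point strictly inside an NFP means the two items overlap."""
        n = len(poly)
        for i in range(n):
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % n]
            if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) <= 0:
                return False  # on or outside the line of this edge
        return True

    def contact_placements(nfps, anchor, samples=64):
        """Sample points on the boundary of nfps[anchor] (the new item touches
        that neighbor) and keep those that overlap no other placed item."""
        poly, n = nfps[anchor], len(nfps[anchor])
        kept = []
        for k in range(samples):
            t = k * n / samples            # walk the boundary edge by edge
            i, frac = int(t) % n, t - int(t)
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % n]
            p = (ax + frac * (bx - ax), ay + frac * (by - ay))
            if not any(strictly_inside(p, q)
                       for j, q in enumerate(nfps) if j != anchor):
                kept.append(p)
        return kept

Restricting candidates to no-fit polygon boundaries is what keeps every visited layout connected and feasible, so no penalization term is ever needed.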
Simulated Annealing
Simulated Annealing (Kirkpatrick et al., 1983) is the
probabilistic meta-heuristic adopted in this work. It has been chosen
due to its capacity to “escape” from local minima (which are very
frequent in this problem). It is also worth mentioning that the
process of recrystallization, the inspiration for simulated annealing,
is a natural instance of a placement problem.
Simulated annealing originates in the Metropolis algorithm, a
simulation of the recrystallization of atoms in metal during its
annealing (gradual and controlled cooling). During annealing, atoms
migrate naturally to configurations that minimize the total energy of
the system, even if during this migration the system passes through
high-energy configurations. The observation of this behavior
suggested the application of the simulation of such process to
combinatorial optimization problems (Kirkpatrick et al., 1983).
Simulated annealing is a local exploration optimization heuristic with hill-climbing ability, which means it can escape local minima by allowing the exploration of the space in directions that lead to a local increase of the cost function. It sequentially applies random modifications to the evaluation point of the cost function. If a modification yields a point of smaller cost, it is automatically kept. Otherwise, the modification can still be kept, with a probability obtained from the Boltzmann distribution

    P(\Delta E) = e^{-\Delta E / kt}    (2)

where P(∆E) is the probability of the optimization process keeping a modification that incurs an increase ∆E of the cost function, k is a parameter of the process (analogous to the Boltzmann constant) and t is the instantaneous "temperature" of the process. This temperature is defined by a cooling schedule, and it is the main control parameter of the process. The probability of a given state decreases with its energy, but as the temperature rises, this decrease (the slope of the curve P(∆E)) diminishes.

Algorithm

Consider the problem of minimizing a function F(x), where x is a vector. The algorithm starts with a feasible random solution x0. Next, at each iteration, it applies a transformation to the solution, producing a new feasible solution x* in the neighborhood of x. The cost increase ∆E = F(x*) - F(x) is evaluated. If this increase is negative (meaning the new solution has a lower cost), the new solution is automatically kept. If this increase is positive (meaning the new solution has a greater cost), a random number r is uniformly generated between 0 and 1. If r is smaller than the probability calculated by Eq. (2), x* replaces x as the new current solution. Otherwise, x* is discarded.

As can be seen in Fig. 2, the algorithm executes iterations at a fixed temperature until a specific stop condition is met. This condition determines whether the system has attained "thermal equilibrium" at a given temperature. Usually, it is defined as a maximum number of accepted modifications and a maximum number of iterations. When either of the conditions is met, the algorithm proceeds to the next temperature of the cooling schedule. The global stopping condition is usually defined by the cooling schedule itself: when the cooling schedule reaches its end, the algorithm stops. Typically, additional conditions are defined which stop the algorithm before the cooling schedule ends. A common such condition is a maximum number of iterations during which the algorithm makes no significant progress (a situation known as the "frozen state").

Figure 2. The simulated annealing optimization algorithm.

Simulated Annealing Parameters

The success and efficiency of the simulated annealing process depend significantly on the careful construction of a cooling schedule and the definition of an appropriate initial temperature. Geometric cooling is used in this work, since it often leads to good results by allowing the system to settle at an optimal configuration as the temperature falls. In geometric cooling, the temperature is given by t_i = α t_{i-1}. The parameter α is usually chosen around 0.95, while the determination of an appropriate initial value remains a difficult problem. The initial temperature must be chosen high enough that the algorithm does not get stuck in a subset of solutions and the whole solution space is explored in the initial high-temperature phase (where modifications with very high ∆E of the cost function are accepted). One must note, though, that there is a saturation of the effect of T_initial on the initial behavior of the algorithm: for higher values of t, the probability of acceptance of newly explored solutions is very close to 1 and does not increase further with t. So, excessively high initial temperatures do not increase the effectiveness of the initial space exploration; they only increase the duration of the cooling schedule. A proposed heuristic to determine the initial temperature (Heckmann and Lengauer, 1995) is T_initial = -3 σ_E / ln(P), where σ_E is the standard deviation of the cost function obtained through some iterations of the algorithm, and P is the desired probability of acceptance of initial solutions, typically selected between 0.85 and 0.5.

The Proposed Algorithm

Local exploration meta-heuristics such as simulated annealing must evaluate a large number of solutions to ensure the quality of the obtained results. This implies that the basic operations of such algorithms must be performed efficiently. The no-fit polygon is used to efficiently avoid overlapping among the small items as well as to place them inside the large object. The proposed algorithm is a constructive approach with parameters controlled by the simulated annealing: the rotation applied to each small item and its placement. Three possibilities to define the sequence in which the small items are placed are studied: larger-first, random permutation and weight sorted.
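Putting the pieces of the previous section together, the loop of Fig. 2 with the Metropolis rule of Eq. (2) (with k absorbed into the temperature) and geometric cooling can be sketched as follows. This is a generic skeleton, not the authors' code; `cost` and `neighbor` stand for the problem-specific evaluation and feasible-modification routines:

    import math
    import random

    def simulated_annealing(x0, cost, neighbor, t0, alpha=0.95,
                            steps_per_temp=100, t_min=1e-3):
        """Minimize `cost` from the feasible start x0. t0 may be chosen with
        the heuristic t0 = -3*sigma_E / ln(P) discussed above."""
        x, fx = x0, cost(x0)
        best, f_best = x, fx
        t = t0
        while t > t_min:                        # cooling schedule
            for _ in range(steps_per_temp):     # iterate until "equilibrium"
                x_new = neighbor(x)             # random feasible modification
                delta = cost(x_new) - fx
                # Keep improvements; keep uphill moves with prob. exp(-dE/t).
                if delta < 0 or random.random() < math.exp(-delta / t):
                    x, fx = x_new, fx + delta
                    if fx < f_best:             # remember the best-so-far, so
                        best, f_best = x, fx    # an interrupted run returns it
            t *= alpha                          # geometric cooling t_i = a*t_{i-1}
        return best, f_best

In the proposed algorithm, the state x would encode the rotation and placement parameter of each small item, while the constructive no-fit polygon placement keeps every evaluated layout feasible.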
One can notice in Fig. 4 that as the temperature decreases, the number of accepted solutions per iteration decreases (as more solutions are rejected). Consequently, the number of iterations spent at each temperature increases, effectively leading to a slower cooling schedule.

When evaluating the algorithm's performance from the obtained results, one must take into account the fact that, usually, a solution as good as the final one is found in far fewer iterations than it takes for the algorithm to converge (in the results presented here it happens at 1/4 or less of the total iterations). Of course, letting the algorithm take its course is the only generic way to know whether any previously found solution will be the best found, but this suggests that an algorithm that keeps track of the best solution found may be interrupted and still return a satisfactory solution.

Small Puzzle

The first example is a fairly simple puzzle with four non-convex, non-congruent small items. The non-convex small items are decomposed into convex polygons in a pre-processing step. This decomposition does not affect the final solution. Figure 5 shows the final solution of this problem and Fig. 6 the cost-function histograms. Based on the cost-function histograms, two distinctive phases of the process can be recognized. The first phase is characterized by an energy level higher than 0.15. The second phase begins after 25000 iterations and has an energy level smaller than 0.15. Convergence occurred after 350000 iterations, but the best solution was found much earlier, at around 25000 iterations (about four minutes on a 2 GHz Pentium 4 processor).

Tangram Puzzle

The tangram puzzle consists of the placement of seven convex, non-congruent small items. Figure 7 shows the final solution of this problem and Fig. 8 the cost-function histograms. In this nesting problem, the simulated annealing algorithm encounters a phase transition when the two greater triangles settle at their final positions. Convergence occurred after 525000 iterations, but the final solution was found at around 135000 iterations (about twenty-five minutes on a 2 GHz Pentium 4 processor).

Figure 7. Final solution of a tangram puzzle with seven polygons.

As can be seen in Fig. 8, the temperature is lowered as the cooling schedule progresses, leading the system to lower energy states. The transition between states is not smooth: two distinctive phases can be recognized. This corresponds to a macro-organization of the system, when the two larger triangular small items settle at their final configuration, leaving the positions of the smaller small items to be defined.
Figure 11. Energy histogram for 2 levels of scale search. The discrete behavior of the cost function is evident.

When the number of search levels is increased, the convergence is smoother and optimal solutions are reached in fewer iterations. This can be explained by the reduction of the gap between the discrete levels of the cost function.
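The limited-depth binary search over the scale factor, mentioned in the abstract, can be sketched as follows (an illustration of the idea, not the authors' code; `fits` stands for a problem-specific predicate that tests whether the item, scaled by the given factor, can still be placed in the remaining free space):

    def scale_search(fits, levels):
        """Largest scale factor in [0, 1], resolved to `levels` bisection
        steps, at which a non-placed item still fits in the large object."""
        lo, hi = 0.0, 1.0  # lo always fits (degenerate item), hi may not
        for _ in range(levels):
            mid = 0.5 * (lo + hi)
            if fits(mid):
                lo = mid   # mid fits: try a larger copy of the item
            else:
                hi = mid   # mid does not fit: shrink further
        return lo

    # e.g. scale_search(lambda s: s <= 0.6, levels=2) returns 0.5

With 2 levels the returned factor is one of {0, 0.25, 0.5, 0.75}, which is the discreteness visible in the energy histogram of Fig. 11; each additional level halves the gap between the attainable cost values.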
Figure 12. (a) Problem instance whose optimal solution cannot be reached through the larger-first and bottom-left heuristics combined. (b) The same problem instance solved with a placement order produced by the simulated annealing.

Conclusions

The placement problem deals with the task of minimizing the waste of space that occurs in the rotational placement of a set of irregular bi-dimensional small items inside a bi-dimensional large object with fixed dimensions. The survey made by Wascher et al. (2005) classified only 5 papers as related to the specific subject studied in this work.