
Tabu Search

Tabu search is an optimization algorithm that guides a local heuristic search procedure to explore the solution space beyond local optimality. It does this by exploiting adaptive memory and responsive exploration via flexible implementations of short-term and long-term memory. Effective tabu tenures depend on the problem instance and strength of the tabu activation rule, and must balance intensification within a region and diversification across regions to find high quality solutions.


Tabu search:

Terms/Info:

- We characterize this class of problems as that of optimizing (minimizing or maximizing) a function f(x) subject to x ∈ X, where f(x) may be linear or nonlinear, and the set X summarizes constraints on the vector of decision variables x.
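A minimal sketch of this formulation (objective and constraint set hypothetical), with X expressed as coded feasibility rules rather than algebraic constraints:

```python
def f(x):
    # Hypothetical nonlinear objective over a vector x = (x1, x2)
    return (x[0] - 3) ** 2 + abs(x[1])

def feasible(x):
    # The set X coded as rules ("verbal stipulations") rather than equations
    return x[0] + x[1] <= 10 and x[0] >= 0

# Minimize f over the feasible members of a small candidate set
candidates = [(0, 0), (3, 0), (3, 1), (8, 5)]
best = min((x for x in candidates if feasible(x)), key=f)
```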

- The requirement x ∈ X, for example, may specify logical conditions or interconnections that would be cumbersome to formulate mathematically, but may be better left as verbal stipulations that can then be coded as rules.

- Each x ∈ X has an associated neighborhood N(x) ⊆ X, and each solution x' ∈ N(x) is reached from x by an operation called a move.
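As an illustration (a hypothetical swap move over subsets of items), each neighbor of x differs from x by exactly one move:

```python
ITEMS = {1, 2, 3, 4, 5}

def neighborhood(x):
    """N(x): all solutions reachable from x by one move --
    here, swapping one selected item for one unselected item."""
    return [(x - {out}) | {inn} for out in x for inn in ITEMS - x]

x = frozenset({1, 2})
neighbors = neighborhood(x)   # 2 selected * 3 unselected = 6 moves
```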

- The relevance of choosing good solutions from current neighborhoods is magnified when
the guidance mechanisms of tabu search are introduced to go beyond the locally optimal
termination point of a descent method. Thus, an important first level consideration for
tabu search is to determine an appropriate candidate list strategy for narrowing the
examination of elements of N(x), in order to achieve an effective tradeoff between the
quality of x' and the effort expended to find it.

- In the TS strategies based on short term considerations, N*(x) characteristically is a subset of N(x), and the tabu classification serves to identify elements of N(x) excluded from N*(x). In TS strategies that include longer term considerations, N*(x) may also be expanded to include solutions not ordinarily found in N(x).

- The approach of storing complete solutions (explicit memory) generally consumes an enormous amount of space and time when applied to each solution generated. However, instead of recording full solutions, these memory structures are generally based on recording attributes (attributive memory). In addition, short term memory is often based on the most recent history of the search trajectory.
- The most commonly used short term memory keeps track of solution attributes that have changed during the recent past, and is called recency-based memory. To exploit this memory, selected attributes that occur in solutions recently visited are labeled tabu-active, and solutions that contain tabu-active elements, or particular combinations of these attributes, are those that become tabu. This prevents certain solutions from the recent past from belonging to N*(x) and hence from being revisited. Other solutions that share such tabu-active attributes are also similarly prevented from being visited.
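A sketch of such recency-based memory (class and attribute names hypothetical): attributes of recent moves stay tabu-active for a fixed number of iterations:

```python
class RecencyMemory:
    """Short-term memory: an attribute changed at iteration i
    stays tabu-active through iteration i + tenure."""
    def __init__(self, tenure):
        self.tenure = tenure
        self.tabu_until = {}          # attribute -> last tabu-active iteration

    def record(self, attribute, iteration):
        self.tabu_until[attribute] = iteration + self.tenure

    def is_tabu_active(self, attribute, iteration):
        return self.tabu_until.get(attribute, -1) >= iteration

mem = RecencyMemory(tenure=2)
mem.record("edge(1,2)", iteration=3)  # edge (1,2) just changed
```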

- TS example:

- Initial solution strategy: The greedy construction starts by choosing the edge (i, j) with the smallest weight in the graph, where i and j are the indexes of the nodes that are the
endpoints of the edge. The remaining k-1 edges are chosen successively to minimize the
increase in total weight at each step, where the edges considered meet exactly one node
from those that are endpoints of edges previously chosen. For k = 4, the greedy
construction performs the steps in Table 2.

- The swap move mechanism, which is used from this point onward, replaces a selected
edge in the tree by another selected edge outside the tree, subject to requiring that the
resulting subgraph is also a tree. There are actually two types of such edge swaps, one
that maintains the current nodes of the tree unchanged (static) and one that results in
replacing a node of the tree by a new node (dynamic). Figure 5 illustrates the best swap
of each type that can be made starting from the greedy solution.
- Tabu Search procedure:

- We start from the solution with a weight of 63 as shown previously in Figure 6 which
was obtained at iteration 3. At each step we select the least weight non-tabu move from
those available, and use the improved-best aspiration criterion to allow a move to be
considered admissible in spite of leading to a tabu solution. The reader may verify that
the outcome leads to the series of solutions shown in Table 4, which continues from
iteration 3, just executed. For simplicity, we select an arbitrary stopping rule that ends
the search at iteration 10.

- Critical Event Memory: For the current example, therefore, we will specify that the
critical events of interest consist of generating not only the starting solution of the
previous pass(es), but also each subsequent solution that represents a "local TS
optimum," i.e. whose objective function value is better (or no worse) than that of the
solution immediately before and after it. Using this simple definition we see that four
solutions qualify as critical (i.e., are generated by the indicated critical events) in the first
solution pass of our example: the initial solution and the solutions found at iterations 5, 6
and 9 (with weights of 40, 37, 37 and 34, respectively).

- Since the solution at iteration 9 happens to be optimal, we are interested in the effect of
restarting before this solution is found. Assume we had chosen to restart after iteration 7,
without yet reaching an optimal solution. Then the solutions that correspond to critical
events are the initial solution and the solutions of iterations 5 and 6. We treat these three
solutions in aggregate by combining their edges, to create a subgraph that consists of
the edges (1,2), (1,4), (4,7), (6,7), (6,8), (8,9) and (6,9). (Frequency-based memory, as
discussed in Section 4, refines this representation by accounting for the number of times
each edge appears in the critical solutions, and allows the inclusion of additional
weighting factors.)
- It is interesting to note that the restarting procedure generates a better solution (with a
total weight of 38) than the initial solution generated during the first construction (with a
total weight of 40). Also, the restarting solution contains 2 "optimal edges" (i.e., edges
that appear in the optimal tree). This starting solution allows the search trajectory to find
the optimal solution in only two iterations, illustrating the benefits of applying a
critical event memory within a restarting strategy.

- To identify whether or not an element is currently tabu-active, let TabuDropTenure denote the tabu tenure (number of iterations) to forbid an element to be dropped (once added), and let TabuAddTenure denote the tabu tenure to forbid an element from being added (once dropped). (In our Min K-Tree problem example of Section 2.2, we selected TabuAddTenure = 2 and TabuDropTenure = 1.)

- Each element is associated with two different attributes, one where the element belongs
to the current solution and one where the element does not. Elements may be viewed as
corresponding to variables and attributes as corresponding to specific value assignments
for such variables.

- We can now identify precisely the set of iterations during which an element (i.e., its associated attribute) will be tabu-active. Let TestAdd and TestDrop denote a candidate pair of elements, whose members are respectively under consideration to be added and dropped from the current solution. If TestAdd previously corresponded to an element Dropped that was dropped from the solution and TestDrop previously corresponded to an element Added that was added to the solution (not necessarily on the same step), then it is possible that one or both may be tabu-active and we can check their status as follows. By means of the records established on earlier iterations, where TestAdd began to be tabu-active at iteration TabuAddStart(TestAdd) and TestDrop began to be tabu-active at iteration TabuDropStart(TestDrop), we conclude that as Iter grows the status of these elements will be given by: TestAdd remains tabu-active while Iter < TabuAddStart(TestAdd) + TabuAddTenure, and TestDrop remains tabu-active while Iter < TabuDropStart(TestDrop) + TabuDropTenure.
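This status check can be sketched directly (the strict-inequality convention below is one common choice, not necessarily the source's exact rule):

```python
def tabu_status(Iter, TestAdd, TestDrop,
                TabuAddStart, TabuDropStart,
                TabuAddTenure, TabuDropTenure):
    """Return (add_is_tabu, drop_is_tabu) for a candidate swap pair."""
    add_tabu = (TestAdd in TabuAddStart and
                Iter < TabuAddStart[TestAdd] + TabuAddTenure)
    drop_tabu = (TestDrop in TabuDropStart and
                 Iter < TabuDropStart[TestDrop] + TabuDropTenure)
    return add_tabu, drop_tabu

# With TabuAddTenure = 2 and TabuDropTenure = 1 as in the Min K-Tree example,
# and both restrictions starting at iteration 4:
status = tabu_status(Iter=5, TestAdd="e1", TestDrop="e2",
                     TabuAddStart={"e1": 4}, TabuDropStart={"e2": 4},
                     TabuAddTenure=2, TabuDropTenure=1)
```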
- Effective tabu tenures have been empirically shown to depend on the size of the
problem instance. However, no single rule has been designed to yield an effective tenure
for all classes of problems. This is partly because an appropriate tabu tenure depends on
the strength of the tabu activation rule employed (where more restrictive rules are
generally coupled with shorter tenures).

- Tabu tenures that are too small can be recognized by periodically repeated objective
function values or other function indicators, including those generated by hashing, that
suggest the occurrence of cycling. Tenures that are too large can be recognized by a
resulting deterioration in the quality of the solutions found (within reasonable time
periods). Somewhere in between typically exists a robust range of tenures that provides
good performance. Once a good range of tenure values is located, first level
improvements generally result by selecting different values from this range on different
iterations.

- In general, short tabu tenures allow the exploration of solutions "close" to a local
optimum, while long tenures can help to break free from the vicinity of a local
optimum. These functions illustrate a special instance of the notions of intensification and
diversification that will be explored in more detail later. Varying the tabu tenure during
the search provides one way to induce a balance between closely examining one region
and moving to different parts of the solution space.
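One simple way to vary the tenure is to draw each iteration's value from a robust range (the bounds here are hypothetical):

```python
import random

def dynamic_tenure(t_min=5, t_max=10):
    """Shorter tenures intensify the search near a local optimum;
    longer tenures help diversify away from it."""
    return random.randint(t_min, t_max)

tenures = [dynamic_tenure() for _ in range(100)]
```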

- In situations where a neighborhood may (periodically) become fairly small, or where a tabu tenure is chosen to be fairly large, it is entirely possible that iterations can occur when all available moves are classified tabu. In this case an aspiration-by-default is used to allow a move with a "least tabu" status to be considered admissible.
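A sketch of aspiration-by-default, representing each move as a (cost, remaining_tenure) pair (a hypothetical encoding; remaining_tenure == 0 means the move is not tabu):

```python
def select_move(moves):
    """Pick the cheapest non-tabu move; if every move is tabu,
    fall back to the move with 'least tabu' status."""
    non_tabu = [m for m in moves if m[1] == 0]
    if non_tabu:
        return min(non_tabu)                  # best admissible move
    return min(moves, key=lambda m: m[1])     # aspiration-by-default

choice = select_move([(10, 2), (12, 1), (15, 3)])   # all tabu -> least tabu
```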

- Aspirations such as those shown in Table 7 (page 38) can be applied according to two implementation categories: aspirations by move and aspirations by attribute. A move aspiration, when satisfied, revokes the move's tabu classification. An attribute aspiration, when satisfied, revokes the attribute's tabu-active status.
The pseudo-code of the heuristic is given below:

1. Initialize;
2. Input the data and parameters;
3. Generate an initial feasible solution using a greedy approach and set it as both the current solution and the best solution so far;
4. Construct five separate tabu lists for the neighbourhood searches;
5. While ( ConsIter < MaxConsIter ) do begin
6. While ( |CandList| < MaxCandList ) do begin
7. Select one of the five types of neighbourhood move randomly and create the corresponding items;
8. Perform the corresponding operation on the current solution;
9. If the condition of elimination is satisfied, then implement elimination to remove one route;
10. Add the solution produced by the selected move to the candidate list;
11. End;
12. Select the best non-tabu solution in the candidate list, or a tabu solution better than the best one found so far (aspiration);
13. Set the new solution as the current solution, update the tabu list and increment Iter;
14. If the new solution improves the best solution so far, update the best solution so far and set ConsIter to 0; otherwise, increment ConsIter;
15. Update the corresponding tabu list;
16. End.
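The loop above can be sketched generically in Python (a simplified version: a single tabu list over whole solutions instead of five attribute lists, a deterministic candidate list, and a toy one-dimensional problem; all names are hypothetical):

```python
def tabu_search(initial, neighbors, cost, max_cons_iter=50, tenure=7):
    current = best = initial
    tabu = {}                 # solution -> iteration until which it is tabu
    it = cons = 0
    while cons < max_cons_iter:                       # step 5
        candidates = neighbors(current)               # steps 6-11
        admissible = [c for c in candidates
                      if tabu.get(c, -1) < it         # non-tabu, or ...
                      or cost(c) < cost(best)]        # ... aspiration (step 12)
        if not admissible:
            cons += 1
            continue
        current = min(admissible, key=cost)           # step 13
        tabu[current] = it + tenure
        it += 1
        if cost(current) < cost(best):                # step 14
            best, cons = current, 0
        else:
            cons += 1
    return best

# Toy problem: minimize |x - 7| over the integers, moves are x -> x +/- 1
best = tabu_search(0, lambda x: [x - 1, x + 1], lambda x: abs(x - 7))
```

Note that the tabu list here forbids revisiting whole recent solutions; the attribute-based memories discussed earlier are usually far cheaper in space and time.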
