5.3. Heuristics For The TSP
Most of the following heuristics are designed to solve symmetric instances of the TSP, although
they can be modified for asymmetric cases. Let us assume that the distances
• are nonnegative and symmetric
• satisfy the triangle inequality.
Without these assumptions, the worst-case bounds given below for the objective function value may not be valid.
Many of these methods, e.g. the construction heuristics, start with an arbitrary vertex. To
increase reliability, the procedure can be repeated from several (or all) vertices in turn. Then the
computation time multiplies correspondingly.
Example 5.5. The heuristics are demonstrated with this example of n=5 vertices and distances
in the following matrix:
cij    1    2    3    4    5
 1     -    8    4    9    9
 2     8    -    6    7   10
 3     4    6    -    5    6
 4     9    7    5    -    4
 5     9   10    6    4    -
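As an illustration, a minimal Python sketch that stores this distance matrix as data and checks the assumptions stated above (the variable name C and the 0-based indexing of the nodes 1..5 are arbitrary choices):

# Distance matrix of Example 5.5; node v (1..5) is stored at index v-1,
# and 0 replaces "-" on the diagonal.
C = [
    [0, 8, 4, 9, 9],
    [8, 0, 6, 7, 10],
    [4, 6, 0, 5, 6],
    [9, 7, 5, 0, 4],
    [9, 10, 6, 4, 0],
]
n = len(C)
# nonnegative and symmetric
assert all(C[i][j] >= 0 and C[i][j] == C[j][i] for i in range(n) for j in range(n))
# triangle inequality: cij <= cik + ckj for all i, j, k
assert all(C[i][j] <= C[i][k] + C[k][j]
           for i in range(n) for j in range(n) for k in range(n))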
In construction methods, the tour is built up by adding new vertices to a partial tour or path.
Example 5.5.
Starting with node 1, the solution is 1-3-4-5-2-1, f = 31
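This is exactly the tour produced by a greedy nearest-neighbour rule: from the current node, always move to the nearest unvisited node and finally return to the start. A minimal Python sketch (function name and indexing are arbitrary choices; C is the distance matrix above):

def nearest_neighbour(C, start=0):
    # Greedy construction: repeatedly move to the nearest unvisited node.
    n = len(C)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: C[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                     # close the cycle
    return tour

C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
t = nearest_neighbour(C, start=0)          # start from node 1
print([v + 1 for v in t], sum(C[a][b] for a, b in zip(t, t[1:])))   # 1-3-4-5-2-1, 31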
Example 5.5.
Savings matrix, sij = cik + ckj - cij, with center k=3:
sij    1    2    4    5
 1     -    2    0    1
 2     2    -    4    2
 4     0    4    -    7
 5     1    2    7    -
Solution: 1-3-5-4-2-1, f = 29
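Only the savings values and the final tour are shown here. Below is a sketch of a Clarke-Wright style greedy merging, assuming the standard rule: process the savings sij in decreasing order, link i and j whenever both still have degree < 2 and no premature cycle arises, and finally close the resulting path through the center k. Function and variable names are arbitrary; on this instance it reproduces the tour of length 29.

def savings_tour(C, center):
    # Savings heuristic: s_ij measures what is saved by visiting i and j
    # consecutively instead of via two separate trips through the center.
    n = len(C)
    others = [v for v in range(n) if v != center]
    savings = sorted(((C[i][center] + C[center][j] - C[i][j], i, j)
                      for i in others for j in others if i < j), reverse=True)
    degree = {v: 0 for v in others}
    comp = {v: v for v in others}          # union-find for the cycle check

    def find(v):
        while comp[v] != v:
            v = comp[v]
        return v

    edges = []
    for s, i, j in savings:
        if degree[i] < 2 and degree[j] < 2 and find(i) != find(j):
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
            comp[find(i)] = find(j)
        if len(edges) == len(others) - 1:  # a Hamiltonian path on the other nodes
            break
    adj = {v: [] for v in others}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    # walk the path from one endpoint and close the tour through the center
    start = next(v for v in others if degree[v] == 1)
    tour, prev, cur = [center, start], None, start
    while len(tour) < n:
        nxt = next(w for w in adj[cur] if w != prev)
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour + [center]

C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
t = savings_tour(C, center=2)              # center k = 3
print([v + 1 for v in t], sum(C[a][b] for a, b in zip(t, t[1:])))   # length 29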
C. INSERTION METHODS
In insertion methods, the tour is constructed by inserting new nodes into subtours, i.e. partial
tours.
Notation:
T = subtour, partial tour
Δf = cik + ckj - cij = increase in tour length when node k is inserted between nodes i and j.
Define the distance from node k to subtour T as d(k,T) = min {ckj : j in T}.
3. Find a node k not in the subtour that is nearest to it, i.e. for which d(k,T) = min {d(i,T) : i not in T}.
4. Find an edge [i,j] of the subtour to insert k, such that the increase of length Δf = cik + ckj - cij
is minimized. Modify the subtour by inserting k between i and j.
5. Go to 3 until a Hamiltonian cycle is formed.
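Steps 1-2 (the choice of the initial subtour) are not preserved above; the sketch below assumes the common initialisation with a start node and its nearest neighbour, and then follows steps 3-5 with the nearest-to-subtour selection rule. Names and indexing are arbitrary choices.

def nearest_insertion(C, start=0):
    n = len(C)
    # assumed steps 1-2: initial subtour = start node and its nearest neighbour
    first = min((j for j in range(n) if j != start), key=lambda j: C[start][j])
    tour = [start, first]                  # closing edge back to start is implicit
    outside = set(range(n)) - set(tour)
    while outside:
        # step 3: node nearest to the subtour, d(v,T) = min over j in T of c_vj
        k = min(outside, key=lambda v: min(C[v][j] for j in tour))
        # step 4: edge [i,j] of the subtour whose opening increases the length least
        m = len(tour)
        pos = min(range(m), key=lambda p: C[tour[p]][k] + C[k][tour[(p + 1) % m]]
                                          - C[tour[p]][tour[(p + 1) % m]])
        tour.insert(pos + 1, k)
        outside.remove(k)
    return tour + [start]                  # step 5: stop when the cycle is Hamiltonian

# Example 5.5, started from node 1: the intermediate subtours are the same
# cycles as 1-3-4-1 and 1-3-4-5-1 below (possibly traversed in the opposite
# direction); the final tour has length 30 here.
C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
t = nearest_insertion(C)
print([v + 1 for v in t], sum(C[a][b] for a, b in zip(t, t[1:])))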
Example 5.5.
T = 1-3-4-1 → T = 1-3-4-5-1
Example 5.5.
T = 1-2-3-1 → T = 1-2-4-3-1
Solution: T = 1-2-4-5-3-1, f = 29
3. Find a node k not in the subtour that is farthest from it, i.e. for which d(k,T) = max {d(i,T) : i not in T}.
4. Find an edge [i,j] of the subtour to which the insertion of k gives the smallest increase of
length, i.e. for which Δf = cik + ckj - cij is smallest. Modify the subtour by inserting k between i
and j.
5. Go to 3 until a Hamiltonian cycle is formed.
According to experimental results, the Farthest Insertion Method usually gives a better average
function value than nearest and cheapest insertion, although it is not known whether the ratio of
the obtained tour length to the optimum has a constant upper bound.
Example 5.5.
T = 1-4-1 → T = 1-2-4-1
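Farthest insertion differs from the previous sketch only in step 3, where the node farthest from the subtour is selected (and, as an assumed initialisation, the node farthest from the start is used). Started from node 1 it reproduces the subtours 1-4-1 and 1-2-4-1 shown above.

def farthest_insertion(C, start=0):
    n = len(C)
    # assumed initialisation: start node and the node farthest from it
    first = max((j for j in range(n) if j != start), key=lambda j: C[start][j])
    tour = [start, first]
    outside = set(range(n)) - set(tour)
    while outside:
        # step 3: choose the node whose distance to the subtour,
        # d(v,T) = min over j in T of c_vj, is largest
        k = max(outside, key=lambda v: min(C[v][j] for j in tour))
        # step 4: cheapest insertion position, as before
        m = len(tour)
        pos = min(range(m), key=lambda p: C[tour[p]][k] + C[k][tour[(p + 1) % m]]
                                          - C[tour[p]][tour[(p + 1) % m]])
        tour.insert(pos + 1, k)
        outside.remove(k)
    return tour + [start]

C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
t = farthest_insertion(C)
print([v + 1 for v in t], sum(C[a][b] for a, b in zip(t, t[1:])))   # length 29 here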
Example 5.5. (minimum spanning tree of the instance; figure omitted)
E. METHOD OF CHRISTOFIDES
1. Construct the minimum spanning tree T for the weighted graph G.
2. Identify the nodes with odd degree. Construct a minimum weight perfect matching for these
nodes. Append the edges of the matching to the minimum spanning tree; every node then has an
even degree. Generate an Eulerian cycle in this graph.
3. Form a Hamiltonian cycle using shortcuts as in the previous method.
Evaluation of the solution: fa / fmin ≤ 1.5. This is the best known worst-case bound
among non-exact methods!
Worst-case complexity: T(n) = O(n³)
A minimum weight perfect matching in a complete graph of n vertices can be found in time
O(n³). Some tree edges may be duplicated when the matching edges are appended to the tree.
Example 5.5.
Start with the minimum spanning tree given in the previous example.
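A sketch of the three steps in Python, assuming the networkx library is available (a recent 3.x version, where nx.min_weight_matching returns a minimum-weight perfect matching on a complete graph with an even number of nodes); the helper names follow the networkx API, everything else is an arbitrary choice.

import networkx as nx

def christofides_sketch(C):
    n = len(C)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=C[i][j])
    # 1. minimum spanning tree
    T = nx.minimum_spanning_tree(G)
    # 2. odd-degree nodes of T, minimum weight perfect matching between them;
    #    appending the matching edges may duplicate some tree edges
    odd = [v for v in T.nodes if T.degree(v) % 2 == 1]
    M = nx.min_weight_matching(G.subgraph(odd))
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    # every node of H now has even degree, so an Eulerian cycle exists
    # 3. shortcut the Eulerian cycle to a Hamiltonian cycle
    seen, tour = set(), []
    for u, _ in nx.eulerian_circuit(H, source=0):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour + [tour[0]]

# Example 5.5: the resulting tour depends on tie-breaking in the matching and
# in the Eulerian cycle, but under the triangle inequality its length is at
# most 1.5 times the optimum.
C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
t = christofides_sketch(C)
print([v + 1 for v in t], sum(C[a][b] for a, b in zip(t, t[1:])))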
F. MERGING METHODS

IMPROVEMENT METHODS
Improvement methods start with a complete tour and try to improve it gradually. Many
improvement methods use local search or neighborhood search.
The definition of a neighborhood depends on the problem, and it often has a strong effect on the
quality of the solution.
Definition 5.1.
a) Operation k-change means deleting k edges from a tour and replacing them with k edges such
that the result is a Hamiltonian cycle.
b) A k-neighborhood of a TSP-solution (Hamiltonian cycle) x is the set of Hamiltonian cycles
that result from x with a k-change.
c) A TSP tour (Hamiltonian cycle) is k-optimal if it cannot be improved by any k-change.
We can say that any k-optimal solution is a locally optimal solution of the TSP with respect to
the k-neighborhood.
Notice that in a directed Hamiltonian cycle, some of the old edges may change direction after a
k-change.
ALGORITHM k-OPT:
1. Construct an initial tour (Hamiltonian cycle) x.
2. Search the k-neighborhood of x for a better tour x', i.e. one with f(x') < f(x).
3. If such a tour is found, set x = x' and go to 2; otherwise stop: x is k-optimal.
This is a typical local search for the TSP. In step 1, a construction method or random choice can be
applied.
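As a concrete illustration of steps 1-3 with k = 2, here is a minimal first-improvement 2-Opt sketch (names are arbitrary); reversing a segment of the tour is exactly the 2-change of Definition 5.1.

import random

def tour_length(tour, C):
    # total length of the closed tour; tour is a list of n nodes,
    # the closing edge back to tour[0] is implicit
    return sum(C[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, C):
    n = len(tour)
    improved = True
    while improved:                        # step 3: repeat while an improvement is found
        improved = False
        for i in range(n - 1):             # step 2: scan the 2-neighborhood
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue               # these two edges are adjacent in the cycle
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # 2-change: replace edges (a,b) and (c,d) by (a,c) and (b,d),
                # i.e. reverse the segment tour[i+1 .. j]
                if C[a][c] + C[b][d] < C[a][b] + C[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour                            # 2-optimal tour

# Example 5.5: step 1 by random choice, steps 2-3 by 2-Opt.
C = [[0,8,4,9,9],[8,0,6,7,10],[4,6,0,5,6],[9,7,5,0,4],[9,10,6,4,0]]
random.seed(1)
tour = list(range(len(C)))
random.shuffle(tour)
tour = two_opt(tour, C)
print([v + 1 for v in tour], tour_length(tour, C))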
Typically small values of the parameter k are used: k=2 or k=3. Then the change is relatively easy
to perform and the neighborhood is small. The 3-Opt procedure takes about n times as long as the
2-Opt procedure. This algorithm, too, can be repeated from different randomly generated starting
solutions to improve the solution.
When k is increased, the amount of computation grows but the solution improves. It has been
shown experimentally that values of k > 3 do not give a further advantage in the quality of the
solution. The best results are obtained with methods where the value of k is changed dynamically.
Modifications:
• Repeat the composite procedure with varying construction methods. Random generation is
recommended because it tends to cover the solution space more evenly.
• Repeat the construction step 1 from several starting vertices. Apply steps 2-3 to the best initial
solution.
LIN-KERNIGHAN HEURISTIC
This is a very powerful method based on k-Opt. Different versions of the Lin-Kernighan
method exist, and they have given good, "near-optimal" results in empirical tests. The method
is more complicated to code than the simple k-Opt.
(http://www.research.att.com/~dsj/papers/TSPchapter.pdf)
Several other heuristics are available for the TSP. Some special methods are introduced in the
last chapter:
- Genetic Algorithms
- Simulated Annealing
- Tabu Search