
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2853129, IEEE Access.

A hierarchical algorithm based on Density Peaks Clustering and Ant Colony Optimization for Traveling Salesman Problem

Erchong Liao, Member, IEEE, and Changan Liu

Abstract—This paper proposes a hierarchical hybrid algorithm for the Traveling Salesman Problem (TSP) based on the idea of divide-and-conquer. The TSP is decomposed into a few subproblems with small-scale node sets by the Density Peaks Clustering (DPC) algorithm. Every subproblem is solved by the Ant Colony Optimization (ACO) algorithm; this is the lower layer. The center nodes of all subproblems constitute a new TSP, which forms the upper layer. All local tours of the subproblems are joined to generate the initial global tour in the same order in which the center nodes are traversed in the upper-layer TSP. Finally, the global tour is optimized by k-Opt algorithms. Thirty benchmark instances taken from TSPLIB are divided into three groups on the basis of problem size: small-scale, large-scale, and very large-scale. Experimental results show that the proposed algorithm obtains solutions with higher accuracy and stronger robustness, and significantly reduces runtime, especially for very large-scale TSP instances.

Index Terms—Ant Colony Optimization, Density Peaks Clustering algorithm, k-Opt algorithm, Traveling Salesman Problem

I. INTRODUCTION

THE Traveling Salesman Problem (TSP) asks for the shortest tour that traverses all nodes once and only once. TSP is a typical combinatorial optimization problem. In the symmetric TSP, the distance from node i to node j equals the distance from node j to node i. A problem with n nodes has a set of (n−1)! feasible solutions, and the time complexity is O(n!) [1]. The computational complexity of the TSP therefore increases exponentially with the problem size, which makes it a well-known NP-hard problem.

According to solution accuracy, TSP algorithms fall into two categories. One is the exact algorithms, which can find the optimal solution for instances of up to thousands of nodes; typical examples are branch and bound [2], branch and cut [3], branch and price [4], and dynamic programming [5]. The other is the heuristic intelligent algorithms, e.g., Genetic Algorithm (GA) [1], [6], [7], Simulated Annealing (SA) [8], [9], Artificial Bee Colony algorithm (ABC) [10], Ant Colony Optimization (ACO) [11], Particle Swarm Optimization (PSO) [12], neural networks [13], [14], tabu search [15], the shuffled frog-leaping algorithm [16], and the wedging insertion method [17].

Every algorithm has its strengths and weaknesses. Exact algorithms can obtain the optimal solution but usually consume long runtimes, particularly for large-scale TSP. Intelligent algorithms can find a near-optimal solution in a relatively short time, and hybrid intelligent algorithms that synthesize multiple methods can achieve an even more competitive performance.

Othman et al. study the performance and behavior of the Water Flow-like Algorithm (WFA) applying 2-Opt and 3-Opt to TSP [18].

A parallelized neural network system (PMSOM) is proposed to solve the TSP [19]. It divides all cities into municipalities, finds the best solution for each municipality, and joins neighboring municipalities to generate the final solution. Test results show that the approach obtains a better answer in terms of both solution quality and runtime.

A hybrid genetic algorithm (HGA) with two local optimization strategies is introduced to solve TSP [20]. Computational results show that it performs better than GA.

A discrete symbiotic organisms search (DSOS) algorithm is proposed to find a near-optimal solution for TSP [21]. The symbiotic organisms search is improved and extended by using three mutation-based local search operators to reconstruct its population. Numerical results show that the proposed method can achieve results close to the theoretical best known solutions (BKS) within a reasonable time.

In the improved fruit fly optimization algorithm (IFOA), three improvements are incorporated [22]. The results show a very competitive performance in terms of convergence rate and precision.

A hybrid of dynamic programming and a memetic algorithm is introduced to solve the TSP with hotel selection, a variant of the classic TSP [23]. Experiments show the proposed approach has remarkable performance.

This work was supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016MS121.
Erchong Liao is with the School of Control and Computer Engineering, North China Electric Power University, Baoding 071003, China (e-mail: [email protected]).
Changan Liu is with the School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China (e-mail: [email protected]).


An effective local search algorithm based on simulated annealing and greedy search techniques (ASA-GS) is presented to solve the TSP [24]. Test results show that the algorithm provides a better compromise between CPU time and solution accuracy.

PSO and GA suffer from premature convergence and converge poorly on high-dimensional complex problems, so they cannot guarantee the optimal solution. Neural networks easily generate unreasonable or locally optimal solutions. ACO is also prone to premature convergence, but it is a positive-feedback method with very few parameters, which makes it easy to tune. ACO has been considered effective for small-scale TSP [25], [26], and some hybrid algorithms based on ACO achieve a more competitive performance.

Ant colony extended (ACE) is a novel extension of the general ACO framework [27]. Experimental results show that ACE outperforms the ant colony system and the max-min ant system on almost every tested TSP instance.

A new hybrid method based on SA, GA, ACO, and PSO is presented to solve TSP [28]. First, initial solutions are generated by ACO; they are then optimized by genetic simulated annealing. After a given number of iterations, pheromone information is exchanged between groups by PSO. Experimental results on TSPLIB show that the proposed method finds better solutions than the others.

In [29], a parallel cooperative hybrid algorithm (PACO-3Opt) is proposed to solve TSP. The ACO algorithm first generates initial solutions in parallel, and the 3-Opt algorithm then improves them. The algorithm enhances the quality and robustness of the obtained solutions, significantly decreases the computational time, and can solve large-scale TSP instances within a reasonable time.

A hybrid algorithm (ACO-Taguchi) based on ACO and the Taguchi method is proposed to solve TSP [30]. Its performance improves after the ACO parameters are optimized with the Taguchi method.

Wang presents a hybrid max-min ant system (HMMA) [18] with the four-vertices-and-three-lines inequality to solve TSP. The experimental results show that it can find better approximate solutions.

A hierarchical algorithm based on clustering and ACO (HCACO) is presented for solving large-scale TSP [31]. The large-scale TSP is first decomposed into several small-scale subproblems by the k-means algorithm, and every subproblem is solved by ACO. Then the local solutions of all subproblems are merged. Numerical simulation results show that the algorithm has a beneficial effect for large-scale TSP.

In [32], a hybrid method called PSO-ACO-3Opt is proposed. PSO optimizes the parameters of ACO, the initial solution is obtained by ACO, and 3-Opt is finally used to avoid falling into a local minimum. Experimental results show that the method outperforms the other compared methods in terms of solution quality and robustness.

Another hybrid algorithm (C-PSO-ACO-KOpt) based on ACO, PSO, and k-Opt is proposed to solve TSP [33]. ACO is used to generate the initial swarm of PSO. PSO then searches this swarm for an optimal path, which is improved by k-Opt. Benchmark results show that the algorithm is more efficient in terms of both accuracy and runtime.

For a TSP instance, a feasible strategy is to first decompose it into a number of small-scale subproblems that preserve the main structure of the optimal tour, then solve all small-scale subproblems, and finally merge all local tours.

This article proposes a hybrid hierarchical algorithm for TSP. First, the TSP is decomposed into small-scale TSPs by a clustering algorithm, exploiting the superiority of ACO on small-scale TSP. The tour in each cluster is then solved by ACO, and the local tours are merged across clusters. Finally, the k-Opt algorithm is used to escape from local optima and obtain the global tour. Three groups of benchmarks from TSPLIB are used to validate the algorithm. The results show that the algorithm can obtain tours close to the theoretical optimal values in a reasonably short runtime and has strong robustness.

The rest of the article is organized as follows. Section 2 introduces the Density Peaks Clustering (DPC), ACO, and k-Opt algorithms. The proposed algorithm is explained in detail in Section 3. Experiments and comparisons are presented in Section 4, and the results are analyzed in Section 5. Finally, Section 6 draws conclusions and outlines future work.

II. MATERIALS AND METHODS

A. Density Peaks Clustering

There are many clustering algorithms, such as k-means [34], the affinity propagation algorithm [35], and density peaks clustering [36]. The DPC algorithm assumes that cluster centers are surrounded by neighbors with lower local density and that the centers lie at a relatively large distance from any point of higher local density. For each point i, the local density ρ_i and the distance δ_i to points of higher local density are computed. Both quantities rely only on the distances d_ij between points. The local density ρ_i of point i is calculated as (1):

  ρ_i = Σ_j χ(d_ij − d_c),  (1)

where

  χ(x) = 1 if x < 0, and χ(x) = 0 otherwise.  (2)

Thus ρ_i is the number of points closer than the cutoff distance d_c to point i. The clustering result is not sensitive to the value of d_c when the number of nodes is large. δ_i is the shortest distance between point i and any other point of higher density, defined as (3):

  δ_i = min_{j: ρ_j > ρ_i} d_ij.  (3)

If point i has the highest density, δ_i is instead computed as (4):

  δ_i = max_j d_ij.  (4)

The cluster centers are the points with especially large δ_i; each remaining point is assigned to the same cluster as its nearest neighbor of higher density. The DPC algorithm needs no iteration and is executed in a single pass, which makes it robust and efficient.
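As a concrete illustration of Eqs. (1)-(4), the following is a minimal Python sketch of DPC (not the authors' MATLAB code). The index-based tie-breaking for equal densities and the ranking of centers by the product ρ·δ are implementation assumptions, not details from the paper:

```python
import numpy as np

def dpc_cluster(points, dc, n_centers):
    """Minimal Density Peaks Clustering sketch.
    points: (n, 2) coordinate array; dc: cutoff distance d_c;
    n_centers: number of clusters wanted. Returns (centers, labels)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # Eqs. (1)-(2): rho_i = number of points within the cutoff distance d_c.
    rho = (d < dc).sum(axis=1) - 1                 # exclude the point itself
    order = np.argsort(-rho, kind="stable")        # decreasing density
    # Eqs. (3)-(4): delta_i = distance to the nearest higher-density point;
    # the densest point instead gets its distance to the farthest point.
    delta = np.empty(n)
    nearest_higher = np.zeros(n, dtype=int)
    delta[order[0]] = d[order[0]].max()
    for idx in range(1, n):
        i, prev = order[idx], order[:idx]
        j = prev[np.argmin(d[i, prev])]
        delta[i], nearest_higher[i] = d[i, j], j
    # Cluster centers: points where both rho and delta are large,
    # ranked here (an assumption) by the product rho * delta.
    centers = np.argsort(-(rho * delta), kind="stable")[:n_centers]
    labels = -np.ones(n, dtype=int)
    labels[centers] = np.arange(n_centers)
    # Remaining points, taken in decreasing density, inherit the label
    # of their nearest higher-density neighbor -- one pass, no iteration.
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return centers, labels
```

On two well-separated groups of nodes this assigns one label per group in a single pass, which is the single-step behavior the text describes.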
C. k-Opt
B. Ant Colony Optimization

The ant colony algorithm [26], [37], [38], [39], proposed by M. Dorigo in 1991, is a heuristic search algorithm based on swarm intelligence. In nature, ants can find the shortest tour between the nest and a food source. Ants deposit a chemical substance, called pheromone, along the way they pass. Other ants follow the pheromone trail while searching for the food source, and reinforce the pheromone when returning to the nest. Meanwhile, the pheromone evaporates over time, which reduces its attractiveness. The pheromone density on a route depends on how often the route has been selected, especially recently, so ants can find the shortest tour by choosing the routes with the highest pheromone density. Inspired by this foraging behavior, ant colony optimization was proposed to solve optimization problems in discrete systems. At present, the algorithm has been widely used for the traveling salesman problem, the assignment problem [40], the scheduling problem [41], feature selection [42], and path planning for mobile robots [43]. The ant colony algorithm is a stochastic optimization method that requires no prior knowledge: ants select nodes randomly, gradually optimize the tour with the aid of pheromones, and finally obtain the globally optimal tour.

Assume that the TSP has n nodes and m ants, and let k denote the kth ant in the colony. d_ij denotes the distance between node i and node j. At time t, the kth ant moves from node i to node j according to the transition probability in (5):

  P^k_ij(t) = τ_ij(t)^α η_ij(t)^β / Σ_{s∈a_k} τ_is(t)^α η_is(t)^β, if j ∈ a_k; otherwise P^k_ij(t) = 0.  (5)

Here a_k is the set of nodes not yet visited by the kth ant, and i, j, and s are node identifiers. τ_ij(t) is the intensity of the trail between node i and node j at time t; η_ij(t) is the visibility at time t, generally given by 1/d_ij. The parameters α and β control the relative importance of trail versus visibility. Over time, pheromone evaporation occurs and the pheromone quantity is updated as in (6):

  τ_ij(t+1) = (1 − ρ) τ_ij(t) + Δτ_ij(t, t+1),  (6)

where ρ ∈ (0, 1) is the pheromone evaporation coefficient. Δτ_ij is the total quantity of pheromone added or removed by all ants between node i and node j, calculated in (7):

  Δτ_ij(t, t+1) = Σ_{k=1}^{m} Δτ^k_ij(t, t+1).  (7)

In the ant-cycle system, the quantity of pheromone left by the kth ant between node i and node j is determined as in (8):

  Δτ^k_ij(t, t+1) = Q / L_k, if the kth ant uses edge (i, j); otherwise Δτ^k_ij(t, t+1) = 0.  (8)

Here Q is a constant and L_k is the tour length of the kth ant. The algorithm continues until the maximum number of iterations is met, and the tour with the shortest length is taken as the final solution.

C. k-Opt

Some optimization algorithms (e.g., GA, ABC) are used to decrease the length of the tour, but they sometimes fall into a local minimum while searching for the optimal TSP tour. The k-Opt algorithm has been used to avoid such local optima; 2-Opt [44] and 3-Opt [45] are its typical subclasses.

The 2-Opt algorithm removes two edges from the tour and reconnects them to obtain a new tour, and the step is repeated only while the latest tour is shorter. Repeatedly removing and reconnecting edges leads to an optimized tour. The 3-Opt algorithm is similar, but instead of removing two edges, three are removed and reconnected.

If the three removed edges are spaced among six nodes, all possible 3-Opt reconnection variants are as shown in Fig. 1, and Fig. 2 demonstrates all possible variants among five nodes.

Fig. 1 (image omitted). All possible 3-Opt reconnection variants among six nodes. There are seven variants, of which (a), (b), and (c) are essentially 2-Opt movements, while (d), (e), (f), and (g) are 3-Opt movements.

Fig. 2 (image omitted). All possible 3-Opt reconnection variants among five nodes. The red edge is relatively motionless. There are five possible variants, of which (c), (d), and (e) are essentially 2-Opt movements, while (a) and (b) are 3-Opt movements, and they are symmetric.
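The ant-cycle construction and update rules of Eqs. (5)-(8) can be sketched as follows. This is an illustrative Python version, not the authors' code; the parameter defaults and the uniform initial pheromone level are placeholder assumptions:

```python
import random

def aco_tsp(d, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            rho=0.1, Q=1.0, seed=0):
    """Ant-cycle ACO sketch following Eqs. (5)-(8).
    d: symmetric distance matrix; returns (best_tour, best_length)."""
    rng = random.Random(seed)
    n = len(d)
    tau = [[1.0] * n for _ in range(n)]                       # trail intensity
    eta = [[0.0 if i == j else 1.0 / d[i][j] for j in range(n)]
           for i in range(n)]                                 # visibility 1/d_ij
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        deposit = [[0.0] * n for _ in range(n)]               # Eq. (7) accumulator
        for _k in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # Eq. (5): pick j among unvisited nodes with probability
                # proportional to tau^alpha * eta^beta.
                w = [tau[i][j] ** alpha * eta[i][j] ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour[:], length
            for i in range(n):                                # Eq. (8): deposit Q / L_k
                a, b = tour[i], tour[(i + 1) % n]
                deposit[a][b] += Q / length
                deposit[b][a] += Q / length
        for i in range(n):                                    # Eq. (6): evaporation
            for j in range(n):                                # plus new deposits
                tau[i][j] = (1 - rho) * tau[i][j] + deposit[i][j]
    return best_tour, best_len
```

On a tiny instance such as four nodes on a unit square, the sketch quickly converges to the perimeter tour of length 4.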

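The 2-Opt move described above can be sketched as follows (an illustrative Python version, not the paper's implementation; the first-improvement loop structure is an assumption):

```python
def two_opt(tour, d):
    """2-Opt sketch: repeatedly remove two edges and reconnect them,
    reversing the segment in between, while the tour keeps getting shorter.
    tour: list of node indices (closed tour); d: distance matrix."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                      # edges share node tour[0]
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,e) by (a,c) and (b,e)
                # whenever that shortens the tour.
                if d[a][c] + d[b][e] < d[a][b] + d[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

For example, a self-crossing tour over four corners of a unit square is uncrossed into the perimeter tour by a single 2-Opt move.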

A 3-Opt movement is equal to two or three 2-Opt movements. A 3-Opt movement may provide better solutions, but it is significantly slower.

III. PROPOSED ALGORITHM (DPC-ACO-KOPT)

Generally, when solving the TSP with the ACO algorithm, the computational cost increases rapidly and nonlinearly with the dimension of the TSP instance. When there are more than 200 nodes, it is difficult to compromise between runtime and solution accuracy. Jiang et al. consider that the runtime and solution quality of the ACO algorithm are both in the optimal region when there are fewer than 40 nodes, but that runtime grows and quality drops beyond 40 [31]. In [31], the clustering is performed with the k-means algorithm. K-means is suitable for nodes with a circular or spherical distribution, but not for a linear distribution. DPC, which groups nodes according to density, works effectively for more distribution styles, such as circular, spherical, curvilinear, and linear. Applying DPC in this setting is a new attempt.

As shown in Fig. 3 and Fig. 4, this paper proposes a hierarchical hybrid method based on ACO. First, all nodes are grouped and the cluster center node of each group is found by the DPC algorithm, which takes only a short, linear time. The nodes are thereby divided into two layers. The lower layer of the proposed approach consists of all clusters, and every cluster is a small-scale TSP that can be solved by the ACO algorithm. All center nodes comprise the upper layer, where each center node represents one cluster of the lower layer; the upper layer is therefore also a small-scale TSP. According to the sequence of the groups in the upper layer, the initial global tour visiting all nodes is constructed by merging the local tours of all clusters in the lower layer. The final global tour is obtained after optimizing the initial global tour with the k-Opt algorithm.

In this way, the local tour of each group in the lower layer is integrated into the global tour.

Fig. 3 (image omitted). Schematic diagram of the proposed hierarchical algorithm: (a) TSP; (b) grouping and finding center points by the density peaks clustering algorithm; (c) solving in each group and among groups by ACO; (d) merging and local optimizing.

Fig. 4 (image omitted). The flowchart of the proposed algorithm: Begin; initialize basic parameters; read the location information of all nodes; compute the distance matrix; cluster all nodes and obtain the center node of each cluster by DPC; adjust parameters according to the scale of the groups; gain the macro TSP path among all center nodes by ACO; obtain the TSP path in each cluster by ACO; find the nearest pair of nodes between two adjacent clusters; reconnect at the nearest nodes to construct the initial global path; optimize the global path by 2-Opt; optimize the global path by 3-Opt and get the final solution; get statistic information and print the report; End.
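The merge step in the flowchart, finding the nearest pair of nodes between two adjacent clusters and reconnecting there, can be sketched as follows. This is illustrative Python; the rotation-based splice is an assumption about how the Hamiltonian loops are broken and rejoined:

```python
def merge_tours(tour_a, tour_b, d):
    """Sketch of the merge step: break each local (Hamiltonian) loop at the
    closest pair of nodes between two adjacent clusters and splice them.
    tour_a, tour_b: node-index lists (closed tours); d: distance matrix.
    Returns one open path visiting all nodes of both clusters."""
    # Nearest pair: u at position i in tour_a, v at position j in tour_b.
    i, j = min(((i, j) for i in range(len(tour_a))
                for j in range(len(tour_b))),
               key=lambda ij: d[tour_a[ij[0]]][tour_b[ij[1]]])
    # Rotate so tour_a ends at u and tour_b starts at v, then concatenate,
    # so the only new edge is the shortest inter-cluster link (u, v).
    a = tour_a[i + 1:] + tour_a[:i + 1]          # ... -> u
    b = tour_b[j:] + tour_b[:j]                  # v -> ...
    return a + b
```

Merging all clusters in the order given by the upper-layer tour, and finally closing the path, yields the initial global tour that 2-Opt and 3-Opt then refine.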


All center nodes in the upper layer constitute a new TSP, which can also be solved by ACO. The order of the center nodes in the upper layer gives the sequence in which the local tours of the groups are integrated at the macro level. The local tour within each group is a Hamiltonian loop, so it must be disconnected at one node and reconnected with the adjacent group. To obtain a good tour, the pair of nodes with the shortest distance between two adjacent groups is disconnected in the lower-layer local tours and reconnected to construct the initial global TSP tour.

The ACO parameters used when merging local tours may differ from those used within clusters, because the number of groups differs; the parameters need to be determined by experiments over multiple problem sizes.

When clustering nodes with the DPC algorithm, as the number of nodes per cluster increases, the number of groups and the clustering time both decrease, but finding the local tour with ACO in each cluster takes more time, so the total runtime usually becomes longer. In addition, the number of nodes per group affects the solution accuracy of the proposed method. Thus, both the runtime and the accuracy of the algorithm are closely related to the number of nodes in each cluster.

Decomposing the problem into subproblems improves solution speed but may produce a nonoptimal solution when the local paths of the subproblems are connected. Therefore, the solution accuracy should be enhanced by a local path optimization algorithm such as k-Opt.

IV. EXPERIMENT

The proposed algorithm is implemented in MATLAB 2015b with a single thread; the CPU is an Intel(R) Core(TM) i7-6700 @ 3.40 GHz, the memory capacity is 8 GB, and the OS is Windows 7. TSPLIB [46] is the public library of TSP benchmark instances commonly used for testing optimization algorithms. The experiments are divided into three groups according to problem size: small-scale, large-scale, and very large-scale.

A. The Scale of Each Cluster

To find the optimal range for the number of nodes per cluster, the numbers from 30 to 80 with an interval of five are taken as candidate scales. Five instances (ch150, kroA200, rd400, d493, and p654) are selected for the parameter-determination experiments because they have different distribution styles and the nodes are not too many, yet widely and uniformly distributed. Each candidate number was repeated 20 times on the five TSPLIB instances. The changed relative error is computed according to (9):

  CRE = (L_ACO − L_HHA) / L_BKS × 100.  (9)

For a given TSP instance, L_HHA is the average tour length obtained by the hierarchical hybrid algorithm, while L_ACO is the average tour length obtained by the ACO and k-Opt algorithms without clustering. L_BKS is the length of the BKS for the same instance. CRE is the percentage by which the hierarchical hybrid algorithm improves accuracy compared with the algorithm without clustering; CRE greater than zero means the proposed algorithm performs better in terms of accuracy.

  CT = T_ACO / T_HHA.  (10)

Let T_ACO be the computational time of the traditional ACO algorithm and T_HHA the computational time of the hierarchical hybrid algorithm. CT in (10) is the ratio of the two times. If CT is greater than one, the proposed algorithm has a shorter runtime, and the runtime shrinks as CT increases.

To obtain unbiased comparisons, CRE and CT are both measured under the same external environment, e.g., the same ACO parameters and the identical k-Opt local optimization algorithm. This mechanism ensures that the environment is consistent and that performance with and without clustering is comparable.

Both accuracy improvement and runtime reduction must be considered jointly in the assessment. The evaluation function is defined as (11):

  EV = ω × CRE + CT.  (11)

Let ω be the weight coefficient of accuracy; it expresses how many times more runtime one is willing to spend to improve accuracy by one percent. Generally, ω is 100.

The relative error and runtime from this test are shown in Table I. We observed that the proposed algorithm improved accuracy and reduced runtime for all instances. The accuracy improvement was best for ch150 and kroA200, where the evaluation value was highest, and third best for rd400, while the runtime reduction was most obvious for ch150, kroA200, and d493, and second best for rd400 and p654. Over the five instances, the candidate number 35 was hit four times and the number 30 once. The results show that the runtime and relative error of the proposed algorithm are both in the best range when each group contains at most 35 nodes. The number of clusters grows with the number of nodes, and so do the communication and merge times between groups. In theory, the number of nodes is optimal for the ACO algorithms in both the upper and lower layers when the nodes are clustered into 35 groups and each group has at most 35 nodes on average.

That is, the hierarchical hybrid algorithm has a strong advantage around 1225 (35 × 35) nodes, especially in terms of runtime. But as the number of nodes keeps increasing, the number of groups increases synchronously, and so does the time for communication, integration, and optimization when merging local solutions into the global solution. Consequently, the superiority would gradually decline or even vanish. The proposed algorithm is tested on three groups by problem size: small-scale (50 to 350 nodes), large-scale (350 to 1200 nodes), and very large-scale (1200 to 3500 nodes). Instances with different distribution styles and sizes are selected to comprehensively test the performance of the proposed algorithm.

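Under the definitions in (9)-(11), the selection criterion can be computed as below. This is illustrative Python with made-up numbers, not measurements from the paper:

```python
def evaluate(L_aco, L_hha, L_bks, T_aco, T_hha, omega=100.0):
    """Eqs. (9)-(11): changed relative error, runtime ratio, and the
    combined evaluation value used to rank the cluster-size candidates."""
    cre = (L_aco - L_hha) / L_bks * 100.0   # Eq. (9): accuracy gain in %
    ct = T_aco / T_hha                      # Eq. (10): runtime ratio
    ev = omega * cre + ct                   # Eq. (11), omega = 100 by default
    return cre, ct, ev

# Hypothetical illustration: tour 5% of BKS shorter, runtime 5x smaller.
cre, ct, ev = evaluate(L_aco=110.0, L_hha=105.0, L_bks=100.0,
                       T_aco=50.0, T_hha=10.0)
```

With these placeholder inputs, CRE is 5 (a five-percentage-point accuracy gain), CT is 5 (a fivefold speedup), and EV weights the accuracy gain 100 times more heavily than the speedup.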

TABLE I
THE RELATIONSHIP BETWEEN PERFORMANCE AND THE MAXIMUM SCALE OF EACH CLUSTER

Candidate scale      30      35      40      45      50      55      60      65      70      75      80
ch150     CRE        3.1     3.43    2.54    3.02    1.55    1.38    1.45    1.1     1.29    1.8     1.91
          CT         19.33   19.45   12.88   12.9    5.54    5.52    5.48    5.42    5.47    3.2     3.19
          EV         329.22  362.69  266.49  314.8   160.11  143.86  150.88  115.18  134.85  182.76  194.59
kroA200   CRE        2.41    1.96    1.55    2.33    1.75    1.73    0.89    1.24    1.26    2.33    2.18
          CT         26.15   16.15   15.36   16.99   8.75    8.98    9.03    9.03    6.75    6.67    6.67
          EV         267.46  212.09  170.63  250.16  183.3   181.49  97.82   132.6   133.01  239.87  224.51
rd400     CRE        1.17    1.49    1.33    1.84    1.57    1.29    0.82    0.87    0.12    0.71    0.3
          CT         127.12  116.92  78.12   70.64   68.49   69.15   36.28   35.99   24.87   24.48   21.85
          EV         244.02  265.51  211.03  254.82  225.74  198.46  118.7   122.95  36.81   95.21   51.47
d493      CRE        1.39    1.95    1.7     1.59    2.89    2.65    2.74    3.29    2.97    3.4     3.76
          CT         218.27  326.09  325.49  324.85  85.61   81.9    81.27   80.66   81.94   71.1    71.02
          EV         357.03  520.68  495.65  484.27  374.28  347.32  355.39  410.04  378.96  411.03  446.94
p654      CRE        0.67    0.82    0.51    1.12    0.6     1.46    1.45    1.32    1.06    0.74    0.91
          CT         78.62   77.89   76.63   5.71    5.69    6.08    5.84    6.09    6.06    6.07    5.96
          EV         145.46  159.69  127.53  117.25  65.27   152.47  150.45  137.99  111.94  80.05   97.13
Hit counts           1       4       0       0       0       0       0       0       0       0       0

We used 10 symmetric TSP instances from TSPLIB (eil51, berlin52, st70, eil76, rat99, kroA100, eil101, lin105, ch150, and kroA200) for the small-scale problems, 11 instances (rd400, fl417, pr439, pcb442, d493, rat575, p654, d657, u724, rat783, and pcb1173) for the large-scale problems, and nine instances (d1291, nrw1379, fl1400, d1655, rl1889, vm1748, u2152, pr2392, and pcb3038) for the very large-scale problems. Every instance is repeated 100 times, and each run stops when its result repeats continually 1000 times.

B. Evaluation of the DPC-ACO-KOpt Algorithm on Small-Scale TSP Problems

There are usually fewer than 10 groups after applying the DPC algorithm to small-scale TSP problems. The communication time between groups is very short, and the number of nodes handled by ACO differs only slightly between the clustered and unclustered cases, so the runtime of the proposed algorithm decreases, but not markedly.

Since the node distribution and size differ between TSP instances, the optimal values of α and β in the ACO algorithm may also differ. In fact, the parameter β depends strongly on the parameter α [47]. The values of α and β in ACO are discussed in the literature, e.g., [32], [48], and [49]. Table II lists the best values of α and β for the small-scale TSPLIB instances from [31]; the same parameter values are used in our study. The additional parameters of the proposed algorithm are shown in Table III.

Generally, the number of ants m is directly proportional to the number of nodes, and the ratio is closely related to runtime and solution accuracy. Computational time increases with the ratio, and the solution may lose accuracy when the ratio is below or above the optimal region. The ratio was set to 0.6 based on the average tour length, the shortest tour length, and the runtime on the eil51 instance.

The k-Opt algorithm is used to escape from locally optimal solutions and has a significant effect on solution accuracy. The time cost and the accuracy improvement of the k-Opt algorithm on 10 small-scale TSP instances are shown in Table IV. The optimization effect is measured by (12): the percentage of the average tour-length decrease achieved by the k-Opt local optimization, relative to the length of the BKS. The time cost is calculated by (13): the percentage of the k-Opt runtime relative to the total runtime with k-Opt.

  DL_kopt = (L_avg,without k-Opt − L_avg) / L_BKS × 100.  (12)

  TC_kopt = (T_kopt / T_with k-Opt) × 100.  (13)

We observed that the k-Opt algorithm yielded better tour lengths for all instances. The optimization effect was most obvious for rat99, up to 13.14%; in the worst case, the tour length was decreased by 2.23% for eil51. The time consumed by k-Opt was below 0.26% of the total and almost negligible.

As previously mentioned, the characteristics of some related algorithms were briefly stated in Section 1. To test the performance of the hierarchical hybrid algorithm, 11 other algorithms were selected for comparison with the DPC-ACO-KOpt algorithm.

TABLE II
THE EXPONENTIAL PARAMETER VALUES FOR THE SMALL-SCALE TSP INSTANCES

TSP Instance   α      β
eil51          1.11   1.44
berlin52       0.95   1.05
st70           0.94   1.05
eil76          0.88   1.50
rat99          0.99   1.07
kroA100        1.01   1.10
eil101         1.20   0.75
lin105         1.20   0.65
ch150          0.75   1.20
kroA200        0.75   1.15

TABLE III
OTHER PARAMETER SETTINGS OF DPC-ACO-KOPT FOR THE SMALL-SCALE TSP INSTANCES

Parameters   ρ     Q   m
Values       0.1   1   0.6 times the number of nodes

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2853129, IEEE Access
TABLE IV
THE TIME COST AND OPTIMIZATION EFFECT OF K-OPT ALGORITHM ON THE 10 TSP INSTANCES

             Without k-Opt                    With k-Opt
Instance   Best    Average   Worst      Best    Average    Worst     DLkopt   TCkopt
eil51      427     435.75    445        426     426.25     427       2.23     0.02
berlin52   7756    8081.5    8612       7542    7542       7542      7.15     0.04
st70       707     735.8     778        675     675.25     680       8.97     0.04
eil76      571     589.5     611        538     538.3      541       9.52     0.03
rat99      1300    1371.05   1433       1211    1211.9     1215      13.14    0.07
kroA100    22412   23481.4   24202      21282   21283.65   21305     10.33    0.07
eil101     659     676.65    715        629     630.35     636       7.36     0.07
lin105     15171   15903.8   16955      14379   14380.1    14401     10.6     0.09
ch150      6882    7146.1    7370       6528    6536.5     6573      9.34     0.26
kroA200    30252   31745.8   33257      29368   29398.4    29461     7.99     0.18
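As a sanity check on Table IV, the DLkopt column can be reproduced from the tour lengths above. In our reading of (12), DLkopt is the decrease of the average tour length expressed as a percentage of the best tour length obtained with k-Opt; this is a minimal sketch (the variable names are ours, not the paper's):

```python
# Hedged check: reproduces the DL_kopt column of Table IV for four instances.
# Our reading of Eq. (12): DL_kopt = (avg_without - avg_with) / best_with * 100.
rows = {
    # instance: (avg without k-Opt, avg with k-Opt, best with k-Opt, reported DL_kopt)
    "eil51":  (435.75, 426.25, 426, 2.23),
    "rat99":  (1371.05, 1211.9, 1211, 13.14),
    "eil101": (676.65, 630.35, 629, 7.36),
    "ch150":  (7146.1, 6536.5, 6528, 9.34),
}
dl = {name: round((wo - w) / best * 100, 2)
      for name, (wo, w, best, _) in rows.items()}
for name, (_, _, _, reported) in rows.items():
    print(f"{name}: computed {dl[name]}%, reported {reported}%")
```

Using the with-k-Opt average as the denominator instead gives 13.13% for rat99 and 7.35% for eil101, which is why we read the denominator as the best tour length (equal to the BKS on these instances).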
algorithm: PACO-3Opt (2016) [29], PSO-ACO-3Opt (2015) [32], ACE (2015) [27], ASA-GS (2011) [24], SA-ACO-PSO (2011) [28], WFA-2Opt (2013) [18], WFA-3Opt (2013) [18], ACO-Taguchi (2013) [30], IFOA (2017) [22], DSOS (2017) [21], and C-PSO-ACO-KOpt (2017) [33]. Table V demonstrates the comparison of the DPC-ACO-KOpt algorithm with these 11 algorithms for 10 small-scale TSP instances. The quality criteria of the solutions are the average tour length, the standard deviation (SD), and the relative error (RE). The RE is calculated in (14). LBKS is the length of the BKS, and Lavg denotes the average tour length found by DPC-ACO-KOpt. The better
TABLE V
COMPARISON OF THE PROPOSED ALGORITHM WITH OTHER ALGORITHMS ON THE SMALL-SCALE TSP INSTANCES

Instance:   eil51   berlin52   st70   eil76   rat99   kroA100   eil101   lin105   ch150   kroA200
BKS:        426     7542       675    538     1211    21282     629      14379    6528    29368

Proposed algorithm (avg RE 0.07):
  Best:    426      7542      675      538      1211      21282      629      14379      6528     29368
  Avg.:    426.25   7542      675.25   538.30   1211.90   21283.65   630.35   14380.10   6536.5   29398.40
  SD.:     0.44     0.00      1.11     0.80     1.25      5.50       2.01     4.92       14.36    39.08
  RE (%):  0.06     0.00      0.04     0.06     0.07      0.01       0.21     0.01       0.13     0.10

PACO-3Opt (2016) [29] (avg RE 0.4):
  Avg.:    426.35   7542      677.85   539.85   1217.10   21326.80   630.55   14393.00   6601.40  29644.50
  SD.:     0.49     0.00      0.99     1.09     4.01      33.72      2.63     19.76      15.01    53.43
  RE (%):  0.08     0.00      0.42     0.34     0.50      0.21       0.25     0.10       1.12     0.94

PSO-ACO-3Opt (2015) [32] (avg RE 0.49):
  Avg.:    426.45   7543.20   678.20   538.30   1227.40   21445.10   632.70   14379.15   6563.95  29646.05
  SD.:     0.61     2.37      1.47     0.47     1.98      78.24      2.12     0.48       27.58    114.71
  RE (%):  0.11     0.02      0.47     0.06     1.35      0.77       0.59     0.00       0.55     0.95

ACE (2015) [27] (avg RE 0.21):
  Avg.:    426.82   7543.04   676.42   538.31   1213.29   21298.60   633.62   14385.5    6550.00  –
  SD.:     0.58     13.37     2.69     1.14     4.12      42.46      3.90     26.22      13.61    –
  RE (%):  0.19     0.01      0.21     0.06     0.19      0.08       0.73     0.05       0.34     –

ASA-GS (2011) [24] (avg RE 0.51):
  Avg.:    428.872  7544.37   677.11   544.369  1219.49   21285.4    640.515  14383      6539.8   29438.4
  SD.:     –        –         –        –        –         –          –        –          –        –
  RE (%):  0.67     0.03      0.31     1.18     0.7       0.01       1.83     0.02       0.16     0.23

SA-ACO-PSO (2011) [28] (avg RE 0.52):
  Avg.:    427.27   7542      –        540.20   –         21370.30   635.23   14406.37   6563.70  29738.73
  SD.:     0.45     0.00      –        2.94     –         123.36     3.59     37.28      22.45    356.07
  RE (%):  0.30     0.00      –        0.41     –         0.41       0.99     0.19       0.55     1.27

WFA-2Opt (2013) [18] (avg RE 0.52):
  Avg.:    426.65   7542      –        541.22   –         21282.00   639.87   14379      6572.13  29654.03
  SD.:     0.66     0.00      –        0.66     –         0.00       2.88     0.00       13.84    151.42
  RE (%):  0.15     0.00      –        0.60     –         0.00       1.73     0.00       0.68     0.97

WFA-3Opt (2013) [18] (avg RE 0.66):
  Avg.:    426.60   7542      –        539.44   –         21282.80   633.50   14459.40   6700.10  29646.50
  SD.:     0.52     0.00      –        1.51     –         0.00       3.47     1.38       60.82    110.91
  RE (%):  0.14     0.00      –        0.27     –         0.00       0.72     0.56       2.64     0.95

ACO-Taguchi (2013) [30] (avg RE 2.45):
  Avg.:    435.40   7635.40   –        565.50   –         21567.10   655.00   14475.20   –        –
  SD.:     –        –         –        –        –         –          –        –          –        –
  RE (%):  2.21     1.24      –        5.11     –         1.34       4.13     0.67       –        –

IFOA (2017) [22] (avg RE 0.88):
  Avg.:    427.53   7542      677.26   –        1237.20   21357      642.05   14427.06   6618.20  –
  SD.:     1.26     0.00      2.33     –        15.17     43.77      4.77     44.67      31.68    –
  RE (%):  0.36     0.00      0.34     –        2.16      0.35       2.08     0.33       1.38     –

DSOS (2017) [21] (avg RE 1.18):
  Avg.:    427.9    7542.60   679.20   547.40   1228.37   21409.50   650.60   –          –        –
  SD.:     1.20     0.00      2.80     3.90     14.32     149.15     4.57     –          –        –
  RE (%):  0.45     0.01      0.62     1.75     1.43      0.60       3.43     –          –        –

C-PSO-ACO-KOpt (2017) [33] (avg RE 0.14):
  Avg.:    426.29   7543.29   676      538.15   1213.9    21319.5    631.2    14379.29   –        29642
  SD.:     0.46     3.9       1.73     0.65     0.99      47.79      1.5      1.3        –        145
  RE (%):  0.07     0.01      0.14     0.02     0.07      0.17       0.34     0          –        0.46
results in the comparison are written in bold, and the character '–' means that there is no result in the original references.

RE = (Lavg − LBKS) / LBKS × 100. (14)

In terms of the average tour length and RE, the DPC-ACO-KOpt algorithm obtained better tours than the 11 other algorithms on these instances except kroA100 and lin105. Furthermore, the proposed algorithm generated suboptimum tours on kroA100 and lin105; the WFA-2Opt algorithm yielded better results than the other algorithms on these two instances. The relative error of the proposed algorithm was less than 0.21% for all instances.

For the small-scale TSP instances, the proposed algorithm performs better than the remaining 11 algorithms in 80% of instances ranging from 51 up to 200 nodes (i.e., 8 out of 10 instances). The average relative error was 0.07%, which was obviously less than the 0.14% of C-PSO-ACO-KOpt, 0.21% of ACE, and 0.4% of PACO-3Opt, respectively.

Fig. 5. Runtime comparison of the proposed algorithm with ACO-KOpt (without DPC) for 10 small-scale TSP instances (y-axis: runtime in seconds; x-axis: TSP instance). DPC-ACO-KOpt outperformed ACO-KOpt without DPC in all test instances. The advantage was more significant when the nodes are more than 105.

TABLE VI
PARAMETER SETTING OF DPC-ACO-KOPT FOR THE LARGE-SCALE TSP INSTANCES

Parameters    α       β       ρ      Q    Maximum Iteration
Values        0.98    1.10    0.1    1    1000

The standard deviations of the tours found by the proposed algorithm were minimal for eil51, berlin52, and kroA200. WFA-2Opt had the best standard deviations for kroA100, lin105, and ch150. C-PSO-ACO-KOpt was the most stable for rat99 and eil101, PSO-ACO-3Opt for eil76, and PACO-3Opt for st70. This means the proposed algorithm has more robust solutions than other algorithms.
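The RE values in Table V follow directly from (14); a minimal check (the BKS and Avg. rows are copied from Table V, and the variable names are ours) reproduces the per-instance relative errors and the 0.07% average for the proposed algorithm:

```python
# Relative error per Eq. (14): RE = (L_avg - L_BKS) / L_BKS * 100.
bks  = [426, 7542, 675, 538, 1211, 21282, 629, 14379, 6528, 29368]
lavg = [426.25, 7542, 675.25, 538.30, 1211.90,
        21283.65, 630.35, 14380.10, 6536.5, 29398.40]
res = [(a - b) / b * 100 for a, b in zip(lavg, bks)]
re_pct = [round(r, 2) for r in res]        # per-instance RE (%)
avg_re = round(sum(res) / len(res), 2)     # average RE (%)
print(re_pct)   # matches the RE (%) row of Table V
print(avg_re)   # 0.07
```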
The proposed algorithm resolves the smaller-scale TSPs by using ACO after clustering all nodes and decomposing the original TSP problem. The problem size for the ACO is much smaller in the proposed algorithm, and the runtime is shorter in each cluster. Although it spends some runtime on merging the local tours, the total runtime is still shorter. The runtime comparison for 10 small-scale TSP instances is summarized in Fig. 5.

C. Evaluation of the DPC-ACO-KOpt algorithm on the large-scale TSP problems

There are usually no more than 35 groups after clustering by the DPC algorithm for large-scale TSP. The communication time between groups is not long. The problem sizes for ACO are clearly different between with clustering and without clustering. The superiority of short runtime is significant for these large-scale TSP instances. Eleven cases from TSPLIB with 350 to 1200 nodes were chosen for testing. The test environment was the same as that for the small-scale TSP instances. The parameters α and β are the mean values of the 10 small-scale instances. The specific values of all parameters are listed in Table VI.

To test the performance of the DPC-ACO-KOpt algorithm on the large-scale TSP problems, eight other algorithms briefly explained in Section 1 were selected to compare with the DPC-ACO-KOpt algorithm: PACO-3Opt (2016) [29], PSO-ACO-3Opt (2015) [32], HMMA (2015) [25], PMSOM (2015) [19], HCACO (2014) [31], HGA (2014) [20], ASA-GS (2011) [24], and DSOS (2017) [21]. The comparison results with the eight other algorithms on the 11 large-scale instances are shown in Table VII. The DPC-ACO-KOpt algorithm yielded better solutions than the eight other algorithms, except on the rat-style instances (rat575 and rat783), in terms of the average tour length and the best tour length. The relative error of the proposed algorithm was less than 0.99% except for the rat-style instances. It was distinct that ASA-GS was more suitable for rat-style TSP instances than the proposed algorithm. The densities of all nodes in the rat-style instances are almost equal, so the connection between groups could not lead to a better tour when merging the local tours, and k-Opt does not remedy the deficiency.

For the large-scale TSP instances, the proposed algorithm outperforms the other algorithms in 81.82% of instances ranging from 400 up to 1173 nodes (i.e., 9 out of 11 instances). The average relative error was 1.45%, which was less than the 1.65 of ASA-GS, 2.4 of PACO-3Opt, and 2.65 of PSO-ACO-3Opt, respectively. Excluding the rat-style instances, the average relative error of the proposed algorithm is 0.70, which is much less than 1.45.

D. Evaluation of the DPC-ACO-KOpt algorithm on the very large-scale TSP problems

There are usually no more than 100 groups after clustering by using the DPC algorithm for very large-scale TSP. The communication time between groups may be long and cannot be neglected. The size of every group after clustering is much smaller than the scale without clustering. Overall, the runtime can be reduced. The test environment and parameters are both the same as those in the large-scale instances.

To test the performance of the DPC-ACO-KOpt algorithm on the very large-scale TSP problems, five other algorithms briefly explained in Section 1 were selected to compare with the DPC-ACO-KOpt algorithm: PMSOM (2015) [19], HMMA (2015) [25], HCACO (2014) [31], ASA-GS (2011) [24], and DSOS (2017) [21]. The comparison results with the five other algorithms on the nine very large-scale instances are shown in Table VIII. We observed that the DPC-ACO-KOpt algorithm constructed
TABLE VII
COMPARISON OF THE PROPOSED ALGORITHM WITH OTHER ALGORITHMS ON THE LARGE-SCALE TSP INSTANCES

Instance:   rd400   fl417   pr439    pcb442   d493    rat575   p654    d657    u724    rat783   pcb1173
BKS:        15281   11861   107217   50778    35002   6773     34643   48912   41910   8806     56892

Proposed algorithm (avg RE 1.45):
  Avg.:    15387.25  11890.15  107752.15  51158.95  35347.5  7135.0  34979.55  49317.65  42282.7  9192.1  57113.7
  Best:    15333     11870     107412     51001     35237    7071    34693     49103     42118    9118    56901
  RE (%):  0.7       0.25      0.5        0.75      0.99     5.34    0.97      0.83      0.89     4.38    0.39

PACO-3Opt (2016) [29] (avg RE 2.4):
  Avg.:    15613.9   11987.4   108702     52202.4   35841    7012.4  35075     50277.5   43122.5  9127.3  –
  Best:    15578     11972     108482     51962     35735    7003    35045     50206     42764    9111    –
  RE (%):  2.18      1.07      1.39       2.81      2.4      3.53    1.25      2.79      2.89     3.65    –

PSO-ACO-3Opt (2015) [32] (avg RE 2.65):
  Avg.:    15691.3   11980.4   108965.4   52368.1   35973.8  7018.6  35098.2   50475.5   43300.3  9138.1  –
  Best:    15594     11947     108530     52131     35789    6987    35052     50291     43172    9128    –
  RE (%):  2.69      1.01      1.63       3.13      2.78     3.63    1.31      3.2       3.32     3.77    –

HMMA (2015) [25] (avg RE 11.29):
  Avg.:    16723.68  12897.84  116855.91  55670.17  38410.25 7745.63 37931.4   55554.31  47132.71 10256.23 –
  Best:    16534     12543     114095     54401     37187    7575    37044     55163     46662    10149    –
  RE (%):  9.44      8.74      8.99       9.63      9.74     14.36   9.49      13.58     12.46    16.47    –

PMSOM (2015) [19] (avg RE 5.75):
  Avg.:    –         –         –          53362.6   –        –       36399.4   –         43980.35 –        61386.47
  Best:    –         –         –          52631     –        –       35953     –         43704    –        60938
  RE (%):  –         –         –          5.09      –        –       5.07      –         4.94     –        7.9

HCACO (2014) [31] (avg RE 8.58):
  Avg.:    –         –         –          55326     –        –       –         53234     –        9505     –
  Best:    –         –         –          54838     –        –       –         52665     –        9453     –
  RE (%):  –         –         –          8.96      –        –       –         8.84      –        7.94     –

HGA (2014) [20] (avg RE 2.93):
  Avg.:    15852.74  –         109249.66  52376.26  –        –       –         –         –        –        –
  Best:    –         –         –          –         –        –       –         –         –        –        –
  RE (%):  3.74      –         1.9        3.15      –        –       –         –         –        –        –

ASA-GS (2011) [24] (avg RE 1.65):
  Avg.:    15429.8   12043.8   110226     51269.2   –        6904.82 –         –         42470.4  8982.19  57820.5
  Best:    15351     11940     110020     51064     –        6872    –         –         42275    8954     57761
  RE (%):  0.97      1.54      2.81       0.97      –        1.95    –         –         1.34     2        1.63

DSOS (2017) [21] (avg RE 4.22):
  Avg.:    –         –         –          –         –        7117.32 –         –         –        9102.67  –
  Best:    –         –         –          –         –        7073    –         –         –        9045     –
  RE (%):  –         –         –          –         –        5.08    –         –         –        3.37     –
better solutions than the other remaining algorithms on all very large-scale TSP instances. The relative error of the proposed algorithm was between 0.22% and 1.57%.

For the very large-scale TSP instances, the proposed algorithm surpasses the five other algorithms in 100% of these nine instances in terms of solution accuracy. The average relative error was 0.54%, which was significantly less than the 2.17 of ASA-GS, 6.94 of PMSOM, and 10.96 of HCACO, respectively.

Compared with the suboptimum algorithm, the solution accuracy of the proposed algorithm improves 2.0 times for the small-scale problems (i.e., 0.07 versus 0.14), 1.14 times for the large-scale problems (i.e., 1.45 versus 1.65), and 4.02 times for the very large-scale problems (i.e., 0.54 versus 2.17) in terms of the average relative error. The proposed algorithm is more applicable for the very large-scale TSP problem than the small-
TABLE VIII
COMPARISON OF THE PROPOSED ALGORITHM WITH OTHER ALGORITHMS ON THE VERY LARGE-SCALE TSP INSTANCES

Instance:   d1291   nrw1379   fl1400   d1655   vm1748   rl1889   u2152   pr2392   pcb3038
BKS:        50801   56638     20127    62128   336556   316536   64253   378032   137694

Proposed algorithm (avg RE 0.54):
  Avg.:    50911.5   56869.05  20430.35  63066.3  337555.55  321097.2  64575.9  381052.05  139855.5
  Best:    50828     56772     20236     62733    337040     319706    64321    379942     139445
  RE (%):  0.22      0.41      1.51      1.51     0.3        1.44      0.5      0.8        1.57

PMSOM (2015) [19] (avg RE 6.94):
  Avg.:    –         –         –         –        –          –         –        402528.47  148351.52
  Best:    –         –         –         –        –          –         –        401659     148186
  RE (%):  –         –         –         –        –          –         –        6.25       7.62

HMMA (2015) [25] (avg RE 13.86):
  Avg.:    58161.95  –         23752.24  –        –          –         –        –          –
  Best:    57380.17  –         23099     –        –          –         –        –          –
  RE (%):  12.95     –         14.77     –        –          –         –        –          –

HCACO (2014) [31] (avg RE 10.96):
  Avg.:    –         –         –         –        –          –         –        421630     –
  Best:    –         –         –         –        –          –         –        419476     –
  RE (%):  –         –         –         –        –          –         –        10.96      –

ASA-GS (2011) [24] (avg RE 2.17):
  Avg.:    52252.3   –         20782.2   64155.9  343911     –         –        –          141242
  Best:    51751     –         20647     63636    342437     –         –        –          140742
  RE (%):  1.87      –         2.58      2.43     1.75       –         –        –          2.21

DSOS (2017) [21] (avg RE 12.54):
  Avg.:    –         –         –         –        –          –         –        425431.78  –
  Best:    –         –         –         –        –          –         –        419246     –
  RE (%):  –         –         –         –        –          –         –        12.54      –
scale and large-scale TSP. This verifies the fact that the algorithm has a more remarkable advantage for the very large-scale TSP problem.

Overall, the proposed algorithm overcomes the other algorithms in 86.67% of the 30 instances ranging from 51 to 3038 nodes (i.e., 26 out of 30 instances). The average relative error is 0.75 for all 30 instances (0.07 for the small-scale TSP, 1.45 for the large-scale TSP, and 0.54 for the very large-scale TSP). Therefore, this result supports the fact that the proposed algorithm can realize TSP solutions with high accuracy and compete favorably with the state-of-the-art TSP algorithms, especially for the very large-scale TSP problem.

V. RESULTS AND ANALYSIS

The better solution accuracy of the proposed algorithm has been demonstrated in Tables V, VII, and VIII. The results arise from better strategy, parameter tuning, and local optimization. The solution accuracy after grouping and merging does not decline but surpasses other algorithms. This phenomenon is mainly due to two reasons. One is that the ACO algorithm has higher accuracy for small-scale TSP problems versus large-scale TSP problems. The other is that the DPC algorithm, clustering nodes based on the density, is effective for decomposing the TSP problem. It is appropriate that the proposed algorithm solves the small-scale TSPs by using the ACO algorithm after the DPC algorithm decomposes the large-scale TSP into small-scale TSPs. The performance is further revealed by descriptive statistical analysis and runtime analysis.

A. Descriptive statistical analysis on solution accuracy

The two best algorithms for the small-scale instances are the proposed algorithm and C-PSO-ACO-KOpt, while for the large-scale and very large-scale instances they are the proposed algorithm and ASA-GS. We compare them against the BKS using the MATLAB statistical package to further validate the performance.

In summary, descriptive statistics reveal these algorithms in terms of mean, standard deviation, minimum, maximum, and range. The Levene's test validates whether all the algorithms have the same variance. The one-way Analysis of Variance (ANOVA) test reveals performance differences among all the solutions and tests whenever the parametric assumption is met.

Tables IX and X demonstrate the descriptive statistics about the performance of the proposed algorithm, C-PSO-ACO-KOpt, and ASA-GS with the BKS. The proposed algorithm is on average smaller than C-PSO-ACO-KOpt or ASA-GS in terms of mean, standard deviation, maximum, and range. This supports the fact that the proposed algorithm is better than C-PSO-ACO-KOpt or ASA-GS. The data of the latter two algorithms have a wider range and spread around their mean values, while the proposed algorithm has the smaller range and data dispersion around its mean value.

TABLE IX
DESCRIPTIVE STATISTICS OF PROPOSED ALGORITHM AND C-PSO-ACO-KOPT COMPARED WITH BKS ON THE SMALL-SCALE INSTANCES

algorithm            mean      SD         Min      Max       range
BKS                  8257.8    10233.56   426      29368     28942
Proposed algorithm   8262.27   10240.43   426.25   29398.4   28972.15
C-PSO-ACO-KOpt       8485.24   10905.86   426.29   29640     29213.71

TABLE X
DESCRIPTIVE STATISTICS OF PROPOSED ALGORITHM AND ASA-GS COMPARED WITH BKS ON THE LARGE-SCALE AND VERY LARGE-SCALE INSTANCES

algorithm            mean       SD         Min       Max         range
BKS                  69755.69   89155.21   6773      336556      329783
Proposed algorithm   70287.02   89469.60   7135      337555.55   330420.55
ASA-GS               71345.39   91180.61   6904.82   343911      337006.18

The analysis of equal variance among the three algorithms is based on the Levene's test because it is robust even with departure from data normality. In Table XI, we conclude that the test results on the small-scale, large-scale, and very large-scale instances are found to be statistically insignificant. This suggests the null hypothesis of equal variance cannot be rejected. In other words, the proposed algorithm and the suboptimal algorithms have the same equal variance against the BKS.

TABLE XI
EQUAL VARIANCE TEST OF SUBOPTIMAL ALGORITHMS AND PROPOSED ALGORITHM AGAINST BKS BASED ON LEVENE'S TEST STATISTIC

Statistic    Levene's   p-value
Wsmall (a)   0.02282    0.97746
Wlarge (b)   0.00288    0.99712
Wvery (c)    0.00121    0.99879

(a) Wsmall is the Levene's statistic and p-value on the small-scale instances.
(b) Wlarge is the Levene's statistic and p-value on the large-scale instances.
(c) Wvery is the Levene's statistic and p-value on the very large-scale instances.

One-way ANOVA is conducted to assess the difference between the proposed algorithm and the BKS. In Table XII, the test result indicates that the majority of variation is within group and not between groups. The test that explains the difference between the two solutions reveals that the residual is minimal in the variance. Therefore, the ANOVA test is found to be statistically insignificant in the light of the high p-value and low F-statistic. In other words, there is no statistically significant difference between the BKS and the proposed algorithm.

TABLE XII
ONE WAY ANOVA TEST OF THE DIFFERENCE BETWEEN PROPOSED ALGORITHM AND BKS

Source of variation   SS (a)       df (b)   MS (c)       F            p-value
Between groups        4.3604e+06   1        4.3604e+06   4.2914e-04   0.9835
Within group          5.8932e+11   58       1.0161e+10
Total                 5.8933e+11   59

(a) SS is the sum of squares.
(b) df is the degree of freedom.
(c) MS is the mean sum of squares.
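The ANOVA decomposition in Table XII can be reproduced from the BKS values and the proposed algorithm's average tour lengths on all 30 instances (the BKS and Avg. rows of Tables V, VII, and VIII). The following is a pure-Python sketch of the standard one-way ANOVA sums of squares, not the MATLAB routine the paper used; the between-groups sum of squares and the degrees of freedom match Table XII, and the F-statistic comes out on the order of 1e-4:

```python
# One-way ANOVA between the BKS values and the proposed algorithm's
# average tour lengths on the 30 TSPLIB instances (Tables V, VII, VIII).
bks = [426, 7542, 675, 538, 1211, 21282, 629, 14379, 6528, 29368,
       15281, 11861, 107217, 50778, 35002, 6773, 34643, 48912, 41910,
       8806, 56892, 50801, 56638, 20127, 62128, 336556, 316536, 64253,
       378032, 137694]
prop = [426.25, 7542, 675.25, 538.30, 1211.90, 21283.65, 630.35,
        14380.10, 6536.5, 29398.40, 15387.25, 11890.15, 107752.15,
        51158.95, 35347.5, 7135.0, 34979.55, 49317.65, 42282.7, 9192.1,
        57113.7, 50911.5, 56869.05, 20430.35, 63066.3, 337555.55,
        321097.2, 64575.9, 381052.05, 139855.5]

groups = [bks, prop]
n_total = sum(len(g) for g in groups)
grand = sum(sum(g) for g in groups) / n_total

# Between-groups and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
df_between = len(groups) - 1       # 1, as in Table XII
df_within = n_total - len(groups)  # 58, as in Table XII
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"SS between = {ss_between:.4e}, df = ({df_between}, {df_within}), "
      f"F = {f_stat:.4e}")
```

A tiny F relative to such a large within-group mean square is exactly what Table XII's p-value of 0.9835 reflects.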
B. Runtime analysis

The runtime is an important performance measure for an optimization algorithm. Let Nc be the number of ACO iterations; Nc is a constant in the proposed algorithm. Let n denote the problem size. Let m be the number of ants; m is usually constant or proportional to n. The computational complexity of the ACO algorithm without the hierarchical scheme is calculated in (15).

T(n) = O(Nc × n² × m) = O(n³). (15)

Let Cm be the maximum number of nodes in each cluster. Generally, Cm is constant and given as 35 in this study. The computational complexity of the proposed algorithm is calculated in (16).

THHA(n) = O(n + Nc × Cm³ × ⌈n/Cm⌉ + n) = O(n). (16)

In the hierarchical hybrid algorithm, the runtimes of grouping nodes by using DPC and of merging local solutions are both O(n). All nodes in the TSP problem are decomposed into ⌈n/Cm⌉ clusters, and this would take linear time. There are Cm nodes at most in each cluster, so the computational complexity of each cluster is O(Nc × Cm³). Therefore, the total time complexity of the proposed algorithm is O(n).

It is obvious that the computational complexity of the proposed algorithm is less than that of the traditional ACO algorithm. The advantage of short runtime becomes more remarkable as the problem size increases. The algorithms with short runtime or high accuracy in Section 1 were selected: ASA-GS (2011) [24], ACO-Taguchi (2013) [30], HMMA (2015) [25], PACO-3Opt (2016) [29], and C-PSO-ACO-KOpt (2017) [33]. We compared the proposed algorithm with these five methods. Table XIII demonstrates the runtime comparison as the number of nodes increases. Obviously, PACO-3Opt was more suitable for eil51 due to parallelization and the absence of clustering. As the problem size increases, the proposed method has a more significant advantage in terms of runtime and overcomes the other algorithms on all instances except eil51. The average runtime is 5.26 seconds for the proposed algorithm, 1.39 times that for ASA-GS (i.e., 5.26 versus 7.31), 3.07 times for ACO-Taguchi (i.e., 5.26 versus 16.14), and 38.03 times for C-PSO-ACO-KOpt (i.e., 5.26 versus 200.06).

TABLE XIII
RUNTIME COMPARISON OF PROPOSED ALGORITHM WITH OTHER ALGORITHMS

Instance   proposed    ASA-GS   ACO-Taguchi   HMMA    PACO-3Opt   C-PSO-ACO-
           algorithm   [24]     [30]          [25]    [29]        KOpt [33]
eil51      3.03        3.91     3.32          7.22    2.39        19.91
berlin52   1.48        3.83     3.15          8.19    2.1         20.28
st70       3.89        5.15     5.38          13.07   6.97        100
eil76      4.69        5.5      6.12          15.25   8.18        150
rat99      5.6         7.34     7.9           25.36   19.79       200
kroA100    6.34        7.14     11.52         25.81   21.1        305.01
eil101     6.55        7.42     20.75         26      20.79       200.9
lin105     5.65        7.68     21.45         27.92   14.57       320.1
ch150      5.95        10.91    33.28         55.75   79.35       334.32
kroA200    9.46        14.26    48.48         98.48   213.12      350.12
Average    5.26        7.31     16.14         30.31   38.84       200.06

VI. CONCLUSION

TSP is an NP-hard problem of traversing all nodes once and only once in the shortest tour. The traditional exact algorithms and heuristic intelligent algorithms have some weaknesses in solving large-scale TSP, such as the slow convergence speed of the intelligent algorithms and the unacceptably long runtime of the exact algorithms. This becomes more noticeable when there are more than 200 nodes. A novel hierarchical hybrid algorithm is proposed in this article, which takes full advantage of the superiority of the ACO algorithm in solving small-scale TSP problems and of the DPC algorithm in clustering large-scale nodes. The nodes are clustered by using the DPC algorithm, and the large-scale TSP problem is decomposed into a few subproblems of TSP with small-scale nodes. Then the TSP tour in each group and the tour among all cluster center nodes are resolved by using the ACO algorithm. The initial global tour is constructed by merging the local tours at the nearest nodes between two adjacent clusters. Finally, k-Opt optimizes the initial tour to generate the final global solution. The proposed algorithm is validated on 30 TSP instances in three groups. Experimental results show that the proposed algorithm has a significant effect on reducing runtime and has the superiority of higher accuracy and robustness. The advantage of the hierarchical hybrid algorithm would be more significant as the problem size increases. The next research content is exploring the relationship between the iteration times of ACO in each group and the local or global optimal tour. Another research direction is studying how to merge the local suboptimal tours more efficiently and effectively to construct the global optimal solution.

ACKNOWLEDGMENT

We are grateful to the anonymous referees for valuable suggestions and comments which helped us improve the paper. This work was supported by the Fundamental Research Funds for the Central Universities under Grant No. 2016MS121.

REFERENCES

[1] H. P. Lee, "Solving traveling salesman problem using generalized chromosome genetic algorithm," Prog. Nat. Sci., vol. 18, pp. 887-892, Jul. 2008.
[2] J. Yan, T. Weise, J. Lassig, R. Chiong, and R. Athauda, "Comparing a hybrid branch and bound algorithm with evolutionary computation methods, local search and their hybrids on the TSP," in Proc. CIPLS, Orlando, FL, USA, 2014, pp. 148-155.
[3] H. Hernández-Pérez and J. Salazar-González, "A branch-and-cut algorithm for a traveling salesman problem with pickup and delivery," Discrete Appl. Math., vol. 145, no. 1, pp. 126-139, Dec. 2004.
[4] C. Barnhart, E. L. Johnson, G. L. Nemhauser, and P. H. Vance, "Branch and price: column generation for solving huge integer programs," Oper. Res., vol. 46, no. 3, pp. 316-329, Jun. 1998.
[5] O. Ergun and J. B. Orlin, "A dynamic programming methodology in very large scale neighborhood search applied to the traveling salesman problem," Discrete Optim., vol. 3, no. 1, pp. 78-85, Mar. 2006.
[6] H. Zhou and M. Song, "An improvement of partheno-genetic algorithm to solve multiple travelling salesmen problem," in Proc. ICIS, Okayama, Japan, 2016, pp. 331-336.
[7] R. M. F. Alves and C. R. Lopes, “Using genetic algorithms to minimize [29] Ş. Gülcü, M. Mahi, Ö. K. Baykan, and H. Kodaz, “A parallel cooperative
the distance and balance the routes for the multiple traveling salesman hybrid method based on ant colony optimization and 3-Opt algorithm for
problem,” in Proc. CEC, Sendai, Japan, 2015, pp. 3171-3178. solving traveling salesman problem,” Soft. Comput., pp. 1-17, Nov. 2016.
[8] A. E. Ezugwu, A. O. Adewumi, and M. E. Frincu, “Simulated annealing [30] M. Peker, B. Sen, and P. Y. Kumru, “An efficient solving of the traveling
based symbiotic organisms search optimization algorithm for traveling salesman problem: the ant colony system having parameters optimized by
salesman problem,” Expert. Syst. Appl., vol. 77, pp. 189-210, Jul. 2017. the taguchi method,” Turk. J. Electr. Eng. Comput. Sci., vol. 21, no.1, pp.
[9] Y. Lin, Z. Bian and X. Liu, “Developing a dynamic neighborhood 2015-2036, Jan. 2013.
structure for an adaptive hybrid simulated annealing-tabu search [31] J. Jiang, J. Gao, G. Li, C. Wu, and Z. Pei, “Hierarchical solving method
algorithm to solve the symmetrical traveling salesman problem,” Appl. Soft. Comput., vol. 49, pp. 937-952, Dec. 2016.
[10] M. S. Kıran, H. İşcan, and M. Gunduz, “The analysis of discrete artificial bee colony algorithm with neighborhood operator on traveling salesman problem,” Neural Comput. & Appli., vol. 23, no. 1, pp. 9-21, Jul. 2013.
[11] C. Cheng and C. Mao, “A modified ant colony system for solving the traveling salesman problem with time windows,” Math. Comput. Model., vol. 46, no. 9-10, pp. 1225-1235, Nov. 2007.
[12] Y. Marinakis and M. Marinaki, “A hybrid multi-swarm particle swarm optimization algorithm for the probabilistic traveling salesman problem,” Comput. Oper. Res., vol. 37, no. 3, pp. 432-442, Mar. 2010.
[13] H. Ghaziri and I. H. Osman, “A neural network algorithm for the traveling salesman problem with backhauls,” Comput. Ind. Eng., vol. 44, no. 2, pp. 267-281, Feb. 2003.
[14] K. S. Leung, H. D. Jin, and Z. B. Xu, “An expanding self-organizing neural network for the traveling salesman problem,” Neurocomputing, vol. 62, no. 1-2, pp. 267-292, Nov. 2004.
[15] Y. He, Y. H. Qiu, and G. Y. Liu, “A parallel adaptive tabu search approach for traveling salesman problems,” in Proc. IEEE NLP-KE, Wuhan, China, 2005, pp. 796-801.
[16] X. Luo, Y. Yang, and L. Xia, “Solving TSP with shuffled frog-leaping algorithm,” in Proc. ISDA, Kaohsiung, Taiwan, 2008, pp. 228-232.
[17] Z. Xiang, Z. Chen, X. Gao, X. Wang, F. Di, L. Li, G. Liu, and Y. Zhang, “Solving large-scale TSP using a fast wedging insertion partitioning approach,” Math. Probl. Eng., vol. 2015, pp. 1-8, Jun. 2015.
[18] Z. A. Othman, A. I. Srour, A. R. Hamdan, and Y. L. Pan, “Performance water flow-like algorithm for TSP by improving its local search,” Int. J. Adv. Comput. Technol., vol. 5, no. 14, pp. 126-137, Oct. 2013.
[19] B. Avsar and D. E. Aliabadi, “Parallelized neural network system for solving Euclidean traveling salesman problem,” Appl. Soft. Comput., vol. 34, pp. 862-873, Sep. 2015.
[20] Y. Wang, “The hybrid genetic algorithm with two local optimization strategies for traveling salesman problem,” Comput. Ind. Eng., vol. 70, pp. 124-133, Apr. 2014.
[21] A. E. Ezugwu and A. O. Adewumi, “Discrete symbiotic organisms search algorithm for travelling salesman problem,” Expert. Syst. Appl., vol. 87, no. 1, pp. 70-78, Nov. 2017.
[22] L. Huang, G. Wang, T. Bai, and Z. Wang, “An improved fruit fly optimization algorithm for solving traveling salesman problem,” Front. Inform. Technol. Electron. Eng., vol. 18, no. 10, pp. 1525-1533, Oct. 2017.
[23] Y. Lu, U. Benlic, and Q. Wu, “A hybrid dynamic programming and memetic algorithm to the Traveling Salesman Problem with Hotel Selection,” Comput. Oper. Res., vol. 90, no. 1, pp. 193-207, Feb. 2018.
[24] X. Geng, Z. Chen, W. Yang, D. Shi, and K. Zhao, “Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search,” Appl. Soft. Comput., vol. 11, no. 4, pp. 3680-3689, Jun. 2011.
[25] Y. Wang, “Hybrid max–min ant system with four vertices and three lines inequality for traveling salesman problem,” Soft. Comput., vol. 19, no. 3, pp. 585-596, Mar. 2015.
[26] M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE T. Evolut. Comput., vol. 1, no. 1, pp. 53-66, Apr. 1997.
[27] J. B. Escario, J. F. Jimenez, and J. M. Giron-Sierra, “Ant colony extended: experiments on the travelling salesman problem,” Expert. Syst. Appl., vol. 42, no. 1, pp. 390-410, Jan. 2015.
[28] S. M. Chen and C. Y. Chien, “Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques,” Expert. Syst. Appl., vol. 38, no. 12, pp. 14439-14450, Nov. 2011.
for large scale TSP problems,” Lect. Notes Comput. Sci., vol. 8866, pp. 252-261, Dec. 2014.
[32] M. Mahi, Ö. K. Baykan, and H. Kodaz, “A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem,” Appl. Soft. Comput., vol. 30, pp. 484-490, May 2015.
[33] I. Khan, M. K. Maiti, and M. Maiti, “Coordinating Particle Swarm Optimization, Ant Colony Optimization and K-Opt Algorithm for Traveling Salesman Problem,” in Proc. ICMC, Haldia, India, 2017, pp. 103-119.
[34] J. A. Hartigan and M. A. Wong, “A k-means clustering algorithm,” Applied Statistics, vol. 28, no. 1, pp. 100-108, Jan. 1979.
[35] B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, no. 5814, pp. 972-976, Feb. 2007.
[36] A. Rodriguez and A. Laio, “Clustering by fast search and find of density peaks,” Science, vol. 344, no. 6191, pp. 1492-1496, Jun. 2014.
[37] A. Colorni, M. Dorigo, and V. Maniezzo, “Distributed optimization by ant colonies,” in Proc. ECAL, Paris, France, 1991, pp. 134-142.
[38] M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE T. Syst. Man Cy. B, vol. 26, no. 1, pp. 29-41, Feb. 1996.
[39] M. Dorigo and G. Di Caro, “Ant colony optimization: a new meta-heuristic,” in Proc. CEC, Washington, DC, USA, 1999, pp. 1470-1477.
[40] A. Vlachos and A. Moue, “Ant Colony Optimization (ACO) meta-heuristic solving the Vehicle Scheduling Problem (VSP),” WSEAS Trans. Inf. Sci. Applic., vol. 3, no. 10, pp. 2041-2046, Oct. 2006.
[41] P. Moradi and M. Rostami, “Integration of graph clustering with ant colony optimization for feature selection,” Knowl.-Based Syst., vol. 84, pp. 144-161, Aug. 2015.
[42] O. Dridi, S. Krichen, and A. Guitouni, “A multiobjective hybrid ant colony optimization approach applied to the assignment and scheduling problem,” Int. T. Oper. Res., vol. 21, no. 6, pp. 935-953, Nov. 2014.
[43] J. Liu, J. Yang, H. Liu, X. Tian, and M. Gao, “An improved ant colony algorithm for robot path planning,” Soft. Comput., pp. 1-11, May 2016.
[44] L. Muyldermans, P. Beullens, and D. Cattrysse, “Exploring variants of 2-opt and 3-opt for the general routing problem,” Oper. Res., vol. 53, no. 6, pp. 982-995, Nov. 2005.
[45] A. Levin and U. Yovel, “Nonoblivious 2-opt heuristics for the traveling salesman problem,” Networks, vol. 62, no. 3, pp. 201-219, Oct. 2013.
[46] G. Reinelt, Heidelberg, Germany. [Online]. Available: http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/. Accessed on: May 23, 2017.
[47] H. Duan, G. Ma, and S. Liu, “Experimental study of the adjustable parameters in basic ant colony optimization algorithm,” in Proc. CEC, Singapore, 2007, pp. 149-156.
[48] Z. Hao, R. Cai, and H. Huang, “An adaptive parameter control strategy for ACO,” in Proc. ICMLC, Dalian, China, 2006, pp. 203-206.
[49] T. Stützle, M. López-Ibáñez, P. Pellegrini, M. Maur, M. Montes de Oca, M. Birattari, and M. Dorigo, “Parameter adaptation in ant colony optimization,” Auton. Search, vol. 6, no. 1, pp. 191-215, Oct. 2011.

Erchong Liao (M’17) received the B.S. degree in computer science and technology from North China Electric Power University, Baoding, China, in 2005, and the M.S. degree in computer science and technology from Harbin Institute of Technology, Harbin, China, in 2007. He is currently pursuing the Ph.D. degree in intelligent robotics at North China Electric Power University, Beijing, China.

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Since 2007, he has been a Lecturer with the School of Control and Computer Engineering, North China Electric Power University, Baoding, China. His main research interests include robot path planning and robotic vision.
Changan Liu is currently a Professor and the Director of the Intelligence Robot Institute at North China Electric Power University. He received the B.S. degree from Northeast Agricultural University in 1995, and the M.S. and Ph.D. degrees from Harbin Institute of Technology in 1997 and 2001, respectively. His research interests focus on intelligent robot technology and artificial intelligence theory.
