

Received: August 20, 2021. Revised: September 16, 2021.

Adaptive Grey Wolf Optimization Algorithm with Neighborhood Search Operations: An Application for Traveling Salesman Problem

Ayad Mohammed Jabbar 1*   Ku Ruhana Ku-Mahamud 2

1 Department of Computer Science, Shatt Al-Arab University College, Basra, Iraq
2 School of Computing, Universiti Utara Malaysia, Malaysia
* Corresponding author's Email: [email protected]

Abstract: The grey wolf optimization (GWO) algorithm is one of the best-known population-based algorithms. GWO shares information within the wolf population through a leadership hierarchy that mimics the hunting behavior of real wolves in nature. However, when applied to the traveling salesman problem, the algorithm offers no effective information sharing, for two reasons. First, candidate solutions are improved independently, similar to local search, so the algorithm loses its strength as a population-based method. Second, the search process is limited to local regions and misses opportunities to explore the search space effectively. This study introduces an adaptive grey wolf optimization algorithm (A-GWO) to solve the information-sharing problem. The proposed A-GWO maintains sufficient diversity between the best three wolves and the rest of the population. It also improves its neighborhood search by exploring more local regions, enhancing information sharing among the wolves. An adaptive crossover operator with neighborhood search is proposed to pass information between the wolves and to generate several neighborhoods that uncover more solutions in the local region. Experiments are performed on 25 benchmark datasets, and results are compared against 12 state-of-the-art algorithms under three scenarios. The proposed algorithm produces approximately 53%, 58%, and 63% better tour distance in the first, second, and third scenarios, respectively, and achieves an approximately 87% better minimum tour distance than the standard GWO algorithm.
Keywords: Crossover operator, Exploration, Exploitation, Machine learning, Position update, Swarm algorithms.

1. Introduction

The search for optimal solutions in artificial intelligence aims to find the "best" solution among various solutions in the search space. This type of problem is known as combinatorial optimization and is considered NP-hard [1]. Examples of combinatorial optimization problems are the vehicle routing problem (VRP) [2], the traveling salesman problem (TSP), clustering [3], classification [4], and feature selection [5]. Because of the many complexities and limitations of most combinatorial optimization problems, stochastic methods with controlled randomness, called metaheuristics, have been introduced; the motivation is the exponential expansion of the search area for the best solutions and the need to avoid early convergence and local optima problems.

Metaheuristics are problem-independent algorithms for finding several near-optimal solutions. Their main characteristic is the combination of several heuristic methods that operate through higher-level metaphors [7, 8]. These metaphors are inspired by different behaviors of swarms of insects, including foraging, the dancing of bees, the collection of eggs by ants, and odor-based membership recognition in a colony. The swarm approach, an intelligence system, describes the collective behavior of social insects interacting with their environment and one another to solve a specific problem [9, 10]. The interaction occurs through external influences representing positive feedback, which serves as communication in the population. For example, pheromones in the ant colony optimization (ACO) algorithm make the insects converge and perform a
International Journal of Intelligent Engineering and Systems, Vol.14, No.6, 2021 DOI: 10.22266/ijies2021.1231.48

unique behavior following one another [10]. The key aspects of swarm intelligence are decentralization and self-organization, wherein the population represents the power concept. One of the popular population-based bio-inspired methods is the evolutionary genetic algorithm (GA), introduced by Holland (1973). Survival of the fittest is its fundamental concept: candidate solutions are allowed to procreate further solutions on the basis of the results obtained in previous iterations, and the crossover and mutation operators are used iteratively to produce new candidate solutions.

Information sharing is a fundamental element of swarm intelligence and is implemented in several population-based algorithms, such as ACO, the artificial bee colony (ABC), the grey wolf optimizer (GWO), and GA. The crucial elements of swarm intelligence algorithms include self-organization, stigmergy, and the positive feedback concept. Self-organization is the process of building an organized and cohesive system from a disordered one on the basis of basic issues that are solved simultaneously [11]. The flocking of birds and ant foraging are typical examples that demonstrate the self-organization of activities [12]. Stigmergy is another important concept, represented by a chemical substance in a swarm intelligence system through which information is exchanged among the members of the same population in a colony. GA has no stigmergy concept for exchanging information; instead, self-organization and positive feedback are utilized to optimize the objective function. Following the same swarm intelligence concepts, GWO has a population of wolves responsible for various tasks. The three main wolves, called alpha, beta, and delta, are responsible for heading the hunting activity. The remaining wolves in the pack, called omega wolves, update their positions based on the three main wolves.

Different combinatorial optimization problems have employed GWO, which shows promising results in optimizing the objective function of the problem and exploring the search space using the best wolves in the population. However, Sopto (2019) showed a shortcoming in the mechanism for updating the positions of the omega wolves, which was performed locally [13]. The omega wolves therefore update their positions based only on local changes, similar to what local search algorithms do. Each omega position is updated based on the neighborhood search of the omega itself, without looking toward the global solutions at the alpha, beta, and delta positions. The algorithm is thus limited to searching only local regions and ignores any chance to explore other parts of the search space. It also provides no exchange or sharing of information between the wolves that would allow the algorithm to improve different solutions during the run.

In this study, an adaptive algorithm is proposed with two new modifications for information sharing: an adaptive crossover operator and a neighborhood search, which together enhance the neighborhood search in the best local regions of the leadership hierarchy. The rest of the paper is organized as follows. Section 2 discusses related works. Section 3 introduces the proposed adaptive algorithm. Section 4 presents the evaluation of the proposed algorithm. Section 5 concludes the work and highlights future research.

2. Related works

Stochastic algorithms construct solutions according to solution quality, guided by an objective function and combined with some randomization to explore the search space and avoid getting stuck in local optima. Stochastic algorithms utilize different concepts, such as local search and parameter tuning, to optimize the objective function deeply, wherein the problem is formulated as an optimization problem [14-16]. The three main approaches popular in optimization are the exact, estimation, and approximation approaches. Exact algorithms can produce the optimal solution to an optimization problem within an instance-dependent runtime. However, exact algorithms require exponential time, especially on complex optimization problems, and are thus difficult to use on them. The estimation approach does not guarantee an optimal solution because the results are produced according to a predefined range of inputs [17]. Optimal or near-optimal solutions can be generated in a short time using the approximation approach. Although this approach does not always guarantee finding the optimal solution, it can produce reasonable solutions. It optimizes the problem on the basis of either a single-solution or a population approach. The single-solution approach, such as simulated annealing, tracks the improvement of one candidate solution, whereas the population approach iteratively modifies a set of candidate solutions based on the algorithm's feedback [18]. The population approach can be either a swarm or an evolutionary algorithm (EA).

EAs are stochastic iterative search methods that simulate the evolution of species in nature. A set of candidate solutions evolves iteratively. These

candidate solutions are known as the population of the algorithm, where each individual has its own fitness function for survival. The main operations in EAs are selection, recombination, and mutation, each responsible for increasing the accuracy of the solution. The search process in EAs is probabilistic and is guided by the objective function of the optimization problem, which represents the survival of the best individuals. The best-known algorithm in this category is the GA, one of the most successful EAs, applied in several application domains. It has been used as the main part of many modern algorithms because of the popularity and simplicity of its components. The algorithm, inspired by nature, follows the theory of Charles Darwin and includes the selection of the best individuals for optimization. The process consists of selecting the individuals most fit for reproduction to achieve the best offspring; the offspring of the next generation complete the cycle of Darwin's theory. The initial step in GA initializes each chromosome as a single solution. Reproduction uses a crossover operator, which mates a set of chromosomes to produce offspring better than the older population: parts of one chromosome are moved to other parts of a chromosome to create new offspring. A crossover can, however, be performed in many ways; many genes may be used, and the location of the genes on a chromosome plays a vital role in producing a better solution. A mutation operator is used to maintain the diversity of solutions in GA. The operator alters one or more genes in the chromosome, for example moving one gene (city) to another location and thereby modifying the route.

Several researchers have used an adaptive strategy in the GA algorithm. One adaptive GA utilized three crossover strategies [19]; a fit offspring was obtained by adaptively selecting the best strategy while the algorithm ran. Information reinforcement was conducted using pheromone information, updated for the best strategy according to the ACO pheromone-update model. However, the algorithm did not use the evaporation procedure of ACO, so it may converge quickly on one of these strategies because of a wrong decision made early in the run.

Sung and Jeong (2014) proposed an algorithm that adaptively changed the crossover and mutation parameters during the run to generate a new population [20]. Nevertheless, the algorithm showed better results only on small datasets, where the adaptive search could find the best solution. Similar research proposed an adaptive crossover algorithm using the 1Bit Adaptive Crossover, where the specified crossover factor was coded in the genotype [21]. Riff and Bonnaire (2002) introduced this concept using more operators [22]. It was extended in 2004 by encoding rate and reward operators, whose values change according to whether the fitness of the offspring was better or worse than that of its parents [23]. Cruz-Salinas and Perdomo (2017) extended this work with a population of operators that were exposed to evolution and mutation, maintaining the operator selection method [24] employed in [23]. However, the algorithm used operators that were not suitable for the ordered nature of TSP. Similar research proposed an adaptive two-opt mutation in the mutation process of the GA algorithm [25]. The strategy adaptively switched between the two-opt mutation and another operation, called exchange cities, in which one pair of cities is swapped at a time. Nevertheless, the adaptive strategy did not memorize either the two-opt mutations or the exchanges over time; it only used the best one according to the fitness of the solution.

Swarm intelligence is a biology-inspired concept that expresses the regular behavior practiced by living organisms to communicate or to solve different problems. It is motivated by groups of insects, such as ants, bees, and bacteria, behaving intelligently as groups instead of as single insects. Algorithms such as ACO, ABC, and GWO are examples of the swarm intelligence category because they use the basic swarm intelligence concepts of self-organization, stigmergy, and positive feedback. Developed by Dorigo [26], ACO is a population-based swarm algorithm inspired by the foraging behavior of ants. The algorithm has been successfully applied to different NP-hard combinatorial problems, including classification, clustering [27, 28], and feature selection [29, 30]. Dorigo modeled the foraging behavior to solve TSP, with the stigmergy concept used as an indirect communication guide for the colony to find the shortest route [31]. The algorithm has many parameters that control its performance in different application domains, including clustering, classification, and feature selection. The density of the pheromone laid by an ant controls the probability of choosing the best arc according to the pheromone value, which is the core of the algorithm and has been adaptively optimized by different researchers. In ACO, stochastic methods are performed by integrating randomization through the

ant movement from city to city. This integration guarantees that the algorithm can avoid the local optima problem and explore the search space efficiently. The most important part of this algorithm is its model, which simulates foraging behavior, for example through the evaporation process, set by a parameter (the evaporation rate) that has been optimized by different researchers. The pheromone trail evaporates over time; therefore, long-distance routes are forgotten because ants have no desire to traverse them. Different modifications of ACO have been proposed in the literature since the initial ant system proposed by Dorigo and the ant colony system (ACS) [32].

In 2012, the pheromone decay parameter (the pheromone evaporation rate) was adapted to avoid local optima and to adjust the convergence rate during the algorithm run [33]. However, the adjustment was performed using large value changes in a descending manner, which did not guarantee finding the best value (the best evaporation rate) for a particular dataset. Related adaptive research utilized different groups of ants to select pheromone arcs with different concentrations [34]. Nevertheless, the major problems of that algorithm were the high diversity of solutions and the long time needed to converge on the best solution. This shortcoming occurred because different groups used different transition probability rules during the run. In 2021, a hybrid of Harris's Hawk optimization and ACS, a variant of the ACO algorithm (HHO-ACS), was proposed to optimize the ACS parameters [35]. Five parameters are subjected to optimization; in the end, the algorithm's performance is determined by the pheromone coefficient, the heuristic coefficient, the decision rule, and the pheromone evaporation rate. This kind of optimization is called online parameter tuning based on an extended algorithm. The HHO algorithm optimizes the five ACS parameters during the run, thus finding the best value for each parameter in solving TSP. The HHO-ACS algorithm achieved better results than the ACS algorithm, but the runtime required in the tests of both algorithms is long. Another related swarm algorithm, the black hole algorithm (BH), was proposed to solve TSP [36]. The algorithm produced promising results compared with other swarm algorithms. However, two limitations have been identified in the BH algorithm. First, the performance of the algorithm depends on the randomly initialized population. Second, it showed a weak exploitation-based search, with a high standard deviation compared with other swarm algorithms.

TSP has also been solved by the ABC algorithm, a swarm-based method inspired by a bee colony looking for food, representing the foraging principle in insects. The simple steps of the algorithm have allowed researchers to apply it in several application domains. The core engine of the algorithm includes three main bees: the scout bee, the employee bee, and the onlooker bee. Each bee has its own tasks: exploration is relevant to the scout bee, exploration in the neighborhood of solutions is an employee bee task, and the onlooker bee performs an exploitation task to search more deeply in regions with high-quality solutions. However, the algorithm has a limit parameter, set statically by users, indicating how many times a solution can be accepted once the algorithm keeps achieving the same result. This parameter has been optimized by different researchers using adaptive and self-adaptive strategies [2, 36-39]. The purpose of the parameter is to restart the search process frequently during the run, allowing better regions of the search space to be found and local optima to be avoided. The algorithm generates the initial solutions randomly (initial city sequences), whereby each bee represents a single solution comprising a set of cities visited one at a time. Each employee bee explores the local region using a neighborhood structure that modifies the sequence locations of the cities. Some researchers consider this step an exploitation phase because the employee bee improves solutions locally; it ensures finding better solutions in the neighborhood region.

The adaptive ABC algorithm for TSP proposed in 2013 initialized an adaptive count for each bee, which changed dynamically during the algorithm run [40]. Nevertheless, that study did not clarify which count value was suitable for each dataset. Another limitation was that the scout performed multiple jumps, not ensuring that all feasible solutions in the local region were found. Another study, on numerical optimization, was done in 2017 [41]. It proposed an adaptive ABC based on food source ranking, where each food source was ranked higher when it had more opportunities to be selected. However, the algorithm could only imitate the selected food sources if, in the early search stages, they had high fitness, forcing the algorithm to converge quickly without exploring the local neighborhood region. The same idea of food source perturbation, using a synergetic mechanism, was proposed in 2020, enhancing population diversity in the algorithm initialization using a chaotic round map [42]. Another study used adaptive elitism-based immigration, which replaced the worst wolf in the population during the run with an elite individual of better fitness, using a controlled mutation parameter. However, the mutation value was

controlled in some cases to a random value, which was distributed to an individual after the mutation process in an unpromising region. A 2019 study examined the performance of different variants of ABC using several statistical tests on 15 TSP instances. The tests also covered the convergence rate and the parameter settings of the ABC algorithm, such as the limit parameter, which significantly controlled the ABC algorithm; suitable parameter values for TSP were reported [43].

The particle swarm optimization (PSO) algorithm for the traveling salesman problem has been improved by introducing the best current solution [44]. The basic idea is to use the best solution of the current iteration in the moving steps, improving the movement of the particles toward the best regions, those containing the best-quality solutions in the search space. However, the algorithm quickly becomes stuck locally because the search process is guided only by the objective function, limiting the ability of the stochastic search to explore the search space effectively. Another related study proposed a hybrid PSO and ACO algorithm based on the best-worst ant [45]. The improvement starts with the population, which PSO initializes; the ACO algorithm is then performed to improve the solutions. The first step ensured that the generated initial population was iteratively better than the one ACO constructs. A new swarm algorithm, chicken swarm optimization, was proposed in 2021 to solve the TSP [46]. Although the algorithm produced promising results, it has limitations in maintaining quality when the algorithm is fed back to find more solutions in the neighborhood. Another hybrid algorithm combined discrete whale optimization (DWO) with the ACO algorithm to improve the performance of DWO [47]. The initialization of individuals is improved by the ACO algorithm in the initial phase of the DWO algorithm. Although DWO has been improved, the algorithm still gets stuck easily in local optima if the ACO algorithm produces an initial population with high-quality solutions. DWO was further improved by other researchers in 2021 using a variable neighborhood algorithm [48]. This improvement increases the exploitation-based capability to find more local regions in the search space. However, the proposed algorithm cannot explore the search space when stagnation occurs. In 2019, another swarm algorithm, the dragonfly algorithm (DA), was proposed to solve the TSP [49] and showed promising results compared with other swarm algorithms.

The GWO literature indicates that the most important issue is the use of the position update equations when GWO is utilized for different combinatorial optimization problems. Maintaining the balance between the two phases (exploration and exploitation) is the core engine of the algorithm, and both phases are affected by the position update equations. The first three wolves (alpha, beta, and delta) represent the best positions in the hierarchy (in the search space of the problem). The remaining wolves (omega) are improved during the run according to that hierarchy level; the omega wolf position depends only on the three leading wolves. A study done in 2019 proposed a mechanism for updating the omega position [13]. Its limitation is that the update was performed only locally: each solution was updated based on the neighborhood search without looking for a global region. Thus, omega wolves could settle densely in the same region or certain regions during the prey-catching process. Because the mechanism updates the omega position only locally, it prevents the exchange of information between the omega wolves and the first three wolves (alpha, beta, and delta). This issue has been observed on the TSP, where omega wolves are improved only through the neighborhood structure; no real position update is available to move the omega wolves to the best region and provide a chance during the run to explore a more promising region of the search space [13]. Two methods have been used in other related works to improve the performance of GWO for competitive traveling salesman problems [50]. The benefit of both methods is to increase the exploration and exploitation of the algorithm during the run. The static method, the first, divides cities evenly among salesmen, whereas the parallel method, the second, makes all cities available. However, both methods are limited in the sharing of information between agents.

3. Proposed adaptive algorithm

The grey wolf optimization algorithm is a metaheuristic that simulates the social behavior and leadership hierarchy of grey wolves in real life. Social behaviors such as hunting, organization, and decision-making make the algorithm a unique model used by researchers to solve different optimization problems. Each group of wolves usually contains between 5 and 12 members. The members are sorted according to the hierarchy level, where members are known as alpha (α), beta (β), delta (δ), and omega (ω). Each wolf in the group has its task in the pack regardless of its position. The α wolf is responsible for the leadership and the decision to hunt, in addition to other tasks, such as the sleeping and waking times of all wolves. The β wolf helps the α wolf in the

A-GWO algorithm
1 Input: Data (TSP instances)
2 Output: Best solution
3 Generate initial wolf population;
4 Calculate fitness f(Xi);
5 Identify three best wolves as Xα, Xβ, and Xδ;
6 IterationIndex = 1;
7 WHILE (IterationIndex < Max)
8   REPEAT
9     Update position of Xi;
10    S* = Crossover();
11    IF (f(S*) < f(Xi))
12      Xi = S*;
13    S* = NeighborhoodSearch(Xi);
14    IF (f(S*) < f(Xi))
15      Xi = S*;
16    Compute break point;
17  UNTIL termination condition is met
18  UpdateCoefficient();
19  Calculate fitness f(Xi);
20  UpdatePosition();
21  IterationIndex = IterationIndex + 1;
22 END-WHILE
Figure. 1 A-GWO algorithm
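To make the loop in Fig. 1 concrete, the following Python sketch renders a simplified version of it. This is a hypothetical reconstruction, not the authors' implementation: the crossover form (a one-point order crossover against each leader) and the linearly shrinking breakpoint schedule are assumptions made for illustration.

```python
import random

def tour_length(tour, dist):
    """Fitness of a wolf: total length of the closed tour."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def crossover(parent, leader, bp):
    """One-point order crossover: keep the parent's tour up to the
    breakpoint bp, then append the remaining cities in leader order."""
    head = parent[:bp]
    return head + [c for c in leader if c not in head]

def pair_swap(tour, rng):
    """Neighborhood search move: exchange two randomly chosen cities."""
    i, j = rng.sample(range(len(tour)), 2)
    new = tour[:]
    new[i], new[j] = new[j], new[i]
    return new

def a_gwo(dist, n_wolves=20, max_iter=100, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pack = [rng.sample(range(n), n) for _ in range(n_wolves)]
    for it in range(max_iter):
        pack.sort(key=lambda t: tour_length(t, dist))
        leaders = pack[:3]  # alpha, beta, delta
        # Assumed schedule: the breakpoint shrinks linearly, giving high
        # exploration early in the run and more exploitation later.
        bp = max(1, round(n * (1 - it / max_iter)))
        for k in range(3, n_wolves):       # omega wolves
            for leader in leaders:         # inherit segments from each leader
                cand = crossover(pack[k], leader, bp)
                if tour_length(cand, dist) < tour_length(pack[k], dist):
                    pack[k] = cand
            cand = pair_swap(pack[k], rng)  # local neighborhood search
            if tour_length(cand, dist) < tour_length(pack[k], dist):
                pack[k] = cand
    return min(pack, key=lambda t: tour_length(t, dist))
```

Note how each omega wolf inherits tour segments directly from α, β, and δ, which is the information-sharing mechanism the surrounding text describes.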

decision-making and is considered the best successor for the group if the α wolf dies. The δ wolf, the third level in the hierarchy, is responsible for organizing the ω wolves; without the δ wolf, the group would encounter internal chaos. In hunting, grey wolves form the main nerve of group survival, which involves searching, tracking, encircling, and attacking prey. Those tasks are mathematically modeled as the GWO algorithm.

The process of encircling the prey is mathematically modeled as the movement of the grey wolf's location to the prey location and circling it. This task is achieved by finding the distance between the grey wolf and the prey location in the search space. The movement of the grey wolf involves the exploration and exploitation processes in finding a new location during the search and avoiding falling into the local optima problem. This study proposes the A-GWO algorithm by introducing two new modifications to the present GWO algorithm. These modifications are essential to enhance the search process of the algorithm. The adaptive crossover operator and neighborhood search operations are proposed as the core engine of the algorithm. The standard GWO algorithm allows the sharing of information in the wolf population using the leadership hierarchy. However, it ignores information sharing between the omega wolves and the three leadership-hierarchy wolves for the TSP [13]. In the A-GWO algorithm, information sharing between the best three wolves and the rest of the population is maintained, improving the search in the neighborhood regions. Both modifications are used to explore the best solutions found in global and local regions of the search space. Fig. 1 illustrates the A-GWO algorithm.

The A-GWO algorithm has three stages: encircling, hunting, and attacking the prey. The modifications are performed in those stages: the crossover operator in encircling the prey, selecting the best solution and performing neighborhood search in hunting the prey, and computing the breakpoint in attacking the prey. The algorithm starts with the initialization of all parameters and the wolf population (Steps 1-3). Step 4 calculates the fitness function of all wolves, and Step 5 determines the best three wolves, α, β, and δ, according to the fitness function of each wolf in the population. Step 6 starts the algorithm iterations by initializing the iteration index to 1. Step 7 begins the iterations over each wolf, and Step 8 starts the algorithm cycle. Step 9 checks each wolf's position in the search space. Step 10 contains the first modification, which represents encircling the prey, which is
mathematically modeled as the movement of the grey wolf's location to the prey location to be surrounded. This task can be achieved by finding the distance between the grey wolf and the prey location in the search space. The movement of the grey wolf maintains the exploration and exploitation to find a new position during the search and avoid falling into the local optima problem. This process is articulated using the crossover operator, as shown in Step 10. This step represents the sharing of information between each ω wolf and the α, β, and δ wolves. However, the crossover operator is performed according to a predefined breakpoint, which is adaptively changed during the run and represents the mating process. The best mating is selected as the new position for the current ω wolf, which, in the end, represents the prey hunting stage of Steps 11–12. The hunting starts by surrounding the position of the prey in the search space. However, no real knowledge about the prey position, as in nature, is available in the mathematical model. The only information provided is the positions of the three best wolves in the search space, α, β, and δ, representing the best solutions in the algorithm. Therefore, the three candidate solutions are used to guide the rest of the ω wolves toward the best location that is surrounded by α, β, and δ. As a result, each ω wolf updates its position according to the best crossover operator between the ω wolf and the three best wolves. The process of attacking the prey can be represented as the time of performing the attack and determining the best time to make the attack (Steps 14–16). In the algorithm, the breakpoint represents exploration and exploitation, controlled according to the value of a that changes linearly. The second modification (i.e., the neighborhood search, Step 13) increases the probability of finding better solutions in the neighborhood region of wolves during the exploration that is performed in the crossover operator.

Figure. 2 Adaptive crossover and neighborhood search processes

Figure. 3 Agent representation in A-GWO algorithm

Differences are observed between the proposed algorithm, the original GWO algorithm of 2014 [51], and the 2019 algorithm for solving TSP [13]. The prey encircling process in the original GWO uses absolute distance to modify the position. In contrast, the A-GWO algorithm uses the crossover operator, but this process was not explicitly specified. Sharing of information can be seen in the original GWO algorithm between the omega wolves and the leaders, but no information sharing is found between the wolves in [13] for solving TSP. Finally, GWO and A-GWO utilize a parameter in the attacking stage, where the value changes linearly during the algorithm run. However, in [13], no parameter is used, making the algorithm unable to explore and exploit the search space according to the changes in the parameter value during the algorithm run.

Details of Steps 9–15 in Fig. 1, consisting of both modifications, are highlighted in Fig. 2. A-GWO constructs many solutions by performing high exploration in the early search stages adaptively based on the crossover operator, which linearly changes during the algorithm run. The neighborhood search using displacement and pair-swap operations increases the probability of finding high-quality solutions in the neighborhood region of wolves.

4. Updated position in A-GWO

Each agent (wolf) is given a unique number representing a city to solve the problem of finding the minimum tour length in TSP. Each agent denotes one candidate solution of length (n+1), where n is the maximum number of cities. The tour starts and ends in the same city, which signifies a complete route. In A-GWO, each agent randomly generates a candidate solution of one dimension, representing the initial solution, as shown in Fig. 3.

The first modification, which is the adaptive crossover, is linearly changed during the algorithm run. The second is the use of neighborhood search to explore more regions in the local neighborhood structure [40, 51]. The crossover operation starts with high exploration by exchanging the cities between two tours. The exchange includes an omega tour with the alpha, beta, and delta tours, as shown in Fig. 4. The purpose is to move omega wolves to the best position near the three best wolves. In the beginning, the exploration is high to explore more positions in the search space. This condition increases the probability of finding other prey positions with better fitness functions (minimum tour distance). The crossover operation gradually decreases in the advanced search process to improve the search around the prey position.
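The representation in Section 4 can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: `make_tour` and `tour_length` are hypothetical names, and Euclidean distances between city coordinates are assumed.

```python
import math
import random

def make_tour(n_cities, rng=random):
    """Random candidate solution: a permutation of the n cities,
    closed by repeating the start city (length n + 1)."""
    tour = list(range(n_cities))
    rng.shuffle(tour)
    return tour + [tour[0]]  # the tour starts and ends in the same city

def tour_length(tour, coords):
    """Fitness of a closed tour: total Euclidean distance."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(tour, tour[1:]))
```

A wolf with a smaller `tour_length` is fitter, so alpha, beta, and delta are simply the three shortest tours in the current population.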
Figure. 4 Position updating in A-GWO

Figure. 5 Example of crossover operator using partially mapped crossover technique

Figure. 6 Neighborhood search using pair-swap and displacement

In each iteration, the crossover is performed if the fitness function is better than that of the omega. Otherwise, it is rejected, and the older tour is kept. The crossover utilizes the technique called partially mapped crossover. This technique randomly changes one city and swaps cities between the two tours and within the tour itself. This process guarantees that no city is visited twice, which would be a violation of the TSP rule. This process is illustrated in Fig. 5. In the example between the omega and delta wolf tours, three cities are moved from the omega wolf tour into the delta wolf tour. The omega wolf updates its position according to the new tour produced if it has better fitness, ensuring that the wolf moves to a better position in the search space.

The second modification improves the exploitation capability, especially in the advanced search, using the neighborhood search [52, 53]. This modification aims to avoid stagnation when wolves are located in the same location. A slight modification in the position of cities in the tour can move the wolves into new locations within the neighborhood structure. The neighborhood search changes cities in the same tour (the new tour in Fig. 5) using two neighborhood operation approaches, namely, displacement and pair-swap operations. The pair-swap operation swaps only two cities randomly to ensure moving the agent into a new location. Meanwhile, the displacement operation moves a subsequence of cities (the length of the subsequence is chosen randomly) and reinserts them into a new position in the tour, as illustrated in Fig. 6. The figure shows the pair-swap between city numbers 6 and 2. The displacement operation moves the subsequence of cities 1 and 2, inserting them before city number 6.

• Pair-swap: Two cities (two city positions) are randomly selected from the tour to be swapped.
• Displacement: A random position in the tour is selected along with a random subsequence of city positions, and the subsequence of cities is moved before the selected random position.

The tour is accepted as a new, better tour if and only if its quality is better than that of the omega tour, Eq. (1). However, the vital issue is how the omega tour can provide better tour fitness. Such tour fitness can be produced either from the crossover operator between the omega tour and the three best wolves (alpha, beta, and delta) or by keeping the current omega tour if the fitness function of the omega wolf is better than that of the crossover operation, as reflected in the proposed Eq. (1). In Eq. (1), the two solutions are S* and S0, where S* represents the improved solution, and S0 represents the non-improved solution. S* is obtained according to Eq. (2), which performs three crossover operations; only one is accepted as the best solution, represented by S*. S0, the non-improved solution, is accepted when it has better fitness quality than S*. The crossover is linearly changed during the iterations according to a parameter proposed in this algorithm, namely, the breakpoint. The breakpoint is used in the crossover to cut the tour in each iteration. The crossover stops when the value of the breakpoint equals 0 or less, which ensures that the crossover starts with high diversity in the early search process and ends with low diversity in the advanced search process. The breakpoint value, as shown in Eq. (3), is computed according to the value of a. This step is followed by the neighborhood search, as shown in Step 13 in Fig. 1, to improve the tour fitness using either pair-swap or displacement.

The acceptance criterion = { S*, if Quality(S*) > Quality(omega); S0, otherwise }    (1)
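The two neighborhood operations described above can be sketched as follows. This is an illustrative Python version; the function names and the treatment of the repeated start city are assumptions, not code from the paper.

```python
import random

def pair_swap(tour, rng=random):
    """Swap two randomly selected city positions in the tour."""
    cities = tour[:-1]                       # drop the repeated start city
    i, j = rng.sample(range(len(cities)), 2)
    cities[i], cities[j] = cities[j], cities[i]
    return cities + [cities[0]]              # re-close the tour

def displacement(tour, rng=random):
    """Cut a random subsequence of cities and reinsert it at a
    random position in the remaining tour."""
    cities = tour[:-1]
    i = rng.randrange(len(cities))
    length = rng.randint(1, len(cities) - 1)
    segment = cities[i:i + length]
    rest = cities[:i] + cities[i + length:]
    k = rng.randrange(len(rest) + 1)
    cities = rest[:k] + segment + rest[k:]
    return cities + [cities[0]]
```

Either move is then accepted only when it improves the tour quality, per the criterion of Eq. (1); otherwise the wolf keeps its current tour.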
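Under one reading of Eqs. (1)–(3), the adaptive step can be sketched as below. This is a hedged sketch, not the authors' code: `crossover` stands in for the partially mapped crossover, tour quality is taken as (smaller) tour length, and the linear schedule for a is assumed from its stated decrease from 2 to 0.

```python
def a_value(iteration, max_iterations):
    """Assumed schedule: the coefficient a decreases linearly
    from 2 to 0 over the run, as in the standard GWO."""
    return 2.0 * (1.0 - iteration / max_iterations)

def breakpoint_value(a, mx_cities):
    """Eq. (3): the crossover cut length, shrinking with a; the
    crossover stops once this reaches 0 or less."""
    return (a * mx_cities) - mx_cities       # i.e., (a - 1) * mx_cities

def accept(omega, alpha, beta, delta, crossover, quality):
    """Eqs. (1)-(2): S* is the best of three crossovers with the
    leaders; keep it only if it beats the current omega tour (S0)."""
    candidates = [crossover(omega, leader) for leader in (alpha, beta, delta)]
    s_star = min(candidates, key=quality)    # best = minimum tour length
    return s_star if quality(s_star) < quality(omega) else omega
```

With `a_value` feeding `breakpoint_value` each iteration, the cut is longest (high exploration) at the start of the run and becomes non-positive (crossover disabled) toward the end.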
Quality(S*) = { S1 = Crossover(Omega, Alpha), S2 = Crossover(Omega, Beta), S3 = Crossover(Omega, Delta) }    (2)

Breakpoint = (a × Mx_cities) − Mx_cities    (3)

The greedy process in Eq. (1) improves the tour solution by accepting only the best solution from the crossover based on the linear value changed during the run. In the algorithm, the modifications are made at the "compute breakpoint value" and "update current position Xi" steps. The breakpoint is computed using Eq. (3), and the current position of Xi is calculated using Eq. (1). The breakpoint in Eq. (3) decreases during the algorithm run. Thus, the crossover operator in Eq. (2) changes the length of the solution adaptively, linearly decreasing the exchanged length. This step is followed by using the neighborhood search to select the best solution from either pair-swap or displacement. This stage aims to increase the algorithm's exploitation capability in finding more neighborhood solutions from the best global region.

5. Performance evaluation

The performance of the proposed A-GWO algorithm is evaluated using 25 TSP benchmark datasets taken from TSPLIB [56]. These datasets differ in the number of cities, including small and medium numbers of cities, and the maximum and minimum distances between the cities, as shown in Table 1. Table 1 also lists the distribution of the problems and the optimal solution of each problem. Experimental results of A-GWO are performed in three scenarios. The first scenario compares A-GWO with four state-of-the-art algorithms that are best known for providing the best tour distance: ACO, GA [57], GWO [13], and the producer scrounger method (PSM) [58], based on the minimum tour distance across all cities in the first set of TSP instances. The setting of the parameters for all the algorithms is similar to that in [13]. The population size is initialized equally, where each algorithm's population size is set to 100. The number of iterations is set to 500, and the number of runs is set to 10 for all algorithms, which means that all algorithms have the same improvement time to produce the result within the same parameters. The mutation parameter is set to a small value to increase the diversity of solutions, usually applied with a low probability of approximately 0.001. A high probability would indicate that GA is reduced to a random search. The crossover is set to 0.8, which guides the algorithm toward the best quality solutions during the algorithm run. The evaporation rate is set to 0.01 in ACO, which is responsible for increasing and decreasing exploration capability, where low evaporation means fast convergence and vice versa. The other parameters, such as the coefficients in GWO and A-GWO, decrease from 2 to 0 throughout the iterations. Such a decrease forces the algorithm to increase exploitation capability in the advanced iterations, which is used to attack the prey when it stops moving in the GWO algorithm. All parameters were initialized as in the literature [57, 13] for a fair comparison.

The second scenario of the comparison is performed with other benchmarks and other optimization algorithms. The target of this scenario is to indicate the effectiveness of the proposed technique by finding the optimal solutions only. The TSP instances used in the second scenario, as shown in Table 1, belong to the standard library: eil76, eil51, berlin52, kroa100, st70, oliver30, pr76, pr107, ch150, d198, tsp225, and fl417. The algorithms used in the second scenario are the discrete whale optimization algorithm (DWOA) [47], the discrete whale optimization algorithm with variable neighborhood search (VDWOA) [48], the bat algorithm (BA) [59], GWO, moth-flame optimization (MFO) [60], and PSO [61]. In this scenario, each algorithm is run 50 times, and the optimal solution provided is used in the comparison. The parameters of DWOA and VDWOA are set similarly in the experiment, where the constant coefficient b is set to 1, and the ε value is set to 0.35. In BA, the maximum pulse frequency value is set to 1, whereas the minimum value is set to 0. The coefficient of sound loudness is 0.9, the enhancement factor of the search frequency is 0.9, the loudness of sound is between (0, 1), and the pulse emission rate is between (0, 1). The spiral shape parameter is set to 1 in MFO. The inertia weight factor is set to 0.2, and the acceleration factor is set to 2 in the PSO algorithm.

The third scenario was performed with other optimization algorithms. The six optimization algorithms used to indicate the effectiveness of the proposed technique include HHO-ACS [35], ACO [57], PSO [61], GA [57], BH [36], and DA [49]. The benchmarks used in this scenario, as shown in Table 1, contain eight datasets: bays29, bayg29, att48, eil51, berlin52, st70, eil76, and eil101. In this scenario, the maximum number of iterations is set to 200, and the population size is set to 100 for all algorithms. All parameters are set based on [35].

Table 2 shows the average results of the minimum tour of the algorithms. The figures in brackets are the standard deviation of the results, and the best result
Table 1. Benchmark characteristics of each TSP dataset
NO Dataset Number of cities Distribution Optimal solution
1 burma14 14 14-cities in Burma 33.23
2 ulysses16 16 Odyssey of Ulysses 68.59
3 gr17 17 17-city problem (Groetschel) 2085
4 gr21 21 21-city problem (Groetschel) 2707
5 ulysses22 22 Odyssey of Ulysses (Groetschel and Padberg) 70.13
6 gr24 24 24-city problem (Groetschel) 1272
7 fri26 26 26-city problem (Fricker) 637
8 bays29 29 29-cities in Bavaria (street distance) 2020
9 hk48 48 48-city problem (Held/Karp) 11461
10 eil51 51 51-city problem (Christofides/Eilon) 426
11 berlin52 52 52-locations in Berlin (Germany) 7542
12 st70 70 70-city problem (Smith/Thompson) 675
13 eil76 76 76-city problem (Christofides/Eilon) 538
14 gr96 96 Africa-Subproblem of 666-city TSP (Groetschel) 55209
15 kroa100 100 100-city problem A (Krolak/Felts/Nelson) 21282
16 oliver30 30 30-city problem 420
17 pr76 76 76-city problem (Padberg/Rinaldi) 108159
18 pr107 107 107-city problem (Padberg/Rinaldi) 44303
19 ch150 150 150-city problem (Churritz) 6528
20 d198 198 Drilling problem (Reinelt) 15780
21 tsp225 225 A TSP problem (Reinelt) 3916
22 fl417 417 Drilling problem (Reinelt) 11861
23 bays29 29 29 Cities in Bavaria -
24 att48 48 48 capitals of the US -
25 eil101 101 101-city problem (Christofides/Eilon) 629
Table 2. Average tour distance for all algorithms (first experiment)
TSP instance GA ACO PSM GWO A-GWO
burma14 31.83 31.21 30.89 30.87 30.25
ulysses16 74.79 77.13 74.2 73.99 69.05
gr17 2458.36 2332.58 2375.39 2332.58 2321.08
gr21 3033.82 2954.58 2838.22 2714.65 2711.11
ulysses22 79.62 86.81 76.68 76.08 72.65
gr24 1402.01 1267.13 1372.57 1289.23 1277.03
fri26 689.49 646.48 675.24 644.67 639.06
bays29 9981.49 9964.78 9917.59 9219.40 9390.40
hk48 16033.31 12731.07 13870.94 12117.05 12122.63
eil51 592.3 504.83 474.58 463.29 459.02
berlin52 10413.61 8088.95 8865.08 8289.11 8115.18
st70 1203.35 748.65 845.40 800.14 755.12
eil76 926.4 601.77 631.58 629.24 618.92
gr96 1092.04 590.67 618.68 660.48 584.21
kroa100 57940 24623.01 30210.57 28340.42 25150.04
for each dataset is highlighted. The A-GWO algorithm produced the best results in eight datasets, followed by the ACO and GWO algorithms with five and two datasets, respectively. A-GWO, GWO, and ACO have more control over the exploration and exploitation than GA and PSM, enabling these algorithms to produce better results regardless of the datasets' characteristics. However, ACO has better stability than GWO and A-GWO, as indicated by the small values of the standard deviation. Furthermore, the average standard deviation shows that ACO, A-GWO, GWO, PSM, and GA receive 196, 2128, 2736, 4473, and 6802, respectively. The average standard deviation indicates that the proposed A-GWO is ranked as the second algorithm producing a better standard deviation.

The second scenario experiments with the VDWOA, DWOA, BA, GWO, MFO, and PSO algorithms, as shown in Table 3. The experiment is performed according to the optimal solution (minimum value)
Table 3. Optimal solution of seven algorithms (second experiment)
TSP instance VDWOA DWOA BA GWO MFO PSO A-GWO
Oliver30 420 420 420 422 423 424 420
Eil51 429 445 439 441 449 445 427
Berlin52 7542 7727 7694 7898 8184 7862 7544
St70 676 712 718 726 710 732 675
Eil76 554 579 561 565 577 595 545
Pr76 108,353 111,511 111,989 114,261 114,377 115,265 108,321
KroA100 21,721 22,471 23,424 22,963 23,456 23,480 21,717
Pr107 45,030 45,780 46,419 46,083 47,437 46,919 45,042
CH150 6863 7329 7440 7384 7329 7833 6858
D198 16,313 16,603 16,849 17,109 16,911 18,130 16,509
Tsp225 4136 4399 4427 4620 4469 5049 4133
Fl417 12,462 13,886 15,532 15,492 14,087 18,688 12,476
Table 4. Average tour distance for all algorithms (third experiment)
TSP instance HHO-ACS ACO PSO GA BH DA A-GWO
bays29 9079.60 9823.20 9195.91 10015.2 9463.25 9480.29 9390.86
bayg29 9077.20 9882.22 9947.03 9771.95 9375.44 9547.75 9011.100
att48 33580.2 39436.2 47018.4 43620.6 34473.8 37759.7 33570.223
eil51 429.600 461.018 574.802 453.477 458.925 475.16 457.98
berlin52 7589.00 8522.90 11089.5 9288.45 8455.83 9486.70 8120.13
st70 685.200 757.754 1321.81 1158.85 797.575 839.01 745.543
eil76 548.600 594.144 975.64 652.059 659.102 644.89 619.432
eil101 654.200 763.921 1499.99 838.831 897.381 997.60 654.180
provided in 50 runs by each algorithm according to the setting in [48]. It indicates that the proposed A-GWO algorithm produces the best results in seven TSP instances: Eil51, St70, Eil76, Pr76, KroA100, CH150, and Tsp225 (approximately 58%). It also shows that the VDWOA algorithm produced the best results in four TSP instances: Berlin52, Pr107, D198, and Fl417 (about 33%). Table 3 reports that the A-GWO algorithm produces the best results compared with the DWOA, BA, GWO, MFO, and PSO algorithms, producing better results in all instances (100%) against these algorithms. Table 3 illustrates the results of algorithm performance based on the optimal solution of each algorithm.

In the third scenario, an experiment was performed with six optimization algorithms to indicate the effectiveness of the proposed algorithm. The algorithms used in the experiment are HHO-ACS, ACO, PSO, GA, BH, and DA, as reported in Table 4. The experiment is performed according to the average tour distance of each algorithm. It indicates that the A-GWO algorithm produces better results than the DA and BH algorithms in all TSP instances (100%). The comparison between A-GWO and the GA and PSO algorithms indicates that A-GWO produces the best results in seven TSP instances, whereas the GA and PSO algorithms produce the best results only in one TSP instance. The performance of the A-GWO algorithm is also evaluated against other swarm algorithms, particularly the ACO algorithm. A-GWO outperforms the ACO algorithm in seven TSP instances (i.e., bays29, bayg29, att48, eil51, berlin52, st70, and eil101 [approximately 88%]), whereas the ACO algorithm produces the best results only in one TSP instance (i.e., eil76). The final comparison between the HHO-ACS and A-GWO algorithms indicates that HHO-ACS performs better than A-GWO, where the former outperforms A-GWO in five TSP instances (approximately 63%). The HHO-ACS algorithm produces the best results in bays29, eil51, berlin52, st70, and eil76, whereas the A-GWO algorithm produces the best results only in bayg29, att48, and eil101 (approximately 38%). The reason is that the HHO-ACS algorithm can optimize its parameters better than A-GWO. However, A-GWO and HHO-ACS have better stability, providing smaller standard deviations than the other optimization algorithms. HHO-ACS, as indicated by the third experiment, provides small values of the standard deviation because it is the best algorithm in the ranking. The average standard deviation shown by HHO-ACS is approximately 16.6111. The second algorithm in the ranking is A-GWO, which provides a standard deviation of approximately 18.3766. DA
with 89.6325 is the third algorithm in the ranking, BH with 210.7414 is the fourth, and GA with 493.2315 is the fifth. ACO has an average standard deviation of approximately 948.8546. The last algorithm, which provides a high standard deviation of 1696.3588, is PSO. The average standard deviation proves that the proposed A-GWO algorithm converges to similar results during the algorithm runs. However, the HHO-ACS algorithm is approximately the same in each algorithm run. HHO-ACS convergence is better than that of A-GWO because its parameters have been optimized, forcing the algorithm to provide similar results in the search history.

The proposed algorithm produces a better minimum distance tour than all other algorithms. The modifications (i.e., the adaptive crossover operator and neighborhood search) improve the results by enhancing information sharing among the wolves during the algorithm run and intensifying the search process for more promising regions in the neighborhood of the three best wolves' locations.

6. Conclusion and future work

This study aims to solve the problem of exchanging information among the leadership hierarchy in the traveling salesman problem. The proposed adaptive algorithm (A-GWO) has two contributions, using the crossover operator as the first contribution and neighborhood search as the second contribution. The crossover operator allows the information to be inherited between the leadership hierarchy during the algorithm run. The neighborhood search provides different neighborhood regions with different solution quality during the run, which could generate several landscapes to support the algorithm in finding more solutions in the local region of the best solution. The scientific contribution of this study is to employ a linear crossover that changes during the algorithm run. The benefit is a high exploration ratio at the beginning of the run.

The other scientific contribution increases the exploitation using neighborhood search. Thus, this search is locally performed to find the global solutions in the local region based on the quality of the best solutions reached. The advantage of both contributions is the trade-off between the exploration search and the exploitation search, guiding the search toward the best regions in the search space.

The limitation of the study is that it requires a long convergence time because of the crossover operator that is linearly changed through the run. Due to the advantage of both modifications, the improvement is achieved using a crossover operator between the omega wolf and the hierarchy level (alpha, beta, and delta). Cumulative iterations with neighborhood search, including displacement and pair-swap operations, improve the quality of tour distance in the neighborhood region of the hierarchy level. The proposed algorithm's performance provides a better minimum tour distance among all optimization algorithms. The evaluation was conducted using 25 TSP instances that differ in the number of cities, against 12 state-of-the-art algorithms. The 12 algorithms are GA, ACO, PSM, GWO, VDWOA, DWOA, BA, MFO, PSO, HHO-ACS, BH, and DA. The experiment indicates that the proposed A-GWO is approximately 58% better than all algorithms, except the HHO-ACS algorithm.

Future research will focus on applying the algorithm directly to similar problems, such as VRP, and employing online parameter adaptation to optimize the parameters of A-GWO to include self-adaptive and search-based strategies. Other neighborhood search operators can be tested in the proposed algorithm with other application problems, such as clustering and classification, to guide future research plans.

Conflicts of interest

The author declares no conflict of interest.

Author contributions

The main author (“Ayad”) contributed to coding, implementation, discussion of results, and preparation. The co-author “Ku Ruhana Ku-Mahamud” contributed to the planning, presentation, and supervision.

Acknowledgments

The author would like to thank Shatt Al-Arab University College and Universiti Utara Malaysia for supporting this manuscript financially.

References

[1] H. N. K. A. Behadili, R. Sagban, and K. R. K. Mahamud, “Hybrid ant colony optimization and iterated local search for rules-based classification”, Journal of Theoretical and Applied Information Technology, Vol. 98, No. 4, pp. 657–671, 2020.
[2] M. Alzaqebah, S. Abdullah, and S. Jawarneh, “Modified artificial bee colony for the vehicle routing problems with time windows”, Springerplus, Vol. 5, No. 1, 2016.
[3] D. Arimbi, A. Bustamam, and D. Lestari, “Implementation of Hybrid Clustering Based on
Partitioning Around Medoids Algorithm and Divisive Analysis on Human Papillomavirus DNA”, In: Proc. of AIP Conference Proceedings, Vol. 1825, pp. 1–8, 2017.
[4] H. Ismanto, A. Azhari, S. Suharto, and L. Arsyad, “Classification of the mainstay economic region using decision tree method”, Indonesian Journal of Electrical Engineering and Computer Science, Vol. 12, No. 3, pp. 1037–1044, 2018.
[5] H. Almazini and K. R. K. Mahamud, “Grey Wolf Optimization Parameter Control for Feature Selection in Anomaly Detection”, International Journal of Intelligent Engineering and Systems, Vol. 14, No. 2, pp. 474–483, 2021.
[6] M. Kohli and S. Arora, “Chaotic grey wolf optimization algorithm for constrained optimization problems”, Journal of Computational Design and Engineering, Vol. 5, No. 4, pp. 458–472, 2018.
[7] L. Xinwu, “Research on Text Clustering Algorithm Based on Improved K-means”, In: Proc. of International Conf. on Future Computer and Communication, Vol. 4, pp. 573–576, 2010.
[8] U. Chandrasekhar and P. Naga, “Recent trends in Ant Colony Optimization and data clustering: A brief survey”, In: Proc. of International Conf. on Intelligent Agent & Multi-Agent Systems, pp. 32–36, 2011.
[9] C. Huang, W. Huang, H. Chang, C. Yeh, and C. Tsai, “Hybridization strategies for continuous ant colony optimization and particle swarm optimization applied to data clustering”, Applied Soft Computing, Vol. 13, No. 9, pp. 3864-3872, 2013.
[10] K. Ye, C. Zhang, J. Ning, and X. Liu, “Ant-colony algorithm with a strengthened negative-feedback mechanism for constraint-satisfaction problems”, Information Sciences, Vol. 4, pp. 29-41, 2017.
[11] E. Bonabeau, M. Dorigo, and G. Theraulaz, “A Primer on Multiple Intelligences”, Cham: Springer, pp. 213-250, 1999.
[12] M. Worall and M. Worall, “Homeostasis in nature: Nest building termites and intelligent buildings”, Intelligent Buildings International, pp. 87–95, 2011.
[13] S. Sopto, S. Ayon, M. Akhand, and N. Siddique, “Modified Grey Wolf Optimization to Solve Traveling Salesman Problem”, In: Proc. of International Conf. on Innovation in Engineering and Technology, pp. 1–4, 2019.
[14] B. Anari, J. A. Torkestani, and A. M. Rahmani, “A learning automata-based clustering algorithm using ant swarm intelligence”, Expert Systems, Vol. 35, No. 6, pp. 1-26, 2018.
[15] R. Xu, J. Xu, and D. Wunsch, “A Comparison Study of Validity Indices on Swarm-Intelligence-Based Clustering”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 42, No. 4, pp. 1243–1256, 2012.
[16] S. Zhu and L. Xu, “Many-objective fuzzy centroids clustering algorithm for categorical data”, Expert Systems with Applications, Vol. 96, pp. 230–248, 2018.
[17] A. M. Jabbar, “Rule Induction with Iterated Local Search”, International Journal of Intelligent Engineering and Systems, Vol. 14, No. 4, pp. 289–298, 2021.
[18] C. Blum and A. Roli, “Metaheuristics in combinatorial optimization: overview and conceptual comparison”, ACM Computing Surveys, Vol. 35, No. 3, pp. 189–213, 2003.
[19] S. Mukherjee, S. Ganguly, and S. Das, “A strategy adaptive genetic algorithm for solving the travelling salesman problem”, In: Proc. of International Conf. on Swarm, Evolutionary, and Memetic Computing, pp. 778–784, 2012.
[20] J. Sung and B. Jeong, “An Adaptive Evolutionary Algorithm for Traveling Salesman”, The Scientific World Journal, Vol. 14, pp. 1–11, 2014.
[21] J. Mcdonnell, R. Reynolds, and D. Fogel, “Adapting crossover in evolutionary algorithms”, In: Proc. of the Fourth Annual Conf. on Evolutionary Programming, pp. 367-384, 1995.
[22] M. Riff and X. Bonnaire, “Inheriting parents operators: A new dynamic strategy for improving evolutionary algorithms”, In: Proc. of International Conf. on Methodologies for Intelligent Systems, pp. 333-343, 2002.
[23] J. Gomez, “Self adaptation of operator rates in evolutionary algorithms”, In: Proc. of Genetic and Evolutionary Computation Conf., pp. 1162-1173, 2004.
[24] A. C. Salinas and J. Perdomo, “Self-adaptation of genetic operators through genetic programming techniques”, In: Proc. of Genetic and Evolutionary Computation Conf., pp. 913-920, 2017.
[25] Y. Yang, H. Dai, and H. Li, “Adaptive genetic algorithm with application for solving traveling salesman problems”, In: Proc. of International Conf. on Internet Technology and Applications, pp. 1–4, 2010.
[26] T. Stützle and H. H. Hoos, “MAX–MIN Ant System”, Future Generation Computer Systems, Vol. 16, pp. 889–914, 2000.
[27] A. G. Pardo, J. Jung, and D. Camacho, “ACO-
based clustering for Ego Network analysis”, Future Generation Computer Systems, Vol. 66, pp. 160–170, 2017.
[28] H. Menéndez, F. Otero, and D. Camacho, “SACOC: A Spectral-Based ACO Clustering Algorithm”, In: Proc. of International Conf. on Intelligent Distributed Computing, pp. 185–194, 2014.
[29] H. Kanan, K. Faez, and S. M. Taheri, “Feature selection using Ant Colony Optimization (ACO): A new method and comparative study in the application of face recognition system”, In: Proc. of Industrial Conference on Data Mining, pp. 63–76, 2007.
[30] P. Shunmugapriya and S. Kanmani, “A hybrid algorithm using ant and bee colony optimization for feature selection and classification”, Swarm and Evolutionary Computation, Vol. 36, pp. 27-36, 2017.
[31] M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem”, IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1, pp. 53–66, 1997.
[32] M. Dorigo and L. M. Gambardella, “Ant colonies for the travelling salesman problem”, Biosystems, Vol. 43, No. 2, pp. 73–81, 1997.
[33] L. Yangyang, S. Xuanjing, and C. Haipeng, “An Adaptive Ant Colony Algorithm Based on Common Information for Solving the Traveling Salesman Problem”, In: Proc. of International Conf. on Systems and Informatics, Vol. 35, pp. 1263–1277, 2012.
[34] G. Ping, X. Chunbo, C. Jing, and L. Yanqing, “Adaptive ant colony optimization algorithm”, In: Proc. of International Conf. on Mechatronics and Control, pp. 95–98, 2015.
[35] S. A. Yasear and K. R. K. Mahamud, “Fine-Tuning the Ant Colony System Algorithm Through Harris’s Hawk Optimizer for Travelling Salesman Problem”, International Journal of Intelligent Engineering and Systems, Vol. 14, No. 4, pp. 136–145, 2021.
[36] A. Hatamlou, “Solving travelling salesman problem using black hole algorithm”, Soft Computing, Vol. 22, No. 24, pp. 8167–8175, 2018.
[37] S. Anuar, A. Selamat, and R. Sallehuddin, “A
control parameter ‘limit’”, Information Technology And Control, Vol. 46, No. 4, pp. 566–604, 2017.
[39] S. Mortada and Y. Yusof, “A Neighbourhood Search for Artificial Bee Colony in Vehicle Routing Problem with Time Windows”, International Journal of Intelligent Engineering and Systems, Vol. 14, No. 3, pp. 255–266, 2021.
[40] A. Rekaby, A. Youssif, and A. S. Eldin, “Introducing Adaptive Artificial Bee Colony algorithm and using it in solving traveling salesman problem”, In: Proc. of Science and Information Conf., pp. 502–506, 2013.
[41] C. Laizhong, L. Genghui, W. Xizhao, L. Qiuzhen, C. Jianyong, L. Na, and L. Jian, “A ranking-based adaptive artificial bee colony algorithm for global numerical optimization”, Information Sciences, Vol. 417, pp. 169–185, 2017.
[42] Y. Wang, T. Wang, S. Dong, and C. Yao, “An Improved Grey-Wolf Optimization Algorithm Based on Circle Map”, In: Proc. of International Conf. on Machine Learning and Computer Application, pp. 1-7, 2020.
[43] D. Karaboga and B. Gorkemli, “Solving Traveling Salesman Problem by Using Combinatorial Artificial Bee Colony Algorithms”, International Journal on Artificial Intelligence Tools, Vol. 28, No. 1, 2019.
[44] M. Yousefikhoshbakht, “Solving the Traveling Salesman Problem: A Modified Metaheuristic Algorithm”, Complexity Journal, Vol. 20, pp. 1-13, 2021.
[45] M. Qamar, S. Muhammad, T. Shanshan, A. Farman, A. Ammar, F. Muhammad, A. Fayadh, M. Fazal, A. Asar, and N. Alnaim, “Improvement of traveling salesman problem solution using hybrid algorithm based on best-worst ant system and particle swarm optimization”, Applied Sciences, Vol. 11, No. 11, 2021.
[46] Y. Liu, Q. Liu, and Z. Tang, “A discrete chicken swarm optimization for traveling salesman problem”, In: Proc. of International Conf. on Physics, Mathematics and Statistics, pp. 1-7, 2021.
[47] J. Li and M. Le, “Application of Discrete Whale Optimization Hybrid Algorithm in Multiple Travelling Salesmen Problem”, In: Proc. of
modified scout bee for artificial bee colony Advanced Information Technology, Electronic
algorithm and its performance on optimization and Automation Control Conf., pp. 588–595,
problems”, Journal of King Saud University - 2019.
Computer and Information Sciences, Vol. 28, [48] J. Zhang, L. Hong, and Q. Liu, “An improved
No. 4, pp. 395–406, 2016. whale optimization algorithm for the traveling
[38] N. Veček, S. Liu, M. Črepinšek, and M. Mernik, salesman problem”, Symmetry Journal, Vol. 13,
“On the importance of the artificial bee colony No. 1, pp. 1–13, 2021.
[49] A. Hammouri, E. Samra, M. A. Betar, R. Khalil, Z. Alasmer, and M. Kanan, “A dragonfly algorithm for solving traveling salesman problem”, In: Proc. of International Conf. on Control System, Computing and Engineering, pp. 136–141, 2019.
[50] M. Taha, B. A. Khateeb, Y. Hassan, O. Ismail, and A. Rawash, “Solving competitive traveling salesman problem using gray wolf optimization algorithm”, Periodicals of Engineering and Natural Sciences, Vol. 8, No. 3, pp. 1331–1344, 2020.
[51] S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey Wolf Optimizer”, Advances in Engineering Software, Vol. 69, pp. 46–61, 2014.
[52] P. Pellegrini, T. Stützle, and M. Birattari, “A critical analysis of parameter adaptation in ant colony optimization”, Swarm Intelligence Journal, Vol. 6, No. 1, pp. 23–48, 2012.
[53] L. Manuel, M. Maur, and M. Oca, “Parameter Adaptation in Ant Colony Optimization”, Autonomous Search Journal, Vol. 3, No. 1, pp. 191–215, 2010.
[54] C. Fan, Q. Fu, G. Long, and Q. Xing, “Hybrid artificial bee colony algorithm with variable neighborhood search and memory mechanism”, Journal of Systems Engineering and Electronics, Vol. 29, No. 2, pp. 405–414, 2018.
[55] P. Hansen and N. Mladenović, “Variable neighborhood search”, in Handbook of Heuristics, 2018.
[56] TSPLIB, “Symmetric traveling salesman problem (TSP)”, 1995.
[57] S. Alharbi and I. Venkat, “A Genetic Algorithm Based Approach for Solving the Minimum Dominating Set of Queens Problem”, Journal of Optimization, Vol. 6, No. 2, pp. 1–9, 2017.
[58] M. H. Akhand, P. Shill, and M. Hossain, “Producer-Scrounger Method to Solve Traveling Salesman Problem”, International Journal of Intelligent Systems and Applications, Vol. 7, No. 3, pp. 29–36, 2015.
[59] E. Osaba, X. Yang, F. Diaz, P. L. Garcia, and R. Carballedo, “An improved discrete bat algorithm for symmetric and asymmetric Traveling Salesman Problems”, Engineering Applications of Artificial Intelligence, Vol. 48, pp. 59–71, 2016.
[60] A. Helmi and A. Alenany, “An enhanced Moth-flame optimization algorithm for permutation-based problems”, Evolutionary Intelligence, Vol. 13, No. 4, pp. 741–764, 2020.
[61] X. Xu, X. Cheng, Z. Yang, X. H. Yang, and W. L. Wang, “Improved particle swarm optimization for Traveling Salesman Problem”, In: Proc. of International Conf. on Intelligent Computing, pp. 857–862, 2013.