
Engineering Applications of Artificial Intelligence 102 (2021) 104263

Contents lists available at ScienceDirect

Engineering Applications of Artificial Intelligence


journal homepage: www.elsevier.com/locate/engappai

A novel hybrid gravitational search particle swarm optimization algorithm


Talha Ali Khan ∗, Sai Ho Ling
School of Biomedical Engineering, University of Technology Sydney, New South Wales, Ultimo, 2007, Australia

ARTICLE INFO

Keywords: PSO; GSA; Hybrid; DNA computation

ABSTRACT

The Particle Swarm Optimization (PSO) algorithm is a member of the swarm computational family and is widely used for solving nonlinear optimization problems. However, it tends to suffer from premature stagnation: it becomes trapped in local minima and loses exploration capability as the iterations progress. By contrast, the Gravitational Search Algorithm (GSA) is proficient at searching for the global optimum, but its drawback is its slow searching speed in the final phase. To overcome these problems, this paper presents a novel Hybrid Gravitational Search Particle Swarm Optimization Algorithm (HGSPSO). The key concept behind the proposed method is to merge the local search ability of GSA with the social-thinking capability (gbest) of PSO. To examine the effectiveness of the method in resolving the abovementioned issues of slow convergence and trapping in local minima, five standard and several modern CEC benchmark functions are used. Additionally, a DNA sequence problem is solved to confirm the proficiency of the proposed method. Different measures such as Hairpin, Continuity, H-measure, and Similarity are employed as objective functions. A hierarchical approach is used to solve this multi-objective problem: a single objective function is first obtained through a weighted-sum method, and the results are then empirically validated. The proposed algorithm demonstrates outstanding performance in terms of solution stability and convergence.

1. Introduction

During the last few years, many new and modified versions of existing algorithms have been a topic of interest in the evolutionary computation research community. These algorithms include the Particle Swarm Optimization algorithm, the Genetic Algorithm, Ant Colony Optimization, and the Gravitational Search Algorithm, to name a few. The primary focus of all these algorithms is to trace the best solution while evading traps in local minima. An algorithm used for obtaining an optimized solution must possess two main characteristics: exploration during global search and exploitation during local search.

A good heuristic optimization method must maintain a balance between exploitation and exploration. Exploitation is the convergence proficiency of the algorithm towards the best solution, whereas exploration is the competency to search the entire problem space. The primary objective of an optimization algorithm is to maintain this trade-off effectively so as to reach the global optimum. However, keeping a balance between exploration and exploitation is challenging, as one ability often overlaps the other (Eiben and Schippers, 1998). Due to this problem, current optimization algorithms are proficient at solving only a limited class of problems; until now, no single optimizer has been capable of solving all problems. Amalgamating different optimization algorithms is one approach to maintaining a balance between exploitation and exploration capability.

PSO is a nonlinear, smart computational technique that is undeterred by problem size and is effectively utilized to obtain optimal solutions compared to other conventional techniques (Sabeti et al., 2018). Therefore, it can be used proficiently on numerous optimization problems. Previously, many other optimization algorithms have been combined with PSO to minimize the probability of trapping in local minima. Lately, a unique optimization technique based on the force of gravity, known as GSA, has been presented. Therefore, this paper uses a combination of both techniques.

1.1. Standard particle swarm optimization algorithm (SPSO)

Kennedy and Eberhart (1995) proposed the PSO algorithm, which is based on the social behavior of birds. Like a flock of birds, the algorithm comprises a number of particles that form a swarm. Within the search area, each agent looks for the best solution.

At the initial stage, a swarm is formed by allocating random velocities and positions to all particles. The fitness of each agent (particle) is assessed with a given benchmark test function. The velocity and position after every iteration are calculated using (1) and (2). Accordingly, if the position found is better than

∗ Corresponding author.
E-mail address: [email protected] (T.A. Khan).

https://doi.org/10.1016/j.engappai.2021.104263
Received 11 July 2020; Received in revised form 14 March 2021; Accepted 19 April 2021
Available online xxxx
0952-1976/© 2021 Elsevier Ltd. All rights reserved.

the last best position, it is stored in the memory. V_max is set to limit the redundant mobility of the agents outside the search zone; if the velocity goes above v_max, it is set to zero. Every particle travels through the search space looking for the best solution. The position of each particle is calculated as

x_i(t + 1) = x_i(t) + v_i(t)    (1)

The movement of every particle is based upon its own knowledge and the experience of the surrounding particles. These fundamentals have identical significance and might be weighted according to the choice of the particle, so the velocity equation becomes

v_i = χ·{ w·v_i + C1·R1·(P_i − x_i) + C2·R2·(P_g − x_i) }    (2)

where

P_i = [P_(i,1), P_(i,2), P_(i,3), …, P_(i,D)],  i = 1, 2, …, N
P_g = [P_g1, P_g2, P_g3, …, P_gD]

P_g is the global best position, v_i = {v_1, v_2, …, v_D} is the velocity of the particle, R is a random number in [0, 1], D is the (integer) dimension of the search space, P_i is the local best, and x_i is the current position. Each particle is assessed by a given fitness function. The primary purpose of PSO is to iteratively decrease the cost values of the particles for the given benchmark function.

1.2. Standard gravitational search algorithm (SGSA)

Rashedi et al. proposed a novel method known as the Gravitational Search Algorithm (GSA) in 2009 (Rashedi et al., 2009). The algorithm is based on Newton's laws of motion and gravity. In GSA every agent has a mass, and a heavier agent exerts a stronger force of attraction; therefore, all agents move towards the heaviest agent as the iterations progress.

The positions of the agents are initialized as follows:

x_i = (x_i^1, x_i^2, …, x_i^D)  for i = 1, 2, …, N    (3)

where x_i^D denotes the position of the i-th agent in the D-th dimension, and D denotes the dimension of the search area.

At time t, the force of attraction on mass i from mass j is defined as:

F_ij^D(t) = G(t) · (M_pi(t) · M_aj(t)) / (R_ij(t) + ε) · (x_j^D(t) − x_i^D(t))    (4)

G(t) = G(t_0) · (t_0/t)^β,  β < 1    (5)

where M_pi is the passive gravitational mass related to agent i, R_ij(t) is the distance between agents i and j, ε is a small constant, G(t) is the gravitational constant at time t, and M_aj is the active gravitational mass of agent j.

In dimension D, the overall force experienced by agent i is a randomly weighted sum of the forces exerted by the other agents, so (4) extends to:

F_i^D(t) = Σ_{j=1, j≠i}^{N} rand_j · F_ij^D(t)    (6)

According to the law of motion, the acceleration of the agent is defined as:

a_i^D(t) = F_i^D(t) / M_ii(t)    (7)

where M_ii denotes the inertial mass.

The velocity of a mass depends on its current velocity and acceleration. The velocity and position of the agent are updated by (8) and (9):

x_i^D(t + 1) = x_i^D(t) + v_i^D(t + 1)    (8)

v_i^D(t + 1) = rand × v_i^D(t) + a_i^D(t)    (9)

where t is the present iteration. Assuming the gravitational and inertial masses are equal, the masses are calculated as follows:

M_ai = M_ii = M_pi = M_i,  i = 1, 2, 3, …, N    (10)

m_i(t) = (fit_i(t) − X^w(t)) / (X^b(t) − X^w(t))    (11)

M_i(t) = m_i(t) / Σ_{j=1}^{N} m_j(t)    (12)

where fit_i(t) is the fitness value of agent i, and X^w(t) and X^b(t) are the worst and best values of this function; for a minimization problem they are calculated as:

X^b(t) = min_{j∈{1,…,N}} fit_j(t)    (13)

X^w(t) = max_{j∈{1,…,N}} fit_j(t)    (14)

To maintain a trade-off between the exploitation and exploration of the algorithm over the iterations, the number of force-exerting agents is reduced so that only the agents with heavier masses act on one another. K_best denotes the set of best agents, i.e. those with the heavier masses. The size of K_best decreases slowly until a single agent exerts force on the remaining agents. Consequently, (6) is modified as:

F_i^D(t) = Σ_{j∈K_best, j≠i} rand_j · F_ij^D(t)    (15)

1.3. Amendments and contributions in this work

Although proficiency has been enhanced by the several existing versions of PSO and GSA, most of these variants lack a strong learning approach for the particles that perform well during the search phase. Likewise, no technique is provided for particles whose fitness has not improved over the iterations and that may suffer from stagnation. These concerns may limit the ability of the PSO and GSA algorithms to solve highly complex global optimization problems.

Since both algorithms have some points in common and a few differences, it is worth summarizing these before proposing the hybrid version:

• Both algorithms belong to the population-based category.
• PSO is inspired by the social behavior of flocks of birds and schools of fish, whereas GSA is based on the laws of physics.
• In both algorithms, optimization is achieved through the agents' mobility in the search area, though the mobility schemes are dissimilar.
• In GSA, the total force of attraction exerted by all other agents determines the direction of an agent, whereas in PSO the direction of a particle is obtained from the global best and local best positions.
• In GSA, the update process considers the force of attraction experienced by the agents, which is proportional to the fitness values, so the agents search the area around them under the effect of force. In PSO, by contrast, the update is performed without regard to the quality of the solutions: the fitness values are insignificant in the update process.
• GSA is a memoryless algorithm, and only the present position matters in the update process, whereas PSO utilizes a kind of memory to update the velocity.
• The distance between particles is not important for the update in PSO; in GSA, on the other hand, the force is inversely proportional to the distance between the agents.

In this paper, the following modifications and contributions are presented:


• A new method is implemented for velocity clamping by adding a new velocity term to the velocity update equation, so that the particles can move faster towards the optimal solution. Since the particle velocity is a presumptive parameter, it can form an uncurbed path that permits the particles to describe wide cycles in the search area. To circumvent these oscillations, upper and lower bounds on the velocity are implemented in HGSPSO. Another primary reason for amending the velocity update equation is that, during exploration, particles trapped in local minima cannot leave, which results in premature convergence. Increasing the velocity of the particles helps them reach the optimal position swiftly and ameliorates convergence.
• Adaptive time-varying acceleration constants are presented that change as the iterations progress.
• An inertial weight is introduced that balances the exploration and exploitation of HGSPSO.
• The proposed method is applied to a real-world DNA problem to demonstrate the performance of HGSPSO.

So, in this paper, a fusion of GSA and PSO is presented that uses the exploitation capability of PSO and the exploration proficiency of GSA. The efficiency of HGSPSO is then compared with the standard versions of both GSA and PSO using five standard benchmark test functions, CEC benchmark functions, and a DNA problem.

The layout of the remaining paper is as follows. Section 2 discusses the hybrid gravitational search particle swarm optimization algorithm. Section 3 shows the application of the proposed method to the benchmark functions. Section 4 covers the fundamental concepts of DNA and compares the presented technique with contemporary algorithms for DNA computation. The paper concludes with the key findings in Section 5.

2. Hybrid gravitational search particle swarm optimization algorithm (HGSPSO)

Many hybrid algorithms in the literature use the properties of two or more algorithms to improve the capability of the hybrid version. Different approaches are used to combine two algorithms, at a low or a high level, in a homogeneous or a heterogeneous way. In this paper, GSA is combined with PSO and the properties of both algorithms are utilized, i.e. both algorithms are executed at the same time and the hybrid algorithm, HGSPSO, is used for obtaining the results. The key rationale for the proposed fusion is to benefit from the exploitation capability of PSO and the exploration proficiency of GSA.

In HGSPSO, the parameters of both algorithms are used. The inertial weight and the acceleration constants are the two important parameters of PSO. In HGSPSO, the inertial weight w is neither decreased linearly nor set to a constant value; instead, it is a function of the fitness values of the global and local best. Similarly, the acceleration constants change adaptively as the iterations progress. These two parameters are defined as follows:

w_i = 1 − (P_g / P_i)    (16)

C1 = (C3 − C4) · (1 − t/T) + C4    (17)

C2 = (C5 − C6) · (1 − t/T) + C6    (18)

where C3, C4, C5, and C6 are constant values, T is the maximum iteration, and t is the current iteration. The value of C1 decreases adaptively while C2 increases over the iterations; as a result, the masses move closer to the best solution as the proposed method enters the exploitation stage. The reason for adjusting the acceleration coefficients adaptively is that there is no specific boundary in evolutionary computation for the transition between these two stages. Moreover, this adaptive scheme favors exploration in the beginning and exploitation in the later stage. The best results of HGSPSO are obtained when changing C1 from 2.5 to 0.5 and C2 from 0.5 to 2.5; P_g is the global best and P_i is the local best.

In HGSPSO, every agent is considered a potential solution. The gravitational force, gravitational constant, and resultant forces among the agents are measured through (19), (20), and (21), and (22) calculates the acceleration of the agents.

At time t, the force of attraction on agent (mass) i from agent (mass) j is defined as:

F_ij^D(t) = G(t) · (M_pi(t) · M_aj(t)) / (R_ij(t) + ε) · (x_j^D(t) − x_i^D(t))    (19)

where M_pi is the passive gravitational mass associated with agent i, ε is a small constant, R_ij(t) is the distance between agents i and j, M_aj is the active gravitational mass of agent j, and G(t) is the gravitational constant at time t.

To signify the importance of exploration in the initial phase of the algorithm and of exploitation in the later phase, G is used adaptively, so its value rises as the iterations increase. The main objective of designing G adaptively is to motivate the agents to move with bigger steps in the early stage of the algorithm, while the agents are bound to travel gradually at the end of the iterations. G(t) is modified as follows:

G(t) = G_0 × e^(−γ·(T−t)/T)    (20)

where G_0 is the initial gravitational constant and γ is the coefficient of decrease.

Assuming that in dimension D the overall force experienced by agent i is a randomly weighted sum of the forces exerted by the other agents, (19) can be modified as:

F_i^D(t) = Σ_{j=1, j≠i}^{N} rand_j · F_ij^D(t)    (21)

Similarly, the acceleration can be calculated as:

a_i^D(t) = F_i^D(t) / M_ii(t)    (22)

The gravitational masses and the inertial mass for the minimization problem are calculated in the same way as defined in (10)–(14).

As discussed earlier, GSA is a memoryless algorithm, so the best solution might not be preserved: the best mass can be attracted away by other, less fit masses. This limitation is mitigated by a novel strategy in the velocity update procedure. The new velocity update equation is defined as:

V_i(t + 1) = w × v_i(t) + C1 × a_i(t) + (C1/C2) × (p_g − x_i)    (23)

where C1 and C2 are the acceleration constants, w is the inertial weight, a_i is the acceleration, p_g is the global best (g_best), x_i is the current position, and v_i is the velocity of the agent. The third term in (23) is somewhat analogous to the social term of the PSO velocity equation. A higher value of C1 biases the search towards GSA behavior, whereas a higher value of C2 encourages the social factor of PSO in the execution of the search procedure. The adaptive technique permits GSA to examine the search area more effectively while enabling a PSO-like exploitation of the best solution.

Finally, the position of the agents is updated as follows:

x_i^D(t + 1) = w · x_i^D(t) + v_i^D(t + 1)    (24)

In the presented algorithm, all agents are initialized randomly. The gravitational force, gravitational constant, and resultant forces between

them are obtained using (19)–(21), respectively. Next, the acceleration of the particles is measured using (22). At each iteration, the best solution attained so far is updated. Then, the velocities of all agents are computed using (23), and lastly (24) updates the positions of the agents. The procedure ends when the desired stopping criterion is reached.

A few advantages of and remarks on the proposed method are highlighted below:

• The presented algorithm uses memory for storing the best solution achieved, and the quality of the agents' solutions is also considered in the updating process. The particles that are closer to potential solutions attempt to attract the other particles that are exploring the search space; while exploring the search area, the particles closer to a potential solution begin to move slowly towards the optimal point. The effectiveness of the global-best search topology helps the particles to find the global best. Since HGSPSO utilizes a memory-based global-best topology to save the potential solutions found, the best solution is never lost and is available at any point in time.
• Each particle explores the best solution and travels in its direction, so the masses are provided with a kind of social intelligence.
• The computational cost of the algorithm is very low.
• The influence of P_g is noticeable in the exploitation stage through the adaptive C1 and C2.
• The impact of P_g on the agents is independent of their masses; it is considered an exterior force not subject to gravitational rules. This efficiently stops the particles from clustering together and moving very slowly.

Due to these modifications, the presented algorithm delivers better results than the standard GSA and PSO.

3. HGSPSO for function optimization

Five benchmark test functions (Liang et al., 2013) are used to confirm the efficiency of the proposed method.

3.1. Standard benchmark functions

The strength, efficacy, and ability of different optimization approaches are demonstrated on numerous standard benchmark functions, through which solution quality, convergence, and stability are measured. To assess the proposed algorithm's proficiency, a few standard benchmark test functions are used.

(i) Sphere: a unimodal function with a single minimum. The main objective of using the sphere benchmark is to test the convergence rate of the algorithm.

f_1(x) = Σ_{i=1}^{D} x_i^2    (25)

(ii) Rastrigin: a multi-modal function comprising many local minima.

f_2(x) = Σ_{i=1}^{D} [x_i^2 − 10·cos(2πx_i) + 10]    (26)

(iii) Rosenbrock: a unimodal function with a single minimum.

f_3(x) = Σ_{i=1}^{D−1} [100·(x_{i+1} − x_i^2)^2 + (1 − x_i)^2]    (27)

(iv) Griewank: a multi-modal function with many local minima; as a result, optimizers tend to converge in the wrong direction.

f_4(x) = (1/4000)·Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1    (28)

(v) Ackley: a multi-modal function with many local minima.

f_5(x) = −20·exp(−0.2·√((1/D)·Σ_{i=1}^{D} x_i^2)) − exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)) + 20 + e    (29)

3.1.1. Comparison with the state-of-the-art algorithms

The comparison is made with the standard PSO, GSA, and the following techniques.

3.1.1.1. Improved hybrid gravitational search algorithm (IHGSA) (Han et al., 2017). In IHGSA, a new parameter known as the learning factor is introduced. It helps to maintain a balance between the exploitation and exploration of the method.

3.1.1.2. Mean Gbest Particle Swarm Optimization Gravitational Search Algorithm (MGBPSO-GSA) (Singh et al., 2017). MGBPSO-GSA is a hybrid of the Mean Gbest Particle Swarm Optimization algorithm and the Gravitational Search Algorithm.

3.1.1.3. Self-learning Particle Swarm Optimizer (SLPSO) (Li et al., 2012). Li et al. introduced four different learning approaches, from which every particle adaptively adopts one according to the local fitness landscape.

3.1.1.4. Dynamic Neighborhood Learning-based GSA (DNLGSA) (Zhang et al., 2018a). Aizhu et al. presented a dynamic neighborhood learning (DNL) strategy that replaces the K_best model to maintain a balance between exploration and exploitation in GSA.

3.1.1.5. Dynamic multi-swarm particle swarm optimization and gravitational search algorithm (GSADMSPSO) (Nagra et al., 2019). In this method, the main population of masses is divided into smaller sub-swarms, which are stabilized by a new neighborhood strategy.

3.1.1.6. Hybrid gravitational search algorithm (HGSA) (Qian et al., 2017). A local search technique (LST) is combined with the optimization procedure of GSA. Every agent is updated by GSA with probability p and by the LST with probability (1 − p). Fuzzy logic is used to obtain the probability p.

3.1.2. Results and analysis

The termination criterion is that the global optimum is found or the maximum number of iterations is reached. Experiments are run in MATLAB on a PC with a 64-bit Windows 10 Professional operating system, 16 GB RAM, and a 2.90 GHz processor. The parameter values of the compared algorithms are taken from their respective literature, as shown in Table 1. The simulation conditions used for the algorithm are:

• Dimension D = 30; G_0 = 1
• Number of agents = 30

3.1.2.1. Statistical analysis. The HGSPSO results are compared with SGSA, SPSO, different variants of modified PSO, and improved GSA algorithms. Mean and standard deviation are used as the evaluation criteria, as shown in Table 2.

(i) T-test: The t-test value is one of the main criteria for comparing two algorithms. The means "α", the degrees of freedom ξ, and the standard deviations "σ" of the compared methods are utilized to find the t-test value. A negative value of the test shows the inferior


Table 1
Parameters of the compared algorithms.

Algorithms | Parameters
IHGSA | Swarm size = 30, G_0 = 1, α = 20; learning factors S_1 and S_2 of IHGSA are in the range [0.618, 1.236].
MGBPSO-GSA | G_0 = 1; swarm size = 30; C_1 = 0.5; C_2 = 1.5; D = 30.
SLPSO | Swarm size = 30, P = 1, γ = 0.01.
DNLGSA | G_0 = 100, β = 20, k = 10, gm = 5, C_1 = 0.5 − 0.5·t^0.166/T_max^0.166, C_2 = 1.5·t^0.166/T_max^0.166, D = 30.
GSADMSPSO | Swarm size = 30, C_1 = 0.5, C_2 = 1.5, w decreased linearly from 0.9 to 0.2, G_0 = 1, α = 20, R = 5.
HGSA | G_0 = 100, α = 20, θ = (0.5)^D, γ = 0.2.
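The benchmark definitions (25)–(29) used throughout these comparisons are compact enough to restate directly; a plain-Python sketch (the length of the input vector supplies the dimension D):

```python
import math

def sphere(x):
    # f1: unimodal; global minimum 0 at x = 0
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # f2: multi-modal; global minimum 0 at x = 0
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def rosenbrock(x):
    # f3: global minimum 0 at x = (1, ..., 1)
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    # f4: multi-modal; global minimum 0 at x = 0
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1

def ackley(x):
    # f5: multi-modal; global minimum 0 at x = 0
    d = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / d))
    b = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / d)
    return a + b + 20 + math.e
```

Each function attains its global minimum of 0 (at the origin, or at x = (1, …, 1) for Rosenbrock), which makes a quick sanity check for any implementation.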

performance of the first approach relative to the second method, or vice versa. The t-value can be calculated as:

t = (α_1 − α_2) / √( σ_1²/(ξ+1) + σ_2²/(ξ+1) )    (30)

A t-test value larger than 1.645 implies enhanced performance of the first algorithm compared to the second. The t-values of the proposed method compared with SGSA, SPSO, and the other algorithms are summarized in Table 2.

(ii) Wilcoxon signed-rank test: The standard deviation and mean values of every method over the assumed function evaluations are used to assess the quality of the final solutions. Furthermore, the Wilcoxon signed-rank test at a 0.05 significance level (α) is used to analyze the dissimilarity between two methods. Table 3 illustrates the results of the Wilcoxon signed-rank test, based on the results of Table 2, where win, tie, and lose indicate that HGSPSO wins on w functions, draws on t functions, and loses on l functions against the other methods.

3.1.3. Parameters analysis of HGSPSO

To evaluate the performance of the proposed method, tests are run with different values of the design variables. The tuning of parameters such as the acceleration constants is examined over several ranges of C_1 and C_2. The values of C_1 and C_2 are chosen so as to fulfill the HGSPSO convergence condition C_1 + C_2 ≤ 4 (Bergh and Engelbrecht, 2010).

Another parameter that influences the performance of HGSPSO is the inertial weight. Shi and Eberhart (1998) used different constant values for the inertial weight and highlighted that larger values, i.e. w > 1.2, cause PSO to perform weak exploration, while smaller values, i.e. w < 0.8, cause PSO to become trapped in local optima. They concluded that a constant inertial weight should lie within the bounds [0.8, 1.2]. Therefore, in this paper, to compare the proposed adaptive inertial-weight strategy with a constant inertial weight, the constant value is set to w = 0.9. A test is conducted on some unimodal and multimodal functions using different ranges of C_1 and C_2, a constant inertial-weight value, and the adaptive inertial-weight approach of (16). Table 4 shows the study of the acceleration constants at different values for unimodal and multimodal functions, applying both a constant value of w and the adaptive approach. The best results of HGSPSO are obtained when changing C_1 from 2.5 to 0.5 and C_2 from 0.5 to 2.5 with the adaptive inertial weight, so these values are used for the rest of the work.

3.1.4. Graphical analysis

Figs. 1–5 show a comparison between the standard GSA, PSO, and HGSPSO algorithms. The HGSPSO algorithm demonstrates better convergence than the standard versions of both algorithms.

Fig. 1. Comparison between HGSPSO and the standard versions of the GSA and PSO algorithms for the Sphere function.
Fig. 2. Comparison between HGSPSO and the standard versions of the GSA and PSO algorithms for the Rastrigin function.
Fig. 3. Comparison between HGSPSO and the standard versions of the GSA and PSO algorithms for the Rosenbrock function.

3.2. Modern benchmark functions

In addition to the five standard benchmark functions, the performance of HGSPSO is also tested on some modern benchmark functions taken from CEC 2017 (Awad et al., 2017). The performance of HGSPSO is compared with the following algorithms. The functions are defined in Table 5.

3.2.1. Comparison with state-of-the-art new methods

HGSPSO is compared with the following newly developed and modified versions of existing methods available in the literature. These algorithms are briefly described below.

3.2.1.1. LSHADE-RSP. Eugene et al. proposed a modified version of the L-SHADE algorithm based on a rank-based selective pressure strategy, called LSHADE-RSP (Stanovov et al., 2018). The primary changes


Table 2
The standard deviation, 𝑡-test and mean values evaluation between HGSPSO and other approaches.
HGSPSO
Functions SPSO SGSA IHGSA MGBPSO-GSA SLPSO DNLGSA GSADMSPSO HGSA
𝒄𝟏 = [2.5, 0.5]
𝒄𝟐 = [0.5, 2.5]
Mean 6.256e−𝟓𝟐 1.821e−8 1.71e−06 2.38e−19 2.58e+03 2.78e−50 1.53e−27 1.50e−26 0.0000
Std. Dev 9.293e−𝟓𝟑 2.664e−8 7.2e−07 5.58e−20 159.203 8.09e−49 5.11e−26 5.73e−25 0.0000
𝑓1 (𝑥)
𝑡-test – 4.8335 16.793 30.1598 114.59 0.2375 0.2117 0.1851 0.000
Rank 1 6 7 5 8 2 3 4 1
Mean 1.095e−𝟖 20.392 1.94e+2 3.38e1 41.5262 0.000 0.000 8.19 0.000
Std. Dev 0.25302 5.389 1.08e+2 1.05e1 10.4930 0.000 0.000 6.11 0.000
𝑓2 (𝑥)
𝑡-test – 26.72 12.70 22.75 27.98 0.000 0.000 9.4701 0.000
Rank 1 2 6 5 4 1 1 3 1
Mean 1.341e−𝟎𝟖 3.3426e1 1.70e+2 2.28e+1 9.164e6 2.06 2.33e1 2.38e1 23.7073
Std. Dev 6.508e−𝟎𝟗 2.1794 2.12 1.29 3.46e5 12.3 3.24e1 3.56e0 0.3860
𝑓3 (𝑥)
𝑡-test – 108.39 567.01 124.97 187.28 1.184 5.085 47.272 4.3415
Rank 1 7 8 3 9 2 4 6 5
Mean 3.884e−𝟐𝟒 9.44e−2 2.4589e−3 1.55e−16 18.1456 0.0227 0.000 7.40e−3 0.0000
Std. Dev 7.128e−𝟐𝟒 1.025e−1 2.53e−3 1.02e−16 1.1171 0.144 0.000 2.31e−2 0.0000
𝑓4 (𝑥)
𝑡-test – 6.512 6.845 10.74 114.58 1.1146 0.000 2.265 0.000
Rank 1 6 4 2 7 5 1 3 1
Mean 2.339e−𝟒𝟐 4.5989e−3 1.54e−06 6.58e−10 1.4094 3.47e−14 6.8e−14 1.84e−14 8.881e−16
Std. Dev 9.426e−𝟒𝟐 3.2724e−3 6.78e−07 5.22e−11 0.2447 4.45e−14 5.46e−14 4.80e−13 2.742e−30
𝑓5 (𝑥)
𝑡-test – 9.938 16.061 89.133 40.72 5.5138 8.8064 0.2710 2.2902e15
Rank 1 8 7 6 9 4 5 3 2
Overall ranking 1 6 7 5 8 3 3 4 2
Average ranking number (1.0) (5.8) (6.4) (4.2) (7.4) (2.8) (2.8) (3.8) (2.0)

Table 3
Results of Wilcoxon signed-rank test between HGSPSO and other methods on 30-D benchmark functions.
HGSPSO v.s. SPSO SGSA IHGSA MGBPSO-GSA SLPSO DNLGSA GSADMSPSO HGSA
w (+) 5 5 5 5 4 3 5 2
t (=) 0 0 0 0 1 2 0 3
l (−) 0 0 0 0 0 0 0 0
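The win/tie/loss rows of Table 3 can be reproduced mechanically once the per-function outcomes are known. The paper decides each outcome with a Wilcoxon signed-rank test at the 0.05 level over repeated runs; the sketch below substitutes a plain comparison of mean errors (with hypothetical data), so it illustrates only the tallying, not the significance test itself.

```python
def tally_wtl(hgspso_means, other_means, tol=0.0):
    """Tally win/tie/loss counts for HGSPSO against one competitor.

    Lower mean error is better. A real comparison would mark a tie
    when a Wilcoxon signed-rank test finds no significant difference;
    here a tolerance on the means stands in for that decision.
    """
    w = t = l = 0
    for ours, theirs in zip(hgspso_means, other_means):
        if abs(ours - theirs) <= tol:
            t += 1
        elif ours < theirs:
            w += 1
        else:
            l += 1
    return w, t, l

# Hypothetical per-function mean errors on five benchmarks.
print(tally_wtl([1e-50, 0.0, 1e-8, 0.0, 1e-42],
                [1e-8, 0.0, 2e1, 1e-16, 1e-14]))  # (4, 1, 0)
```

With the paper's significance test in place of the mean comparison, the same tally over the five (or seventeen) functions yields the rows of Tables 3 and 8.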

Table 4
Analysis of acceleration constants variations and inertial weight at different values for 𝐷 = 30.
HGSPSO 𝒄𝟏 = [2.5, 0.5] 𝒄𝟏 = [2.0, 0.0] 𝒄𝟏 = [2.0, 0.25]
𝒄𝟐 = [0.5, 2.5] 𝒄𝟐 = [0.0, 2.0] 𝒄𝟐 = [0.25, 2.0]
𝑓1 (𝑥) Mean = 6.256e−52 Mean = 1.804e−10 Mean = 7.442e−28
w = using (16) SD = 9.293e−53 SD = 5.308e−10 SD = 2.175e−28
𝑓1 (𝑥) Mean = 9.386e−22 Mean = 3.418e−2 Mean = 6.463e−14
w = 1.0 SD = 3.458e−22 SD = 1.994e−2 SD = 5.335e−14
𝑓3 (𝑥) Mean = 1.341e−08 Mean = 9.04e1 Mean = 4.422e−2
w = using (16) SD = 6.508e−09 SD = 8.22e1 SD = 7.093e−2
𝑓3 (𝑥) Mean = 4.39e1 Mean = 8.20e2 Mean = 9.53e1
w = 1.0 SD = 1.802e1 SD = 2.653e2 SD = 8.339e1
𝑓5 (𝑥) Mean = 2.339e−42 Mean = 1.866e−10 Mean = 5.003e−28
w = using (16) SD = 9.426e−42 SD = 3.790e−10 SD = 4.370e−27
𝑓5 (𝑥) Mean = 8.551e−28 Mean = 2.0024e−05 Mean = 7.452e−16
w = 1.0 SD = 6.951e−28 SD = 5.849e−04 SD = 9.942e−16
𝑓13 (𝑥) Mean = 0.0000 Mean = 6.046e−19 Mean = 3.114e−32
w = using (16) SD = 0.0000 SD = 4.70e−18 SD = 1.990e−32
𝑓13 (𝑥) Mean = 5.318e−15 Mean = 9.884e−4 Mean = 7.6042e−8
w = 1.0 SD = 8.927e−15 SD = 4.994e−4 SD = 2.947e−8
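The schedules analysed in Table 4 (e.g. 𝒄𝟏 = [2.5, 0.5] falling while 𝒄𝟐 = [0.5, 2.5] rises) are consistent with linearly time-varying acceleration coefficients, shifting the search from exploration toward exploitation over the run. A minimal sketch, assuming a linear ramp over the iteration count (the paper's exact update, like its inertia weight in Eq. (16), may differ):

```python
def tvac(c_start, c_end, it, max_it):
    """Linearly vary an acceleration coefficient over the run.

    Assumed linear schedule: returns c_start at iteration 0 and
    c_end at iteration max_it, interpolating in between.
    """
    return c_start + (c_end - c_start) * it / max_it

max_it = 100
# c1 = [2.5, 0.5] decreases, c2 = [0.5, 2.5] increases, as in Table 4.
print(tvac(2.5, 0.5, 0, max_it))        # 2.5 at the first iteration
print(tvac(2.5, 0.5, max_it, max_it))   # 0.5 at the last iteration
print(tvac(0.5, 2.5, max_it, max_it))   # 2.5 at the last iteration
```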

in this variant are the modification of the mutation and crossover operators using selective pressure.

3.2.1.2. Hybrid prey–predator PSO (PP-PSO). A hybrid prey–predator PSO (PP-PSO) algorithm, based on the bio-inspired prey–predator relationship, was proposed that uses three new mechanisms: breeding, escape, and catch (Zhang et al., 2018b). Inactive particles are changed or removed so that the active particles speed up the convergence.

3.2.1.3. jSO. Brest et al. introduced a modified version of the iL-SHADE algorithm, called jSO (Brest et al., 2017), in which the weighted version of the mutation strategy is improved. This algorithm took second place in the IEEE Congress on Evolutionary Computation (CEC) conference competition.

3.2.1.4. Teaching learning based optimization with focused learning (TLBO-FL). Kommadath and Kotecha presented a modified version of the TLBO algorithm by introducing the student-focused learning approach (Kommadath and Kotecha, 2017).

3.2.1.5. Improved Cuckoo search (ICS) algorithm. Salgotra et al. proposed an improved variant of the Cuckoo search algorithm (Salgotra et al., 2018). Three modifications were made: first, the switch probability is reduced to maintain a balance between global and local search; the second and third amendments introduce new equations for global and local searching, respectively.

3.2.2. Statistical analysis

The parameter values of the compared techniques in Table 7 have been taken from the literature, as shown in Table 6.


Table 5
Modern Benchmark Functions.
Category No Functions 𝐹𝑖∗ = F 𝑖 = (x ∗ )
𝑓6 (𝑥) Shifted and Rotated Bent Cigar Function 100
Unimodal 𝑓7 (𝑥) Shifted and Rotated Sum of Different Power Function 200
𝑓8 (𝑥) Shifted and Rotated Zakharov Function 300
𝑓9 (𝑥) Shifted and Rotated Rosenbrock’s Function 400
𝑓10 (𝑥) Shifted and Rotated Rastrigin’s Function 500
𝑓11 (𝑥) Shifted and Rotated Expanded Scaffer’s F6 Function 600
Multi-modal functions
𝑓12 (𝑥) Shifted and Rotated Lunacek Bi_Rastrigin Function 700
𝑓13 (𝑥) Shifted and Rotated Levy Function 900
𝑓14 (𝑥) Shifted and Rotated Schwefel’s Function 1000
𝑓15 (𝑥) Hybrid Function 1 (N = 3) 1100
𝑓16 (𝑥) Hybrid Function 5 (N = 4) 1500
Hybrid functions
𝑓17 (𝑥) Hybrid Function 6 (N = 5) 1700
𝑓18 (𝑥) Hybrid Function 6 (N = 6) 2000
𝑓19 (𝑥) Composition Function 1 (N = 3) 2100
𝑓20 (𝑥) Composition Function 3 (N = 4) 2300
Composite functions
𝑓21 (𝑥) Composition Function 5 (N = 5) 2500
𝑓22 (𝑥) Composition Function 7 (N = 6) 2700

Table 6
Parameters of the compared algorithms.
Algorithms Parameters
LSHADE-RSP 𝜇F = 0.3, 𝜇Cr = 0.8, 𝐻 = 5, 𝑘 = 3, and 𝑁𝑚𝑎𝑥 = 75 ⋅ 𝐷^(2∕3)
PP-PSO Population size = 200, K1 = 0.5, K2 = 0.3
jSO 𝑝𝑚𝑎𝑥 = 0.25; 𝑝𝑚𝑖𝑛 = 𝑝𝑚𝑎𝑥/2, H = 5, 𝑀F = 0.3
TLBO-FL Population size = 100, 𝑇𝐹 = 1
ICS Initial population = 50, 𝑝𝑚𝑎𝑥 = 0.25; 𝑝𝑚𝑖𝑛 = 0.8

(i) T-Test: The 𝑡-test criteria are the same as defined in Section 3.1.

(ii) Wilcoxon Signed-Rank Test: The mean and standard deviation values of every method within the given function evaluations are used to assess the final solution's quality. Furthermore, the Wilcoxon signed-rank test at a 0.05 significance level is used to analyze the dissimilarity between two methods. Table 8 shows the results achieved by the Wilcoxon signed-rank test, based on the results of Table 7, where win, tie, and lose indicate that HGSPSO succeeds on w functions, draws on t functions, and fails on l functions against the other methods.

The HGSPSO results on the CEC 2017 benchmark functions are compared with those of its competitors. The CEC functions are more complex and difficult than the standard benchmark functions. These results are summarized in terms of mean, standard deviation, and 𝑡-test value in Table 7.

Simulations are performed to compare HGSPSO with the other algorithms at 𝐷 = 30 dimensions. Table 7 shows that HGSPSO is the most successful method for solving the CEC functions in terms of the mean and standard deviation values. The HGSPSO algorithm is also the most effective in searching for better global optimal solutions, particularly in finding the theoretical global optimum at four functions (𝑓06, 𝑓07, 𝑓08, and 𝑓13). HGSPSO achieved superior accuracy in all functions except 𝑓16. The LSHADE-RSP algorithm stands in second position, followed by jSO, the third most effective algorithm. The 𝑡-test value of the HGSPSO indicates, at the 95% confidence level, superior performance compared with the rest of the methods.

All these algorithms are compared with the HGSPSO in terms of 𝑡-test value, standard deviation, and mean. It is clear from Table 7 that the HGSPSO gives improved performance on these new benchmark test functions compared with the other algorithms.

Fig. 4. Comparison between HGSPSO and other standard versions of the GSA and PSO algorithms for Griewank's function.

Fig. 5. Comparison between HGSPSO and other standard versions of the GSA and PSO algorithms for the Ackley function.

4. DNA problem

In 1994, Adleman first introduced DNA computation. Deoxyribonucleic acid (DNA) is a nucleic acid that comprises the genetic instructions used in the growth and functioning of all living creatures and viruses (Xu et al., 2008). Computation with DNA depends mainly on the biochemical reactions of the DNA molecules, which may produce inappropriate results due to poor quality of the DNA sequences (Ezziane, 2005). DNA sequences used in DNA computation must fulfill different constraints that emphasize designs limiting the probability of unwanted reactions (Guangzhao and Xiaoguang, 2010). Some of the previous work related to DNA computation done by various authors is briefly summarized in Table 9.


Table 7
The standard deviation, mean and 𝑡-test value evaluation between HGSPSO and other algorithms for Modern Benchmark Functions for D = 30.
HGSPSO
Functions LSHADE-RSP PP-PSO jSO TLBO-FL ICS
𝒄𝟏 = 2.5–0.5
𝒄𝟐 = 0.5–2.5
Mean 0.0000 0.0000 167 0.0000 3.5e+3 1.00e+10
Std. Dev 0.0000 0.0000 1.40e+02 0.0000 3.6e+3 0.0000
𝑓6 (𝑥)
𝑡-test - – 8.4348 – 6.8746 –
Rank 1 1 2 1 3 4
Mean 0.0000 0.0000 4.88e+07 0.0000 8.5e+16 1.00e+10
Std. Dev 0.0000 0.0000 9.18e+07 0.0000 5.8e+17 0.0000
𝑓7 (𝑥)
𝑡-test - – 3.7589 – 1.0363 –
Rank 1 1 2 1 4 3
Mean 0.0000 0.0000 300 0.0000 3.0e+03 1.39e+02
Std. Dev 0.0000 0.0000 1.173e−03 0.0000 1.1e+03 9.30e+01
𝑓8 (𝑥)
𝑡-test - – 1.8085e+06 – 19.2847 10.5686
Rank 1 1 3 1 4 2
Mean 1.185e+𝟏 5.8779e+01 411 5.8670e+01 9.0e+01 1.43e+01
Std. Dev 2.868e−𝟐 1.0784 1.19e+01 7.7797e−01 2.4e+01 2.18e+01
𝑓9 (𝑥)
𝑡-test – 307.6046 237.1772 425.3022 230.2352 7.9462
Rank 1 4 6 3 5 2
Mean 7.3692 7.5953 605 8.5568 4.0e+01 1.10e+02
Std. Dev 2.4225 2.0252 2.35e+01 2.0980 2.1e+01 2.51e+01
𝑓10 (𝑥)
𝑡-test – 0.5063 178.8771 2.6204 10.9150 28.7790
Rank 1 2 6 3 4 5
Mean 4.0892e−𝟏𝟎 2.6838e−09 611 6.0385e−09 4.9e−01 1.96e+01
Std. Dev 9.5029e−𝟏𝟎 1.8977e−08 3.56 2.7122e−08 4.2e−01 7.90
𝑓11 (𝑥)
𝑡-test – 0.8466 1.2136e+03 1.4668 8.2496 17.5434
Rank 1 2 6 3 4 5
Mean 2.2803e+𝟎𝟏 3.7959e+01 856 3.8927e+1 1.4e+02 1.15e+02
Std. Dev 1.4394 1.7138 3.23e+01 1.4594 4.7e+01 2.06e+01
𝑓12 (𝑥)
𝑡-test – 47.8845 182.2214 55.6217 17.6238 31.5702
Rank 1 2 6 3 5 4
Mean 0.0000 0.0000 1404 0.0000 3.4e+01 1.95e+03
Std. Dev 0.0000 0.0000 2.32e+02 0.0000 2.7e+01 1.04e+03
𝑓13 (𝑥)
𝑡-test – – 42.7922 – 8.9043 13.2583
Rank 1 1 3 1 2 4
Mean 1.1791e+𝟎𝟑 1.4688e+03 4448 1.5277e+03 6.7e+03 3.31e+03
Std. Dev 2.6792e+𝟎𝟐 3.0108e+02 5.66e+02 2.7716e+02 2.8e+02 2.68e+02
𝑓14 (𝑥)
𝑡-test – 5.0695 36.9120 6.3945 100.7366 39.7615
Rank 1 2 5 3 6 4
Mean 3.0347 3.0526 1249 3.0375 8.2e+01 4.20e+01
Std. Dev 4.4853 8.5462 5.16e+01 2.6464 4.1e+01 1.68e+01
𝑓15 (𝑥)
𝑡-test – 0.0131 170.1009 0.0038 13.5380 15.8454
Rank 1 3 6 2 5 4
Mean 1.0621 7.2992e−𝟎𝟏 1679 1.0879 2.2e+04 3.41e+01
Std. Dev 9.493e−01 5.3154e−𝟎𝟏 1.03e+02 6.9133e−01 2.3e+04 8.39
𝑓16 (𝑥)
𝑡-test – −2.1589 115.1875 0.1553 6.7633 27.6677
Rank 2 1 5 3 6 4
Mean 2.1502e+𝟎𝟏 3.1535e+01 1961 3.2925e+01 1.4e+02 1.93e+02
Std. Dev 6.5818 6.8728 1.37e+02 8.0767 6.6e+01 7.33e+01
𝑓17 (𝑥)
𝑡-test – 7.4552 99.9892 7.7525 12.6329 16.4777
Rank 1 2 6 3 4 5

(continued on next page)

4.1. Elementary concepts and descriptions

In this section, some elementary representations and relations are explained. Λ = {A, C, G, T, −} is the alphabet of the four nucleotides and a gap; each nucleotide is denoted by a single letter and the gap by ''−''. Similarly, the alphabet without the gap is denoted as Λ_nb = {A, C, G, T}. The set of all DNA sequences over these alphabets is denoted by Λ*. Suppose 𝑎, 𝑏 ∈ Λ and 𝑥, 𝑦 ∈ Λ*. The DNA sequence length is represented as |𝑥|, where 𝑥_𝑖 (1 ≤ 𝑖 ≤ |𝑥|) denotes the 𝑖th nucleotide from the 5′-end of sequence 𝑥. A set of 𝑛 sequences of length 𝑙 is denoted by Σ, and its 𝑖th member by Σ_𝑖. The complement of base 𝑎 is denoted by ā. A few basic conditions and notations are defined as follows (Xiao et al., 2012):

𝑇(𝑖, 𝑗) = 𝑖 if 𝑖 > 𝑗, and 0 otherwise (31)

𝑒𝑞(𝑎, 𝑏) = 1 if 𝑎 = 𝑏, and 0 otherwise (32)

𝑏𝑝(𝑎, 𝑏) = 1 if 𝑎 = b̄, and 0 otherwise (33)

For a given sequence 𝑥 ∈ Λ*, the number of nonblank nucleotides is defined as:

𝑙𝑒𝑛𝑔𝑡ℎ_nb(𝑥) = ∑_{𝑖=1}^{|𝑥|} 𝑛𝑏(𝑥_𝑖) (34)

where 𝑛𝑏(𝑎) = 1 if 𝑎 ∈ Λ_nb, and 0 otherwise (35)

A shift of the sequence 𝑥 by 𝑖 bases is represented as

𝑠ℎ𝑖𝑓𝑡(𝑥, 𝑖) = (−)^𝑖 𝑥_1 … 𝑥_{𝑙−𝑖} for 𝑖 ≥ 0, and 𝑥_{|𝑖|+1} … 𝑥_𝑙 (−)^{|𝑖|} for 𝑖 < 0 (36)


Table 7 (continued).
HGSPSO
Functions LSHADE-RSP PP-PSO jSO TLBO-FL ICS
𝒄𝟏 = 2.5–0.5
𝒄𝟐 = 0.5–2.5
Mean 2.3701e+𝟎𝟏 2.8843e+01 2269 2.9368e+01 2.2e+02 2.83e+02
Std. Dev 3.9628 6.8284 1.03e+02 5.8548 1.2e+02 8.51e+01
𝑓18 (𝑥)
𝑡-test – 4.6054 154.0284 5.6680 11.5607 21.5222
Rank 1 2 6 3 4 5
Mean 2.02639e+𝟎𝟐 2.0758e+02 2388 2.0929e+02 2.3e+02 2.50e+02
Std. Dev 1.3727 1.6596 2.70e+01 1.9554 1.2e+01 9.93e+01
𝑓19 (𝑥)
𝑡-test – 16.2221 571.5890 19.6849 16.0182 3.3722
Rank 1 2 6 3 4 5
Mean 3.1978e+𝟎𝟐 3.5038e+02 2787 3.5075e+02 4.0e+02 4.34e+02
Std. Dev 3.1269 3.3514 3.27e+01 3.2992 1.6e+01 5.30e+01
𝑓20 (𝑥)
𝑡-test – 47.2063 531.0905 48.1783 34.7943 15.2124
Rank 1 2 6 3 4 5
Mean 3.8171e+𝟎𝟐 3.8670e+02 2878 3.8670e+02 4.0e+02 3.83e+02
Std. Dev 4.6180e−𝟎𝟒 5.5921e−03 4.07 7.6811e−03 1.8e+01 8.20e−01
𝑓21 (𝑥)
𝑡-test – 6.2883e+03 4.3370e+03 4.5854e+03 7.1850 11.1240
Rank 1 3 6 4 5 2
Mean 4.51964e+𝟎𝟐 4.9504e+02 3187 4.9739e+02 5.3e+02 5.12e+02
Std. Dev 6.1792 6.9738 2.38e+01 7.0017 2.1e+01 9.32
𝑓22 (𝑥)
𝑡-test – 32.6903 786.5130 34.3966 25.2075 37.9633
Rank 1 2 6 3 5 4
Overall ranking (Average ranking number) 1 (1.05) 2 (1.82) 6 (4.76) 3 (2.52) 5 (4.35) 4 (3.94)

All Results Are Averaged (Rank: 1—Best, 6—Worst).

Table 8
Results of Wilcoxon signed-rank test between HGSPSO and other methods on 30-D benchmark functions.
HGSPSO v.s. LSHADE-RSP PP-PSO jSO TLBO-FL ICS
w (+) 16 17 17 17 17
t (=) 0 0 0 0 0
l (−) 1 0 0 0 0

(i) Continuity (𝑓1):
Continuity measures the recurrence of the same base in a DNA sequence. The DNA sequence might form an unexpected structure if the same base occurs repetitively.

𝑓_continuity(Σ) = ∑_{𝑖=1}^{𝑛} 𝐶𝑜𝑛𝑡𝑖𝑛𝑢𝑖𝑡𝑦(Σ_𝑖) (37)

𝐶𝑜𝑛𝑡𝑖𝑛𝑢𝑖𝑡𝑦(𝑥) = max_{1≤𝑖≤𝑙} ( ∑_{𝑎∈Λ_nb} 𝑐(𝑎, 𝑖) ) (38)

𝑐(𝑎, 𝑖) = 𝑛 if there exists 𝑛 such that 𝑒𝑞(𝑎_𝑖, 𝑎_{𝑖+𝑗}) = 1 for 1 ≤ 𝑗 < 𝑛 and 𝑒𝑞(𝑎_𝑖, 𝑎_{𝑖+𝑛}) = 0; and 0 otherwise (39)

(ii) Hairpin (𝑓2):
Hairpin is used to calculate the likelihood of forming a secondary structure. It is defined as:

𝑓_Hairpin(Σ) = ∑_{𝑖=1}^{𝑛} 𝐻𝑎𝑖𝑟𝑝𝑖𝑛(Σ_𝑖) (40)

𝐻𝑎𝑖𝑟𝑝𝑖𝑛(𝑥) = ∑_{𝑟=𝑅_min}^{𝑙−2·𝑝_min} ∑_{𝑝=𝑃_min}^{𝑙−𝑝_min−𝑟} 𝑇( ∑_{𝑖=1}^{𝑝𝑖𝑛𝑙𝑒𝑛(𝑝,𝑟)} 𝑏𝑝(𝑥_{𝑝+1−𝑖}, 𝑥_{𝑝+𝑟+𝑖}), 𝑝𝑖𝑛𝑙𝑒𝑛(𝑝, 𝑟)/2 ) (41)

𝑝𝑖𝑛𝑙𝑒𝑛(𝑝, 𝑟, 𝑖) = min(𝑝 + 𝑖, 𝑙 − 𝑟 − 𝑖 − 𝑝), with 𝑝 = 𝑟 = 6 (Krishna Veni et al., 2011).

(iii) H-measure (𝑓3):
H-measure is used to prevent cross-hybridization between two sequences. It is represented as

𝑓_H-measure(Σ) = ∑_{𝑖=1}^{𝑛} ∑_{𝑗=1}^{𝑛} 𝐻-𝑚𝑒𝑎𝑠𝑢𝑟𝑒(Σ_𝑖, Σ_𝑗) (42)

where Σ_𝑖 and Σ_𝑗 are nonparallel to each other. It comprises two parts: the first term, known as the penalty term for the continuous complementary region, and the second, the overall complementarity. It is illustrated as:

𝐻-𝑚𝑒𝑎𝑠𝑢𝑟𝑒(𝑥, 𝑦) = max_{𝑔,𝑖} [ ℎ_dis(𝑥, 𝑠ℎ𝑖𝑓𝑡(𝑦(−)^𝑔 𝑦, 𝑖)) + ℎ_con(𝑥, 𝑠ℎ𝑖𝑓𝑡(𝑦(−)^𝑔 𝑦, 𝑖)) ] (43)

where 0 ≤ 𝑔 ≤ 𝑙 − 3 and |𝑖| ≤ 𝑙 − 1;

ℎ_con(𝑥, 𝑦) = ∑_{𝑖=1}^{𝑙} 𝑇(𝑐𝑏𝑝(𝑥, 𝑦, 𝑖), 𝐻_con) (44)

ℎ_dis(𝑥, 𝑦) = 𝑇( ∑_{𝑖=1}^{𝑙} 𝑏𝑝(𝑥_𝑖, 𝑦_𝑖), 𝐻_dis × 𝑙𝑒𝑛𝑔𝑡ℎ_nb(𝑦) ) (45)

𝑐𝑏𝑝(𝑥, 𝑦, 𝑖) = 𝑐 if there exists 𝑐 such that 𝑏𝑝(𝑥_𝑖, 𝑦_𝑖) = 0, 𝑏𝑝(𝑥_{𝑖+𝑗}, 𝑦_{𝑖+𝑗}) = 1 for 1 ≤ 𝑗 ≤ 𝑐, and 𝑏𝑝(𝑥_{𝑖+𝑐+1}, 𝑦_{𝑖+𝑐+1}) = 0; and 0 otherwise (46)

(iv) Similarity (𝑓4):
The main objective of calculating the similarity between two DNA sequences is to keep each sequence as unique as possible.

𝑓_similarity(Σ) = ∑_{𝑖=1}^{𝑛} ∑_{𝑗=1}^{𝑛} 𝑆𝑖𝑚𝑖𝑙𝑎𝑟𝑖𝑡𝑦(Σ_𝑖, Σ_𝑗) (47)

𝑆𝑖𝑚𝑖𝑙𝑎𝑟𝑖𝑡𝑦(𝑥, 𝑦) = max_{𝑔,𝑖} [ 𝑠_dis(𝑥, 𝑠ℎ𝑖𝑓𝑡(𝑦(−)^𝑔 𝑦, 𝑖)) + 𝑠_con(𝑥, 𝑠ℎ𝑖𝑓𝑡(𝑦(−)^𝑔 𝑦, 𝑖)) ] (48)

where 0 ≤ 𝑔 ≤ 𝑙 − 3 and |𝑖| ≤ 𝑙 − 1;

𝑠_con(𝑥, 𝑦) = ∑_{𝑖=1}^{𝑙} 𝑇(𝑐𝑒𝑞(𝑥, 𝑦, 𝑖), 𝑆_con) (49)

𝑠_dis(𝑥, 𝑦) = 𝑇( ∑_{𝑖=1}^{𝑙} 𝑒𝑞(𝑥_𝑖, 𝑦_𝑖), 𝑆_dis × 𝑙𝑒𝑛𝑔𝑡ℎ_nb(𝑦) ) (50)

𝑐𝑒𝑞(𝑥, 𝑦, 𝑖) = 𝑐 if there exists 𝑐 such that 𝑒𝑞(𝑥_𝑖, 𝑦_𝑖) = 0, 𝑒𝑞(𝑥_{𝑖+𝑗}, 𝑦_{𝑖+𝑗}) = 1 for 1 ≤ 𝑗 ≤ 𝑐, and 𝑒𝑞(𝑥_{𝑖+𝑐+1}, 𝑦_{𝑖+𝑐+1}) = 0; and 0 otherwise (51)
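As an illustration of Eqs. (37)–(39), the core of the continuity measure is the length of the longest run of one repeated base. A simplified sketch follows; note that the cited papers additionally threshold and weight runs before summing, so the scores reported in Table 10 are derived from, rather than equal to, this raw run length:

```python
def run_length(x, i):
    """c(a, i) of Eq. (39): length of the run of bases identical to x[i]
    starting at position i (0-based here, 1-based in the paper)."""
    n = 1
    while i + n < len(x) and x[i + n] == x[i]:
        n += 1
    return n

def continuity(x):
    """Continuity(x) of Eq. (38), simplified: the longest run of a
    repeated base anywhere in the sequence."""
    return max(run_length(x, i) for i in range(len(x)))

print(continuity('AAAGGCTGCTAGTCGATACC'))  # longest run 'AAA' -> 3
print(continuity('ACGT'))                  # no repeats -> 1
```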


Table 9
Previous work in the literature on various objective functions and constraints.
Objective functions used in the literature previously
Objective function
Tanaka et al. Deaton et al. Hartemink Kurniawan Ibrahim et al. Feldkamp Zhou et al. Arita and Kobayashi
(2002) (2002) et al. (1999) et al. (2008) (2009) et al. (2002) (2007) (2002)
Similarity (with Shift) X – – X X – X X
Similarity (without Shift) X – – X X X
H-measure (with Shift) X – – X X – X X
H-measure (without Shift) X – – X X – X X
Hairpin X – – X X – X –
GC-content X – – X X X X X
Free energy – X X – – – – –
Melting temperature X – X X X – X X
The occurrence of specific – – – – – X – X
subsequences
Constraints on DNA bases X – – – – – – –
Secondary structure X – X – – X – –
Bio Lab Methods – X – – – – – –
Continuity X – – X X – X –
Reverse complement Hamming – – – – – X – –
3′ end H measure X – – – – X – –

Fig. 6. Comparison of the H-measure objective function results between the mean values of HGSPSO and its competitors.

Fig. 7. Comparison of the hairpin objective function results between the mean values of HGSPSO and its competitors.

𝑆_con is an integer between one and 𝑙, and 𝑆_dis is a real value between zero and one. 𝑇 = 2; 𝑆_dis = 𝐻_dis = 0.17 and 𝑆_con = 𝐻_con = 6 (Krishna Veni et al., 2011).

(v) GC-content:
GC-content is the percentage of G and C bases in a DNA sequence. For instance, the 10-mer DNA sequence GGTCCATCCA has two G's and four C's. Eq. (52) calculates GC-content as follows:

GC-content = (𝑋_G + 𝑋_C)∕(𝑋_A + 𝑋_T + 𝑋_G + 𝑋_C) (52)

The GC-content of this DNA sequence is 60%.
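Eq. (52) is straightforward to compute; a minimal sketch reproducing the worked example:

```python
def gc_content(seq):
    """Eq. (52): fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

print(gc_content('GGTCCATCCA'))  # 0.6, i.e. the 60% quoted in the text
```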


(vi) Melting temperature (Tm):
For experimental purposes, the melting temperature 𝑇𝑚 is a significant constraint to be controlled. At this temperature, half of the double-stranded DNA begins to break into its single-stranded form (Zhang et al., 2009).

Fig. 8. Comparison of the similarity objective function results between the mean values of HGSPSO and its competitors.

4.2. Results and analysis

Minimizing the objective functions is the key motive of this research. The fitness value of each objective function is calculated by applying the weighted-function technique, which transforms the multi-objective function into a single-objective function:

Optimize 𝑓_𝑖(𝑥), 𝑖 ∈ {ℎ𝑚𝑒𝑎𝑠𝑢𝑟𝑒, 𝑠𝑖𝑚𝑖𝑙𝑎𝑟𝑖𝑡𝑦, ℎ𝑎𝑖𝑟𝑝𝑖𝑛, 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑖𝑡𝑦}
subject to 𝑔_𝑗(𝑥) = 0, 𝑗 ∈ {𝑇𝑚, 𝐺𝐶𝑐𝑜𝑛𝑡𝑒𝑛𝑡}

The fitness value is calculated as

𝐹𝑖𝑡𝑛𝑒𝑠𝑠 𝑣𝑎𝑙𝑢𝑒 = ∑_𝑖 𝑊_𝑖 𝑓_𝑖, with weights 𝑊_𝑖 = 1

In numeric form, A, C, G, and T are denoted as 0, 1, 2, and 3 in this paper. A DNA sequence entails various bases, and each sequence represents one dimension. For a sequence of length 𝑙, the boundary of the search area is (4^𝑙 − 1). For instance, for a 20-mer sequence, the search-area length is (4^20 − 1), i.e., from 0 to 1.0995116e+12 in decimal; in the same way, the boundary for a 3-mer sequence is (4^3 − 1), i.e., from 0 to 63, with sequences ranging from AAA to TTT. A set of three DNA sequences can be obtained by using three dimensions; therefore, each particle in the search area comprises three DNA sequences. Since continuous HGSPSO is used


Table 10
Comparison of the sequences produced by HGSPSO and other algorithms in terms of objective functions.
Sequences Continuity Hairpin H-measure Similarity GC content (%)
DNA sequences produced by HGSPSO
CCTAGTTCGTCCGTATTGTC 0 0 47 37 50
TTTATCGGTAGCGTGGTCTT 0 0 45 39 50
GGGTTCAATGCAGTGATCTG 0 0 44 35 50
ACTTGACGCTAGATGCCCTA 0 0 37 38 50
AAAGGCTGCTAGTCGATACC 9 3 41 34 50
CTAGGTTGCGGTAATCCATG 0 0 39 33 50
GATCATGTGATCGCAGTCCT 0 0 35 39 50
Mean 0 0 41.14285714 30.42857143
Standard deviation 0 0 4.413183712 1.511857892
DNA sequences produced by BPSON (Liu et al., 2019)
CCTTAGAACGTCTCGAGCTT 0 0 88 78 50
CCGTATACATGCTCGGTCTT 0 0 86 81 50
ACTCGCGACTACCAACATCT 0 0 98 75 50
CGTATCTGTGCTTCTCGTCA 0 0 92 84 50
ATTGCTGGATGAGGTGCCTA 0 0 94 80 50
TTCACTACTGAGGATCCGCA 0 0 98 76 50
CCTTAGAACGTCTCGAGCTT 0 0 88 78 50
Mean 0 0 92 78.85714286
Standard deviation 0 0 4.898979486 3.078342164
DNA sequences produced by INSGA-II (Wang et al., 2018)
CGTCTAGGCCGGATCAATAT 0 3 91 81 50
GGTTGTCCTGAGTGTTGTGT 0 0 84 80 50
AGAGTCAGCAGCGTAGAGAT 0 0 93 82 50
TACAGTCGGTTCGGTTATGG 0 0 90 73 50
GCGGAAGTAATCGGAAGTGA 0 0 88 81 50
ACGCCACAGTATATCATCGC 0 3 82 78 50
AGTCATTCTCCTGGCATTGC 0 6 90 76 50
Mean 0 1.714285714 88.28571429 78.71428571
Standard deviation 0 2.360387377 3.946064948 3.251373336
DNA sequences produced by DMEA (Xiao et al., 2012)
GCCGGAGCCTTCTTGATAAT 0 0 68 53 50
AATCCTGCTTGTCCTCCTAC 0 0 63 50 50
TGAGCTCTCTGTTCCAACGA 0 0 64 52 50
ATGTAACACGCGGCCACTAA 0 0 63 50 50
ACTCGGATTGTGTTGAACGC 0 0 71 51 50
CGTTGTTGGCACCTACGTTA 0 0 68 54 50
ATCCAGACTACCAAGGCCAA 0 0 61 48 50
Mean 0 0 65.42857143 51.14285714
Standard deviation 0 0 3.598941643 2.035400978
DNA sequences produced by NACST/Seq (Soo-Yong et al., 2005)
CTCTTCATCCACCTCTTCTC 0 0 43 58 50
CTCTCATCTCTCCGTTCTTC 0 0 37 58 50
TATCCTGTGGTGTCCTTCCT 0 0 45 57 50
ATTCTGTTCCGTTGCGTGTC 0 0 52 56 50
TCTCTTACGTTGGTTGGCTG 0 0 51 53 50
GTATTCCAAGCGTCCGTGTT 0 0 55 49 50
AAACCTCCACCAACACACCA 0 0 55 43 50
Mean 0 0 48.28571429 53.42857143
Standard deviation 0 0 6.799859943 5.623081683
DNA sequences produced by IGA (Wang et al., 2008)
GTCGAAACCTGAGGTACAGA 9 3 72 58 50
GTGACTGTATGCACTCGAGA 0 3 74 57 50
ACGTAGCTCGAATCAGCACT 0 3 77 55 50
CGCTGATCTCAGTGCGTATA 0 0 74 55 50
GTCAGCCAATACGAGAGCTA 0 3 71 56 50
GCATGTCTAACTCAGTCGTC 0 3 68 55 50
TACTGACTCGAGTAGGACTC 0 0 78 60 50
Mean 1.28571428 2.142857143 73.42857143 56.57142857
Standard deviation 3.40168025 1.463850109 3.457221565 1.902379462

as an alternative to the binary domain, before converting the decimal numbers into binary they are rounded off to the nearest integer, then changed into binary, and then transformed into DNA sequences. For example, 403.8₁₀ is rounded off to 404₁₀, which is changed into binary form as 110010100₂, and whose DNA representation is ''CGCCA''. Table 10 highlights the sequences produced and the fitness values of the four objective functions.

The comparison results of the DNA codes produced by the HGSPSO and the other algorithms are presented in Table 10. The DNA codes produced by the HGSPSO give better results than those of the other algorithms. The sequences generated by the proposed algorithm have lower H-measure and similarity values. The obtained results indicate a greater possibility of hybridizing with accurate complementary sequences through the HGSPSO. Additionally, the zero values of continuity and hairpin point towards a low probability of secondary structure.

Figs. 6–8 illustrate the pictorial representation of the mean values of all the DNA codes presented in Table 10 for the three objective
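The rounding-and-encoding step described above can be sketched as follows, using the A=0, C=1, G=2, T=3 mapping: each base-4 digit of the rounded particle position becomes one nucleotide, which reproduces the 403.8 → ''CGCCA'' example.

```python
DIGIT_TO_BASE = {0: 'A', 1: 'C', 2: 'G', 3: 'T'}  # encoding used in the paper

def decimal_to_dna(value):
    """Round a decimal particle position and map it to a DNA string.

    The rounded integer is written in base 4, and each digit is
    translated to a nucleotide via the A=0, C=1, G=2, T=3 encoding.
    """
    n = round(value)
    if n == 0:
        return 'A'
    digits = []
    while n > 0:
        digits.append(n % 4)
        n //= 4
    return ''.join(DIGIT_TO_BASE[d] for d in reversed(digits))

print(decimal_to_dna(403.8))  # 404 = 12110 in base 4 -> 'CGCCA'
print(decimal_to_dna(63))     # upper bound of the 3-mer range -> 'TTT'
```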


Table 11
Results of the HGSPSO and its competitors in terms of mean and standard deviation.
Functions HGSPSO BPSON INSGA II DMEA NACST IGA
𝜇 0 0 0 0 0 1.285
𝑓1
𝜎 0 0 0 0 0 3.401
𝜇 0 0 1.71428 0 0 2.142
𝑓2
𝜎 0 0 2.36038 0 0 1.463
𝜇 41.1428 92 88.2857 65.428 48.285 73.42
𝑓3
𝜎 4.41318 4.8989 3.94606 3.5989 6.7998 3.457
𝜇 30.4285 78.857 78.7142 51.1428 53.428 56.57
𝑓4
𝜎 1.51185 3.0783 3.25137 2.03540 5.6230 1.902
Fitness value 71.85 170.8 168.7 116.57 101.7 133.4
(weighted sum of all
functions)

Where ‘‘𝜇’’ represents mean and ‘‘𝜎’’ represents standard deviation.
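With unit weights, the scalarized fitness of Section 4.2 reduces to a plain sum of the four objective means, which is consistent with most of the ''Fitness value'' row of Table 11 (e.g. the DMEA column). A sketch:

```python
def fitness(values, weights=None):
    """Weighted-sum scalarisation of the four objective means
    (continuity, hairpin, H-measure, similarity)."""
    if weights is None:
        weights = [1.0] * len(values)  # unit weights, as used in Table 11
    return sum(w * f for w, f in zip(weights, values))

# DMEA row of Table 11: f1 = 0, f2 = 0, f3 = 65.428, f4 = 51.1428
print(round(fitness([0, 0, 65.428, 51.1428]), 2))  # 116.57
```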

functions, as the results for the continuity function are similar for all the methods.

Table 11 shows a comparison of the results summarized in terms of the mean and standard deviation values of all the objective functions. It is clear from the table that the weighted sum of the HGSPSO over all the functions is lower than that of the other compared algorithms, which demonstrates the better performance of the proposed method.

5. Conclusions

In this paper, a hybrid algorithm is proposed through the fusion of the strong attributes of GSA and PSO, with further modification of the velocity and position update strategies. The primary concept behind this hybridization is to use the strong capacity of PSO in exploitation and the exploration capabilities of GSA. To validate the performance, five standard benchmark functions and several modern benchmark functions are used. The results highlight that the proposed method gives improved results for all the benchmark functions. It is also noticeable that the convergence of the HGSPSO is faster than that of PSO and GSA. Additionally, a DNA sequence design problem is solved: HGSPSO outperformed the other algorithms in terms of the objective function results, showing that the HGSPSO can generate DNA sequences with a lower probability of cross-hybridization and greater precision.

CRediT authorship contribution statement

Talha Ali Khan: Conceptualization, Methodology, Data curation, Formal analysis, Software, Writing - original draft, Validation, Visualization, Investigation. Sai Ho Ling: Supervision, Project administration, Writing - review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

There is no data associated with this study.

References

Arita, M., Kobayashi, S., 2002. DNA sequence design using templates. New Gener. Comput. 20 (3), 263–277. http://dx.doi.org/10.1007/bf03037360.
Awad, N.H., Ali, M.Z., Suganthan, P.N., Liang, J.J., Qu, B.Y., 2017. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. IEEE Congress on Evolutionary Computation, CEC 2017, Donostia - San Sebastián, Spain.
Bergh, F., Engelbrecht, A., 2010. A convergence proof for the particle swarm optimiser. Fund. Inform. 105, 341–374. http://dx.doi.org/10.3233/FI-2010-370.
Brest, J., Maučec, M.S., Bošković, B., 2017. Single objective real-parameter optimization: Algorithm jSO. In: 2017 IEEE Congress on Evolutionary Computation, CEC. pp. 1311–1318. http://dx.doi.org/10.1109/CEC.2017.7969456.
Deaton, R., Chen, J., Bi, H., Rose, J., 2002. A Software Tool for Generating Non-Crosshybridizing Libraries of DNA Oligonucleotides. pp. 252–261.
Eiben, A.E., Schippers, C.A., 1998. On evolutionary exploration and exploitation. Fund. Inform. 35 (1–4), 35–50.
Ezziane, Z., 2005. DNA computing: Applications and challenges. Nanotechnology 17 (2), R27–R39. http://dx.doi.org/10.1088/0957-4484/17/2/r01.
Feldkamp, U., Saghafi, S., Banzhaf, W., Rauhe, H., 2002. DNASequenceGenerator: A program for the construction of DNA sequences. In: DNA Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 23–32.
Guangzhao, C., Xiaoguang, L., 2010. The optimization of DNA encodings based on modified PSO/GA algorithm. In: 2010 International Conference on Computer Design and Applications, vol. 1, pp. V1-609–V1-614. http://dx.doi.org/10.1109/ICCDA.2010.5540892.
Han, Y., Li, M., Liu, J., 2017. Improving hybrid gravitational search algorithm for adaptive adjustment of parameters. In: 2017 13th International Conference on Computational Intelligence and Security, CIS. http://dx.doi.org/10.1109/CIS.2017.00013.
Hartemink, A.J., Gifford, D.K., Khodor, J., 1999. Automated constraint-based nucleotide sequence selection for DNA computation. Biosystems 52 (1), 227–235. http://dx.doi.org/10.1016/S0303-2647(99)00050-7.
Ibrahim, Z., Kurniawan, T., Khalid, N., Sudin, S., Khalid, M., 2009. Implementation of ant colony system for DNA sequence optimization. Artif. Life Robot. 14, 293–296. http://dx.doi.org/10.1007/s10015-009-0683-0.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948. http://dx.doi.org/10.1109/ICNN.1995.488968.
Kommadath, R., Kotecha, P., 2017. Teaching learning based optimization with focused learning and its performance on CEC2017 functions. In: 2017 IEEE Congress on Evolutionary Computation, CEC. pp. 2397–2403. http://dx.doi.org/10.1109/CEC.2017.7969595.
Krishna Veni, S., Muhammad, M.S., Masra, S.M.W., Ibrahim, Z., Kian Sheng, L., 2011. DNA words based on an enhanced algorithm of multi-objective particle swarm optimization in a continuous search space. In: International Conference on Electrical, Control and Computer Engineering, InECCE. pp. 154–159. http://dx.doi.org/10.1109/INECCE.2011.5953867.
Kurniawan, T.B., Khalid, N.K., Ibrahim, Z., Khalid, M., Middendorf, M., 2008. An Ant Colony System for DNA sequence design based on thermodynamics. In: Proceedings of the Fourth IASTED International Conference on Advances in Computer Science and Technology, Langkawi, Malaysia.
Li, C., Yang, S., Nguyen, T.T., 2012. A self-learning particle swarm optimizer for global optimization problems. IEEE Trans. Syst. Man Cybern. B 42 (3), 627–646.
Liang, J., Qu, B., Suganthan, P., 2013. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization.
Liu, K., Wang, B., Lv, H., Wei, X., Zhang, Q., 2019. A BPSON algorithm applied to DNA codes design. IEEE Access 7, 88811–88821. http://dx.doi.org/10.1109/ACCESS.2019.2924708.
Nagra, A.A., Han, F., Ling, Q., Mehta, S., 2019. An improved hybrid method combining gravitational search algorithm with dynamic multi swarm particle swarm optimization. IEEE Access 7, 50388–50399.
Qian, K., Li, W., Qian, W., 2017. Hybrid gravitational search algorithm based on fuzzy logic. IEEE Access 5, 24520–24532.
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S., 2009. GSA: A gravitational search algorithm. Inform. Sci. 179 (13), 2232–2248. http://dx.doi.org/10.1016/j.ins.2009.03.004.
Sabeti, M., Boostani, R., Davoodi, B., 2018. Improved particle swarm optimisation to estimate bone age. IET Image Process. 12 (2), 179–187. http://dx.doi.org/10.1049/iet-ipr.2017.0545.


Salgotra, R., Singh, U., Saha, S., 2018. Improved cuckoo search with better search capabilities for solving CEC2017 benchmark problems. In: 2018 IEEE Congress on Evolutionary Computation, CEC. pp. 1–7. http://dx.doi.org/10.1109/CEC.2018.8477655.
Shi, Y., Eberhart, R., 1998. A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence. pp. 69–73. http://dx.doi.org/10.1109/ICEC.1998.699146.
Singh, N., Singh, S., Singh, S.B., 2017. A new hybrid MGBPSO-GSA variant for improving function optimization solution in search space. Evol. Bioinform. 13, 1176934317699855. http://dx.doi.org/10.1177/1176934317699855.
Soo-Yong, S., In-Hee, L., Dongmin, K., Byoung-Tak, Z., 2005. Multiobjective evolutionary optimization of DNA sequences for reliable DNA computing. IEEE Trans. Evol. Comput. 9 (2), 143–158. http://dx.doi.org/10.1109/TEVC.2005.844166.
Stanovov, V., Akhmedova, S., Semenkin, E., 2018. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In: 2018 IEEE Congress on Evolutionary Computation, CEC. pp. 1–8. http://dx.doi.org/10.1109/CEC.2018.8477977.
Tanaka, F., Nakatsugawa, M., Yamamoto, M., Shiba, T., Ohuchi, A., 2002. Developing support system for sequence design in DNA computing. In: DNA Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 129–137.
Wang, Y., Shen, Y., Zhang, X., Cui, G., Sun, J., 2018. An improved non-dominated sorting genetic algorithm-II (INSGA-II) applied to the design of DNA codewords. Math. Comput. Simulation 151, 131–139. http://dx.doi.org/10.1016/j.matcom.2018.03.011.
Wang, B., Zhang, Q., Zhang, R., 2008. Design of DNA sequence based on improved genetic algorithm. In: Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 9–14.
Xiao, J., Zhang, X., Xu, J., 2012. A membrane evolutionary algorithm for DNA sequence design in DNA computing. Chin. Sci. Bull. 57 (6), 698–706. http://dx.doi.org/10.1007/s11434-011-4928-7.
Xu, C., Zhang, Q., Wang, B., Zhang, R., 2008. Research on the DNA sequence design based on GA/PSO algorithms. In: 2008 2nd International Conference on Bioinformatics and Biomedical Engineering. pp. 816–819. http://dx.doi.org/10.1109/ICBBE.2008.200.
Zhang, A., Sun, G., Ren, J., Li, X., Wang, Z., Jia, X., 2018a. A dynamic neighborhood learning-based gravitational search algorithm. IEEE Trans. Cybern. 48 (1), 436–447.
Zhang, X., Wang, Y., Cui, G., Niu, Y., Xu, J., 2009. Application of a novel IWO to the design of encoding sequences for DNA computing. Comput. Math. Appl. 57 (11), 2001–2008. http://dx.doi.org/10.1016/j.camwa.2008.10.038.
Zhang, H., Yuan, M., Liang, Y., Liao, Q., 2018b. A novel particle swarm optimization based on prey–predator relationship. Appl. Soft Comput. 68, 202–218. http://dx.doi.org/10.1016/j.asoc.2018.04.008.
Zhou, S., Zhang, Q., Zhao, J., Li, J., 2007. DNA encodings based on multi-objective particle swarm. J. Comput. Theor. Nanosci. 4, 1249–1252. http://dx.doi.org/10.1166/jctn.2007.005.

