the last best position is stored in memory. $V_{max}$ is set to limit the redundant mobility of the agents outside the search zone; if the velocity exceeds $V_{max}$, it is set to zero. Every particle travels through the search space in search of the best solution. The position of each particle is calculated as

$x_i(t+1) = x_i(t) + v_i(t)$    (1)

The information available to every particle is based on its own knowledge and on the experience of the surrounding particles. These two components have identical significance and may be weighted according to the choice of the particle, so the velocity equation becomes

$v_i = \chi \left\{ w\, v_i + C_1 R_1 \left(P_i - x_i\right) + C_2 R_2 \left(P_g - x_i\right) \right\}$    (2)

where

$P_i = [P_{(i,1)}, P_{(i,2)}, P_{(i,3)}, \ldots, P_{(i,D)}]$, $i = 1, 2, \ldots, N$

$P_g = [P_{g1}, P_{g2}, P_{g3}, \ldots, P_{gD}]$

$P_g$ is the global best position, $v_i = \{v_1, v_2, \ldots, v_D\}$ is the velocity of the particle, $R$ is a random number in [0, 1], $D$ is the (integer) dimension of the search space, $P_i$ is the local best, and $x_i$ is the current position. Each particle is assessed by a given fitness function. The primary purpose of PSO is to decrease the cost values of the particles iteratively for the given benchmark function.
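As a concrete illustration of (1) and (2), the sketch below applies one velocity-and-position update to a small swarm on the sphere function; the swarm size, the parameter values ($\chi$, $w$, $C_1$, $C_2$, $V_{max}$) and the objective are illustrative assumptions, not the settings used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 3                                         # swarm size and dimension (assumed)
chi, w, C1, C2, v_max = 0.729, 0.9, 2.0, 2.0, 1.0   # assumed parameter values

def fitness(x):                                     # example cost: sphere function
    return np.sum(x**2, axis=-1)

x = rng.uniform(-5, 5, (N, D))                      # current positions x_i
v = np.zeros((N, D))                                # velocities v_i
P = x.copy()                                        # local best positions P_i
Pg = P[np.argmin(fitness(P))]                       # global best position P_g

# One application of Eq. (2) followed by Eq. (1)
R1, R2 = rng.random((N, D)), rng.random((N, D))
v = chi * (w * v + C1 * R1 * (P - x) + C2 * R2 * (Pg - x))
v = np.where(np.abs(v) > v_max, 0.0, v)             # velocities above V_max are set to zero, as described above
x = x + v

# Refresh the local and global bests from the new fitness values
improved = fitness(x) < fitness(P)
P[improved] = x[improved]
Pg = P[np.argmin(fitness(P))]
print(fitness(Pg))
```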
1.2. Standard gravitational search algorithm (SGSA)

In GSA, the velocity of each agent in dimension D is updated as

$v_i^D(t+1) = rand \times v_i^D(t) + a_i^D(t)$    (9)

where "t" is the present iteration. Let us suppose that the gravitational and inertial masses are equal; the masses are then calculated as follows:

$M_{ai} = M_{ii} = M_{pi} = M_i$,  $i = 1, 2, 3, \ldots, N$    (10)

$m_i(t) = \frac{fit_i(t) - X^w(t)}{X^b(t) - X^w(t)}$    (11)

$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)}$    (12)

where $fit_i(t)$ is the fitness value of agent i, and $X^w(t)$ and $X^b(t)$ are the worst and best fitness values of the population; in the case of a minimization problem they are calculated as:

$X^b(t) = \min_{j \in \{1,\ldots,N\}} fit_j(t)$    (13)

$X^w(t) = \max_{j \in \{1,\ldots,N\}} fit_j(t)$    (14)

To maintain a trade-off between exploitation and exploration over the iterations, the number of force-exerting agents is reduced so that only the agents with heavier masses apply force to one another. $K_{best}$ is the set of best agents, i.e. those with the heavier masses; its size decreases slowly until a single agent exerts force on the remaining agents. Consequently, (4) is modified as:

$F_i^D(t) = \sum_{j \in K_{best},\, j \neq i} rand_j\, F_{ij}^D(t)$    (15)
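A compact sketch of (10)–(15) for a minimization problem is given below. The fitness values, the size of $K_{best}$, and the pairwise forces $F_{ij}^D$ (random placeholders here) are assumptions made only so that the mass normalization and the $K_{best}$-restricted sum are concrete; the actual pairwise force is defined later by Eq. (19).

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, k_best = 6, 2, 3                    # number of agents, dimension, |K_best| (assumed)
fit = rng.random(N) * 10.0                # placeholder fitness values fit_i(t)

best, worst = fit.min(), fit.max()        # Eqs. (13)-(14) for minimization
m = (fit - worst) / (best - worst)        # Eq. (11)
M = m / m.sum()                           # Eq. (12); with equal masses, Eq. (10)

F_pair = rng.random((N, N, D))            # placeholder for the pairwise forces F_ij^D(t)
K_best = np.argsort(fit)[:k_best]         # indices of the k_best fittest (heaviest) agents

F = np.zeros((N, D))                      # Eq. (15): randomly weighted sum over j in K_best, j != i
for i in range(N):
    for j in K_best:
        if j != i:
            F[i] += rng.random() * F_pair[i, j]

print(np.round(M, 3))
print(np.round(F, 3))
```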
• A new method is implemented for velocity clamping by adding a new velocity term to the velocity update equation so that the particles can move faster towards the optimal solution. Since the particle velocity is a presumptive parameter, it follows an uncurbed path that permits the particles to make wide cycles in the search area. To circumvent these oscillations, upper and lower bounds on the velocity are implemented in the HGSPSO. Another primary reason for amending the velocity update equation is that, during exploration, particles can become trapped in local minima and cannot leave, which results in premature convergence. Increasing the velocity of the particles helps them reach the optimal position swiftly and improves the convergence.
• Adaptive time-varying acceleration constants are presented, which change as the iterations progress.
• An inertial weight is also introduced that balances the exploration and exploitation of the HGSPSO.
• The proposed method is applied to a real-world DNA problem to demonstrate the performance of the HGSPSO.
So, in this paper, a fusion of the GSA and PSO is presented that uses the exploitation capability of PSO and the exploration proficiency of GSA. The efficiency of the HGSPSO is then compared with the standard versions of both GSA and PSO using five standard test benchmark functions, the CEC benchmark functions, and a DNA problem.
The layout of the remaining paper is as follows. Section 2 presents the hybrid gravitational search particle swarm optimization algorithm. Section 3 shows the application of the proposed method to the benchmark functions. Section 4 covers the fundamental concepts of DNA and compares the presented technique with contemporary algorithms for DNA computation. The paper concludes with the key findings in Section 5.
2. Hybrid gravitational search particle swarm optimization algorithm (HGSPSO)
Many hybrid algorithms are available in the literature that use the properties of two or more algorithms to improve the capability of the hybrid version. Different approaches are used to combine two algorithms, at a low or a high level, in a homogeneous or heterogeneous way. In this paper, GSA is combined with PSO and the properties of both algorithms are utilized, i.e., both algorithms are executed at the same time and the hybrid algorithm, HGSPSO, is used to obtain the results. The key rationale for the proposed fusion is to benefit from the exploitation capability of PSO and the exploration proficiency of GSA.
In HGSPSO, the parameters of both these algorithms are used. The inertial weight and the acceleration constants are the two important parameters of PSO. Therefore, in HGSPSO the inertial weight "w" is modified so that it is neither a linearly decreasing quantity nor a constant value; instead, it is a function of the fitness values of the global and the local best. Similarly, the acceleration constants used in this paper also change adaptively as the iterations progress. These two parameters are defined as follows:

$w_i = 1 - \left(\frac{P_g}{P_i}\right)$    (16)

$C_1 = (C_3 - C_4) \times \left(1 - \frac{t}{T}\right) + C_4$    (17)

$C_2 = (C_5 - C_6) \times \left(1 - \frac{t}{T}\right) + C_6$    (18)

where $C_3$, $C_4$, $C_5$, and $C_6$ are constant values, "T" is the maximum iteration, and "t" is the current iteration. The value of $C_1$ decreases adaptively and $C_2$ increases over the iterations; as a result, the masses move closer to the best solution as the proposed method enters the exploitation stage. The reason for adjusting the acceleration coefficients adaptively is that there is no specific boundary in evolutionary computation for the transition between these two stages. Moreover, this adaptive scheme favours exploration at the beginning and exploitation in the later stage. The best results of the HGSPSO are obtained when $C_1$ is changed from 2.5 to 0.5 and $C_2$ from 0.5 to 2.5, where $P_g$ is the global best and $P_i$ is the local best.
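A minimal sketch of (16)–(18) is given below. The endpoint constants $C_3$–$C_6$ follow the reported schedule ($C_1$ from 2.5 to 0.5 and $C_2$ from 0.5 to 2.5), while the fitness values standing in for $P_g$ and $P_i$ in (16) are placeholders.

```python
def adaptive_params(fit_gbest, fit_pbest, t, T,
                    C3=2.5, C4=0.5, C5=0.5, C6=2.5):
    """Adaptive inertial weight and acceleration constants, Eqs. (16)-(18).

    C3..C6 are chosen so that C1 decreases from 2.5 to 0.5 and C2 increases
    from 0.5 to 2.5 over the run, as reported for the best HGSPSO results.
    """
    w_i = 1.0 - (fit_gbest / fit_pbest)          # Eq. (16)
    C1 = (C3 - C4) * (1.0 - t / T) + C4          # Eq. (17)
    C2 = (C5 - C6) * (1.0 - t / T) + C6          # Eq. (18)
    return w_i, C1, C2

# Example: halfway through a 100-iteration run, with placeholder fitness values
print(adaptive_params(fit_gbest=2.0, fit_pbest=8.0, t=50, T=100))
```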
In HGSPSO, every agent is considered a potential solution. The gravitational force, gravitational constant, and resultant forces among the agents are obtained through (19), (20), and (21), and (22) calculates the acceleration of the agents.
At time "t" the force of attraction acting on agent (mass) "i" from agent (mass) "j" is defined as:

$F_{ij}^D(t) = G(t)\, \frac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \varepsilon}\, \left(x_j^D(t) - x_i^D(t)\right)$    (19)

where $M_{pi}$ is the passive gravitational mass associated with object i, $\varepsilon$ is a small constant, $R_{ij}(t)$ is the distance between agents i and j, $M_{aj}$ is the active gravitational mass of agent j, and $G(t)$ is the gravitational constant at time t.
To reflect the importance of exploration in the initial phase of the algorithm and of exploitation in the later phase, G is used adaptively, so its value rises as the iterations increase. The main objective of designing G adaptively is to motivate the agents to move with bigger steps in the early stage of the algorithm, whereas the agents are bound to travel gradually at the end of the iterations. $G(t)$ is modified as follows:

$G(t) = G_o \times e^{-\gamma \times \left(\frac{T - t}{T}\right)}$    (20)

where $G_0$ is the initial gravitational constant and $\gamma$ is the coefficient of decrease.
Assume that in dimension "D" the overall force experienced by agent i is a randomly weighted sum of the forces exerted by the other agents; (19) can then be modified by the following equation:

$F_i^D(t) = \sum_{j=1,\, j \neq i}^{N} rand_j\, F_{ij}^D(t)$    (21)

Similarly, the acceleration can be calculated as:

$a_i^D(t) = \frac{F_i^D(t)}{M_{ii}(t)}$    (22)

The gravitational masses and the inertial mass for the minimization problem can be calculated in the same way as defined in (10)–(14).
As discussed earlier, GSA is a memoryless algorithm; therefore the best solution might not be preserved, as the best mass can be attracted away by other, less fit masses. This limitation is mitigated by proposing a novel strategy in the velocity update procedure. The new velocity update equation is defined as:

$V_i(t+1) = w \times v_i(t) + C_1 \times a_i(t) + \frac{C_1}{C_2} \times \left(p_g - x_i\right)$    (23)

where $C_1$ and $C_2$ are the acceleration constants, w is the inertial weight, a is the acceleration, $p_g$ is the global best ($g_{best}$), $x_i$ is the current position, and $v_i$ is the velocity of the agent. The third term in (23) is somewhat analogous to the social term of the PSO velocity equation. A higher value of $C_1$ biases the search towards GSA behaviour, whereas a higher value of $C_2$ encourages the social factor of PSO in the execution of the search procedure. The adaptive technique permits GSA to examine the search area more effectively and allows a PSO-like exploitation of the best solution.
Finally, the position of the agents is updated as follows:

$x_i^D(t+1) = w * x_i^D + v_i^D(t+1)$    (24)
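The sketch below strings (19)–(24) together for one iteration over a small random population. The population size, search bounds, $G_0$, $\gamma$, $\varepsilon$, the fixed $C_1$, $C_2$, $w$ (which in HGSPSO would come from (16)–(18)) and the sphere objective are all illustrative assumptions rather than the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, T, t = 6, 3, 100, 10                 # agents, dimension, max and current iteration (assumed)
G0, gamma, eps = 100.0, 20.0, 1e-9         # assumed gravitational-constant settings
C1, C2, w = 2.0, 1.0, 0.7                  # fixed here; in HGSPSO they come from Eqs. (16)-(18)

def fitness(x):                            # placeholder objective (sphere)
    return np.sum(x**2, axis=-1)

x = rng.uniform(-5, 5, (N, D))
v = np.zeros((N, D))
fit = fitness(x)
p_g = x[np.argmin(fit)]                    # global best position

# Masses, Eqs. (10)-(14), minimization case (assumes not all fitness values are equal)
m = (fit - fit.max()) / (fit.min() - fit.max())
M = m / m.sum()

G = G0 * np.exp(-gamma * (T - t) / T)      # Eq. (20)

a = np.zeros((N, D))
for i in range(N):                         # Eqs. (19), (21), (22)
    F_i = np.zeros(D)
    for j in range(N):
        if j != i:
            R = np.linalg.norm(x[i] - x[j])
            F_ij = G * (M[i] * M[j]) / (R + eps) * (x[j] - x[i])
            F_i += rng.random() * F_ij
    a[i] = F_i / (M[i] + eps)

v = w * v + C1 * a + (C1 / C2) * (p_g - x) # Eq. (23)
x = w * x + v                              # Eq. (24)
print(fitness(x).min())
```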
In the presented algorithm, all agents are initialized randomly.
The gravitational force, the gravitational constant, and the resulting forces between them are obtained using (19)–(21), respectively. Later, the acceleration of the particles is measured using (22). At each iteration, the best solution attained so far is updated. Then, the velocities of all agents are computed using (23) and, lastly, (24) updates the positions of the agents. The procedure ends once the desired stopping criterion is reached.
A few advantages of and remarks on the proposed method are highlighted below:
• The presented algorithm uses memory for storing the best solution achieved, and in the updating process the quality of the agents' solutions is also considered. The particles that are closer to the potential solutions attempt to attract the other particles that are exploring the search space. In the process of exploring the search area, the particles that are closer to the potential solution start to move slowly towards the optimal point. The effectiveness of the global-best searching topology helps the particles to explore the global best. Since HGSPSO utilizes a memory-based global best topology to save the best potential solution available, the best solution will not be lost and will be easily available at any point in time.
• Each particle explores the best solution and travels in its direction, so the masses are provided with a kind of social intelligence.
• The computational cost of this algorithm is very low.
• The influence of $P_g$ is noticeable in the exploitation stage through the use of the adaptive $C_1$ and $C_2$.
• The impact of $P_g$ on the agents is independent of their masses; it is considered an exterior force that does not depend on the gravitational rules. This efficiently stops particles from clustering together and moving very slowly.
Due to these modifications, the presented algorithm delivers better results than the standard GSA and PSO.
3. HGSPSO for functions optimization

Five test benchmark functions (Liang et al., 2013) are used to confirm the efficiency of the proposed method.

3.1. Standard benchmark functions

The strength, efficacy, and ability of different optimization approaches are demonstrated by utilizing numerous standard benchmark functions; in this way the solution quality, convergence, and stability are measured. To assess the proficiency of the proposed algorithm, a few standard test benchmark functions are used.
(i) Sphere:
It is a unimodal function with a single minimum. The main objective of using the sphere benchmark is to test the convergence rate of the algorithm.

$f_1(x) = \sum_{i=1}^{D} x_i^2$    (25)

(ii) Rastrigin:
It is a multi-modal function with many local minima.

$f_2(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$    (26)

(iii) Rosenbrock:
It is a unimodal function with a single minimum.

$f_3(x) = \sum_{i=1}^{D-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + (1 - x_i)^2 \right]$    (27)

(iv) Griewank:
It is a multi-modal function with many local minima; as a result, it tends to drive convergence in the wrong direction.

$f_4(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$    (28)

(v) Ackley:
Ackley is a multi-modal function with many local minima.

$f_5(x) = -20 \exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos\left(2\pi x_i\right)\right) + 20 + e$    (29)
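For reference, (25)–(29) translate directly into code; the sketch below is a plain NumPy transcription in which D is taken from the length of the input vector, and the final line only checks that each function evaluates to its known minimum of zero (at the origin for $f_1$, $f_2$, $f_4$, $f_5$, and at $x_i = 1$ for $f_3$).

```python
import numpy as np

def sphere(x):                                     # Eq. (25)
    x = np.asarray(x, dtype=float)
    return np.sum(x**2)

def rastrigin(x):                                  # Eq. (26)
    x = np.asarray(x, dtype=float)
    return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def rosenbrock(x):                                 # Eq. (27)
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def griewank(x):                                   # Eq. (28)
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):                                     # Eq. (29)
    x = np.asarray(x, dtype=float)
    D = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / D))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20.0 + np.e)

# All five evaluate to (approximately) zero at their optima
z = np.zeros(30)
print(sphere(z), rastrigin(z), rosenbrock(np.ones(30)), griewank(z), ackley(z))
```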
3.1.1. Comparison with the state-of-the-art algorithms
The comparison is made with the standard PSO, GSA, and the following techniques.

3.1.1.1. Improved hybrid gravitational search algorithm (IHGSA) (Han et al., 2017). In IHGSA, a new parameter known as the learning factor is introduced. It helps to maintain a balance between the exploitation and exploration of this method.

3.1.1.2. Mean Gbest Particle Swarm Optimization Gravitational Search Algorithm (MGBPSO-GSA) (Singh et al., 2017). MGBPSO-GSA is a hybrid of the mean-gbest particle swarm optimization algorithm and the gravitational search algorithm.

3.1.1.3. Self-learning Particle Swarm Optimizer (SLPSO) (Li et al., 2012). Li et al. introduced four different learning approaches, where every particle adaptively adopts one according to the local fitness landscape.

3.1.1.4. Dynamic Neighborhood learning-based GSA (DNLGSA) (Zhang et al., 2018a). Zhang et al. presented a dynamic neighborhood learning (DNL) strategy to replace the $K_{best}$ model and maintain a balance between exploration and exploitation in GSA.

3.1.1.5. Dynamic multi swarm particle swarm optimization and gravitational search algorithm (GSADMSPSO) (Nagra et al., 2019). In this method, the main population of masses is divided into smaller sub-swarms, which are stabilized by a new neighborhood strategy.

3.1.1.6. Hybrid gravitational search algorithm (Qian et al., 2017). A local search technique (LST) is combined with the optimization procedure of the GSA. Every agent in GSA is selected with a probability p, and the LST with probability (1 − p). Fuzzy logic is used to obtain the probability p.

3.1.2. Results and analysis
The termination criteria are that the global optimum is found or the maximum number of iterations is reached. Experiments are run in MATLAB on a PC with a 64-bit Windows 10 Professional operating system, 16 GB RAM, and a 2.90 GHz processor. The parameter values of the compared algorithms are taken from their respective literature, as shown in Table 1. The simulation conditions used for the algorithm are:

3.1.2.1. Statistical analysis. The HGSPSO results are compared with the SGSA, SPSO, and different variants of the modified PSO and improved GSA algorithms. The mean and standard deviation are used as the evaluation criteria, as shown in Table 2.
Table 1
Parameters of the compared algorithms.
Algorithms Parameters
IHGSA Swarm size = 30, 𝐺0 = 1, 𝛼 = 20, learning factors 𝑆1 and 𝑆2 of IHGSA are in the range of [0.618, 1.236].
MGBPSO-GSA 𝐺𝑜 = 1; Swarm Size = 30; 𝐶1 = 0.5; D = 30, and 𝐶2 = 1.5.
SLPSO Swarm Size = 30, 𝑃 = 1, 𝛾 = 0.01
DNLGSA 𝐺𝑜 = 100, 𝛽 = 20, 𝑘 = 10, gm = 5, 𝐶1 = 0.5 − 0.5(𝑡/𝑇𝑚𝑎𝑥)^0.166, 𝐶2 = 1.5(𝑡/𝑇𝑚𝑎𝑥)^0.166, 𝐷 = 30
GSADMSPSO Swarm Size = 30, 𝐶1 = 0.5, 𝐶2 = 1.5, w is decreased linearly from 0.9 to 0.2, 𝐺0 = 1, 𝛼 = 20, 𝑅 = 5.
HGSA 𝐺𝑜 = 100, 𝛼 = 20, 𝜃 = (0.5)D , 𝛾 = 0.2
(i) T-Test:
The t-test value is one of the main criteria for comparing two algorithms. The mean "𝛼", the degrees of freedom "𝜉", and the standard deviation "𝜎" of the compared methods are utilized to find the t-value. Negative values of the test indicate inferior performance of the first approach relative to the second, or vice versa. The t-value can be calculated as:

$t = \frac{\alpha_1 - \alpha_2}{\sqrt{\frac{\sigma_1^2}{\xi + 1} + \frac{\sigma_2^2}{\xi + 1}}}$    (30)
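A direct transcription of (30) as a helper function; the means, standard deviations, and ξ value in the usage line are placeholders.

```python
import math

def t_value(mean1, mean2, std1, std2, xi):
    """t-test statistic of Eq. (30): both variances are scaled by (xi + 1)."""
    return (mean1 - mean2) / math.sqrt(std1**2 / (xi + 1) + std2**2 / (xi + 1))

# Placeholder example: comparing two algorithms with xi + 1 = 30 samples each
print(t_value(mean1=1.2e-3, mean2=4.6e-3, std1=9.0e-4, std2=3.3e-3, xi=29))
```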
Table 2
The standard deviation, 𝑡-test and mean values evaluation between HGSPSO and other approaches.
Functions HGSPSO (𝒄𝟏 = [2.5, 0.5], 𝒄𝟐 = [0.5, 2.5]) SPSO SGSA IHGSA MGBPSO-GSA SLPSO DNLGSA GSADMSPSO HGSA
Mean 6.256e−𝟓𝟐 1.821e−8 1.71e−06 2.38e−19 2.58e+03 2.78e−50 1.53e−27 1.50e−26 0.0000
Std. Dev 9.293e−𝟓𝟑 2.664e−8 7.2e−07 5.58e−20 159.203 8.09e−49 5.11e−26 5.73e−25 0.0000
𝑓1 (𝑥)
𝑡-test – 4.8335 16.793 30.1598 114.59 0.2375 0.2117 0.1851 0.000
Rank 1 6 7 5 8 2 3 4 1
Mean 1.095e−𝟖 20.392 1.94e+2 3.38e1 41.5262 0.000 0.000 8.19 0.000
Std. Dev 0.25302 5.389 1.08e+2 1.05e1 10.4930 0.000 0.000 6.11 0.000
𝑓2 (𝑥)
𝑡-test – 26.72 12.70 22.75 27.98 0.000 0.000 9.4701 0.000
Rank 1 2 6 5 4 1 1 3 1
Mean 1.341e−𝟎𝟖 3.3426e1 1.70e+2 2.28e+1 9.164e6 2.06 2.33e1 2.38e1 23.7073
Std. Dev 6.508e−𝟎𝟗 2.1794 2.12 1.29 3.46e5 12.3 3.24e1 3.56e0 0.3860
𝑓3 (𝑥)
𝑡-test – 108.39 567.01 124.97 187.28 1.184 5.085 47.272 4.3415
Rank 1 7 8 3 9 2 4 6 5
Mean 3.884e−𝟐𝟒 9.44e−2 2.4589e−3 1.55e−16 18.1456 0.0227 0.000 7.40e−3 0.0000
Std. Dev 7.128e−𝟐𝟒 1.025e−1 2.53e−3 1.02e−16 1.1171 0.144 0.000 2.31e−2 0.0000
𝑓4 (𝑥)
𝑡-test 6.512 6.845 10.74 114.58 1.1146 0.000 2.265 0.000
Rank 1 6 4 2 7 5 1 3 1
Mean 2.339e−𝟒𝟐 4.5989e−3 1.54e−06 6.58e−10 1.4094 3.47e−14 6.8e−14 1.84e−14 8.881e−16
Std. Dev 9.426e−𝟒𝟐 3.2724e−3 6.78e−07 5.22e−11 0.2447 4.45e−14 5.46e−14 4.80e−13 2.742e−30
𝑓5 (𝑥)
𝑡-test – 9.938 16.061 89.133 40.72 5.5138 8.8064 0.2710 2.2902e15
Rank 1 8 7 6 9 4 5 3 2
Overall ranking 1 6 7 5 8 3 3 4 2
(Average ranking number) (1.0) (5.8) (6.4) (4.2) (7.4) (2.8) (2.8) (3.8) (2.0)
Table 3
Results of Wilcoxon signed-rank test between HGSPSO and other methods on 30-D benchmark functions.
HGSPSO v.s. SPSO SGSA IHGSA MGBPSO-GSA SLPSO DNLGSA GSADMSPSO HGSA
w (+) 5 5 5 5 4 3 5 2
t (=) 0 0 0 0 1 2 0 3
l (−) 0 0 0 0 0 0 0 0
Table 4
Analysis of acceleration constants variations and inertial weight at different values for 𝐷 = 30.
HGSPSO 𝒄𝟏 = [2.5, 0.5] 𝒄𝟏 = [2.0, 0.0] 𝒄𝟏 = [2.0, 0.25]
𝒄𝟐 = [0.5, 2.5] 𝒄𝟐 = [0.0, 2.0] 𝒄𝟐 = [0.25, 2.0]
𝑓1 (𝑥) Mean = 6.256e−52 Mean = 1.804e−10 Mean = 7.442e−28
w = using (16) SD = 9.293e−53 SD = 5.308e−10 SD = 2.175e−28
𝑓1 (𝑥) Mean = 9.386e−22 Mean = 3.418e−2 Mean = 6.463e−14
w = 1.0 SD = 3.458e−22 SD = 1.994e−2 SD = 5.335e−14
𝑓3 (𝑥) Mean = 1.341e−08 Mean = 9.04e1 Mean = 4.422e−2
w = using (16) SD = 6.508e−09 SD = 8.22e1 SD = 7.093e−2
𝑓3 (𝑥) Mean = 4.39e1 Mean = 8.20e2 Mean = 9.53e1
w = 1.0 SD = 1.802e1 SD = 2.653e2 SD = 8.339e1
𝑓5 (𝑥) Mean = 2.339e−42 Mean = 1.866e−10 Mean = 5.003e−28
w = using (16) SD = 9.426e−42 SD = 3.790e−10 SD = 4.370e−27
𝑓5 (𝑥) Mean = 8.551e−28 Mean = 2.0024e−05 Mean = 7.452e−16
w = 1.0 SD = 6.951e−28 SD = 5.849e−04 SD = 9.942e−16
𝑓13 (𝑥) Mean = 0.0000 Mean = 6.046e−19 Mean = 3.114e−32
w = using (16) SD = 0.0000 SD = 4.70e−18 SD = 1.990e−32
𝑓13 (𝑥) Mean = 5.318e−15 Mean = 9.884e−4 Mean = 7.6042e−8
w = 1.0 SD = 8.927e−15 SD = 4.994e−4 SD = 2.947e−8
3.2.1.1. LSHADE-RSP (Stanovov et al., 2018). The modifications in this variant are in the mutation and crossover operators, using selective pressure.

3.2.1.2. Hybrid prey–predator PSO (PP-PSO). A hybrid prey–predator PSO (PP-PSO) algorithm, based on the bio-inspired prey–predator relationship, is proposed that uses three new mechanisms: breeding, escape, and catch (Zhang et al., 2018b). Inactive particles are changed or removed so that the active particles speed up the convergence.

3.2.1.3. jSO. Brest et al. introduced a modified version of the iL-SHADE algorithm called jSO (Brest et al., 2017), in which the weighted version of the mutation strategy is improved. This algorithm took second place in the IEEE Congress on Evolutionary Computation (CEC) competition.

3.2.1.4. Teaching learning based optimization with focused learning (TLBO-FL). Kommadath and Kotecha presented a modified version of the TLBO algorithm by introducing a student-focused learning approach (Kommadath and Kotecha, 2017).

3.2.1.5. Improved Cuckoo search (ICS) algorithm. Salgotra et al. proposed an improved variant of the Cuckoo search algorithm (Salgotra et al., 2018). Three modifications were made: first, the switch probability is reduced to maintain a balance between global and local search; the second and third amendments introduce new equations for global and local searching.

3.2.2. Statistical analysis
The parameter values of the compared techniques in Table 7 have been taken from the literature, as shown in Table 6.
Table 5
Modern Benchmark Functions.
Category No. Functions 𝐹𝑖∗ = 𝐹𝑖(𝑥∗)
𝑓6 (𝑥) Shifted and Rotated Bent Cigar Function 100
Unimodal 𝑓7 (𝑥) Shifted and Rotated Sum of Different Power Function 200
𝑓8 (𝑥) Shifted and Rotated Zakharov Function 300
𝑓9 (𝑥) Shifted and Rotated Rosenbrock’s Function 400
𝑓10 (𝑥) Shifted and Rotated Rastrigin’s Function 500
𝑓11 (𝑥) Shifted and Rotated Expanded Scaffer’s F6 Function 600
Multi-modal functions
𝑓12 (𝑥) Shifted and Rotated Lunacek Bi_Rastrigin Function 700
𝑓13 (𝑥) Shifted and Rotated Levy Function 900
𝑓14 (𝑥) Shifted and Rotated Schwefel’s Function 1000
𝑓15 (𝑥) Hybrid Function 1 (N = 3) 1100
𝑓16 (𝑥) Hybrid Function 5 (N = 4) 1500
Hybrid functions
𝑓17 (𝑥) Hybrid Function 6 (N = 5) 1700
𝑓18 (𝑥) Hybrid Function 6 (N = 6) 2000
𝑓19 (𝑥) Composition Function 1 (N = 3) 2100
𝑓20 (𝑥) Composition Function 3 (N = 4) 2300
Composite functions
𝑓21 (𝑥) Composition Function 5 (N = 5) 2500
𝑓22 (𝑥) Composition Function 7 (N = 6) 2700
4. DNA problem
Table 7
The standard deviation, mean and 𝑡-test value evaluation between HGSPSO and other algorithms for Modern Benchmark Functions for D = 30.
Functions HGSPSO (𝒄𝟏 = 2.5–0.5, 𝒄𝟐 = 0.5–2.5) LSHADE-RSP PP-PSO jSO TLBO-FL ICS
Mean 0.0000 0.0000 167 0.0000 3.5e+3 1.00e+10
Std. Dev 0.0000 0.0000 1.40e+02 0.0000 3.6e+3 0.0000
𝑓6 (𝑥)
𝑡-test - – 8.4348 – 6.8746 –
Rank 1 1 2 1 3 4
Mean 0.0000 0.0000 4.88e+07 0.0000 8.5e+16 1.00e+10
Std. Dev 0.0000 0.0000 9.18e+07 0.0000 5.8e+17 0.0000
𝑓7 (𝑥)
𝑡-test - – 3.7589 – 1.0363 –
Rank 1 1 2 1 4 3
Mean 0.0000 0.0000 300 0.0000 3.0e+03 1.39e+02
Std. Dev 0.0000 0.0000 1.173e−03 0.0000 1.1e+03 9.30e+01
𝑓8 (𝑥)
𝑡-test - – 1.8085e+06 – 19.2847 10.5686
Rank 1 1 3 1 4 2
Mean 1.185e+𝟏 5.8779e+01 411 5.8670e+01 9.0e+01 1.43e+01
Std. Dev 2.868e−𝟐 1.0784 1.19e+01 7.7797e−01 2.4e+01 2.18e+01
𝑓9 (𝑥)
𝑡-test – 307.6046 237.1772 425.3022 230.2352 7.9462
Rank 1 4 6 3 5 2
Mean 7.3692 7.5953 605 8.5568 4.0e+01 1.10e+02
Std. Dev 2.4225 2.0252 2.35e+01 2.0980 2.1e+01 2.51e+01
𝑓10 (𝑥)
𝑡-test – 0.5063 178.8771 2.6204 10.9150 28.7790
Rank 1 2 6 3 4 5
Mean 4.0892e−𝟏𝟎 2.6838e−09 611 6.0385e−09 4.9e−01 1.96e+01
Std. Dev 9.5029e−𝟏𝟎 1.8977e−08 3.56 2.7122e−08 4.2e−01 7.90
𝑓11 (𝑥)
𝑡-test – 0.8466 1.2136e+03 1.4668 8.2496 17.5434
Rank 1 2 6 3 4 5
Mean 2.2803e+𝟎𝟏 3.7959e+01 856 3.8927e+1 1.4e+02 1.15e+02
Std. Dev 1.4394 1.7138 3.23e+01 1.4594 4.7e+01 2.06e+01
𝑓12 (𝑥)
𝑡-test – 47.8845 182.2214 55.6217 17.6238 31.5702
Rank 1 2 6 3 5 4
Mean 0.0000 0.0000 1404 0.0000 3.4e+01 1.95e+03
Std. Dev 0.0000 0.0000 2.32e+02 0.0000 2.7e+01 1.04e+03
𝑓13 (𝑥)
𝑡-test – – 42.7922 – 8.9043 13.2583
Rank 1 1 3 1 2 4
Mean 1.1791e+𝟎𝟑 1.4688e+03 4448 1.5277e+03 6.7e+03 3.31e+03
Std. Dev 2.6792e+𝟎𝟐 3.0108e+02 5.66e+02 2.7716e+02 2.8e+02 2.68e+02
𝑓14 (𝑥)
𝑡-test – 5.0695 36.9120 6.3945 100.7366 39.7615
Rank 1 2 5 3 6 4
Mean 3.0347 3.0526 1249 3.0375 8.2e+01 4.20e+01
Std. Dev 4.4853 8.5462 5.16e+01 2.6464 4.1e+01 1.68e+01
𝑓15 (𝑥)
𝑡-test – 0.0131 170.1009 0.0038 13.5380 15.8454
Rank 1 3 6 2 5 4
Mean 1.0621 7.2992e−𝟎𝟏 1679 1.0879 2.2e+04 3.41e+01
Std. Dev 9.493e−01 5.3154e−𝟎𝟏 1.03e+02 6.9133e−01 2.3e+04 8.39
𝑓16 (𝑥)
𝑡-test – −2.1589 115.1875 0.1553 6.7633 27.6677
Rank 2 1 5 3 6 4
Mean 2.1502e+𝟎𝟏 3.1535e+01 1961 3.2925e+01 1.4e+02 1.93e+02
Std. Dev 6.5818 6.8728 1.37e+02 8.0767 6.6e+01 7.33e+01
𝑓17 (𝑥)
𝑡-test – 7.4552 99.9892 7.7525 12.6329 16.4777
Rank 1 2 6 3 4 5
4.1. Elementary concepts and descriptions

In this section, some elementary representations and relations are explained. $\Lambda = \{A, C, G, T, -\}$ represents the nucleotides and a gap: each nucleotide is denoted by a single letter and a gap by "-". Similarly, the nucleotide alphabet without the gap is denoted $\Lambda_{nb} = \{A, C, G, T\}$. The set of all DNA sequences over these alphabets is denoted $\Lambda^*$. Suppose $a, b \in \Lambda$ and $x, y \in \Lambda^*$. The length of a DNA sequence x is represented as $|x|$, and $x_i$ ($1 \le i \le |x|$) is the $i$th nucleotide counted from the $5'$-end of sequence x. A set of "n" sequences of length "l" is denoted by "$\Sigma$", and its $i$th member is $\Sigma_i$. The complement of base "a" is denoted ā. A few basic conditions and notations are defined as follows (Xiao et al., 2012).

$T(i, j) = \begin{cases} i, & i > j \\ 0, & otherwise \end{cases}$    (31)

$eq(a, b) = \begin{cases} 1, & a = b \\ 0, & otherwise \end{cases}$    (32)

$bp(a, b) = \begin{cases} 1, & a = \bar{b} \\ 0, & otherwise \end{cases}$    (33)

The number of non-blank nucleotides in a given sequence $x \in \Lambda^*$ is defined as:

$length_{nb}(x) = \sum_{i=1}^{|x|} nb(x_i)$    (34)

$where\ nb(a) = \begin{cases} 1, & a \in \Lambda_{nb} \\ 0, & otherwise \end{cases}$    (35)

A shift of the sequence x by i bases is represented as

$shift(x, i) = \begin{cases} (-)^i x_1 \ldots x_{l-i}, & i \ge 0 \\ x_{i+1} \ldots x_l\, (-)^i, & i < 0 \end{cases}$    (36)
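A small sketch of the primitives (31)–(36); the Watson–Crick complement used for bp(a, b) is the standard A–T / C–G pairing, and the example strings at the end are arbitrary.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}   # standard base pairing (assumed for bp)

def T_fn(i, j):                       # Eq. (31)
    return i if i > j else 0

def eq_fn(a, b):                      # Eq. (32)
    return 1 if a == b else 0

def bp(a, b):                         # Eq. (33): 1 if a is the complement of b
    return 1 if COMPLEMENT.get(b) == a else 0

def nb(a):                            # Eq. (35): 1 for a non-gap base
    return 1 if a in "ACGT" else 0

def length_nb(x):                     # Eq. (34): number of non-blank nucleotides
    return sum(nb(c) for c in x)

def shift(x, i):                      # Eq. (36): slide the sequence by i bases, padding with '-'
    l = len(x)
    if i >= 0:
        return "-" * i + x[: l - i]
    return x[-i:] + "-" * (-i)

print(length_nb("AC-GT-"), shift("ACGTAC", 2), shift("ACGTAC", -2), bp("A", "T"))
```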
Table 7 (continued).
Functions HGSPSO (𝒄𝟏 = 2.5–0.5, 𝒄𝟐 = 0.5–2.5) LSHADE-RSP PP-PSO jSO TLBO-FL ICS
Mean 2.3701e+𝟎𝟏 2.8843e+01 2269 2.9368e+01 2.2e+02 2.83e+02
Std. Dev 3.9628 6.8284 1.03e+02 5.8548 1.2e+02 8.51e+01
𝑓18 (𝑥)
𝑡-test – 4.6054 154.0284 5.6680 11.5607 21.5222
Rank 1 2 6 3 4 5
Mean 2.02639e+𝟎𝟐 2.0758e+02 2388 2.0929e+02 2.3e+02 2.50e+02
Std. Dev 1.3727 1.6596 2.70e+01 1.9554 1.2e+01 9.93e+01
𝑓19 (𝑥)
𝑡-test – 16.2221 571.5890 19.6849 16.0182 3.3722
Rank 1 2 6 3 4 5
Mean 3.1978e+𝟎𝟐 3.5038e+02 2787 3.5075e+02 4.0e+02 4.34e+02
Std. Dev 3.1269 3.3514 3.27e+01 3.2992 1.6e+01 5.30e+01
𝑓20 (𝑥)
𝑡-test – 47.2063 531.0905 48.1783 34.7943 15.2124
Rank 1 2 6 3 4 5
Mean 3.8171e+𝟎𝟐 3.8670e+02 2878 3.8670e+02 4.0e+02 3.83e+02
Std. Dev 4.6180e−𝟎𝟒 5.5921e−03 4.07 7.6811e−03 1.8e+01 8.20e−01
𝑓21 (𝑥)
𝑡-test – 6.2883e+03 4.3370e+03 4.5854e+03 7.1850 11.1240
Rank 1 3 6 4 5 2
Mean 4.51964e+𝟎𝟐 4.9504e+02 3187 4.9739e+02 5.3e+02 5.12e+02
Std. Dev 6.1792 6.9738 2.38e+01 7.0017 2.1e+01 9.32
𝑓22 (𝑥)
𝑡-test – 32.6903 786.5130 34.3966 25.2075 37.9633
Rank 1 2 6 3 5 4
Overall ranking (Average ranking number) 1 (1.05) 2 (1.82) 6 (4.76) 3 (2.52) 5 (4.35) 4 (3.94)
$f_{Hairpin}(\Sigma) = \sum_{i=1}^{n} Hairpin(\Sigma_i)$    (40)

$Hairpin(x) = \sum^{l - 2 \times p_{min}} \sum^{l - p_{min} - r} T\left(\sum_{i=1}^{pinlen(p,r)} bp\left(x_{p+1-i}, x_{p+r+i}\right), \frac{pinlen(p, r)}{2}\right)$    (41)

$Similarity(x, y) = \max_{g,i}\left( s_{dis}\left(x, shift\left(y(-)^g y, i\right)\right) + s_{con}\left(x, shift\left(y(-)^g y, i\right)\right)\right)$    (48)

where $0 \le g \le l - 3$ and $|i| \le l - 1$.
Table 9
Previous work in the literature on various objective functions and constraints.
Objective functions used in the literature previously
Objective function
Tanaka et al. (2002) Deaton et al. (2002) Hartemink et al. (1999) Kurniawan et al. (2008) Ibrahim et al. (2009) Feldkamp et al. (2002) Zhou et al. (2007) Arita and Kobayashi (2002)
Similarity (with Shift) X – – X X – X X
Similarity (without Shift) X – – X X X
H-measure (with Shift) X – – X X – X X
H-measure (without Shift) X – – X X – X X
Hairpin X – – X X – X –
GC-content X – – X X X X X
Free energy – X X – – – – –
Melting temperature X – X X X – X X
The occurrence of specific subsequences – – – – – X – X
Constraints on DNA bases X – – – – – – –
Secondary structure X – X – – X – –
Bio Lab Methods – X – – – – – –
Continuity X – – X X – X –
Reverse complement Hamming – – – – – X – –
3′ end H measure X – – – – X – –
Fig. 6. Comparison for the H-measure objective function results between mean values
of HGSPSO and its competitors.
Fig. 7. Comparison for the hairpin objective function results between mean values of HGSPSO and its competitors.
𝑆𝑐𝑜𝑛 is an integer between one and l and 𝑆𝑑𝑖𝑠 is a real value between
zero and one. 𝑇 = 2; 𝑆𝑑𝑖𝑠 = 𝐻𝑑𝑖𝑠 = 0.17 𝑎𝑛𝑑 𝑆𝑐𝑜𝑛 = 𝐻𝑐𝑜𝑛 =
6 (Krishna Veni et al., 2011).
(v) GC-content
The GC-content is the percentage of G and C bases in a DNA sequence; for instance, the 10-mer DNA sequence GGTCCATCCA has two G's and four C's, giving a GC-content of 60%. (52) calculates the GC-content as this percentage.
Table 10
Comparison of the sequences produced by HGSPSO and other algorithms in terms of objective functions.
Sequences Continuity Hairpin H-measure Similarity GC content (%)
DNA sequences produced by HGSPSO
CCTAGTTCGTCCGTATTGTC 0 0 47 37 50
TTTATCGGTAGCGTGGTCTT 0 0 45 39 50
GGGTTCAATGCAGTGATCTG 0 0 44 35 50
ACTTGACGCTAGATGCCCTA 0 0 37 38 50
AAAGGCTGCTAGTCGATACC 9 3 41 34 50
CTAGGTTGCGGTAATCCATG 0 0 39 33 50
GATCATGTGATCGCAGTCCT 0 0 35 39 50
Mean 0 0 41.14285714 30.42857143
Standard deviation 0 0 4.413183712 1.511857892
DNA sequences produced by BPSON (Liu et al., 2019)
CCTTAGAACGTCTCGAGCTT 0 0 88 78 50
CCGTATACATGCTCGGTCTT 0 0 86 81 50
ACTCGCGACTACCAACATCT 0 0 98 75 50
CGTATCTGTGCTTCTCGTCA 0 0 92 84 50
ATTGCTGGATGAGGTGCCTA 0 0 94 80 50
TTCACTACTGAGGATCCGCA 0 0 98 76 50
CCTTAGAACGTCTCGAGCTT 0 0 88 78 50
Mean 0 0 92 78.85714286
Standard deviation 0 0 4.898979486 3.078342164
DNA sequences produced by INSGA-II (Wang et al., 2018)
CGTCTAGGCCGGATCAATAT 0 3 91 81 50
GGTTGTCCTGAGTGTTGTGT 0 0 84 80 50
AGAGTCAGCAGCGTAGAGAT 0 0 93 82 50
TACAGTCGGTTCGGTTATGG 0 0 90 73 50
GCGGAAGTAATCGGAAGTGA 0 0 88 81 50
ACGCCACAGTATATCATCGC 0 3 82 78 50
AGTCATTCTCCTGGCATTGC 0 6 90 76 50
Mean 0 1.714285714 88.28571429 78.71428571
Standard deviation 0 2.360387377 3.946064948 3.251373336
DNA sequences produced by DMEA (Xiao et al., 2012)
GCCGGAGCCTTCTTGATAAT 0 0 68 53 50
AATCCTGCTTGTCCTCCTAC 0 0 63 50 50
TGAGCTCTCTGTTCCAACGA 0 0 64 52 50
ATGTAACACGCGGCCACTAA 0 0 63 50 50
ACTCGGATTGTGTTGAACGC 0 0 71 51 50
CGTTGTTGGCACCTACGTTA 0 0 68 54 50
ATCCAGACTACCAAGGCCAA 0 0 61 48 50
Mean 0 0 65.42857143 51.14285714
Standard deviation 0 0 3.598941643 2.035400978
DNA sequences produced by NACST/Seq (Soo-Yong et al., 2005)
CTCTTCATCCACCTCTTCTC 0 0 43 58 50
CTCTCATCTCTCCGTTCTTC 0 0 37 58 50
TATCCTGTGGTGTCCTTCCT 0 0 45 57 50
ATTCTGTTCCGTTGCGTGTC 0 0 52 56 50
TCTCTTACGTTGGTTGGCTG 0 0 51 53 50
GTATTCCAAGCGTCCGTGTT 0 0 55 49 50
AAACCTCCACCAACACACCA 0 0 55 43 50
Mean 0 0 48.28571429 53.42857143
Standard deviation 0 0 6.799859943 5.623081683
DNA sequences produced by IGA (Wang et al., 2008)
GTCGAAACCTGAGGTACAGA 9 3 72 58 50
GTGACTGTATGCACTCGAGA 0 3 74 57 50
ACGTAGCTCGAATCAGCACT 0 3 77 55 50
CGCTGATCTCAGTGCGTATA 0 0 74 55 50
GTCAGCCAATACGAGAGCTA 0 3 71 56 50
GCATGTCTAACTCAGTCGTC 0 3 68 55 50
TACTGACTCGAGTAGGACTC 0 0 78 60 50
Mean 1.28571428 2.142857143 73.42857143 56.57142857
Standard deviation 3.40168025 1.463850109 3.457221565 1.902379462
as an alternative to the binary domain; therefore, before converting the decimal numbers into binary, each decimal number is rounded off to the nearest integer, then changed into binary, and then transformed into a DNA sequence. For example, 403.8₁₀ is rounded off to 404₁₀, which is 110010100₂ in binary, and its DNA representation is ''CGCCA''. Table 10 highlights the sequences produced and the fitness values of the four objective functions.
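A sketch of the decimal-to-DNA conversion described above. The 2-bit encoding A=00, C=01, G=10, T=11 is an assumption inferred from the quoted example (404 → 0110010100 → ''CGCCA''); the paper does not spell out the mapping explicitly.

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}   # assumed 2-bit encoding

def decimal_to_dna(value, n_bases=5):
    """Round a real value, convert it to binary, and map bit pairs to bases."""
    bits = format(round(value), "0{}b".format(2 * n_bases))   # zero-pad to 2*n_bases bits
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(decimal_to_dna(403.8))   # 403.8 -> 404 -> 0110010100 -> 'CGCCA'
```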
The comparison results of the DNA codes produced by the HGSPSO and the other algorithms are presented in Table 10. The DNA codes produced by the HGSPSO give better results than the other algorithms: the sequences generated by the proposed algorithm have lower H-measure and similarity values. The obtained results indicate a greater possibility of hybridizing with the correct complementary sequences when using the HGSPSO. Additionally, the zero values of continuity and hairpin point towards a low probability of secondary structure formation.
Figs. 6–8 illustrate the pictorial representation of the mean values of all the DNA codes presented in Table 10 for three of the objective functions, as the results for the continuity function are similar for all the methods.
Table 11
Results of the HGSPSO and its competitors in terms of mean and standard deviation.
Functions HGSPSO BPSON INSGA II DMEA NACST IGA
𝜇 0 0 0 0 0 1.285
𝑓1
𝜎 0 0 0 0 0 3.401
𝜇 0 0 1.71428 0 0 2.142
𝑓2
𝜎 0 0 2.36038 0 0 1.463
𝜇 41.1428 92 88.2857 65.428 48.285 73.42
𝑓3
𝜎 4.41318 4.8989 3.94606 3.5989 6.7998 3.457
𝜇 30.4285 78.857 78.7142 51.1428 53.428 56.57
𝑓4
𝜎 1.51185 3.0783 3.25137 2.03540 5.6230 1.902
Fitness value (weighted sum of all functions) 71.85 170.8 168.7 116.57 101.7 133.4
Table 11 shows a comparison of the results summarized in terms of the mean and standard deviation values of all the objective functions. It is clear from the table that the weighted sum of the HGSPSO over all the functions is lower than that of the other compared algorithms, which demonstrates the better performance of the proposed method.

5. Conclusions

In this paper, a hybrid algorithm is proposed through the fusion of the strong attributes of GSA and PSO, together with modified velocity and position update strategies. The primary concept behind this hybridization is to use the strong capacity of PSO for exploitation and the exploration capabilities of GSA. To validate the performance, five standard benchmark functions and a set of modern benchmark functions are used. The results highlight that the proposed method gives improved results for all the benchmark functions. It is also noticeable that the convergence of the HGSPSO is faster than that of PSO and GSA. Additionally, a DNA problem is also solved. HGSPSO outperformed the other algorithms in terms of the objective function results, which shows that the HGSPSO can produce DNA sequences with a lower probability of incorrect hybridization and greater precision.

CRediT authorship contribution statement

Talha Ali Khan: Conceptualization, Methodology, Data curation, Formal analysis, Software, Writing - original draft, Validation, Visualization, Investigation. Sai Ho Ling: Supervision, Project administration, Writing - review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

There is no data associated with this study.

References

Arita, M., Kobayashi, S., 2002. DNA sequence design using templates. New Gener. Comput. 20 (3), 263–277. http://dx.doi.org/10.1007/bf03037360.
Awad, M.Z.A.N.H., Suganthan, P.N., Liang, J.J., Qu, B.Y., 2017. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. IEEE Congress on Evolutionary Computation, CEC 2017, Donostia - San Sebastián, Spain, 05/06/2017 to 08/06/2017.
Bergh, F., Engelbrecht, A., 2010. A convergence proof for the particle swarm optimiser. Fund. Inform. 105, 341–374. http://dx.doi.org/10.3233/FI-2010-370.
Brest, J., Maučec, M.S., Bošković, B., 2017. Single objective real-parameter optimization: Algorithm jSO. In: 2017 IEEE Congress on Evolutionary Computation, CEC, 5-8 June 2017. pp. 1311–1318. http://dx.doi.org/10.1109/CEC.2017.7969456.
Deaton, R., Chen, J., Bi, H., Rose, J., 2002. A Software Tool for Generating Non-Crosshybridizing Libraries of DNA Oligonucleotides. pp. 252–261.
Eiben, A.E., Schippers, C.A., 1998. On evolutionary exploration and exploitation. Fund. Inform. 35 (1–4), 35–50.
Ezziane, Z., 2005. DNA computing: Applications and challenges. Nanotechnology 17 (2), R27–R39. http://dx.doi.org/10.1088/0957-4484/17/2/r01.
Feldkamp, U., Saghafi, S., Banzhaf, W., Rauhe, H., 2002. DNASequenceGenerator: A program for the construction of DNA sequences. In: DNA Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 23–32.
Guangzhao, C., Xiaoguang, L., 2010. The optimization of DNA encodings based on modified PSO/GA algorithm. In: 2010 International Conference on Computer Design and Applications, 25-27 June 2010, 1, pp. V1-609–V1-614. http://dx.doi.org/10.1109/ICCDA.2010.5540892.
Han, Y., Li, M., Liu, J., 2017. Improving hybrid gravitational search algorithm for adaptive adjustment of parameters. In: 2017 13th International Conference on Computational Intelligence and Security, CIS, 15-18 Dec. 2017. http://dx.doi.org/10.1109/CIS.2017.00013.
Hartemink, A.J., Gifford, D.K., Khodor, J., 1999. Automated constraint-based nucleotide sequence selection for DNA computation. Biosystems 52 (1), 227–235. http://dx.doi.org/10.1016/S0303-2647(99)00050-7.
Ibrahim, Z., Kurniawan, T., Khalid, N., Sudin, S., Khalid, M., 2009. Implementation of an ant colony system for DNA sequence optimization. Artif. Life Robot. 14, 293–296. http://dx.doi.org/10.1007/s10015-009-0683-0.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Neural Networks, 1995. Proceedings., IEEE International Conference on, Nov/Dec 1995, vol. 4, pp. 1942–1948. http://dx.doi.org/10.1109/ICNN.1995.488968.
Kommadath, R., Kotecha, P., 2017. Teaching learning based optimization with focused learning and its performance on CEC2017 functions. In: 2017 IEEE Congress on Evolutionary Computation, CEC, 5-8 June 2017. pp. 2397–2403. http://dx.doi.org/10.1109/CEC.2017.7969595.
Krishna Veni, S., Muhammad, M.S., Masra, S.M.W., Ibrahim, Z., Kian Sheng, L., 2011. DNA words based on an enhanced algorithm of multi-objective particle swarm optimization in a continuous search space. In: International Conference on Electrical, Control and Computer Engineering 2011, InECCE, 21-22 June 2011. pp. 154–159. http://dx.doi.org/10.1109/INECCE.2011.5953867.
Kurniawan, T.B., Khalid, N.K., Ibrahim, Z., Khalid, M., Middendorf, M., 2008. An Ant Colony System for DNA sequence design based on thermodynamics. In: Proceedings of the Fourth IASTED International Conference on Advances in Computer Science and Technology, Langkawi, Malaysia.
Li, C., Yang, S., Nguyen, T.T., 2012. A self-learning particle swarm optimizer for global optimization problems. IEEE Trans. Syst. Man Cybern. B 42 (3), 627–646.
Liang, J., Qu, B., Suganthan, P., 2013. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization.
Liu, K., Wang, B., Lv, H., Wei, X., Zhang, Q., 2019. A BPSON algorithm applied to DNA codes design. IEEE Access 7, 88811–88821. http://dx.doi.org/10.1109/ACCESS.2019.2924708.
Nagra, A.A., Han, F., Ling, Q., Mehta, S., 2019. An improved hybrid method combining gravitational search algorithm with dynamic multi swarm particle swarm optimization. IEEE Access 7, 50388–50399.
Qian, K., Li, W., Qian, W., 2017. Hybrid gravitational search algorithm based on fuzzy logic. IEEE Access 5, 24520–24532.
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S., 2009. GSA: A gravitational search algorithm. Inform. Sci. 179 (13), 2232–2248. http://dx.doi.org/10.1016/j.ins.2009.03.004.
Sabeti, M., Boostani, R., Davoodi, B., 2018. Improved particle swarm optimisation to estimate bone age. IET Image Process. 12 (2), 179–187. http://dx.doi.org/10.1049/iet-ipr.2017.0545.
Salgotra, R., Singh, U., Saha, S., 2018. Improved cuckoo search with better search capabilities for solving CEC2017 benchmark problems. In: 2018 IEEE Congress on Evolutionary Computation, CEC, 8-13 July 2018. pp. 1–7. http://dx.doi.org/10.1109/CEC.2018.8477655.
Shi, Y., Eberhart, R., 1998. A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), 4-9 May 1998. pp. 69–73. http://dx.doi.org/10.1109/ICEC.1998.699146.
Singh, N., Singh, S., Singh, S.B., 2017. A new hybrid MGBPSO-GSA variant for improving function optimization solution in search space. Evol. Bioinform. 13, 1176934317699855. http://dx.doi.org/10.1177/1176934317699855.
Soo-Yong, S., In-Hee, L., Dongmin, K., Byoung-Tak, Z., 2005. Multiobjective evolutionary optimization of DNA sequences for reliable DNA computing. IEEE Trans. Evol. Comput. 9 (2), 143–158. http://dx.doi.org/10.1109/TEVC.2005.844166.
Stanovov, V., Akhmedova, S., Semenkin, E., 2018. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In: 2018 IEEE Congress on Evolutionary Computation, CEC, 8-13 July 2018. pp. 1–8. http://dx.doi.org/10.1109/CEC.2018.8477977.
Tanaka, F., Nakatsugawa, M., Yamamoto, M., Shiba, T., Ohuchi, A., 2002. Developing support system for sequence design in DNA computing. In: DNA Computing. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 129–137.
Wang, Y., Shen, Y., Zhang, X., Cui, G., Sun, J., 2018. An improved non-dominated sorting genetic algorithm-II (INSGA-II) applied to the design of DNA codewords. Math. Comput. Simulation 151, 131–139. http://dx.doi.org/10.1016/j.matcom.2018.03.011.
Wang, B., Zhang, Q., Zhang, R., 2008. Design of DNA sequence based on improved genetic algorithm. In: Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 9–14.
Xiao, J., Zhang, X., Xu, J., 2012. A membrane evolutionary algorithm for DNA sequence design in DNA computing. Chin. Sci. Bull. 57 (6), 698–706. http://dx.doi.org/10.1007/s11434-011-4928-7.
Xu, C., Zhang, Q., Wang, B., Zhang, R., 2008. Research on the DNA sequence design based on GA/PSO algorithms. In: 2008 2nd International Conference on Bioinformatics and Biomedical Engineering, 16-18 May 2008. pp. 816–819. http://dx.doi.org/10.1109/ICBBE.2008.200.
Zhang, A., Sun, G., Ren, J., Li, X., Wang, Z., Jia, X., 2018a. A dynamic neighborhood learning-based gravitational search algorithm. IEEE Trans. Cybern. 48 (1), 436–447.
Zhang, X., Wang, Y., Cui, G., Niu, Y., Xu, J., 2009. Application of a novel IWO to the design of encoding sequences for DNA computing. Comput. Math. Appl. 57 (11), 2001–2008. http://dx.doi.org/10.1016/j.camwa.2008.10.038.
Zhang, H., Yuan, M., Liang, Y., Liao, Q., 2018b. A novel particle swarm optimization based on prey-predator relationship. Appl. Soft Comput. 68, 202–218. http://dx.doi.org/10.1016/j.asoc.2018.04.008.
Zhou, S., Zhang, Q., Zhao, J., Li, J., 2007. DNA encodings based on multi-objective particle swarm. J. Comput. Theor. Nanosci. 4, 1249–1252. http://dx.doi.org/10.1166/jctn.2007.005.