
Artificial Intelligence Review
https://doi.org/10.1007/s10462-021-09962-6

Design and applications of an advanced hybrid meta-heuristic algorithm for optimization problems

Raghav Prasad Parouha1 · Pooja Verma1

Accepted: 20 January 2021
© The Author(s), under exclusive licence to Springer Nature B.V. part of Springer Nature 2021

Abstract
This paper designs an advanced hybrid algorithm (haDEPSO), based on a multi-population approach, to solve optimization problems. It integrates the suggested advanced DE (aDE) and advanced PSO (aPSO). In aDE, a novel mutation strategy and crossover probability, along with a slightly modified selection scheme, are introduced to avoid premature convergence; aPSO employs novel, gradually varying inertia weight and acceleration coefficient parameters to escape stagnation. The convergence characteristics of aDE and aPSO thus provide different approximations to the solution space, so haDEPSO achieves better solutions by integrating the merits of both. In haDEPSO, the individual populations are also merged with one another in a pre-defined manner, to balance global and local search capability. The efficiency of the algorithms is verified on the 23 basic, 30 CEC 2014 and 30 CEC 2017 test suites, and the results are compared with various state-of-the-art algorithms. Numerical, statistical and graphical analyses show the effectiveness of the proposed algorithms in terms of accuracy and convergence speed. Finally, three real-world problems are solved to confirm the problem-solving capability of the proposed algorithms. All these analyses confirm their superiority over the compared algorithms.

Keywords Optimization · Meta-heuristic algorithm · Particle swarm optimization · Differential evolution · Hybrid algorithm

1 Introduction

Nowadays, optimization problems (OPs) are increasingly encountered in engineering, agriculture, manufacturing, economics, finance and various other fields. Moreover, with the development of technology, many OPs in engineering and science are becoming more complex. In general, a single-objective OP can be represented as follows.

* Raghav Prasad Parouha


[email protected]
Pooja Verma
[email protected]
1 Department of Mathematics, Indira Gandhi National Tribal University, Amarkantak, M.P., India


$$\begin{aligned}
&\text{Minimize or Maximize } f(x), \quad x = \left(x_1, x_2, \ldots, x_D\right) \in \mathbb{R}^D \\
&\text{subject to } g_l(x) \le 0,\ l = 1, 2, \ldots, L; \quad h_k(x) = 0,\ k = 1, 2, \ldots, K; \\
&\qquad\qquad\quad l_j \le x_j \le u_j,\ j = 1, 2, \ldots, D
\end{aligned} \tag{1}$$

where $f$ is a real-valued objective function; $g_l$ and $h_k$ are the inequality and equality constraints, respectively (these may be linear or nonlinear); $x \in \mathbb{R}^D$ is the D-dimensional decision vector; $l_j$ and $u_j$ are the lower and upper limits of the $j$th decision variable; and $L$ and $K$ are the total numbers of inequality and equality constraints.
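As a concrete illustration of the general form in Eq. (1), the sketch below evaluates a toy constrained problem in Python. The objective, constraints and bounds here are hypothetical examples chosen for illustration, not taken from the paper.

```python
import numpy as np

# Toy instance of Eq. (1): minimize a sphere objective subject to one
# inequality constraint g(x) <= 0, one equality constraint h(x) = 0,
# and box bounds l_j <= x_j <= u_j. All definitions are illustrative.
def f(x):
    return float(np.sum(x ** 2))           # objective f(x)

def g1(x):
    return x[0] + x[1] - 1.0               # inequality: g1(x) <= 0

def h1(x):
    return x[0] - x[1]                     # equality: h1(x) = 0

lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])

def is_feasible(x, tol=1e-6):
    # Feasible iff the point satisfies the bounds, every g_l(x) <= 0,
    # and every h_k(x) = 0 (within a tolerance for the equalities).
    in_box = np.all(lower <= x) and np.all(x <= upper)
    return bool(in_box and g1(x) <= tol and abs(h1(x)) <= tol)

x = np.array([0.5, 0.5])
print(f(x), is_feasible(x))   # 0.5 True
```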
Many optimization methods have been developed to solve complex and/or crucial optimization problems. However, conventional optimization methods have certain inherent drawbacks: high computational complexity, stagnation in local optima, and reliance on the derivatives of the fitness function to decide the direction of the search in the next steps (Simpson et al. 1994). It is also difficult for them to find the optimal solution because they are sensitive to initial estimates and converge to local optima. To overcome these drawbacks, a family of optimization methods known as meta-heuristic algorithms (MAs) has been introduced for solving real-world design optimization problems. According to their underlying mechanisms, MAs can be categorized into four groups: swarm intelligence algorithms (SIAs), inspired by the behavior of social insects or animals; evolutionary algorithms (EAs), inspired by biology; physics-based algorithms (PBAs), inspired by the rules governing natural phenomena; and human behavior based algorithms (HBAs), inspired by human beings. Some recent instances of these algorithms, with a brief study and their applications, are presented in Table 1.
The primary advantage of these algorithms is their use of the 'trial-and-error' principle in searching for solutions. Hence, they have been successfully applied to complex optimization problems such as mechanical and structural design optimization problems (Yıldız 2017a, b, 2020a; Hamza et al. 2018; Yıldız and Yıldız 2019; Abderazek et al. 2019a, b; Sarangkum et al. 2019; Panagant et al. 2019, 2020; Aye et al. 2019; Abderazek et al. 2020a, b; Yıldız et al. 2020a, b, c, d; Ozkaya et al. 2020; Champasak et al. 2020; Meng et al. 2020), clustering optimization problems (Abualigah and Khader 2017; Abualigah et al. 2017a, b, 2018a; Abualigah 2019), manufacturing optimization problems (Abualigah and Hanandeh 2015; Yıldız 2018, 2020b; Yıldız et al. 2019a, b; Yıldız 2020a, b, c; Yıldız et al. 2020a, b, c, d; Abderazek et al. 2020a, b) and many more. Among these, differential evolution (DE) and particle swarm optimization (PSO) are especially easy to implement; both are population-based algorithms. DE yields a new candidate solution (individual) using the vector difference of two randomly selected individuals from the population. It has good global search capability but usually converges slowly in the later stages of population evolution. Mimicking the behavior of bird flocking and fish swarming, PSO evolves a population (swarm) by making each particle (solution) in the swarm be attracted by two attractors, Pbest (personal best) and Gbest (global best). This evolutionary behavior allows PSO to converge quickly but, at the same time, makes it prone to getting trapped in local optima. Moreover, these shortcomings of DE and PSO limit their application in real-life optimization environments. Ideally, DE and PSO should be mutually consistent and complementary, learning from each other efficiently through a common population of points, so that an optimal synergy between their different search operators yields an efficient optimization algorithm. Hence, to avoid the shortcomings of these algorithms, many variants and hybridizations of DE and PSO have been introduced in the literature (presented in the related work section).
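The two update rules contrasted above can be sketched as follows. The parameter values (F, w, c1, c2) are common illustrative defaults from the DE/PSO literature, not the settings used by aDE or aPSO.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_mutant(pop, F=0.5):
    # DE: a new candidate from the scaled difference of two randomly
    # chosen individuals added to a third (the classic DE/rand/1 rule).
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # PSO: velocity pulled toward the two attractors, Pbest and Gbest.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

pop = rng.uniform(-5, 5, size=(10, 3))   # 10 individuals, D = 3
print(de_mutant(pop).shape)              # (3,)
```

DE's rule uses only population differences (good global exploration, slow late-stage convergence), while PSO's rule pulls every particle toward remembered best positions (fast convergence, risk of local optima), which is the complementarity the hybrid exploits.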

Table 1  Brief study of traditional meta-heuristic algorithms with applications
References Algorithm Type Brief description Application

Goldberg and Holland (1988) Genetic algorithm (GA) EAs It simulated natural process of evolution Function and real-life optimization
like selection, mutation and crossover
Kennedy and Eberhart (1995) Particle swarm optimization (PSO) SIAs It inspired by flocking of birds where each Benchmark function optimization
particles keeps personal and global best
value and changed the velocity with
position in each step
Storn and Price (1997) Differential evolution (DE) EAs It worked with mutation, crossover and Global optimization
selection to generate better off-springs
Murase and Wadano (1998) Photosynthetic learning algorithm (PLA) PBAs It utilized rules of Benson-Calvin cycle Travelling salesman problem
and Photorespiration reactions which
governing the conversion of carbon mol-
ecules from one substance to another
de Castro and Zuben (2000) Clonal selection algorithm (CSA) SIAs It is related to artificial immune system Engineering applications
which described a general learning
strategy (an adaptive information subject
to competitive selection process)
Geem et al. (2001) Harmony search (HS) PBAs It is developed by mimicking the improvi- Traveling salesman problem
sation of music players
Eusuff and Lansey (2003) Shuffled frog leaping algorithm (SFLA) SIAs It is settled by mimicking of cooperative Optimal network design problem
behavior of frogs while they search for
their food
Wedde et al. (2004) Beehive algorithm (BeeHive) SIAs It inspired by the communicative and Real network topologies
evaluative procedures of honey bees
Pinto et al. (2005) Wasp swarm optimization (WSO) SIAs It imitates social behavior of wasps that Logistics system optimization
exhibit while foraging and caring of
their broods
Du et al. (2006) Small-world optimization algorithm PBAs It inspired by the mechanism of small- Complex networks problems
(SWOA) world phenomenon using local short
range and random long range operators

Mehrabian and Lucas (2006) Invasive weed optimization (IWO) SIAs It is developed by natural mimicking Multi-dimensional optimization
behavior of weed colony in opportunity
spaces
Karaboga and Basturk (2007) Artificial bee colony (ABC) algorithm SIAs It is based on intelligent behavior of the multivariable functions optimization
honey bees groups as employed, onlook-
ers and scouts bees
Havens et al. (2008) Roach Infestation optimization (Rio) SIAs RIO mimics the collective and separate Multi dimensional functions
behavior of cockroaches which search
for darkest place and communicate with
predefined probability
Simon (2008) Biogeography based optimization (BBO) SIAs It inspired by geographical distribution of Function optimization and sensor selection
biological organisms problem
Yang and Deb (2009) Cuckoo search (CS) SIAs It stimulated by brood parasitic behavior Multimodal functions
of some cuckoo species and levy flights
of some birds and flies
Yang (2009) Firefly algorithm (FA) SIAs It mimics the flashing light property of Multimodal optimization
fireflies where each firefly gets attracted
by another in the magnitude of its light
intensity and mutual distance
Rashedi et al. (2009) Gravitational search algorithm (GSA) PBAs GSO based on the law of gravity and mass Unimodal and multimodal functions
interactions
Yang (2010) Bat algorithm (BA) SIAs It stirred by echolocation behavior of bats Constrained optimization problems
to search directions and location
Rao et al. (2011) Teaching–learning-based optimization HBAs It works on the effect of influence of a Engineering design optimization
(TLBO) teacher on learners
Eskandar et al. (2012) Water cycle algorithm (WCA) PBAs It inspired from nature and based on the Engineering design problems
observation of water cycle process as
how rivers and streams flow to the sea in
the real world

Gandomi and Alavi (2012) Krill herd (KH) SIAs It is based on simulation of the herding High and low dimension function
behavior of krill individuals with adap-
tive genetic operators
Cuevas et al. (2013) Social spider optimization (SSO) SIAs It develop by cooperative behavior of Complex optimization problems
social spiders in a colony, which interact
to each other based on the biological
laws of the cooperative colony
Bansal et al. (2014) Spider monkey optimization (SMO) SIAs It inspired by the foraging behavior of spi- Numerical optimization problem
der monkeys and follow fission–fusion
based social structure
Mirjalili et al. (2014a, b) Grey wolf optimizer (GWO) SIAs It motivated by the hierarchical leadership Multimodal functions
and hunting approach of grey wolves
by searching, encircling and attacking
process
Zheng (2015) Water wave optimization (WWO) PBAs It inspired by shallow water wave theory Train scheduling problem
with implication of propagation, refrac-
tion and breaking operators


Mirjalili (2015) Moth-flame optimization (MFO) SIAs It motivated by navigation method of Engineering problems
moths in which they fly in night by
maintaining a fixed angle with respect
to the moon
Mirjalili and Lewis (2016) Whale optimization algorithm (WOA) SIAs It mimics the social behavior of humpback Multimodal functions
whales and bubble-net hunting strategy
Mirjalili (2016) Dragonfly algorithm (DA) SIAs It inspired by static and dynamic behavior Single and multi-objective problem
of dragonflies swarms which explore and
exploit the search space
Saremi et al. (2017) Grasshopper optimization algorithm SIAs It enthused by nymph and adulthood Real life problems
(GOA) nature of grasshopper swarms

Pierezan and Dos (2018) Coyote optimization algorithm (COA) SIAs It is based on the mechanisms and social Global optimization problems
life of canis latrans species for balancing
exploration and exploitation
Shabani et al. (2019) Search and rescue optimization (SAR) HBAs It is inspired by search and rescue opera- Single-objective continuous optimization
tions of the humans being problems
Marzbali (2020) Bear smell search algorithm (BSSA) SIAs It imitates by sense (smell) and move- Real world engineering problems
ment of bears for search of their food in
thousand miles farther
Abualigah (2020a) Multi-verse optimizer algorithm (MOA) EAs It is based on three concepts in cosmol- Global optimization problems
ogy: white hole, black hole, and worm-
hole to perform exploration, exploita-
tion, and local search, respectively
Abualigah and Diabat (2020b) Grasshopper optimization algorithm SIAs It mimicked the swarming behaviour of Engineering design, wireless networking,
(GOA) grasshoppers in nature and its math- machine learning, image processing,
ematical model simulate repulsion (to control of power systems, and others
explore the search space) and attraction
(to exploit promising regions) forces
between the grasshoppers. Also, GOA
was equipped with a coefficient that
adaptively decreases the comfort zone
of the grasshoppers, to balance between
exploration and exploitation
Abualigah (2020b) Group search optimizer (GSO) SIAs It is based on animal searching behavior Artificial neural networks, real-world
(e.g. animal scanning mechanisms) and benchmark problems and various optimi-
group living theory as well as pro- zation problems
ducer–scrounger model, which assumes
that group members search either for
“finding” (producer) or for “joining”
(scrounger) opportunities
Although a large number of MAs have been introduced in the literature, no single one is able to solve every variety of problem (Wolpert and Macready 1997). In other words, a method may produce acceptable results for some problems but not for others. Thus, there is a need for effective algorithms that solve a wider range of optimization problems. Moreover, hybrid techniques are now favored over individual algorithms for solving complex optimization problems (Karen et al. 2006; Abualigah et al. 2017a, b; Abualigah et al. 2018b; Yıldız 2019; Yıldız et al. 2019a, b; Abualigah and Diabat 2020a; Yıldız et al. 2020a, b, c, d), because they maintain population diversity and prevent premature convergence while delivering high-precision solutions. Hence, the motivation of this study is to present novel variants of DE and PSO together with their hybridization. After an extensive literature review on the different variants of DE and PSO and their hybrids, the following points were analysed and served as motivation.

(i) In DE, the mutation and crossover strategies with their associated control parameters are what produce a globally good solution and improve convergence behavior. Therefore, appropriate strategies and associated parameter values for DE are considered a vital research topic.
(ii) The performance of PSO greatly depends on its parameters, namely the acceleration coefficients (which guide particles to the optimum) and the inertia weight (which balances diversity). Hence, many researchers have tried to modify the control parameters of PSO to achieve better accuracy and higher speed.
(iii) Hybrid algorithms have aroused the interest of researchers due to their effectiveness on complex optimization problems. Since DE and PSO have complementary properties, their hybrids have gained prominence recently. To the best of our knowledge, finding ways to combine DE and PSO is still an open problem.
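For point (ii), a widely used baseline from the PSO literature is a linearly decreasing inertia weight with time-varying acceleration coefficients. The sketch below shows that standard schedule for illustration only; the aPSO proposed in this paper uses its own "gradually varying" parameters, which differ from this baseline.

```python
# Standard linearly varying PSO parameter schedule (illustrative; the
# endpoint values 0.9/0.4 and 2.5/0.5 are common literature defaults,
# not the aPSO settings).
def linear_schedule(t, t_max, w_max=0.9, w_min=0.4,
                    c_init=2.5, c_final=0.5):
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac        # inertia: explore -> exploit
    c1 = c_init - (c_init - c_final) * frac   # cognitive pull: decreases
    c2 = c_final + (c_init - c_final) * frac  # social pull: increases
    return w, c1, c2

print(linear_schedule(0, 100))    # (0.9, 2.5, 0.5)
print(linear_schedule(100, 100))  # (0.4, 0.5, 2.5)
```

Early iterations favor broad exploration (high inertia, strong cognitive pull); late iterations favor exploitation around the global best (low inertia, strong social pull).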

Inspired by the above observations and the literature survey, the following major contributions are made towards solving optimization problems.

(i) An advanced differential evolution (aDE) is developed, in which a novel mutation strategy and crossover probability, along with a slightly modified selection scheme, are introduced.
(ii) An advanced particle swarm optimization (aPSO) is suggested, which consists of novel, gradually varying (decreasing and/or increasing) parameters.
(iii) An advanced hybrid algorithm is designed by hybridizing aDE and aPSO (haDEPSO).

The remainder of this paper is structured as follows: Sect. 2 reviews the different and hybrid variants of DE and PSO. Section 3 outlines the fundamentals of DE and PSO. Section 4 describes the proposed algorithms. The proposed algorithms are substantiated on a wide set of benchmark functions and real-world problems in Sect. 5. Section 6 concludes this study with future perspectives.


2 Related work

During the past decade, the development of powerful MAs for high-dimensional optimization problems has become a popular study area. Among the many MAs, PSO and DE have been widely used for continuous/discrete, constrained as well as unconstrained optimization problems. DE has remarkable performance and has become a powerful optimizer for real-life problems. However, it has a few issues, such as convergence speed and local exploitation ability. To overcome these shortcomings, many robust and effective DE variants have been designed in the literature. A brief review of different DE variants, with their fields of application, is given in Table 2.
PSO, too, has attracted attention for solving many complex optimization problems due to its efficient search ability and simplicity. However, its main drawback is that it may easily get stuck in a locally optimal region. Therefore, accelerating convergence and avoiding local optima are two critical issues in PSO. To address them, many modifications of PSO have been proposed in the recent literature, as summarized in Table 3.
Furthermore, hybridization is one of the main research directions for improving the performance of a single algorithm, because different optimization algorithms have different search behaviors and advantages. Therefore, to enhance the performance of DE and PSO, many hybrid algorithms have been presented in the literature, as listed in Table 4. Since an individual algorithm may either converge prematurely or become stuck in a local optimum, hybrid techniques are now favored over individual efforts.
The classical DE and PSO algorithms, which underlie the main contributions and novelties of the present work, are outlined in the next section.

3 Brief on DE and PSO

The basics of original DE and PSO are presented below.

3.1 Differential evolution (DE)

DE initializes a population of $np$ individuals randomly in the D-dimensional search space within the lower and upper boundaries $(x^l, x^u)$. After initialization, DE is conducted by three main operations, defined as follows.
Mutation: to generate new offspring, the mutation operator modifies an individual through random changes. At iteration $t$, for each target vector $x_{i,j}^t$ a mutant vector $v_{i,j}^t$ is generated as follows:

$$v_{i,j}^t = x_{r_1}^t + F\left(x_{r_2}^t - x_{r_3}^t\right) \tag{2}$$

where $r_1, r_2, r_3 \in \{1, 2, \ldots, np\}$ are randomly chosen, mutually distinct integers and $F$ denotes the scaling factor employed to control the amplification of the differential variation.
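A minimal sketch of the mutation in Eq. (2), assuming the usual DE/rand/1 convention that $r_1, r_2, r_3$ are mutually distinct and also different from the target index $i$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Eq. (2) applied to every target vector: for each x_i, pick three
# mutually distinct indices r1, r2, r3 (all != i) and form
# v_i = x_r1 + F * (x_r2 - x_r3). np and F as defined in the text.
def mutate(pop, F=0.5):
    np_, D = pop.shape
    mutants = np.empty_like(pop)
    for i in range(np_):
        choices = [r for r in range(np_) if r != i]
        r1, r2, r3 = rng.choice(choices, size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants

pop = rng.uniform(-5.0, 5.0, size=(6, 4))   # np = 6 individuals, D = 4
print(mutate(pop).shape)                     # (6, 4)
```

The mutant population then feeds into the crossover and selection steps described next.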

Table 2  Brief study of DE variants with applications
References Algorithm Brief description Application

Brest et al. (2006) Self-adaptive DE It is a new version of DE that contained self- Numerical benchmark problems
adaptive control parameter and obtained
quality solutions
Ali (2007) DE with preferential crossover (DEPC) A preferential crossover rule employed in Real-world applications
DEPC that takes points from an auxiliary
population set and variable scaling parameter
in mutation
Rahnamayan et al. (2008) Opposition based DE (ODE) The idea of opposition based learning was Unimodal and multimodal functions
used in ODE for population initialization
and generation jumping which accelerate
convergence rate
Zhang and Sanderson (2009) Adaptive DE with optional external archive A new mutation strategy (DE/current-to-pbest) High dimensional problems
(JADE) in an adaptive manner with optional external
archive and updating control parameters
applied in JADE
Amjady and Sharifzadeh (2010) Modified differential evolution (MDE) A new mutation and selection mechanism Non-convex economic dispatch problem
included in MDE inspired from the positive


characteristics of GA (Genetic Algorithm),
PSO and SA (Simulated Annealing)
Fu et al. (2011) Self-adaptive DE (SADE) In order to preserve the diversity and speed up Constraint satisfaction problems
the convergence rate, it adapted mutation and
crossover rate with each generation evolution
Ghosh et al. (2011) Fitness adaptive DE (FiADE) A novel automatic tuning method adopted in Constrained benchmark functions
FiADE for the scale factor and crossover rate
Islam et al. (2012) Modified DE with p-best crossover (MDE_ A new mutation and fitness selection scheme is Unimodal and multimodal functions
pBX) proposed in MDE_pBX. Also to achieve bet-
ter performance an active scheme (based on
successful values) is adapted for F and CR
Cai and Wang (2013) DE With neighborhood and direction Informa- An direction induced mutation and neighbor Numerical optimization
tion (NDi-DE) guided selection strategy are introduced in
NDi-DE for direction information of the

population and exploit the neighborhood
Gong and Cai (2013) DE with ranking-based mutation operators A ranking-based mutation operator imple- Unimodal and multimodal functions
(rank-DE) mented in rank-DE where some parents are
evenly selected according to their ranks in
current population
Li and Yin (2014) Modified DE (MDE) Two mutation rules are used alternately through Numerical optimization problems
a probability rule and self-adaptive parameter
are introduced in MDE to enhance the diver-
sity of the population
Guo and Yang (2015) Enhancing differential evolution utilizing It consisting rotationally invariant crossover Real world problems
eigenvector-based crossover operator (eigenvectors of covariance matrix based
crossover) and donor vectors are modified by
projecting them onto the eigenvector basis
Mohamed (2015) Improved DE (IDE) To enhance diversity and avoiding premature Global optimization problems
convergence a new triangular mutation rule
(based on convex combination triplet vector),
BGA (modified breeder GA) and a random
mutation (as restart mechanism) are sched-
uled in IDE
Yang et al. (2015) Auto-enhanced population diversity DE In AEPD-DE, implemented AEPD mechanism Unimodal and multimodal functions
(AEPD-DE) which identify vector moments once a popu-
lation converging/stagnating and enhance
diversity automatically
Mallipeddi and Lee (2015) Evolving surrogate model-based DE (ESMDE) To generate competitive offspring, it con- Constrained problems
structed by a surrogate model (Kriging)
which is based on population member of the
current generation with suitable parameter
setting

Do et al. (2016) Modified DE (mDE) Best individual based mutation and elitist selec- Tensegrity structures optimization
tion scheme with adjusted scale and crossover
factor are introduced in mDE, to increase
exploitation ability and/or convergence speed
and reduce computational cost
Liu and Guo (2016) Clustering-based DE with random-based sam- One-step k-means clustering is used to generate Unimodal, multimodal problems
pling and Gaussian sampling (GRCDE) k search spaces in GRCDE, where random
and Gaussian sampling mutation operators
are castoff to improve the search efficiency
Salehpour et al. (2017) Fuzzy DE (FDE) Fuzzy logic inference system is used to dynami- Single objective problem
cally adapt the mutation factor as output with
population diversity and number of genera-
tion are considered as inputs in FDE
Qiu et al. (2017) Multiple exponential recombination for DE Here a multiple exponential crossover is Unimodal, multimodal problems
devised where several segments will be
exchanged among involved vectors
Qiu et al. (2018) Minimax DE (MMDE) A novel bottom-boosting and partial-regenera- Robust design problem
tion strategy as well as new mutation operator
are introduced in MMDE to enhance both
reliability and efficiency of the algorithm
Zhang and Li (2018) DE parent selection (DEPS) A modified parent selection technique is pro- Numerical optimization
posed in DEPS where successful experience
of the parents is used
Huang et al. (2018) Hypercube-based NCDE It adopted hypercube neighborhood based Multimodal optimization problems
mutation where to control the radius vector
of a hypercube a self-adaptive method is used
towards guarantee the neighborhood size
always in a reasonable range

Yang et al. (2019) Improved DE algorithm (daDE) An efficient mutation rule by using the informa- Quantum computation problems
tion of the current and the former individuals
together is designed in daDE
Prabha and Yadav (2019) DE with biological-based mutation operator It consists a bio-inspired mutation operator Real world problems
(hemostatic operator) that gives promising
solutions and enhances diversity
Liu et al. (2019) Hierarchical DE combined with multi-cross For fast convergence and better global search Unimodal and multimodal functions
operation (HDEMCO) ability in HDEMCO, MCO (multi-cross
operation) i.e. hierarchical heterogeneous
concept performs in top whereas SHADE
(success-history-based adaptive DE) con-
ducted in bottom layer
Gui et al. (2019) Multi-role based DE (MRDE) It utilizes distinct advantages of some popular Unimodal and multimodal functions
generation strategies and control parameters
with adaptive and regroup scheme which is
beneficial for speeding up the convergence
and diversifying the search behavior
Li et al. (2020) Enhanced adaptive DE (EJADE) CR sorting mechanism (rationally assign) and Photovoltaic models
a dynamic population reduction strategy is
employed in EJADE to accelerate conver-
gence and maintained the diversity
Hu et al. (2020) Boltzmann annealing DE (BADE) It combines a multi-population DE and an Directional resistivity logging-while-
annealing strategy where different differential drilling (DRLWD)
strategies are employed at different stages of
annealing to improve the convergence as well
as to save computation time
Ben (2020) Accelerated DE (ADE) To avoid the untimely convergence damage Inverse damage detection problem
scenario structure concept and a new differ-
ence mutation vector based on dispersion of
individuals as well as a new exchange opera-
tor are used in ADE
Table 3  Brief study of PSO variants with applications
References Algorithm Brief description Application

He and Han (2006) Improved PSO with disturbance term (PSO-DT) In PSO-DT, a disturbance term added to the Unconstrained problems
existing velocity updates formula of the PSO
which effectively mends the defects
Yang et al. (2007) A modified adaptive PSO with dynamic adapta- In this method the inertia weight (based on func- Unimodal and multimodal function
tion (DAPSO) tion evolution speed and aggregation degree
factor) dynamically changes the run and evolu-
tion state
Jie et al. (2008) Knowledge-based cooperative PSO (KCPSO) It introduced multi-swarm (to maintain diversity) Complex multimodal functions
and knowledge billboard (to record varieties of
search information) mechanism
Cai et al. (2009) Predicted modified PSO with time varying accel- Social and cognitive learning factors adjusted Multi-modal high-dimensional problems
erator coefficients (PMPSO-TVAC) according to a predefined predicted velocity
index in PMPSO-TVAC
Azadani et al. (2010) Constrained PSO (CPSO) In CPSO, particles initializing and updating Multi-area electricity market
under uniform distribution which leads to an
easier updating and faster convergence
Kang and He (2011) Discrete particle swarm optimization (DPSO) Position and velocity of the particle in DPSO Task assignment problems
is extended from the real to integer vector in
discrete domain
Sun et al. (2012) Quantum-behaved PSO (QPSO) It motivated by concept from quantum mechanics Global optimization problem
belonging to bare-bones PSO family
Tatsumi et al. (2013) Chaotic PSO-virtual quartic objective function A new chaotic system (derived from steepest Global optimization problem
(CPSO-VQO) descent method and gradient perturbation)
presented in CPSO-VQO
Zhang et al. (2014) A parameter selection strategy for PSO based on Based on overshoot and peak time of transition Engineering problems
particle positions process authors developed a novel parameter
selection strategy for PSO which provides new
way to analyze particle trajectories
Jordehi (2015) Enhanced leader PSO (ELPSO) For justifying early convergence, it applied five- Global optimization problems
staged successive mutation strategy to swarm
leader at each iteration

Tanweer et al. (2016) | Dynamic mentoring and self-regulation based PSO (DMeSR-PSO) | A human-like dynamic mentoring scheme incorporated with self-regulation to empower the searching particles | Unimodal and multimodal functions
Ngoa et al. (2016) | Extraordinariness PSO (EPSO) | An extraordinary movement concept where a particle can move toward a target that may be the global best, a local best or the worst individual | Unimodal and multimodal functions
Liu and Liu (2017) | Multi-leader PSO (MLPSO) | Based on game theory, a multi-leader mechanism (instead of random selection) is introduced into canonical PSO to escape local optima | Global optimization problems
Kiran (2017) | PSO with a new update mechanism (PSOd) | Works with the mean and standard deviation of the current particle to mitigate loss of diversity and stagnation | Nonlinear global optimization
Mishra et al. (2018) | Direction-aware PSO with sensitive swarm leader (DAPSO-SSL) | Maps basic human nature such as awareness, maturity, leader and follower relationships and leadership qualities onto standard PSO | Real-life optimization problems
Chen et al. (2018a, b) | Dynamic multi-swarm differential learning PSO (DMSDL-PSO) | Each sub-swarm merges in the differential mutation operator to enhance exploration and exploitation | Unimodal and multimodal functions
Espitia and Sofrony (2018) | Vortex PSO (VPSO) | Mimics foraging and predator avoidance of living organisms using translational and dispersion behaviors to avoid local minima | Multimodal functions
Yu et al. (2018) | Surrogate-assisted hierarchical PSO (SHPSO) | Consists of PSO and SL-PSO (social learning PSO) to explore and exploit the search space | High-dimensional problems
Chen et al. (2018a, b) | PSO algorithm with crossover operation (PSOCO) | Two new crossover operators employed to breed promising exemplars and maintain diversity | Unimodal and multimodal functions
Table 3  (continued)
References | Algorithm | Brief description | Application
Isiet and Gadal (2019) | Unique adaptive PSO (UAPSO) | Unique control parameters established for each particle through the evolutionary state; particles learn from their personal feasible solutions only | Constrained optimization problems
Hosseini et al. (2019) | Hunter-attack fractional-order PSO (HAFPSO) | A hunter-attack strategy and fractional-order derivatives used to avoid stagnation and accelerate convergence | Optimum design of power amplifiers
Kohler et al. (2019) | New PSO algorithm (PSO+) | A feasibility repair operator, two swarms, a new particle update method, two heuristics (to initialize a feasible swarm) and a neighborhood topology added to maintain diversity and speed up convergence | Linear and nonlinear constrained problems
Khajeh et al. (2019) | Modified PSO with novel population initialization (MPSO) | A novel particle initialization scheme established to cover the search space uniformly and improve the usual PSO performance | Benchmark function optimization
Ang et al. (2020) | Constrained multi-swarm PSO without velocity (CMPSOWV) | Current and memory swarm evolution, two diversity maintenance schemes, a probabilistic mutation operator and appropriate constraint handling techniques included to improve robustness and convergence | Constrained optimization problems
Lanlan et al. (2020) | Non-inertial opposition-based PSO (NOPSO) | Built from generalized opposition-based learning and an adaptive elite mutation strategy to accelerate the convergence rate | Deep learning
Xiong et al. (2020) | Novel multi-swarm PSO (NMSPSO) | Three novel improved strategies (information exchange, learning and mutation) introduced to balance exploration and exploitation abilities | Real-world application problems
Table 4  Brief study of DE and PSO hybrids with applications
References | Algorithm | Brief description | Application
Hendtlass (2001) | SDEA | Each individual follows PSO and from time to time runs DE, which may move an individual from a poorer area to a better one | Unconstrained global optimization
Zhang and Xie (2003) | DEPSO | Bell-shaped mutations integrated to balance evolution and population diversity | Unconstrained global optimization
Talbi and Batouche (2004) | DEPSO | Standard PSO applied on odd and DE on even iterations, to improve diversity and prevent the swarm from unexpected fluctuations respectively | Multimodal image problems
Hao et al. (2007) | DEPSO | Combines the memory information extracted by PSO and the differential information obtained by DE to maintain diversity and promising solutions | Unconstrained global optimization
Niu and Li (2008) | PSODE | PSO and DE executed in parallel to enhance information sharing in the population | Unimodal and multimodal functions
Wang and Cai (2009) | HMPSO | Current swarm distributed into several sub-swarms; DE is incorporated to improve the personal best of each particle and PSO serves as the search engine for each sub-swarm | Constrained optimization problems
Caponio et al. (2009) | SFMDE | DE hybridized with PSO (to generate a super-fit individual) and with two local searchers, Nelder-Mead and Rosenbrock (with an index measuring the quality of the super-fit individual) | Unconstrained and engineering design optimization
Liu et al. (2010) | PSO-DE | DE, with its strong searching ability, is used to update the previous best positions of particles to overcome stagnation | Constrained and engineering optimization
Xin et al. (2010) | DEPSO | A geometric learning strategy adapted to the relative success ratio of the alternative methods (DE and PSO) | Global numerical optimization
Table 4  (continued)
References | Algorithm | Brief description | Application
Pant et al. (2011) | DE-PSO | Consists of alternating phases of DE and PSO (starts with DE; if the trial vector is better than the corresponding point it is included in the population, otherwise the algorithm enters the PSO phase to produce a new candidate solution) | Unconstrained global optimization
Epitropakis et al. (2012) | Evolving cognitive and social experience in PSO through DE | The memory of the particles is changed by DE to efficiently guide evolution and enhance the convergence and search abilities of PSO | Multimodal functions
Nwankwor et al. (2013) | HPSDE | Starts with DE to create the trial vector, then PSO activates to generate a new candidate solution | Optimal well placement
Sahu et al. (2014) | DEPSO | Takes the advantages of both algorithms (DE and PSO) to maintain diversity in PSO and add memory to the DE population | PID controller
Yu et al. (2014) | HPSO-DE | When the population clusters around local optima an adaptive mutation is applied to the current population, with a balanced parameter carried between DE and PSO | Unconstrained global optimization
Seyedmahmoudian et al. (2015) | DEPSO | Detrimental effects of the random coefficients are reduced by running DE in parallel with PSO | Photovoltaic power generation
Parouha and Das (2015) | DPD | Employs a tri-population breakup besides elitism and a non-redundant search concept | Constrained and engineering optimization
Tang et al. (2016) | HNTVPSO-RBSADE | A nonlinear time-varying PSO (NTVPSO) updates velocities and positions, and a ranking-based self-adaptive DE (RBSADE) avoids stagnation | Global path planning problems
Parouha and Das (2016a) | MBDE | Swarm-based mutation and swarm crossover presented for DE to direct the knowledge and experience of the populations | Continuous optimization problems
Table 4  (continued)
References | Algorithm | Brief description | Application
Parouha and Das (2016b) | DE-PSO-DE | The population is divided into groups A, B and C; DE is activated in A and C to improve global and local search, and PSO is applied on B for faster convergence | Economic load dispatch problems
Famelis et al. (2017) | DE-PSO | DE is combined with a velocity-update rule of PSO to increase diversity | Multimodal optimization problems
Mao et al. (2018) | DEMPSO | Combines DE with a modified PSO (MPSO): search bounds are narrowed by DE and the convergence rate is accelerated by MPSO | Numerical optimization
Tang et al. (2018) | SAPSO-mSADE | Integrates SAPSO (to balance the global and local search ability of particles) and mSADE (to evolve the personal best positions and diminish potential stagnation issues) | Real-world problems
Too et al. (2019) | BPSODE | Binary PSO (BPSO) and binary DE (BDE) are computed alternately, with a dynamic inertia weight and crossover rate introduced to track the optimal solution and balance diversity | Feature selection problems
Dash et al. (2020) | HDEPSO | Modified mutation, crossover and selection operations of DE fused with the most excellent particles of PSO to enhance global search ability | Sharp edge FIR filter (SEFIRF) design problem
Zhao et al. (2020) | Improved QPSO | Since the diversity of quantum particle swarm optimization (QPSO) declines rapidly, leading to inadequacy, QPSO is hybridized with DE to improve its search ability | Economic-environmental dispatch (EED)

Crossover: a trial vector ($u_{i,j}^{t}$) is produced by combining the target ($x_{i,j}^{t}$) and mutant ($v_{i,j}^{t}$) vectors as follows.

$$u_{i,j}^{t} = \begin{cases} v_{i,j}^{t}; & \text{if } rand_j \le Cr \\ x_{i,j}^{t}; & \text{otherwise} \end{cases} \quad (3)$$

where $i \in [1, np]$, $j \in [1, D]$, $rand_j$ is uniformly distributed in $[0, 1]$ and $Cr \in [0, 1]$ denotes the crossover rate.

Selection: it directs movement toward prospective areas of the search space, expressed as follows.

$$x_{i,j}^{t+1} = \begin{cases} u_{i,j}^{t}; & \text{if } f\left(u_{i,j}^{t}\right) \le f\left(x_{i,j}^{t}\right) \\ x_{i,j}^{t}; & \text{otherwise} \end{cases} \quad (4)$$

where $f(\cdot)$ is the fitness value of the objective function. The mutation, crossover and selection operators are applied to the offspring repeatedly until a predefined stopping criterion is met.
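For readers who want to trace Eqs. (3) and (4) in code, a minimal sketch of one DE generation may read as follows. The rand/1 mutation, the sphere test function and all parameter values here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def de_generation(pop, fitness, F=0.5, Cr=0.5, rng=None):
    """One classical DE generation: rand/1 mutation (assumed), binomial
    crossover per Eq. (3) and greedy selection per Eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    n, D = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # rand/1 mutation: three mutually distinct donors, all different from i
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover, Eq. (3); j_rand guarantees at least one donor gene
        j_rand = rng.integers(D)
        mask = rng.random(D) <= Cr
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # greedy selection, Eq. (4): keep the trial only if it is not worse
        if fitness(u) <= fitness(pop[i]):
            new_pop[i] = u
    return new_pop

sphere = lambda x: float(np.sum(x ** 2))  # illustrative test function
rng = np.random.default_rng(1)
pop = rng.uniform(-100, 100, size=(30, 5))
for _ in range(200):
    pop = de_generation(pop, sphere, rng=rng)
best = min(sphere(x) for x in pop)
```

Because the selection step is greedy, the best fitness in the population can never get worse from one generation to the next.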

3.2 Particle swarm optimization (PSO)

In classical PSO, the swarm flies through the D-dimensional search space to seek the global optimum. Each $i$th particle has its own position $X_i = \left(X_{i,1}, X_{i,2}, \ldots, X_{i,D}\right)$ and velocity $V_i = \left(V_{i,1}, V_{i,2}, \ldots, V_{i,D}\right)$. During the evolution, each particle tracks its individual best $Pbest_i = \left(Pbest_{i,1}, Pbest_{i,2}, \ldots, Pbest_{i,D}\right)$ and the global best $Gbest = \left(Gbest_1, Gbest_2, \ldots, Gbest_D\right)$; the velocity and position of the $i$th particle are updated at each iteration as follows.

$$V_{i,j}^{t+1} = wV_{i,j}^{t} + c_1 r_1 \left(Pbest_{i,j}^{t} - X_{i,j}^{t}\right) + c_2 r_2 \left(Gbest_j^{t} - X_{i,j}^{t}\right) \quad (5)$$

$$X_{i,j}^{t+1} = X_{i,j}^{t} + V_{i,j}^{t+1} \quad (6)$$

where $t$: iteration index, $V_{i,j}^{t}$: velocity of the $i$th particle in the $j$th dimension at the $t$th iteration, $c_1$: cognitive acceleration coefficient, $c_2$: social acceleration coefficient, $r_1$ and $r_2$: two uniform random numbers in the range between 0 and 1, and $w$: inertia weight.
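A minimal sketch of one classical PSO iteration implementing Eqs. (5) and (6) is shown below; the constriction-style coefficients and the sphere test function are illustrative assumptions.

```python
import numpy as np

def pso_step(X, V, pbest, pbest_f, gbest, fitness,
             w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One PSO iteration per Eqs. (5)-(6), followed by a refresh of the
    personal and global best records."""
    rng = np.random.default_rng() if rng is None else rng
    n, D = X.shape
    r1, r2 = rng.random((n, D)), rng.random((n, D))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (5)
    X = X + V                                                    # Eq. (6)
    f = np.array([fitness(x) for x in X])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
    return X, V, pbest, pbest_f, gbest

sphere = lambda x: float(np.sum(x ** 2))  # illustrative test function
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(30, 5))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([sphere(x) for x in X])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(200):
    X, V, pbest, pbest_f, gbest = pso_step(X, V, pbest, pbest_f, gbest, sphere, rng=rng)
```

Since `pbest` is only ever overwritten with strictly better positions, the swarm's best-known fitness is monotonically non-increasing over iterations.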

4 Proposed methodology

In this section, the following proposed methodology is described in detail.

(i) advanced differential evolution (aDE)
(ii) advanced particle swarm optimization (aPSO)
(iii) hybrid advanced DEPSO (haDEPSO: hybridization of aDE and aPSO)


4.1 Advanced differential evolution (aDE)

The suggested advanced DE (aDE) introduces a modified mutation strategy and crossover rate as well as a changed selection scheme, as follows.

$$\text{Mutation:} \quad v_{i,j}^{t} = x_{i,j}^{t} + \tau \times rand(0, 1) \times \left(best_j^{t} - x_{i,j}^{t}\right) \quad (7)$$

where $x_{i,j}^{t}$: target vector, $v_{i,j}^{t}$: mutant vector, $rand(0, 1)$: uniformly distributed random number between 0 and 1, $best_j$: best vector and $\tau$: convergence factor (which selects the searching scale of all vectors). The dynamic adjustment of the convergence factor ($\tau$) is as follows. (1) If $\tau = 1$, a vector is randomly generated in the range $[x_{i,j}^{t}, best_j]$. This can improve the convergence rate of DE, but it risks increasing the probability of encountering local optima. (2) If $\tau = \mu\left(1 - t/t_{max}\right) + 1$, where $t$ and $t_{max}$ are the current and total iteration counts and $\mu$ is a positive constant (determining the maximal searching scale of all vectors), then in the first iteration $\tau \approx \mu + 1$ (as $t = 1$ is much smaller than $t_{max}$, the term $t/t_{max}$ can be ignored) and in the last iteration $\tau = 1$ (as $1 - t/t_{max} = 0$). Therefore $\tau$ decreases linearly from $\mu + 1$ to 1 during the whole optimization process. This can improve convergence as well as avoid local optima.

Since $\tau$ is composed of a series of large values earlier, it enlarges the exploring scale of all vectors at the start, whereas it is composed of a series of small values later. This ensures the global and local search capacity, as well as the search-space exploration of all vectors, of the proposed mutation strategy.
$$\text{Crossover:} \quad u_{i,j}^{t}\ (\text{trial vector}) = \begin{cases} v_{i,j}^{t}; & \text{if } rand(0, 1) \le Cr\ (\text{crossover rate}) \\ x_{i,j}^{t}; & \text{otherwise} \end{cases} \quad (8)$$

In order to keep the global searching ability and improve convergence speed, $Cr$ is set as $e^{(t - t_{max})/t_{max}}$. This guarantees individual diversity in the early stage, which improves global search ability, and reduces the degree of difference among individuals in the later stage, which accelerates the convergence rate.
Selection: it emphasizes the random nature of aDE and is formulated as follows.

$$x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t}; & \text{if } f\left(u_{i,j}^{t}\right) > f\left(x_{i,j}^{t}\right) \text{ and } rand(0, 1) < p \\ u_{i,j}^{t}; & \text{otherwise} \end{cases} \quad (9)$$

where $f(\cdot)$: fitness function value and $p$: random value in $(0, 1]$. In this selection, each pioneer vector gets a chance to survive and share its observed information with other vectors in subsequent steps. This enriches the searching capabilities and is advantageous for stabilizing the essential exploration and exploitation trends of aDE. The pseudo-code of the proposed aDE is presented below.
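The aDE pseudo-code is rendered as an image in the original article, so a minimal Python sketch of one aDE generation, following Eqs. (7), (8) and (9), is given here instead; the value of $\mu$, the sphere test function and the literal reading of the stochastic selection rule are assumptions for illustration.

```python
import numpy as np

def ade_generation(pop, fit, func, t, t_max, mu=2.0, rng=None):
    """One aDE generation: best-guided mutation with a linearly decreasing
    convergence factor tau (Eq. 7), an exponentially growing crossover rate
    (Eq. 8) and the stochastic selection rule (Eq. 9)."""
    rng = np.random.default_rng() if rng is None else rng
    n, D = pop.shape
    best = pop[np.argmin(fit)]
    tau = mu * (1.0 - t / t_max) + 1.0   # decreases from roughly mu+1 to 1
    Cr = np.exp((t - t_max) / t_max)     # grows from about e^-1 toward 1
    for i in range(n):
        v = pop[i] + tau * rng.random(D) * (best - pop[i])   # Eq. (7)
        u = np.where(rng.random(D) <= Cr, v, pop[i])         # Eq. (8)
        fu = func(u)
        # Eq. (9): keep the parent only when the trial is worse AND
        # rand(0,1) < p, with p itself drawn at random (literal reading)
        if fu > fit[i] and rng.random() < rng.random():
            continue
        pop[i], fit[i] = u, fu
    return pop, fit

sphere = lambda x: float(np.sum(x ** 2))  # illustrative test function
rng = np.random.default_rng(3)
pop = rng.uniform(-100, 100, size=(30, 5))
fit = np.array([sphere(x) for x in pop])
init_best = float(fit.min())
best_ever = init_best
for t in range(1, 101):
    pop, fit = ade_generation(pop, fit, sphere, t, 100, rng=rng)
    best_ever = min(best_ever, float(fit.min()))
```

Note that, unlike classical DE selection, Eq. (9) can accept a worse trial vector, so the best solution found so far should be tracked externally (here in `best_ever`).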


4.2 Advanced particle swarm optimization (aPSO)

Ideally, PSO needs strong exploration ability in the early phase of the evolution and strong exploitation capability in the later phase. In the velocity update equation of PSO, the inertia weight ($w$) and the acceleration coefficients ($c_1$ and $c_2$) are the important factors for satisfying this requirement, based on the following considerations.

(i) $w$ is very useful to ensure convergence, and dynamically changing its value controls the exploration and exploitation of the search space. Usually $w$ is large at first, which allows all particles to move freely in the search space at the initial steps, and decreases over time. Moreover, a dynamically decreasing $w$ (large and small values of $w$ assist exploration and exploitation respectively) has produced good results and controls the balance between global and local search ability.

(ii) The values of $c_1$ and $c_2$ facilitate exploitation and exploration of the search area based on the following strategies: (a) with large $c_1$ and small $c_2$, particles are allowed to move around a wider search space at the beginning instead of moving toward the population best, and (b) with small $c_1$ and large $c_2$, particles are allowed to converge to the global optimum in the latter part of the optimization process.

Considering all these concerns, i.e. the advantages, disadvantages and parameter influences of PSO, an advanced PSO (aPSO) is introduced in this study. It relies on novel, gradually varying (decreasing and/or increasing) parameters ($w$, $c_1$ and $c_2$) stated as follows.
$$w = w_f + \left(w_i - w_f\right)\left(\frac{t}{t_{max}}\right)^2; \quad c_1 = c_{1f}\left(\frac{c_{1i}}{c_{1f}}\right)^{\left(t/t_{max}\right)^2} \quad \text{and} \quad c_2 = c_{2i}\left(\frac{c_{2f}}{c_{2i}}\right)^{\left(t/t_{max}\right)^2} \quad (10)$$

where $w_i$ and $w_f$: initial and final values of $w$; $c_{1i}$ and $c_{1f}$: initial and final values of $c_1$; $c_{2i}$ and $c_{2f}$: initial and final values of $c_2$; $t$ and $t_{max}$: iteration index and maximum number of iterations. Hence the velocity and position of the $i$th particle are updated by the following equations in the proposed aPSO.

$$V_{i,j}^{t+1} = \left[w_f + \left(w_i - w_f\right)\left(\frac{t}{t_{max}}\right)^2\right]V_{i,j}^{t} + \left[c_{1f}\left(\frac{c_{1i}}{c_{1f}}\right)^{\left(t/t_{max}\right)^2}\right]r_1\left(Pbest_{i,j}^{t} - X_{i,j}^{t}\right) + \left[c_{2i}\left(\frac{c_{2f}}{c_{2i}}\right)^{\left(t/t_{max}\right)^2}\right]r_2\left(Gbest_j^{t} - X_{i,j}^{t}\right) \quad (11)$$

$$X_{i,j}^{t+1} = X_{i,j}^{t} + V_{i,j}^{t+1} \quad (12)$$

where $t$: iteration index and $X_{i,j}^{t}$, $V_{i,j}^{t}$, $Pbest_{i,j}^{t}$ and $Gbest_j^{t}$ are the initial position, initial velocity, personal best and global best position of the $i$th particle respectively. The pseudo-code of the proposed aPSO is presented below.
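The time-varying schedules of Eq. (10) can be sketched directly; with the parameter values recommended later in the paper ($w_i = 0.4$, $w_f = 0.9$, $c_{1i} = 0.5$, $c_{1f} = 2.5$, $c_{2i} = 2.5$, $c_{2f} = 0.5$), $w$ moves from $w_f$ to $w_i$, $c_1$ from $c_{1f}$ to $c_{1i}$ and $c_2$ from $c_{2i}$ to $c_{2f}$ as $t$ runs from $0$ to $t_{max}$.

```python
def apso_coefficients(t, t_max, wi=0.4, wf=0.9,
                      c1i=0.5, c1f=2.5, c2i=2.5, c2f=0.5):
    """Time-varying aPSO parameters of Eq. (10); defaults are the values
    recommended in the paper."""
    s = (t / t_max) ** 2          # quadratic progress ratio
    w = wf + (wi - wf) * s        # inertia weight schedule
    c1 = c1f * (c1i / c1f) ** s   # cognitive coefficient schedule
    c2 = c2i * (c2f / c2i) ** s   # social coefficient schedule
    return w, c1, c2
```

The quadratic exponent makes the parameters change slowly in early iterations and faster toward the end of the run.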

5 Hybrid advanced DEPSO (haDEPSO)

An advanced hybrid algorithm (haDEPSO) is proposed to further improve solution quality. In haDEPSO, the entire population is sorted according to fitness function value and divided into two sub-populations, i.e. pop1 (best half) and pop2 (rest half). Since pop1 and pop2 contain the best and rest halves of the main population, they imply good global and local search capability respectively. In order to maintain local and global search capability, the proposed aDE (due to its good local search ability) and aPSO (because of its good global search capability) are applied on the respective sub-populations (pop1 and pop2). After evaluating both sub-populations, the better solutions obtained in pop1 (by aDE) and pop2 (by aPSO) are named best and gbest respectively. If best is less than gbest, then pop2 is merged with pop1 and the merged population is evaluated by aDE (as it mitigates potential stagnation); otherwise, pop1 is merged with pop2 and the merged population is evaluated by aPSO (as it is established to guide better movements). Basically, haDEPSO is based on relating the superior capabilities of the suggested aDE and aPSO. The pseudo-code is described below and the flowchart of haDEPSO is presented in Fig. 1.

Fig. 1  Flowchart of haDEPSO
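The sort-split-merge control flow described above can be sketched as follows; `ade_step` and `apso_step` are placeholders for the component algorithms of Sects. 4.1 and 4.2, and the contraction mapping used in the demo is purely an illustrative stand-in.

```python
import numpy as np

def hadepso_iteration(pop, func, ade_step, apso_step):
    """One haDEPSO iteration (sketch): sort by fitness, give the best half
    (pop1) to aDE and the rest half (pop2) to aPSO, then merge and
    re-evaluate with whichever component produced the better solution."""
    fit = np.array([func(x) for x in pop])
    order = np.argsort(fit)
    half = len(pop) // 2
    pop1, pop2 = pop[order[:half]], pop[order[half:]]
    pop1 = ade_step(pop1, func)              # aDE on the best half
    pop2 = apso_step(pop2, func)             # aPSO on the rest half
    best = min(func(x) for x in pop1)
    gbest = min(func(x) for x in pop2)
    merged = np.vstack([pop1, pop2])
    if best < gbest:
        return ade_step(merged, func)        # aDE mitigates stagnation
    return apso_step(merged, func)           # aPSO guides better movements

# toy demo: a simple contraction stands in for both component algorithms
sphere = lambda x: float(np.sum(x ** 2))
contract = lambda p, f: p * 0.5
rng = np.random.default_rng(0)
pop0 = rng.uniform(-10, 10, size=(10, 3))
pop1 = hadepso_iteration(pop0, sphere, contract, contract)
```

The merge step keeps the population size constant, so the same routine can be iterated until the stopping criterion is reached.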

6 Results and discussion

In this section, diverse optimization problems are employed to validate the performance of the proposed aDE, aPSO and haDEPSO algorithms. At the same time, several state-of-the-art algorithms are used for comparison with the proposed algorithms. The following unconstrained test suites (TS) and real-world problems (RWPs) are used to assess the competence of the proposed algorithms.

(i) TS-1: 23 basic benchmark functions
(ii) TS-2: IEEE CEC 2014
(iii) TS-3: IEEE CEC 2017
(iv) RWP-1: Gear train design

$$\text{Minimize } f(x) = \left\{\frac{1}{6.931} - \frac{T_d T_b}{T_a T_f}\right\}^2 = \left\{\frac{1}{6.931} - \frac{x_1 x_2}{x_3 x_4}\right\}^2; \quad \text{subject to: } 12 \le x_i \le 60,\ i = 1, 2, 3, 4.$$
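The gear train objective is cheap to evaluate directly; as a quick sanity check, the widely reported best integer solution with teeth counts (19, 16, 43, 49) (an assumption taken from the common benchmark literature, not stated in this section) gives an error on the order of 1e-12.

```python
def gear_train(x1, x2, x3, x4):
    """Gear train design objective: squared deviation of the gear ratio
    x1*x2/(x3*x4) from the target ratio 1/6.931."""
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# widely reported best integer solution (assumed here for illustration)
f_best = gear_train(19, 16, 43, 49)
```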

(v) RWP-2: Frequency modulation sound parameter identification problem

$$\text{Minimize } f\left(a_1, w_1, a_2, w_2, a_3, w_3\right) = \sum_{t=0}^{100}\left(y(t) - y_0(t)\right)^2$$

$$y(t) = a_1 \sin\left(w_1 t\theta + a_2 \sin\left(w_2 t\theta + a_3 \sin\left(w_3 t\theta\right)\right)\right)$$

$$y_0(t) = 1.0\sin\left((5.0)t\theta - (1.5)\sin\left((4.8)t\theta + (2.0)\sin\left((4.9)t\theta\right)\right)\right)$$

where $\theta = \frac{2\pi}{100}$ and $-6.4 \le a_i, w_i \le 6.35$, $i = 1, 2, 3$.
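The FM objective can be sketched directly from the definitions above; at the parameter vector (1.0, 5.0, -1.5, 4.8, 2.0, 4.9) the synthesized wave reproduces the target wave, so the error drops to zero.

```python
import math

THETA = 2 * math.pi / 100

def fm_error(a1, w1, a2, w2, a3, w3):
    """FM sound parameter identification objective: squared error between
    the synthesized wave y(t) and the target wave y0(t) over t = 0..100."""
    err = 0.0
    for t in range(101):
        y = a1 * math.sin(w1 * t * THETA
                          + a2 * math.sin(w2 * t * THETA
                                          + a3 * math.sin(w3 * t * THETA)))
        y0 = 1.0 * math.sin(5.0 * t * THETA
                            - 1.5 * math.sin(4.8 * t * THETA
                                             + 2.0 * math.sin(4.9 * t * THETA)))
        err += (y - y0) ** 2
    return err

# the global optimum reproduces y0(t) exactly
f_opt = fm_error(1.0, 5.0, -1.5, 4.8, 2.0, 4.9)
```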
(vi) RWP-3: The spread spectrum radar poly-phase code design problem

$$\text{Minimize } f(x) = \max\left\{f_1(X), \ldots, f_{2m}(X)\right\}, \quad X = \left\{\left(x_1, \ldots, x_n\right) \in R^n \mid 0 \le x_j \le 2\pi,\ j = 1, 2, \ldots, n\right\} \text{ and } m = 2n - 1$$

$$\text{with: } f_{2i-1}(x) = \sum_{j=i}^{n} \cos\left(\sum_{k=|2i-j-1|+1}^{j} x_k\right),\quad i = 1, 2, \ldots, n;$$

$$f_{2i}(x) = 0.5 + \sum_{j=i+1}^{n} \cos\left(\sum_{k=|2i-j|+1}^{j} x_k\right),\quad i = 1, 2, \ldots, n - 1; \qquad f_{m+i}(x) = -f_i(x),\quad i = 1, 2, \ldots, m.$$

The description of TS-1 is listed in Table 5, which consists of three groups: unimodal (f1–f7), multimodal (f8–f13) and fixed-dimension (f14–f23) functions. The TS-2 unconstrained benchmark functions are cited in Table 6, which consists of unimodal (g1–g3), multimodal (g4–g16), hybrid (g17–g22) and composition (g23–g30) functions; a detailed summary of TS-2 can be found in (Liang et al. 2013). Likewise, the TS-3 unconstrained benchmark functions are cited in Table 7, which consists of unimodal (h1–h3), multimodal (h4–h10), hybrid (h11–h20) and composition (h21–h30) functions; a detailed summary of TS-3 is given in (Awad et al. 2016). Moreover, a detailed summary of the RWPs can be found in (Dor et al. 2012).
Simulations were conducted on an Intel(R) Core(TM) i5-2350M CPU @ 2.30 GHz, RAM: 4.00 GB, Operating System: Microsoft Windows 10, C-Free Standard 4.0. An extensive analysis has been carried out to decide the values of the parameters $w_i$, $w_f$, $c_{1i}$, $c_{1f}$, $c_{2i}$ and $c_{2f}$ used in the proposed haDEPSO. For this, the values of ($w_i$, $w_f$), ($c_{1i}$, $c_{2f}$) and ($c_{1f}$, $c_{2i}$) vary over (0.1–0.9, 0.1–0.9), (0.1–0.9, 0.1–0.9) and (2.1–2.9, 2.1–2.9) respectively, with a step length of one. The success rate (defined below) of the total 81 combinations of these parameters was checked using the proposed haDEPSO with population size 30, a stopping criterion of 500 iterations and 30 independent runs on test suites TS-1, TS-2 (30D) and TS-3 (30D).

$$\text{Success rate (SR)} = \frac{\text{Number of successful runs}}{\text{Total number of runs}}$$

where a run is declared a 'successful run' if $|f(x) - f(x^*)| \le \epsilon$, where $f(x)$ is the known global minimum and $f(x^*)$ is the obtained minimum. In this study $\epsilon$ is fixed at 0.0001.
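The success-rate computation amounts to a simple count over the per-run minima; the numeric inputs below are made-up examples, not results from the paper.

```python
def success_rate(run_minima, f_star, eps=1e-4):
    """Fraction of runs whose obtained minimum lies within eps of the
    known global minimum f_star."""
    hits = sum(1 for f in run_minima if abs(f - f_star) <= eps)
    return hits / len(run_minima)

# made-up example: 3 of 4 runs land within eps = 1e-4 of the optimum 0
sr = success_rate([0.0, 5e-5, 2e-3, 9e-5], f_star=0.0)
```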
The success rates of the best 10 combinations of ($w_i$, $w_f$) and ($c_{1i}$, $c_{2f}$; $c_{1f}$, $c_{2i}$) are presented in Figs. 2 and 3 respectively. From these figures, it is clearly noticeable that the highest success rate is found at (0.4, 0.9) in the case of the inertia weight and at (0.5, 2.5) in the case of the acceleration coefficients. Hence $w_i = 0.4$, $w_f = 0.9$, $c_{1i} = 0.5$, $c_{1f} = 2.5$, $c_{2i} = 2.5$ and $c_{2f} = 0.5$ have been recommended for use in the proposed haDEPSO.
The overall best values in each table are highlighted with boldface letters for the corresponding algorithms. In all experiments, for fair comparison, common parameters such

Table 5  Test suite (TS)-1
Function | Formulation | Type | Dimension (D) | Range | Optimum
f1 | $\sum_{i=1}^{D} x_i^2$ | Unimodal | 30 | [−100, 100] | 0
f2 | $\sum_{i=1}^{D}|x_i| + \prod_{i=1}^{D}|x_i|$ | Unimodal | 30 | [−10, 10] | 0
f3 | $\sum_{i=1}^{D}\bigl(\sum_{j=1}^{i} x_j\bigr)^2$ | Unimodal | 30 | [−100, 100] | 0
f4 | $\max_i\{|x_i|,\ 1 \le i \le D\}$ | Unimodal | 30 | [−100, 100] | 0
f5 | $\sum_{i=1}^{D-1}\bigl[100\bigl(x_{i+1} - x_i^2\bigr)^2 + \bigl(x_i - 1\bigr)^2\bigr]$ | Unimodal | 30 | [−30, 30] | 0
f6 | $\sum_{i=1}^{D}\bigl(\lfloor x_i + 0.5\rfloor\bigr)^2$ | Unimodal | 30 | [−100, 100] | 0
f7 | $\sum_{i=1}^{D} i x_i^4 + rand[0, 1)$ | Unimodal | 30 | [−1.28, 1.28] | 0
f8 | $\sum_{i=1}^{D} -x_i \sin\bigl(\sqrt{|x_i|}\bigr)$ | Multimodal | 30 | [−500, 500] | −418.9829 × D
f9 | $\sum_{i=1}^{D}\bigl[x_i^2 - 10\cos(2\pi x_i) + 10\bigr]$ | Multimodal | 30 | [−5.12, 5.12] | 0
f10 | $-20\exp\bigl(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\bigr) - \exp\bigl(\tfrac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i\bigr) + 20 + e$ | Multimodal | 30 | [−32, 32] | 0
f11 | $\tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr) + 1$ | Multimodal | 30 | [−600, 600] | 0
f12 | $\tfrac{\pi}{D}\bigl\{10\sin^2(\pi y_1) + \sum_{i=1}^{D-1}(y_i - 1)^2\bigl[1 + 10\sin^2(\pi y_{i+1})\bigr] + (y_D - 1)^2\bigr\} + \sum_{i=1}^{D} U(x_i, 10, 100, 4)$, with $y_i = 1 + \tfrac{x_i + 1}{4}$ | Multimodal | 30 | [−50, 50] | 0
f13 | $0.1\bigl\{\sin^2(3\pi x_1) + \sum_{i=1}^{D}(x_i - 1)^2\bigl[1 + \sin^2(3\pi x_i + 1)\bigr] + (x_D - 1)^2\bigl[1 + \sin^2(2\pi x_D)\bigr]\bigr\} + \sum_{i=1}^{D} U(x_i, 5, 100, 4)$ | Multimodal | 30 | [−50, 50] | 0
f14 | $\bigl(\tfrac{1}{500} + \sum_{j=1}^{25}\tfrac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\bigr)^{-1}$ | Fixed-dimension | 2 | [−65, 65] | 1
f15 | $\sum_{i=0}^{10}\bigl[a_i - \tfrac{x_0(b_i^2 + b_i x_1)}{b_i^2 + b_i x_2 + x_3}\bigr]^2$ | Fixed-dimension | 4 | [−5, 5] | 0.00030
f16 | $4x_0^2 - 2.1x_0^4 + \tfrac{1}{3}x_0^6 + x_0 x_1 - 4x_1^2 + 4x_1^4$ | Fixed-dimension | 2 | [−5, 5] | −1.0316
f17 | $\bigl(x_1 - \tfrac{5.1}{4\pi^2}x_0^2 + \tfrac{5}{\pi}x_0 - 6\bigr)^2 + 10\bigl(1 - \tfrac{1}{8\pi}\bigr)\cos x_0 + 10$ | Fixed-dimension | 2 | [−5, 5] | 0.398
f18 | $\bigl[1 + (x_0 + x_1 + 1)^2\bigl(19 - 14x_0 + 3x_0^2 - 14x_1 + 6x_0 x_1 + 3x_1^2\bigr)\bigr]\bigl[30 + (2x_0 - 3x_1)^2\bigl(18 - 32x_0 + 12x_0^2 + 48x_1 - 36x_0 x_1 + 27x_1^2\bigr)\bigr]$ | Fixed-dimension | 2 | [−2, 2] | 3
f19 | $-\sum_{i=1}^{4} c_i \exp\bigl(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\bigr)$ | Fixed-dimension | 3 | [1, 3] | −3.86
f20 | $-\sum_{i=1}^{4} c_i \exp\bigl(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\bigr)$ | Fixed-dimension | 6 | [0, 1] | −3.32
f21 | $-\sum_{i=1}^{5}\bigl[(x - a_i)(x - a_i)^T + c_i\bigr]^{-1}$ | Fixed-dimension | 4 | [0, 10] | −10.1532
f22 | $-\sum_{i=1}^{7}\bigl[(x - a_i)(x - a_i)^T + c_i\bigr]^{-1}$ | Fixed-dimension | 4 | [0, 10] | −10.4028
f23 | $-\sum_{i=1}^{10}\bigl[(x - a_i)(x - a_i)^T + c_i\bigr]^{-1}$ | Fixed-dimension | 4 | [0, 10] | −10.5363
Table 6  Test suite (TS)-2 (IEEE CEC2014 unconstrained benchmark functions)


Type Function Functions Optimum

Unimodal g1 Rotated high conditioned elliptic function 100


g2 Rotated bent cigar function 200
g3 Rotated discus function 300
Multimodal g4 Shifted and rotated Rosenbrock’s function 400
g5 Shifted and rotated Ackley’s function 500
g6 Shifted and rotated Weierstrass function 600
g7 Shifted and Rotated Griewank’s function 700
g8 Shifted Rastrigin’s Function 800
g9 Shifted and rotated Rastrigin’s function 900
g10 Shifted Schwefel’s function 1000
g11 Shifted and rotated Schwefel’s function 1100
g12 Shifted and rotated Katsuura Function 1200
g13 Shifted and rotated HappyCat function 1300
g14 Shifted and rotated HGBat function 1400
g15 Shifted and rotated expanded Griewank's plus Rosenbrock's function 1500
g16 Shifted and rotated expanded Schaffer's F6 function 1600
Hybrid g17 Hybrid function 1 (N = 3) 1700
g18 Hybrid function 2 (N = 3) 1800
g19 Hybrid function 3 (N = 4) 1900
g20 Hybrid function 4 (N = 4) 2000
g21 Hybrid function 5 (N = 5) 2100
g22 Hybrid function 6 (N = 5) 2200
Composition g23 Composition function 1 (N = 5) 2300
g24 Composition function 2 (N = 3) 2400
g25 Composition function 3 (N = 3) 2500
g26 Composition function 4 (N = 5) 2600
g27 Composition function 5 (N = 5) 2700
g28 Composition function 6 (N = 5) 2800
g29 Composition function 7 (N = 3) 2900
g30 Composition function 8 (N = 3) 3000

Search space: [−100, 100]^D (D: dimension)

as population size, stopping criterion and number of independent runs are set the same as (or to the minimum of) those of the comparative algorithms. The simulation result analysis on TS-1, TS-2, TS-3 and the RWPs with comparative experiments is presented below.

(i) On TS-1: 23 basic benchmark functions

The results produced by the proposed hybrid haDEPSO and its suggested components aDE and aPSO on TS-1 are compared with traditional algorithms: PSO (Kennedy and Eberhart 1995), DE (Storn and Price 1997), GSA (Rashedi et al. 2009), GWO (Mirjalili et al. 2014a, b), HTS (Patel and Savsani 2015), SSA (Mirjalili et al. 2017), EO


Table 7  Test suite (TS)-3 (IEEE CEC 2017 unconstrained benchmark functions)
Type No Functions Optimum

Unimodal h1 Shifted and rotated bent cigar function 100


h2 Shifted and rotated sum of different power function 200
h3 Shifted and rotated Zakharov function 300
Multimodal h4 Shifted and rotated Rosenbrock’s function 400
h5 Shifted and rotated Rastrigin’s function 500
h6 Shifted and rotated expanded Schaffer's F6 function 600
h7 Shifted and rotated Lunacek bi-Rastrigin's function 700
h8 Shifted and rotated non-continuous Rastrigin’s function 800
h9 Shifted and rotated levy function 900
h10 Shifted and rotated Schwefel’s function 1000
Hybrid h11 Hybrid function 1 (N = 3) 1100
h12 Hybrid function 2 (N = 3) 1200
h13 Hybrid function 3 (N = 3) 1300
h14 Hybrid function 4 (N = 4) 1400
h15 Hybrid function 5 (N = 4) 1500
h16 Hybrid function 6 (N = 4) 1600
h17 Hybrid function 7 (N = 5) 1700
h18 Hybrid function 8 (N = 5) 1800
h19 Hybrid function 9 (N = 5) 1900
h20 Hybrid function 10 (N = 6) 2000
Composition h21 Composition function 1 (N = 3) 2100
h22 Composition function 2 (N = 3) 2200
h23 Composition function 3 (N = 4) 2300
h24 Composition function 4 (N = 4) 2400
h25 Composition function 5 (N = 5) 2500
h26 Composition function 6 (N = 5) 2600
h27 Composition function 7 (N = 6) 2700
h28 Composition function 8 (N = 6) 2800
h29 Composition function 9 (N = 3) 2900
h30 Composition function 10 (N = 3) 3000

Search space: [−100, 100]^D (D: dimension)

(Faramarzi et al. 2019) and HHO (Heidari et al. 2019); DE variants: SADE (Qin and Suganthan 2005), jDE (Brest et al. 2006), JADE (Zhang and Sanderson 2009) and SHADE (Tanabe and Fukunaga 2013); PSO variants: LFPSO (Hakli and Uğuz 2014), AGPSO (Mirjalili et al. 2014a, b), HEPSO (Mahmoodabadi et al. 2014) and RPSOLF (Yan et al. 2017); and hybrid variants: DE-PSO (Pant et al. 2011), DPD (Das and Parouha 2015), MBDE (Parouha and Das 2016a), IPSODE (Jana and Sil 2016), FAPSO (Xia et al. 2018) and PSOSCALF (Chegini et al. 2018). The parameters of all the above compared and proposed algorithms are listed in Table 8.
The comparative experimental results in terms of mean, std. (standard deviation) and ranking of the objective function values are presented in Table 9 (in the case of traditional algorithms), Table 10 (in the case of DE variants), Table 11 (in the case of PSO variants) and


Fig. 2  Influence of different inertia weights of proposed haDEPSO on test suite (TS)-1, TS-2 and TS-3

Fig. 3  Influence of different acceleration coefficients of proposed haDEPSO on test suite (TS)-1, TS-2 and TS-3

Table 12 (in the case of hybrid variants) over 30 independent runs. The results of the comparative algorithms are taken directly from the original references.
From Tables 9, 10, 11 and 12, it can be clearly observed that the mean objective function values of the proposed algorithms are marginally better and/or equal in the TS-1 case. Moreover, the comparison results are summarized as follows for the TS-1 cases. (1) Unimodal functions (f1–f7): the proposed hybrid haDEPSO produced the best results for all seven functions (f1–f7) and performs equally on three functions (f1, f2 and f3) compared with the HTS traditional algorithm. The suggested aDE produced the best result for four functions (f1, f2, f3 and f6) and slightly better or equal results for the others. Also, the anticipated aPSO gives the best result on two functions (f1 and f2) and marginally better or equal results for the rest. (2) Multimodal functions (f8–f13): the proposed hybrid haDEPSO exhibits better results for all six functions (f8–f13). The projected aDE gives the best result for three functions (f8, f9 and f11) and marginally better or equal results for the rest. Similarly, the suggested aPSO achieves a better result on one function (f9) and slightly better or equal results for the others. (3) Fixed-dimension functions (f14–f23): the proposed aDE, aPSO and haDEPSO perform better or equally on all functions. Moreover, all algorithms are individually ranked ('1' for the best and '2' for

Table 8  Parameter setting for test suite (TS)-1
Algorithm | Reference | Control parameter (term) | Values | Population size | Stopping criterion | Run
Traditional algorithms
PSO Kennedy and Eberhart (1995) w, C1 and C2 Linear reduction from 0.9 to 0.1, 2 and 2 30 500 30
DE Storn and Price (1997) F, and CR 0.5 and 0.5 30 500 30
GSA Rashedi et al. (2009) alpha, G0, Rnorm and Rpower 20, 100, 2 and 1 30 500 30
GWO Mirjalili et al. (2014a, b) a linear reduction from 2 to 0 30 500 30
HTS Patel and Savsani (2015) R rand [0,1] 50 500 60
SSA Mirjalili et al. (2017) leader position update probability 0.5 30 500 30
EO Faramarzi et al. (2019) a1, a2 and GP {1, 1.5, 2, 2.5, 3}, {0.1, 0.5, 1, 1.5, 2} 30 500 30
and {0.1., 0.25, 0.5, 0.75, 0.9}
HHO Heidari et al. (2019) E E < 0.5, E ≥ 0.5 30 500 30
DE variants
SADE Qin and Suganthan (2005) F and CR N (0.5, 0.3) and N (CRm, 0.1) 50 1000 30
jDE Brest et al. (2006) 𝜏1 , 𝜏2, Fl and Fu 0.1, 0.1, 0.1 and 0.9 50 1000 30
JADE Zhang and Sanderson (2009) Fi and CRi randci (μF, 0.1) and randni (μCR, 0.1) 50 1000 30
SHADE Tanabe and Fukunaga (2013) Pbest and Arc rate 0.1 and 2 30 500 30
PSO variants
LFPSO Hakli and Uğuz (2014) C1 and C2 2 and 2 50 500 30
AGPSO Mirjalili et al. (2014a, b) w [0.9, 0.4], 50 500 30
HEPSO Mahmoodabadi et al. (2014) PC and PB 0.95 and 0.02 50 500 30
RPSOLF Yan et al. (2017) w, C1, C2, C3, β and ε 0.55, 1.49, 1.49, 1.5 and 0.99 50 500 30
Hybrid variants
DE-PSO Pant et al. (2011) w, ­C1 and ­C2 0.729, 1.49 and 1.49 50 6000 30
DPD Das and Parouha (2015) FA, FC, CRA and CRC 0.5, 0.9, 0.9 and 0.9 30 200 30
MBDE Parouha and Das (2016a) Use of memory, swarm mutation and swarm – 30 1000 30
crossover
IPSODE Jana and Sil (2016) C1, C2, CR and 𝜏𝜇 2, 2, 0.1 and [0.03, 0.05] 50 6000 30
Table 8  (continued)
Algorithm | Reference | Control parameter (term) | Values | Population size | Stopping criterion | Run
FAPSO Xia et al. (2018) – – 50 5000 30


PSOSCALF Chegini et al. (2018) wmin, wmax, ­C1min, ­C1max,C2min, ­C2max and β 0.4, 0.9, 0.5, 2.5, 0.5, 2.5 and 1.5 50 500 30
Proposed algorithms
aDE Proposed – – 30 500 30
aPSO wi , wf , c1i , c1f , c2i and c2f 0.4, 0.9, 0.5, 2.5, 2.5 and 0.5 30 500 30
haDEPSO – – 30 500 30

List of variables: w: inertia weight, C1: cognitive acceleration coefficient, C2: social acceleration coefficient, F: scaling vector, CR: crossover rate, G0: initial value of gravitational constant, a: convergence parameter, R: equal probability, rand: random number, a1 and a2: control parameters, GP: generation probability, E: escaping energy, 𝜏1 and 𝜏2: probabilities to adjust factors F and CR, N: normal distribution, CRm: crossover mean, Fl and Fu: lower and upper scaling factors, Fi: mutation factor, CRi: crossover probability, randci: random Cauchy distribution, μF: location parameter, randni: random normal distribution, μCR: mean crossover rate, Pbest: personal best, PC: multi-crossover probability, PB: bee colony operator, C3: learning factor, β: levy index, ε: control parameter, FA and FC: mutation factors for groups A and C, CRA and CRC: crossover weights for groups A and C, 𝜏𝜇: mutation probability, wmin and wmax / wi and wf: minimum and maximum values of inertia weight, C1min and C1max / c1i and c1f: minimum and maximum values of personal learning factors, C2min and C2max / c2i and c2f: minimum and maximum values of social learning factors.
Design and applications of an advanced hybrid meta‑heuristic…

Table 9  Results comparisons of traditional and proposed algorithms for test suite (TS)-1
Function Criteria Algorithm

Traditional algorithms Proposed algorithms

PSO DE GSA HTS SSA GWO EO HHO aDE aPSO haDEPSO

f1 Mean 9.59e−06 3.82e−12 2.53e−16 0.00e+00 1.58e−07 6.59e−28 3.32e−40 2.03e+00 0.00e+000 0.00e+000 0.00e+000
Std 3.35e−05 2.00E−12 9.67e−17 0.00e+00 1.71e−07 1.58e−28 6.78e−40 4.04e−01 0.00e+000 0.00e+000 0.00e+000
Rank 7 5 4 1 6 3 2 8 1 1 1
f2 Mean 0.02560 4.40e−08 0.05565 0.00e+00 2.66e+00 7.18e−17 7.12e−23 1.70e+00 0.00e+000 0.00e+000 0.00e+000
Std 0.04595 1.30e−08 0.19404 0.00e+00 1.66e+00 7.28e−17 6.36e−23 7.37 e−02 0.00e+000 0.00e+000 0.00e+000
Rank 5 4 6 1 8 3 2 7 1 1 1
f3 Mean 82.2687 2.48e+04 896.534 0.00e+00 1.70e+03 3.29e−06 8.06e−09 1.17e+02 0.00e+000 0.17e−129 0.00e+000
Std 97.2105 4.04e+03 318.955 0.00e+00 1.12e+04 1.61e−05 1.60e−08 5.28e+00 0.00e+000 1.67e−131 0.00e+000
Rank 5 9 7 1 8 4 3 6 1 2 1
f4 Mean 4.26128 1.96e+00 7.35487 7.85e−43 1.16e+01 5.61e−07 5.39e−10 2.05e+00 3.28e−101 8.35e−098 0.00e+000
Std 0.67730 3.30e−01 1.74145 2.19e−42 4.17e+00 1.04e−06 1.38e−09 7.40 e−02 7.12e−103 7.08e−098 0.00e+000
Rank 9 7 10 4 11 6 5 8 2 3 1
f5 Mean 92.4310 4.43e+01 67.5430 26.1560 2.96e+02 2.68e+01 2.53e+01 2.95e+00 1.25e−021 4.85e−012 2.21e−033
Std 74.4794 2.21e+01 62.2253 3.03e+00 5.08e+02 0.79e+00 0.16e+00 8.36 e−02 1.85e−023 3.21e−012 3.72e−037
Rank 10 8 9 6 11 7 5 4 2 3 1
f6 Mean 8.89e−06 4.52e−12 2.5e−16 1.39e−01 1.80e−07 0.81e+00 8.29e−06 2.49e+00 0.00e+000 1.75e−032 0.00e+000
Std 9.91e−06 1.86e−12 1.74e−16 0.45e+00 3.00e−07 0.48e+00 5.02e−06 8.25 e−02 0.00e+000 9.16e−035 0.00e+000
Rank 7 4 3 8 5 9 6 10 1 2 1
f7 Mean 0.02724 2.50e−02 0.08944 1.73e−03 0.17e+00 2.21e−02 1.17e−02 8.20e+00 2.19e−003 5.11e−001 1.07e−003
Std 0.00804 4.35e−03 0.04339 7.59e−04 0.06e+00 1.99e−02 6.54e−04 1.69e− 01 1.10e−004 1.70e−002 1.40e−005
Rank 9 6 10 2 8 5 4 11 3 7 1
f8 Mean − 6075.85 − 1.25e+04 − 2821.1 − 12,569.04 − 7455.8 − 6123.1 − 9016.34 4.86e+00 − 1.25e+004 − 6.37e+003 − 1.25e+004
Std 754.632 2.09e+02 493.037 5.77e−01 772.811 909.865 595.1113 1.03e+00 1.07e−017 2.10e−001 0.00e+000
Rank 6 1 7 1 3 5 2 8 1 4 1
f9 Mean 52.8322 5.92e+01 25.9684 0.00e+00 58.3708 0.31052 0.00e+00 3.77e+00 0.00e+000 0.00e+000 0.00e+000

Std 16.7068 4.90e+00 7.47006 0.00e+00 20.016 0.35214 0.00e+00 8.87e−01 0.00e+000 0.00e+000 0.00e+000

Rank 5 7 4 1 6 2 1 3 1 1 1
f10 Mean 0.00501 5.56e−07 0.06208 2.87e−14 2.6796 1.06e−13 8.34e−14 3.75e+00 1.44e−015 1.97e−014 2.88e−016
Std 0.01257 1.62e−07 0.23628 5.68e−15 0.8275 2.24e−13 2.53e−14 8.75e−01 0.00e+000 0.00e+000 0.00e+000
Rank 8 7 9 4 10 6 5 11 2 3 1
f11 Mean 0.02381 1.02e−10 27.7015 0.00e+00 0.0160 0.00448 0.00e+00 4.17e+00 0.00e+000 3.37e−111 0.00e+000
Std 0.02870 2.02e−10 5.04034 0.00e+00 0.0112 0.00665 0.00e+00 5.56e−01 0.00e+000 1.11e−119 0.00e+000
Rank 6 3 8 1 5 4 1 7 1 2 1
f12 Mean 0.02764 5.54e−13 1.79961 3.02e−04 6.9915 0.05343 7.97e−07 1.90e+01 1.05e−032 3.34e−002 3.73e−033
Std 0.05399 3.57e−13 0.95114 1.13e−03 4.4175 0.02073 7.69e−07 3.31e+00 2.77e−034 1.02e−004 3.18e−034
Design and applications of an advanced hybrid meta‑heuristic…

Rank 7 3 9 5 10 8 4 11 2 6 1
f13 Mean 0.00732 2.38e−12 8.89908 2.84e−03 15.8757 0.65446 0.029295 1.89e + 01 2.09e−021 9.30e−004 1.05e−032
Std 0.01050 1.61e−12 7.12624 9.98e−03 16.1462 0.00447 0.035271 1.56e + 0 3.27e−023 3.71e−004 1.89e−043
Rank 6 3 9 5 10 8 7 11 2 4 1
f14 Mean 3.84902 1.03e + 00 5.859838 0.9980 1.1965 4.042493 0.99800 9.98e − 01 9.98e−001 9.98e−001 9.98e−001
Std 3.24864 1.81e−01 3.831299 3.39e−16 0.5467 4.252799 1.54e−16 9.23e − 01 0.00e + 000 0.00e + 000 0.00e + 000
Rank 4 2 6 1 3 5 1 1 1 1 1
f15 Mean 0.002434 1.29e−03 0.003673 5.33e−04 0.00088 0.00337 0.00239 3.10e − 04 3.83e−004 3.99e−004 3.02e−004
Std 0.006081 3.60e−03 0.001647 3.95e−04 0.00025 0.00625 0.00609 1.97e − 04 9.23e−012 2.96e−007 2.38e−018
Rank 8 6 11 5 7 10 9 2 3 4 1
f16 Mean – 1.03162 – 1.03e + 00 – 1.03163 – 1.03163 – 1.03163 – 1.03163 – 1.03162 − 1.03e + 00 − 1.03e + 000 − 1.03e + 000 − 1.03e + 000
Std 6.51e−16 6.78e−16 4.88e−16 4.53e−16 6.13e−14 2.13e−08 6.04e−16 6.78e−16 0.00e + 000 0.00e + 000 0.00e + 000
Rank 1 1 1 1 1 1 1 1 1 1 1

f17 Mean 0.397887 3.98e−01 0.397887 0.39789 0.397887 0.397889 0.397887 3.98e − 01 3.98e−001 3.98e−001 3.98e−001

Std 0.00e + 00 0.00e + 00 0.00e + 00 5.66e−17 3.41e−14 2.13e−04 0.00e + 00 2.54e − 06 0.00e + 000 0.00e + 000 0.00e + 000

Rank 2 1 2 2 2 2 2 1 1 1 1
f18 Mean 3.00e + 00 3.00e + 00 3.00e + 00 3.00e + 00 3.00e + 00 3.000028 3.00e + 00 3.00e + 00 3.00e + 000 3.00e + 000 3.00e + 000
Std 1.97e−15 1.23e−15 4.17e−15 0.00e + 00 2.20e−13 4.24e−04 1.56e−15 0.00e + 00 0.00e + 000 1.33e−018 0.00e + 000
Rank 1 1 1 1 1 1 1 1 1 1 1
f19 Mean − 3.86278 − 3.86e + 00 − 3.86278 − 3.8628 − 3.86278 − 3.86263 − 3.86278 − 3.86e + 00 − 3.86e + 000 − 3.86e + 000 − 3.86e + 000
Std 2.65e− 15 2.71e− 15 2.29e− 15 9.06e− 16 1.47e− 10 0.00273 2.59e− 15 2.44e − 03 0.00e + 000 3.36e− 021 0.00e + 000
Rank 3 1 3 4 3 2 3 1 1 1 1
f20 Mean − 3.26651 − 3.32e + 00 − 3.31778 − 3.2837 − 3.2304 − 3.28654 − 3.2687 − 3.322 − 3.32e + 000 − 3.27e + 000 − 3.32e + 000
Std 0.06032 7.73e− 04 0.023081 5.65e− 02 0.0616 0.10556 0.05701 0.137406 0.00e + 000 0.00e + 000 0.00e + 000
Rank 7 1 2 5 8 4 6 1 1 3 1
f21 Mean − 5.9092 − 1.02e+01 − 5.95512 − 9.5469 − 9.6334 − 8.7214 − 8.55481 − 10.1451 − 1.01e+001 − 9.87e+000 − 1.01e+001
Std 3.59559 1.18e− 07 3.73707 1.6757 1.8104 2.6914 2.76377 0.885673 0.00e + 000 1.07e− 011 0.00e + 000
Rank 9 2 8 5 4 6 7 1 1 3 1
f22 Mean − 7.3360 − 1.04e + 01 − 10.4015 − 8.4839 − 9.0295 − 9.2415 − 9.3353 − 10.4015 − 1.04e + 001 − 9.87e + 000 − 1.04e + 001
Std 3.47381 1.07e− 02 2.01408 2.8821 2.3911 1.61254 2.43834 1.352375 2.91e− 012 2.08e− 002 1.10e− 016
Rank 8 1 2 7 6 5 4 1 1 3 1
f23 Mean − 8.7482 − 1.05e+01 − 10.5364 − 9.4548 − 9.0333 − 10.5343 − 9.63655 − 10.5364 − 1.05e+001 − 9.13e+000 − 1.05e+001
Std 2.55743 3.17e− 12 2.6E− 15 2.2078 2.9645 0.00125 2.38811 0.927655 1.07e− 021 2.05e− 003 0.00e + 000
Rank 7 1 1 4 6 2 3 1 1 5 1
Sum of rank 140 84 132 75 142 108 84 116 32 62 23
Average 6.08 3.65 5.73 3.26 6.17 4.69 3.65 5.04 1.39 2.69 1
Overall rank 9 5 8 4 10 6 5 7 2 3 1
The overall best values in each table are highlighted with boldface letters of the corresponding algorithms
Table 10  Results comparisons of DE variants and proposed algorithms for test suite (TS)-1
Function Criteria Algorithm

DE variants Proposed algorithms


SADE jDE JADE SHADE aDE aPSO haDEPSO

f1 Mean 9.96e− 16 3.33e− 17 1.87e− 31 1.42e− 09 0.00e+000 0.00e+000 0.00e+000


Std 4.78e− 15 3.83e− 17 6.43e− 31 3.09e− 09 0.00e+000 0.00e+000 0.00e+000
Rank 4 3 2 5 1 1 1
f2 Mean 7.00e− 12 5.10e− 11 2.79e− 15 0.0087 0.00e+000 0.00e+000 0.00e+000
Std 8.68e− 12 4.88e− 11 9.51e− 15 0.0213 0.00e+000 0.00e+000 0.00e+000
Rank 3 4 2 5 1 1 1
f3 Mean 4.97e+01 9.21e+01 1.10e− 03 15.4352 0.00e+000 0.17e− 129 0.00e+000
Std 3.85e+01 7.13e+01 5.14e− 03 9.9489 0.00e+000 1.67e− 131 0.00e+000
Rank 5 6 3 4 1 2 1
f4 Mean 3.31e+00 1.08e+01 1.66e− 03 0.9796 3.28e− 101 8.35e− 098 0.00e+000
Std 1.92e+00 4.96e+00 1.98e− 03 0.7995 7.12e− 103 7.08e− 098 0.00e+000

Rank 6 7 4 5 2 3 1
f5 Mean 5.63e+01 3.35e+01 1.18e+01 24.4743 1.25e− 021 4.85e− 012 2.21e− 033
Std 3.95e+01 2.11e+01 1.57e+01 11.2080 1.85e− 023 3.21e− 012 3.72e− 037
Rank 7 6 4 5 2 3 1
f6 Mean 5.76e− 17 1.78e− 17 4.59e− 31 5.31e− 10 0.00e+000 1.75e− 032 0.00e+000
Std 1.55e− 16 1.49e− 17 1.65e− 30 6.35e− 10 0.00e+000 9.16e− 035 0.00e+000
Rank 5 4 3 6 1 2 1
f7 Mean 1.23e− 02 1.36e− 02 6.49e− 03 0.0235 2.19e− 003 5.11e− 001 1.07e− 003
Std 4.92e− 03 4.49e− 03 2.48e− 03 0.0088 1.10e− 004 1.70e− 002 1.40e− 005
Rank 4 5 3 6 2 7 1
f8 Mean − 1.25e+04 − 1.25e+04 − 1.24e+04 − 11,713.1 − 1.25e+004 − 6.37e+003 − 1.25e+004
Std 3.60e− 06 5.74e+01 1.27e+02 230.49 1.07e− 017 2.10e− 001 0.00e+000

Rank 1 1 1 2 1 3 1
f9 Mean 6.02e+00 4.66e− 04 1.71e− 04 8.5332 0.00e+000 0.00e+000 0.00e+000


Std 5.14e+00 1.02e− 03 1.52e− 04 2.1959 0.00e+000 0.00e+000 0.00e+000
Rank 4 3 2 5 1 1 1
f10 Mean 4.92e− 01 1.01e− 09 1.31e− 14 0.3957 1.44e− 015 1.97e− 014 2.88e− 016
Std 5.89e− 01 7.78e− 10 2.46e− 14 0.5868 0.00e+000 0.00e+000 0.00e+000
Rank 6 5 3 7 2 4 1
f11 Mean 8.09e− 03 1.48e− 17 2.87e− 03 0.0048 0.00e+000 3.37e− 111 0.00e+000
Std 1.52e− 02 4.82e− 17 7.85e− 03 0.0077 0.00e+000 1.11e− 119 0.00e+000
Rank 5 3 4 6 1 2 1
f12 Mean 4.15e− 02 1.75e− 18 1.73e− 02 0.0346 1.05e− 032 3.34e− 002 3.73e− 033
Std 1.21e− 01 2.82e− 18 7.74e− 02 0.0875 2.77e− 034 1.02e− 004 3.18e− 034
Rank 6 3 4 7 2 5 1
f13 Mean 3.66e− 04 3.66e− 04 5.45e− 24 7.32e− 04 2.09e− 021 9.30e− 004 1.05e− 032
Std 2.01e− 03 2.01e− 03 2.58e− 23 0.0028 3.27e− 023 3.71e− 004 1.89e− 043
Rank 4 4 2 5 3 6 1
f14 Mean 9.98e− 01 9.98e− 01 9.98e− 01 0.998004 9.98e− 001 9.98e− 001 9.98e− 001
Std 0.00e+00 0.00e+00 0.00e+00 5.83e− 17 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 1 1 1 1 1
f15 Mean 3.07e− 04 3.07e− 04 3.01e− 03 0.002374 3.83e− 004 3.99e− 004 3.02e− 004
Std 2.14e− 19 2.09e− 19 6.92e− 03 0.0061 9.23e− 012 2.96e− 007 2.38e− 018
Rank 2 2 5 6 3 4 1
f16 Mean − 1.03e+00 − 1.03e+00 − 1.03e+00 − 1.03162 − 1.03e+000 − 1.03e+000 − 1.03e+000


Std 6.78e− 16 6.78e− 16 6.78e− 16 6.51e− 16 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 1 1 1 1 1
f17 Mean 3.98e− 01 3.98e− 01 3.98e− 01 0.397887 3.98e− 001 3.98e− 001 3.98e− 001
Std 0.00e+00 0.00e+00 0.00e+00 3.24e− 16 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 1 2 1 1 1
f18 Mean 3.00e+00 3.00e+00 3.00e+00 3.00e+00 3.00e+000 3.00e+000 3.00e+000
Std 2.01e− 15 1.19e− 15 1.82e− 15 1.87e− 15 0.00e + 000 1.33e− 018 0.00e+000
Rank 1 1 1 1 1 1 1
f19 Mean − 3.86e+00 − 3.86e+00 − 3.86e+00 − 3.86278 − 3.86e+000 − 3.86e+000 − 3.86e+000

Std 2.71e− 15 2.71e− 15 2.71e− 15 2.69e− 15 0.00e+000 3.36e− 021 0.00e+000


Rank 1 1 1 1 1 1 1
f20 Mean − 3.32e+00 − 3.27e+00 − 3.29e+00 − 3.27047 − 3.32e+000 − 3.27e+000 − 3.32e+000
Std 1.28e− 15 5.92e− 02 5.11e− 02 0.0599 0.00e+000 0.00e+000 0.00e+000
Rank 1 3 2 3 1 3 1
f21 Mean − 9.66e+00 − 9.64e+00 − 9.14e+00 − 9.2343 − 1.01e+001 − 9.87e+000 − 1.01e+001
Std 1.90e+00 1.55e+00 2.06e+00 1.3969 0.00e+000 1.07e− 011 0.00e+000
Rank 3 4 6 5 1 2 1
f22 Mean − 1.02e+01 − 1.02e+01 − 9.88e+00 − 10.2809 − 1.04e+001 − 9.87e+000 − 1.04e+001
Std 1.22e+00 1.22e+00 1.61e+00 1.3995 2.91e− 012 2.08e−002 1.10e− 016
Rank 2 2 3 2 1 4 1

f23 Mean − 1.05e+01 − 1.05e+01 − 1.03e+01 63.333 − 1.05e+001 − 9.13e+000 − 1.05e+001


Std 1.78e− 15 1.81e− 15 1.40e+00 80.872 1.07e− 021 2.05e− 003 0.00e+000
Rank 1 1 2 4 1 3 1
Sum of rank 74 71 60 94 32 62 23
Average 3.26 3.13 2.65 4.08 1.43 2.69 1
Overall rank 5 4 3 6 2 3 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms
Table 11  Results comparisons of PSO variants and proposed algorithms for test suite (TS)-1
Function Criteria Algorithm

PSO variants Proposed algorithms


LFPSO AGPSO HEPSO RPSOLF aDE aPSO haDEPSO

f1 Mean 1.062e− 04 1.160e+04 16.26772 5.065e− 269 0.00e+000 0.00e+000 0.00e+000


Std 1.588e− 04 2.485e+03 10.01293 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 3 5 4 2 1 1 1
f2 Mean 0.00173 0.00261 1.28424 1.000e− 134 0.00e+000 0.00e+000 0.00e+000
Std 0.00453 0.00481 0.41611 3.753e− 134 0.00e+000 0.00e+000 0.00e+000
Rank 3 4 5 2 1 1 1
f3 Mean 1.182e+03 13.84101 7.423e+03 7.791e− 249 0.00e+000 3.17e− 098 0.00e+000
Std 5.660e+02 14.54349 7.423e+03 0.00e+00 0.00e+000 1.57e− 111 0.00e+000
Rank 5 4 6 2 1 3 1
f4 Mean 12.15124 1.82377 23.95145 1.937e− 157 3.28e− 101 8.35e− 098 0.00e+000
Std 6.22665 0.83725 7.71460 1.061e− 156 7.12e− 103 7.08e− 098 0.00e+000

Rank 6 5 7 2 3 4 1
f5 Mean 97.82509 33.09881 2.380e+03 27.42672 1.25e− 021 4.85e− 012 2.21e− 033
Std 65.33289 27.21112 1.852e+03 0.24848 1.85e− 023 3.21e− 012 3.72e− 037
Rank 6 5 7 4 2 3 1
f6 Mean 1.027e− 04 2.963e− 10 21.55405 2.98244 0.00e+000 1.75e− 032 0.00e+000
Std 1.210e− 04 1.251e− 09 9.33263 0.23250 0.00e+000 9.16e− 035 0.00e+000
Rank 4 3 6 5 1 2 1
f7 Mean 0.04354 0.02147 0.12982 0.00104 2.19e− 003 5.11e− 001 1.07e− 003
Std 0.01113 0.01002 0.09727 7.644e− 04 1.10e− 004 1.70e− 002 1.40e− 005
Rank 4 3 6 2 1 5 1
f8 Mean − 8.749e+03 − 6.878e+03 − 2.139e+03 − 3.254e+03 − 1.25e+004 − 6.37e+003 − 1.25e+004
Std 5.461e+02 7.739e+02 8.282e+02 2.860e+02 1.07e− 017 2.10e− 001 0.00e+000

Rank 2 3 6 5 1 4 1
f9 Mean 29.69942 35.28784 42.00118 0.00e+00 0.00e+000 0.00e+000 0.00e+000


Std 4.29280 12.20452 7.08632 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 2 3 4 1 1 1 1
f10 Mean 0.02995 1.47012 2.83842 4.085e− 15 1.44e− 015 1.97e− 014 2.88e− 016
Std 0.11839 0.71970 0.66134 1.084e− 15 0.00e+000 0.00e+000 0.00e+000
Rank 5 6 7 3 2 4 1
f11 Mean 0.01138 0.02263 1.16858 0.00e+00 0.00e+000 3.37e− 111 0.00e+000
Std 0.01617 0.02733 0.12602 0.00e+00 0.00e+000 1.11e− 119 0.00e+000
Rank 3 4 5 1 1 2 1
f12 Mean 0.29179 0.01728 0.47856 0.26157 1.05e− 032 3.34e− 002 3.73e− 033
Std 0.65905 0.03929 0.22623 0.03386 2.77e− 034 1.02e− 004 3.18e− 034
Rank 6 4 7 5 2 3 1
f13 Mean 0.01337 0.00656 1.85056 2.05282 2.09e− 021 9.30e− 004 1.05e− 032
Std 0.02597 0.01835 0.65246 0.16579 3.27e− 023 3.71e− 004 1.89e− 043
Rank 5 4 6 7 2 3 1
f14 Mean 0.99800 1.13054 0.99800 1.54064 9.98e− 001 9.98e− 001 9.98e− 001
Std 9.219e− 17 0.34368 9.219e− 17 1.84429 0.00e+000 0.00e+000 0.00e+000
Rank 1 2 1 3 1 1 1
f15 Mean 0.00118 0.00107 6.404e− 04 0.00171 3.83e− 004 3.99e− 004 3.02e− 004
Std 0.00363 0.00365 2.801e− 04 0.00508 9.23e− 012 2.96e− 007 2.38e− 018
Rank 6 5 4 7 2 3 1
f16 Mean − 1.03162 − 1.03162 − 1.03162 − 1.03161 − 1.03e+000 − 1.03e+000 − 1.03e+000


Std 6.519e− 16 6.387e− 16 3.554e− 15 1.650e− 05 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 1 1 1 1 1
f17 Mean 0.39788 0.39788 0.39788 0.39837 3.98e− 001 3.98e− 001 3.98e− 001
Std 0.00e+00 0.00e+00 6.594e− 13 5.267e− 04 0.00e+000 0.00e+000 0.00e+000
Rank 2 2 2 1 1 1 1
f18 Mean 2.99999 2.99999 0.65246 3.00000 3.00e+000 3.00e+000 3.00e+000
Std 1.653e− 15 1.887e− 15 5.146e− 11 1.658e− 05 0.00e+000 1.33e− 018 0.00e+000
Rank 2 2 3 1 1 1 1
f19 Mean − 3.86278 − 3.86278 − 3.86278 − 3.85923 − 3.86e+000 − 3.86e+000 − 3.86e+000

Std 2.668e− 15 2.640e− 15 1.008e− 13 0.00283 0.00e+000 3.36e− 021 0.00e+000


Rank 1 1 1 2 1 1 1
f20 Mean − 3.27840 − 3.27840 − 3.31803 − 3.10441 − 3.32e+000 − 3.27e+000 − 3.32e+000
Std 0.05827 0.05827 0.02170 0.15760 0.00e+000 0.00e+000 0.00e+000
Rank 3 3 2 4 1 3 1
f21 Mean − 8.28850 − 8.06257 − 10.15319 − 4.76171 − 1.01e+001 − 9.87e+000 − 1.01e+001
Std 2.742524 3.09122 2.680e− 05 0.73723 0.00e+000 1.07e− 011 0.00e+000
Rank 2 3 1 4 1 2 1
f22 Mean − 9.97210 − 9.39650 − 10.39978 − 4.81927 − 1.04e+001 − 9.87e+000 − 1.04e+001
Std 1.66904 2.32408 0.01728 0.75699 7.83e− 008 2.08e− 002 5.86e− 009
Rank 3 5 2 6 1 4 1

f23 Mean − 10.10220 − 9.54436 − 10.53640 − 5.06376 − 1.05e+001 − 9.13e+000 − 1.05e+001


Std 1.67988 2.57736 5.845e− 07 0.82968 1.07e− 021 2.05e− 003 0.00e+000
Rank 2 3 1 5 1 4 1
Sum of rank 77 80 94 75 30 54 23
Average 3.34 3.47 4.08 3.26 1.30 2.34 1
Overall rank 5 6 7 4 2 3 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms
Table 12  Results comparisons of hybrid variants and proposed algorithms for test suite (TS)-1
Function Criteria Algorithm

Hybrid variants Proposed algorithms


DE-PSO IPSODE PSOSCALF FAPSO DPD MBDE aDE aPSO haDEPSO

f1 Mean 9.95e+00 8.48e−04 1.11014e− 20 2.87e−127 7.06e− 308 1.039e− 248 0.00e+000 0.00e+000 0.00e+000
Std 1.20e+01 1.43e− 40 1.83289e− 20 1.76e−127 0.00e+000 2.801e− 251 0.00e+000 0.00e+000 0.00e+000
Rank 7 6 5 4 2 3 1 1 1
f2 Mean 4.47e+00 9.40e− 04 4.09460e− 11 1.02e− 17 8.19e− 308 1.012e− 172 0.00e+000 0.00e+000 0.00e+000
Std 2.06e+00 6.47e− 05 5.68981e− 11 1.43e− 17 0.00e+000 6.180e− 190 0.00e+000 0.00e+000 0.00e+000
Rank 7 6 5 4 2 3 1 1 1
f3 Mean 5.80e+03 3.19e+01 2.16858e− 12 1.68e− 11 1.51e−308 2.70e−147 0.00e+000 0.17e− 129 0.00e+000
Std 1.99e+03 1.43e+01 1.03815e− 11 2.49e− 11 0.00e+000 1.04e−151 0.00e+000 1.67e− 131 0.00e+000
Rank 8 7 5 6 2 3 1 4 1
f4 Mean 6.26e+01 2.62e−03 8.47410e− 08 4.09e+03 0.00e+000 4.238e− 128 3.28e− 101 8.35e− 098 0.00e+000
Std 5.58e+00 1.40e−03 1.23324e− 07 6.53e+02 0.00e+000 1.524e− 132 7.12e− 103 7.08e− 098 0.00e+000

Rank 7 6 5 8 1 2 3 4 1
f5 Mean 5.21e+05 2.91e+01 21.97646 6.55e− 11 6.24e− 308 2.642e− 017 1.25e− 021 4.85e− 012 2.21e− 033
Std 1.90e+06 1.03e+01 0.54774 1.99e− 11 0.00e+000 1.571e− 021 1.85e− 023 3.21e− 012 3.72e− 037
Rank 9 8 7 6 1 4 3 5 2
f6 Mean 2.01e+03 00.0e+00 7.13998e− 12 2.37e− 12 8.02e− 308 0.00e+000 0.00e+000 1.75e− 032 0.00e+000
Std 2.85e+03 00.0e+00 3.65884e− 11 1.84e− 13 0.00e+000 0.00e+000 0.00e+000 9.16e− 035 0.00e+000
Rank 6 1 5 4 2 1 1 3 1
f7 Mean 5.91e− 01 6.17e− 03 0.00012 0.00e+00 0.00e+000 1.12e− 27 2.19e− 003 5.11e− 001 1.07e− 003
Std 4.46e− 01 1.45e− 03 0.00010 0.00e+00 0.00e+000 4.45e− 129 1.10e− 004 1.70e− 002 1.40e− 005
Rank 8 6 4 1 1 2 5 7 3
f8 Mean − 12,569.48 − 12,569.48 − 12,569.48 2.48e− 11 − 8.37e+003 − 12,569.5 − 1.25e+004 − 6.37e+003 − 1.25e+004
Std 0.00e+000 1.96e− 09 2.39996e− 07 6.44e− 12 0.00e+000 0.00e+000 1.07e− 017 2.10e− 001 0.00e+000

Rank 1 1 1 4 2 1 1 3 1
f9 Mean 9.62e+01 9.19e− 04 0.00e+00 0.00e+00 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Std 1.82e+01 8.56e− 05 0.00e+00 0.00e+00 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Rank 3 2 1 1 1 1 1 1 1
f10 Mean 1.46e+01 9.50e− 04 2.24609e− 11 4.86e− 15 4.16e− 015 4.442e− 16 1.44e− 015 1.97e− 014 2.88e− 016
Std 1.49e+00 5.99e− 05 2.33542e− 11 1.74e− 15 0.00e+000 0.00e− 000 0.00e+000 0.00e+000 0.00e+000
Rank 9 8 7 5 4 2 3 6 1
f11 Mean 2.51e+00 9.28e− 04 0.00e+00 1.74e− 16 0.00e+000 0.00e+000 0.00e+000 3.37e− 111 0.00e+000
Std 2.58e+00 4.49e− 05 0.00e+00 3.60e− 16 0.00e+000 0.00e+000 0.00e+000 1.11e− 119 0.00e+000
Rank 5 4 1 3 1 1 1 2 1
f12 Mean 5.504e− 13 2.72e− 08 8.46465e− 14 1.57e− 32 4.71e− 032 1.54e− 032 1.05e− 032 3.34e− 002 3.73e− 033
Std 0.000 1.05e − 08 2.79106e− 13 0.00e+00 0.00e+000 2.48e− 048 2.77e− 034 1.02e− 004 3.18e− 034
Rank 7 8 6 4 5 1 3 9 2
f13 Mean 3.697e− 15 8.02e− 04 0.00399 1.58e− 32 1.32e− 032 1.32e− 032 2.09e− 021 9.30e− 004 1.05e− 032
Std 0.000 4.75e − 05 0.00928 0.00e+00 0.00e+000 3.21e− 036 3.27e− 023 3.71e− 004 1.89e− 043
Rank 5 6 8 3 1 1 4 7 2
f14 Mean 9.99e− 001 9.98e− 001 1.13027 9.98e− 001 0.9980 9.98e− 001 9.98e− 001 9.98e− 001 9.98e− 001
Std 0.00e+00 4.46e− 18 0.50338 1.27e− 08 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 2 1 1 1 1 1 1
f15 Mean 3.85e− 04 3.815e− 04 3.13244e− 04 3.95e− 04 3.075e− 004 3.05e− 004 3.83e− 004 3.99e− 004 3.02e− 004
Std 4.61e− 05 5.70e− 04 2.17489e− 05 6.02e− 08 1.572e− 021 4.41e− 010 9.23e− 012 2.96e− 007 2.38e− 018
Rank 7 5 4 8 3 2 6 9 1
f16 Mean − 1.03e+00 − 1.03e + 00 − 1.0316 − 1.03e+00 − 1.03e+000 − 1.03e+000 − 1.03e + 000 − 1.03e+000 − 1.03e + 000
Std 0.00e+00 0.00e+00 4.40244e− 16 0.00e+00 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 1 1 1 1 1 1 1
f17 Mean 3.98e− 001 3.98e− 001 0.39788 3.98e− 001 3.98e− 001 3.98e− 001 3.98e− 001 3.98e− 001 3.98e− 001
Std 0.00e+000 2.71e− 04 3.66527e− 15 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Rank 1 1 2 1 1 1 1 1 1
f18 Mean 3.00e+000 3.00e+000 3. 00e+00 3.00e+000 3.00e+000 3.00e+000 3.00e + 000 3.00e+000 3.00e+000
Std 0.00e+000 1.53e− 04 5.96540e− 13 5.31e− 016 0.00e+000 0.00e+000 0.00e+000 1.33e− 018 0.00e+000
Rank 1 1 1 1 1 1 1 1 1
f19 Mean − 3.86e+000 − 3.86e+000 − 3.86278 − 3.87e+000 − 3.86e+000 − 3.86e+000 − 3.86e+000 − 3.86e + 000 − 3.86e+000

Std 1.53e− 016 4.01e− 04 8.31755e− 15 2.11e− 004 0.00e+000 0.00e+000 0.00e+000 3.36e− 021 0.00e+000
Rank 1 1 3 2 1 1 1 1 1
f20 mean − 3.32e+000 − 3.29e+000 − 3.27168 − 3.29e+000 − 3.32e+000 − 3.32e+000 − 3.32e+000 − 3.27e+000 − 3.32e+000
Std 0.00e+000 0.00e+000 0.06371 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000
Rank 1 2 3 2 1 1 1 3 1
f21 Mean − 1.01e+01 − 5.01e+00 − 10.15319 − 7.51e+00 − 1.01e+001 − 1.01e+001 − 1.01e+001 − 9.87e+000 − 1.01e+001
Std 0.00e+00 2.27e− 02 4.46227e− 15 1.21e− 01 0.00e+000 0.00e+000 0.00e+000 1.07e− 011 0.00e+000
Rank 1 4 1 3 1 1 1 2 1
f22 Mean − 1.04e+01 − 8.94e+01 − 10.40294 − 6.04e+01 − 1.04e+001 − 1.04e+001 − 1.04e+001 − 9.87e+000 − 1.04e+001
Std 2.00e− 01 1.06e− 01 1.80672e− 15 2.14e− 01 5.25e− 002 1.21e− 001 2.91e− 012 2.08e− 002 1.10e− 016
Rank 1 3 1 4 1 1 1 2 1

f23 Mean − 1.02e+01 − 8.25e+00 − 10.53640 − 5.64e+00 − 1.05e+001 − 1.05e+001 − 1.05e+001 − 9.13e+000 − 1.05e+001
Std 1.71e− 01 7.14e− 02 4.84794e− 15 5.70e− 02 2.23e− 016 3.08e− 011 1.07e− 021 2.05e− 003 0.00e+000
Rank 2 4 1 5 1 1 1 3 1
Sum of rank 99 92 79 81 37 36 43 77 28
Average 4.30 4 3.43 3.52 1.65 1.56 1.86 3.30 1.21
Overall rank 9 8 6 7 3 2 4 5 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms

subsequent performers, and so on) in Tables 9, 10, 11 and 12 based on the mean result values. From these tables it can be concluded that haDEPSO, aDE and aPSO ranked 1st, 2nd and 3rd, respectively, except in Table 12, where haDEPSO ranked 1st while aDE and aPSO placed 4th and 5th, behind DPD and MBDE. The average and overall ranks of the proposed algorithms versus the others are also presented in Tables 9, 10, 11 and 12. This ranking makes clear that the performance of the proposed algorithms is superior or comparable to that of the others. Finally, the proposed aDE, aPSO and haDEPSO produce a smaller std. (often 0.00e+000) in most cases on TS-1, which demonstrates their stability.
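The per-function ranks in Tables 9, 10, 11 and 12 follow a dense ranking of the mean values: tied algorithms share a rank and the next distinct value receives the next consecutive integer. A minimal sketch of that scheme (the helper names are ours, not the authors'):

```python
def dense_ranks(means):
    """Dense ranking: the smallest mean gets rank 1, ties share a rank,
    and the next distinct value gets the next consecutive rank."""
    distinct = sorted(set(means))
    rank_of = {value: position + 1 for position, value in enumerate(distinct)}
    return [rank_of[value] for value in means]

def overall_rank_summary(rank_rows):
    """Sum and average the per-function ranks, as in the 'Sum of rank'
    and 'Average' rows at the bottom of each table."""
    num_algorithms = len(rank_rows[0])
    sums = [sum(row[i] for row in rank_rows) for i in range(num_algorithms)]
    averages = [s / len(rank_rows) for s in sums]
    return sums, averages
```

For example, for f1 in Table 9 the means of HTS, aDE, aPSO and haDEPSO are all 0.00e+000, so all four share rank 1, while EO (3.32e−40) receives rank 2.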
Furthermore, the superiority of the proposed algorithms over the other algorithms is statistically validated through a one-tailed t-test (with 98 df at the 5% significance level) and the Wilcoxon Signed Rank (WSR) test (at the 5% significance level). Details of these tests can be found in Das and Parouha (2015b). The results of the t-test and WSR test on TS-1 are reported in Table 13.
From Table 13 it can be seen that the proposed algorithms obtain either the 'a (significantly better than the other)' or the 'a+ (highly significant versus the other)' sign in the t-test, and perform better than or on par with the others in the WSR test, in most cases. Also, the p-values of the proposed algorithms reported in Table 13 are smaller than those of the others, which indicates that the simulations are reliable for the majority of runs.
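Table 13 reports R+ (sum of ranks of positive differences) and R− (sum of ranks of negative differences) for the WSR test. A pure-Python sketch of how these two sums are obtained from paired per-function results, following the standard Wilcoxon signed-rank procedure (zero differences discarded, tied absolute differences given average ranks); the function name is ours, not the authors':

```python
def wilcoxon_rank_sums(paired_a, paired_b):
    """Return (R+, R-): sums of ranks of positive and negative differences."""
    # Discard zero differences, as in the standard WSR procedure.
    diffs = [a - b for a, b in zip(paired_a, paired_b) if a != b]
    # Order indices by absolute difference, then assign 1-based ranks,
    # averaging ranks over runs of tied absolute differences.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        average_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = average_rank
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return r_plus, r_minus
```

The test statistic is then min(R+, R−); for 23 functions it is compared against the critical value at the 5% significance level.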
The convergence speed of the proposed and comparative algorithms is compared on 8 typical 30-D TS-1 functions (f1, f5, f6, f7, f8, f9, f10 and f11). To avoid clutter in the convergence graphs, only the single best-performing non-proposed algorithm from Tables 9, 10, 11 and 12 is picked for comparison with the proposed algorithms. All convergence graphs (objective function value versus iterations) are presented separately in Fig. 4a–h. From this figure it can be concluded that the proposed aDE, aPSO and haDEPSO converge much faster than the other algorithms in all cases.
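Convergence graphs of this kind plot the best objective value found so far against the iteration count. A sketch of how such a monotone curve is derived from a raw per-iteration history, assuming minimization (illustrative only, not the authors' plotting code):

```python
def best_so_far(history):
    """Convert a per-iteration objective history into the monotonically
    non-increasing best-so-far curve plotted in convergence graphs."""
    curve, best = [], float("inf")
    for value in history:
        best = min(best, value)
        curve.append(best)
    return curve
```

The resulting curve never increases, so a steeper early drop on the graph corresponds to faster convergence.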
Overall, the above numerical, statistical and graphical analyses show that the proposed aDE, aPSO and haDEPSO perform competitively with, or better than, the other compared algorithms. Moreover, the proposed hybrid algorithm haDEPSO is superior to its integrated components aDE and aPSO.

(i) On TS-2 and TS-3: IEEE CEC 2014 and IEEE CEC 2017 unconstrained benchmark functions

Further, the results produced by the proposed hybrid haDEPSO and its suggested component algorithms aDE and aPSO are compared as follows. On TS-2 the comparators are: traditional algorithms PSO (Kennedy and Eberhart 1995) and DE (Storn and Price 1997); DE variants CIPDE (Zheng et al. 2017) and NDE (Tian and Gao 2019); PSO variants NDLPSO (Nasir et al. 2012) and EPSO (Lynn and Suganthan 2017); and hybrid variants MBDE (Parouha and Das 2016a) and SCA-PSO (Nenavath et al. 2018). On TS-3 the comparators are: traditional algorithms PSO (Kennedy and Eberhart 1995) and DE (Storn and Price 1997); DE variants JADE (Zhang and Sanderson 2009) and SHADE (Tanabe and Fukunaga 2013); PSO variants HEPSO (Mahmoodabadi et al. 2014) and BLPSO (Chen et al. 2017); and hybrid variants HGWO (Zhu et al. 2015) and PSOJADE (Du and Liu 2020). The parameters of all the above compared and proposed algorithms are listed in Table 14.
The relative experimental results, in terms of the mean error, std. and ranking of the objective function values over 30 independent runs, are presented in Table 15 (for 30D TS-2) and Table 16 (for 30D TS-3).
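Assuming the usual CEC convention that the error of a run is the gap between the best value found and the known optimum f*, the per-function entries of Tables 15 and 16 can be summarized as below (whether the paper uses the population or the sample standard deviation is not stated; the population form is an assumption here):

```python
import statistics

def summarize_runs(final_values, f_star):
    """Mean error and standard deviation of the error over independent runs,
    as reported per function in Tables 15 and 16 (illustrative sketch)."""
    errors = [value - f_star for value in final_values]
    return statistics.mean(errors), statistics.pstdev(errors)
```

For 30 independent runs, `final_values` would hold the 30 final best objective values for one function and one algorithm.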

Table 13  Statistical comparisons of proposed versus other algorithms for test suite (TS)-1
Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed

Versus PSO DE GSA HTS SSA GWO EO HHO SADE JDE JADE SHADE LFPSO AGPSO HEPSO RPSOLF DE-PSO IPSODE PSOS- FAPSO DPD MBDE haDEPSO aPSO
CALF
aDE Better 21 15 20 13 21 21 18 14 15 16 17 19 20 21 18 17 15 15 13 16 8 5 0 15
Equal 2 8 3 9 2 2 5 9 8 7 6 4 3 2 5 5 8 7 7 5 10 12 16 8
Worst 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 3 2 5 6 7 0
R+ 253 220 181 212 168 192 211 153 208 171 177 145 220 227 217 164 207 156 187 197 197 203 150 231
R− 23 56 95 64 108 84 65 123 68 105 99 131 56 49 59 112 69 120 89 79 79 73 126 45
P value 0.0527 0.0512 0.0517 0.0489 0.0417 0.0527 0.0527 0.0419 0.0327 0.0518 0.0327 0.0627 0.0523 0.0518 0.0539 0.0435 0.0398 0.0554 0.0447 0.0412 0.0235 0.0517 0.0437 0.0445
t-test a a a a a a a a a a a a+ a+ a a+ a a a a a a+ a+ a+ a
Decision + + ≈ + + + + ≈ + + + + + + + + + + + + + + + +
Versus PSO DE GSA HTS SSA GWO EO HHO SADE JDE JADE SHADE LFPSO AGPSO HEPSO RPSOLF DE-PSO IPSODE PSOS- FAPSO DPD MBDE aDE haDEPSO
CALF
aPSO Better 21 11 20 13 20 19 19 15 16 11 10 17 17 17 16 14 20 12 11 12 12 6 0 0
Equal 2 4 2 9 2 2 3 4 5 6 5 4 4 3 3 4 4 5 4 5 4 2 7 8
Worst 0 8 1 1 1 2 1 4 2 6 8 2 2 3 4 5 3 6 8 6 9 15 16 15
R+ 227 221 203 152 232 197 163 207 150 168 194 212 194 211 154 255 225 182 154 187 208 155 208 211
R− 49 55 73 124 44 79 114 69 126 108 82 64 82 65 122 21 51 94 122 89 68 121 68 65
P value 0.9031 0.0412 0.0133 0.0527 0.4094 0.4570 0.0384 0.0582 0.0351 0.0351 0.0342 0.0412 0.0327 0.0582 0.0327 0.0498 0.0419 0.0435 0.0554 0.0489 0.0417 0.0527 0.0489 0.0327
t-test a a a a a a a a a a a+ a+ a a a a+ a a a+ a+ a+ a a a+
Decision + + + + + + + + + + + ≈ + + + + + ≈ + + + + + +
Versus PSO DE GSA HTS SSA GWO EO HHO SADE JDE JADE SHADE LFPSO AGPSO HEPSO RPSOLF DE-PSO IPSODE PSOS- FAPSO DPD MBDE aDE aPSO
CALF
haDEPSO Better 0 14 20 13 20 20 15 14 15 16 17 19 21 21 18 18 15 16 15 17 8 8 8 15
Equal 2 8 3 10 3 3 8 9 8 7 6 4 2 2 5 5 8 7 8 4 12 12 15 8
Worst 21 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 3 3 0 0
R+ 217 152 232 222 203 152 232 227 217 194 220 255 197 150 164 232 154 217 145 227 208 194 164 235
R− 59 124 44 54 73 124 44 49 59 82 56 21 79 126 112 44 122 59 131 49 68 82 112 41
p-value 0.0582 0.0351 0.0351 0.0342 0.0412 0.0327 0.0582 0.0527 0.0327 0.0419 0.0327 0.0518 0.0327 0.0627 0.0127 0.0517 0.0489 0.0417 0.0582 0.0327 0.0498 0.0419 0.0554 0.0489
t-test a a a a a a a a a+ a a+ a a a+ a a+ a+ a a a a+ a+ a+ a
decision + + + + + + + + ≈ + + + + ≈ + + + + + + + + + +

vs.: versus, R+: sum of ranks of positive differences, R−: sum of ranks of negative differences

Fig. 4  a–h Convergence graphs of the compared and proposed algorithms for test suite (TS)-1

Table 14  Parameter setting for test suite (TS)-2 and test suite (TS)-3
Algorithm References Control parameter (Term, Values) Population size Stopping criterion Run
for TS-2
PSO Kennedy and Eberhart (1995) w, C1 and C2 1, 2 and 2 100 500 51
DE Storn and Price (1997) F and CR 0.5 and 0.5 50 500 51
CIPDE Zheng et al. (2017) c, μF, μCR and T 0.1, 0.7, 0.5 and 90 100 10,000 30
NDE Tian and Gao (2019) F0loc, CR0m and c 0.5, 0.5 and 1 100 10,000 30
NDLPSO Nasir et al. (2012) C1, ­C2, wo and w1 1.494, 1.494, 0.9 and 0.4 30 10,000 30
EPSO Lynn and Suganthan (2017) g1 and g2 15 and 25 30 10,000 30
MBDE Parouha and Das (2016a) Use of memory, swarm mutation and – 30 1000 30
swarm crossover
SCA-PSO Nenavath et al. (2018) r2, r3, r4, w, C1 and ­C2 2π*rand, 2*rand, rand, 0.9–0.4, 2 and 2 30 1000 30
for TS-3
PSO Kennedy and Eberhart (1995) C1 and C2 2 and 2 30 500 30
DE Storn and Price (1997) F and CR 0.5 and 0.5 30 500 30
JADE Zhang and Sanderson (2009) Fi and CRi randci (μF, 0.1) and randni (μCR, 0.1) 30 500 51
SHADE Tanabe and Fukunaga (2013) Pbest and Arc rate 0.1 and 2 30 500 30
HEPSO Mahmoodabadi et al. (2014) PC and PB 0.95 and 0.02 40 500 51
BLPSO Chen et al. (2017) w, C, I and E [0.2, 0.9], 1.496, 1 and 1 30 1000 51
HGWO Zhu et al. (2015) betaMin, betaMax and pCR 0.2, 0.8 and 0.2 30 1000 30
PSOJADE Du and Liu (2020) 𝜌 0.5 40 500 30
aDE Proposed – – 30 500 30
aPSO wi, wf , c1i, c1f , c2i and c2f 0.4, 0.9, 0.5, 2.5, 2.5 and 0.5 30 500 30
haDEPSO – – 30 500 30

List of variables w: inertia weight, C1: cognitive acceleration coefficient, C2: social acceleration coefficient, F: scaling vector, CR: crossover rate, c: weighted parameter, μF: location parameter, μCR: mean crossover rate, T: stagnation tolerance, F0loc: initial location parameter, CR0m: initial average crossover rate, wo and w1: inertia weights, g1 and g2: calibrating subpopulation sizes, r2, r3 and r4: random numbers, Fi: mutation factor, CRi: crossover probability, randci: random Cauchy distribution, randni: random normal distribution, Pbest: personal best, PC: multi-crossover probability, PB: bee colony operator, C: acceleration coefficient, I: maximum immigration rate, E: maximum emigration rate, pCR: crossover probability, 𝜌: control parameter, wi and wf: minimum and maximum values of inertia weight, c1i and c1f: minimum and maximum values of personal learning factors, c2i and c2f: minimum and maximum values of social learning factors.
R. P. Parouha, P. Verma
Table 15  Simulation results on test suite (TS)-2
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms


DE PSO NDE CIPDE NDLPSO EPSO SCA-PSO MBDE aDE aPSO haDEPSO

g1 Mean 8.8069e+07 8.28e+05 5.91e+00 2.86e+03 2.18e+06 2.16e+05 2.32e+05 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Std 1.8777e+07 7.89e+05 5.58e+00 2.72e+03 2.47e+06 1.52e+05 2.33e+05 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 8 6 2 3 7 4 5 1 1 1 1
g2 Mean 1.7236e+03 1.44e+02 0.00e+00 2.96e − 14 7.28e+05 2.04e+02 6.77e+08 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Std 4.9340e+02 1.72e+02 0.00e+00 5.68e − 15 2.94e+06 6.24e+02 2.04e+08 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 5 3 1 2 6 4 7 1 1 1 1
g3 Mean 2.5487e+01 2.39e+03 0.00e+00 2.53e − 01 6.28e+03 6.05e+01 9.57e+02 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Std 5.0444e+00 1.77e+03 0.00e+00 4.62e − 01 1.95e+04 9.48e+01 1.85e+02 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 3 6 1 2 7 4 5 1 1 1 1
g4 Mean 1.2268e+02 9.70e+01 2.94e− 08 1.66e − 13 2.76e+01 2.42e+01 4.79e+02 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Std 5.7729e+00 3.36e+01 4.84e− 08 1.26e − 13 2.78e+01 3.54e+01 1.62e+01 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Design and applications of an advanced hybrid meta‑heuristic…

Rank 7 6 3 2 5 4 8 1 1 1 1
g5 Mean 2.0897e+01 2.10e+01 2.01e+01 2.06e+01 2.08e+01 2.05e+01 2.03e+01 2.06e+001 1.66e+001 2.00e+001 0.00e+000
Std 5.7695e− 02 5.23e− 02 4.71e− 02 3.30e − 02 3.32e − 01 5.74e − 02 3.92e− 02 4.79e− 002 2.45e− 002 3.70e− 002 0.00e+000
Rank 8 9 4 7 8 6 5 7 2 3 1
g6 Mean 3.0312e+01 1.54e+01 3.37e+00 4.53e+00 7.23e+00 9.93e+00 6.08e+01 8.44e− 05 0.00e+000 1.40e− 007 0.00e+000
Std 1.0790e+00 2.14e+00 1.36e+00 2.06e+00 6.00e+00 2.04e+00 5.96e− 01 4.97e− 05 0.00e+000 9.90e− 007 0.00e+000
Rank 9 8 4 5 6 7 10 3 1 2 1
g7 Mean 4.3384e− 02 6.74e− 02 0.00e+00 6.82e − 14 2.69e − 02 2.86e − 04 4.08e+03 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Std 7.9261e− 02 8.69e− 02 0.00e+00 5.68e − 14 1.17e − 01 4.75e − 04 1.83e− 03 0.00e+00 0.00e+000 0.00e+000 0.00e+000
Rank 5 6 1 2 4 3 7 1 1 1 1
g8 Mean 1.1981e+02 7.37e+01 0.00e+00 0.00e+00 4.37e+01 3.90e − 02 8.39e+01 0.00e+00 0.00e+000 5.35e− 003 0.00e+000
Std 6.5322e+00 2.11e+01 0.00e+00 0.00e+00 2.98e+01 1.95e − 01 4.23e+00 0.00e+00 0.00e+000 0.16e+000 0.00e+000

Rank 6 4 1 1 3 2 5 1 1 2 1
Table 15  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms

DE PSO NDE CIPDE NDLPSO EPSO SCA-PSO MBDE aDE aPSO haDEPSO

g9 Mean 1.9501e+02 7.68e+01 2.48e+01 2.07e+01 6.43e+01 5.09e+01 9.41e+02 4.86e+01 4.22e+001 5.76e+001 0.00e+000
Std 8.9552e+00 1.08e+01 4.48e+00 7.21e+00 3.39e+01 1.05e+01 4.51e+00 1.48e+00 1.50e− 013 2.78e− 009 0.00e+000
Rank 9 8 3 2 7 6 10 5 4 7 1
g10 Mean 3.9102e+03 1.72e+03 0.00e+00 1.07e+02 2.37e+03 1.34e+01 1.81e+03 1.87e− 02 1.60e− 002 1.45e− 001 0.00e+000
Std 2.3839e+02 7.41e+02 0.00e+00 3.03e+01 1.70e+03 2.83e+01 1.08e+02 1.60e− 02 3.82e− 009 5.62e− 007 0.00e+000
Rank 9 6 1 6 8 5 7 4 2 3 1
g11 Mean 6.5465e+03 6.71e+03 1.27e+03 2.45e+03 4.02e+03 2.08e+03 2.10e+03 1.94e+03 1.77e+003 2.46e+003 1.07e+003
Std 2.4765e+02 3.92e+02 2.41e+02 4.88e+02 1.71e+03 4.02e+02 1.28e+02 1.87e+02 1.80e+001 7.91e+001 2.21e+002
Rank 9 10 2 6 8 4 5 3 2 7 1
g12 Mean 2.0819e+00 2.64e+00 1.22e− 01 8.74e − 01 1.52e+00 4.62e − 01 2.63e+00 1.63e− 01 1.60e− 001 2.96e− 001 3.13e− 002
Std 2.0466e− 01 2.47e− 01 2.82e− 02 1.44e − 01 9.95e − 01 9.08e − 02 1.28e− 01 3.35e− 02 8.30e− 002 1.35e− 002 2.42e− 003
Rank 9 11 2 7 8 6 10 4 3 5 1
g13 Mean 4.8848e− 01 4.88e− 01 6.80e− 02 9.24e − 02 3.17e − 01 3.06e − 01 4.90e− 01 1.28e− 01 1.20e− 001 3.46e− 001 0.00e+000
Std 4.5645e− 02 1.14e− 01 1.31e− 02 2.35e − 02 1.10e − 01 5.86e − 02 1.17e− 01 1.17e− 02 1.70e− 002 8.55e− 002 0.00e+000
Rank 9 9 2 3 7 6 10 5 4 8 1
g14 Mean 2.9400e− 01 2.88e− 01 6.80e− 02 2.91e − 01 6.67e − 01 2.43e − 01 3.17e− 01 2.44e− 01 8.40e− 003 2.04e− 002 0.00e+000
Std 3.7750e− 02 4.70e− 02 1.31e− 02 2.76e − 02 2.06e+00 4.27e − 02 3.73e− 02 3.80e− 02 3.00e− 004 7.02e− 004 0.00e+000
Rank 8 7 4 8 10 5 9 6 2 3 1
g15 Mean 1.8975e+01 1.80e+01 2.03e− 01 4.38e+00 5.61e+00 5.61e+00 1.63e− 01 2.71e+00 2.01e− 001 3.25e− 001 1.12e− 001
Std 1.1252e+00 5.93e+00 2.64e− 02 9.80e − 01 3.07e+00 2.2e+00 8.29e− 01 2.45e− 01 2.50e− 001 5.21e− 001 7.89e− 001
Rank 10 9 4 7 8 8 2 6 3 5 1
Table 15  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms


DE PSO NDE CIPDE NDLPSO EPSO SCA-PSO MBDE aDE aPSO haDEPSO

g16 Mean 1.2474e+01 1.20e+01 2.60e+00 8.45e+00 1.16e+01 1.04e+01 1.60e+01 8.59e+00 8.50e+000 9.93e+000 1.23e+000
Std 2.2289e− 01 4.35e− 02 4.45e− 01 7.90e − 01 9.10e − 01 5.34e − 01 1.62e− 01 4.96e− 01 4.60e− 001 5.41e− 001 3.45e− 001
Rank 10 9 2 3 8 7 11 5 4 6 1
g17 Mean 2.3978e+06 1.91e+05 8.38e+00 1.51e+04 2.81e+05 6.12e+04 1.16e+04 1.96e+02 9.97e+000 1.90e+001 9.88e+000
Std 5.6934e+05 1.73e+05 4.13e− 01 6.94e+04 6.20e+05 4.73e+04 4.16e+03 6.59e+01 1.10e− 001 3.61e+002 4.86e− 001
Rank 11 9 1 7 10 8 6 5 3 4 2
g18 Mean 2.8801e+04 2.93e+03 1.13e+02 9.74e+01 1.28e+04 2.84e+02 9.69e+02 5.59e+00 3.90e+000 5.70e+000 1.89e+000
Std 1.5262e+04 2.35e+03 5.94e+01 3.17e+01 1.77e+04 2.78e+02 1.26e+01 2.79e+00 1.90e+000 3.04e+000 3.20e− 001
Rank 11 9 6 5 10 7 8 3 2 4 1
g19 Mean 1.0619e+01 2.12e+01 5.95e+00 4.52e+00 1.03e+01 7.33e+00 1.90e+01 2.98e+00 1.70e+000 2.99e+000 1.06e− 002

Std 5.8059e− 01 2.55e+01 1.50e+00 5.95e − 01 5.08e+00 1.12e+00 5.58e− 01 7.89e− 01 3.80e− 001 5.12e− 001 4.12e− 003
Rank 9 11 6 5 8 7 10 3 2 4 1
g20 Mean 4.5792e+02 1.54e+03 2.14e+00 8.74e+02 4.71e+03 1.01e+03 2.31e+03 3.81e+00 2.44e+000 4.94e+000 1.04e+000
Std 8.3335e+01 9.69e+02 4.61e− 01 1.26e+03 1.64e+04 1.11e+03 1.25e+02 3.57e+00 3.61e− 001 6.11e− 001 3.68e− 001
Rank 6 9 2 7 11 8 10 4 3 5 1
g21 Mean 1.8856e+05 9.72e+04 4.05e+00 7.91e+03 9.56e+04 3.53e+04 3.43e+03 9.71e+01 8.70e+001 3.38e+002 6.30e+001
Std 6.8832e+04 9.01e+04 9.50e− 01 2.76e+04 9.09e+04 3.00e+04 3.09e+02 7.04e+01 1.96e+000 2.19e+000 1.88e+001
Rank 11 10 1 7 9 8 6 4 3 5 2
g22 Mean 2.1011e+02 2.81e+02 1.01e+01 2.04e+02 3.25e+02 2.75e+02 2.24e+02 2.87e+01 1.78e+001 2.03e+001 5.86e+000
Std 7.2336e+01 1.08e+02 5.37e+00 1.01e+02 2.09e+02 1.05e+02 7.26e+00 1.68e+01 1.03e+000 5.15e+000 7.37e+001
Rank 6 9 2 5 11 8 7 10 3 4 1

Table 15  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms

DE PSO NDE CIPDE NDLPSO EPSO SCA-PSO MBDE aDE aPSO haDEPSO

g23 Mean 3.1224e+02 3.15e+02 2.61e+01 3.15e+02 3.14e+02 3.15e+02 2.00e+02 3.27e+02 2.00e+002 3.12e+002 1.01e+001
Std 8.0845e− 05 1.23e+11 4.46e+00 0.00e+00 5.26e − 01 2.21e − 12 2.48e+03 0.00e+00 6.78e− 005 1.88e− 003 3.45e+000
Rank 4 6 2 6 5 6 3 7 3 4 1
g24 Mean 2.0891e+02 2.00e+02 3.15e+02 2.25e+02 2.34e+02 2.26e+02 2.24e+02 2.22e+02 2.25e+002 2.29e+002 1.32e+002
Std 3.6346e+00 2.20e+03 2.15e− 13 2.33e+00 8.84e+00 1.26e+00 7.26e+00 5.13e+00 4.83e+000 6.83e+000 5.13e+000
Rank 3 2 10 6 9 7 5 4 6 8 1
g25 Mean 2.2324e+02 2.01e+02 2.22e+02 2.08e+02 2.01e+02 2.08e+02 3.15e+02 2.08e+02 1.86e+002 2.00e+002 1.38e+002
Std 2.7951e+00 2.39e+00 1.67e− 01 3.17e+00 1.45e+00 1.77e+00 3.45e+02 3.05e− 02 4.56e+000 1.93e− 001 0.00e+000
Rank 6 4 7 5 4 5 8 5 2 3 1
g26 Mean 1.0048e+02 1.10e+02 2.03e+02 1.00e+02 1.02e+02 1.06e+02 1.00e+02 1.33e+02 1.49e+002 1.66e+002 1.00e+002
Std 4.7725e− 02 3.15e+01 4.91e− 02 1.78e − 02 1.40e+01 2.3e+01 7.66e− 02 1.56e− 02 2.45e+001 2.16e+001 6.06e− 002
Rank 1 4 8 1 2 3 1 5 6 7 1
g27 Mean 3.8924e+02 5.47e+02 1.00e+02 3.21e+02 4.76e+02 4.12e+02 2.75e+02 3.70e+02 3.22e+002 3.84e+002 1.00e+002
Std 3.7274e+01 1.66e+02 1.79e− 02 3.80e+01 1.17e+02 3.62e+01 2.33e+01 2.01e+01 4.11e+001 6.42e+000 2.38e−002
Rank 7 10 1 3 9 8 2 5 4 6 1
g28 Mean 9.7674e+02 1.24e+03 3.90e+02 7.96e+02 4.18e+02 9.53e+02 9.23e+02 8.58e+02 3.40e+002 4.30e+002 1.80e+002
Std 2.7797e+01 3.51e+02 3.06e+01 2.96e+01 5.22e+01 7.67e+01 2.29e+01 1.74e+02 4.86e+001 5.78e+001 3.82e+001
Rank 10 11 3 6 4 9 8 7 2 5 1
g29 Mean 1.1761e+04 3.31e+06 7.97e+02 7.61e+02 2.12e+02 9.80e+02 3.58e+03 9.27e+02 2.27e+002 3.27e+003 3.89e+001
Std 3.0527e+03 5.39e+06 1.63e+01 7.01e+01 1.25e+01 1.28e+02 1.86e+02 6.18e+00 5.27e+002 5.27e+001 5.23e+002
Rank 10 11 5 4 2 7 9 6 3 8 1
Table 15  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms


DE PSO NDE CIPDE NDLPSO EPSO SCA-PSO MBDE aDE aPSO haDEPSO

g30 Mean 5.5898e+03 3.35e+03 5.14e+02 1.48e+03 9.19e+02 2.38e+03 3.89e+03 1.29e+03 6.45e+002 2.89e+003 2.00e+002
Std 8.7287e+02 1.35e+03 6.93e+01 4.35e+02 2.96e+02 5.55e+02 7.56e+01 6.23e+02 7.90e+001 5.33e+002 1.56e+001
Rank 11 10 2 6 4 7 9 5 3 8 1
Sum of rank 231 232 89 139 208 179 208 127 78 131 32
Average 7.70 7.73 2.96 4.63 6.93 5.96 6.93 4.23 2.6 4.36 1.06
Overall rank 9 10 3 6 8 7 8 4 2 5 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms
Table 16  Simulation results on test suite (TS)-3
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed algorithms

PSO DE JADE SHADE BLPSO HEPSO HGWO PSOJADE aDE aPSO haDEPSO

h1 Mean 1.67e+08 1.67e+08 1.39e− 14 1.00e+02 2.14e+09 7.99e+002 4.65e+09 1.01e− 019 0.00e+000 0.00e+000 0.00e+000
Std 2.53e+07 2.02e+02 1.99e− 15 0.00e+00 3.93e+08 1.02e+003 1.65e+09 4.67e− 019 0.00e+000 0.00e+000 0.00e+000
Rank 6 6 3 4 7 5 8 2 1 1 1
h2 Mean 2.39e+15 8.03e+07 1.82e+02 1.78e+02 1.55e+30 5.36e− 006 2.76e+33 1.13e− 007 0.00e+000 0.00e+000 0.00e+000
Std 2.46e+15 4.88e+01 7.87e+02 2.02e+02 5.18e+30 7.53E− 006 4.76e+33 5.48e− 007 0.00e+000 0.00e+000 0.00e+000
Rank 7 6 5 4 8 3 9 2 1 1 1
h3 Mean 9.93e+03 1.84e+04 9.28e+03 3.00e+02 8.26e+04 8.43e− 001 7.87e+04 1.98e− 028 1.33e− 033 1.11e− 021 0.00e+000
Std 4.57e+03 1.38e+07 1.62e+04 0.00e+00 1.38e+04 1.82e+000 7.65e+03 2.70e− 028 6.05e− 029 7.96e− 015 0.00e+000
Rank 8 9 7 6 11 5 10 3 2 4 1
h4 Mean 5.04e+02 4.23e+02 5.50e+01 4.00e+02 8.32e+02 2.12e− 001 7.42e+02 6.47e+000 1.35e− 005 4.31e− 003 5.55e− 009
Std 3.03e+01 1.83e+02 1.61e+01 0.00e+00 4.86e+01 7.91e− 001 9.58e+01 1.61e+001 8.63e− 003 1.06e− 001 1.35e− 014
Rank 9 8 6 7 11 4 10 5 2 3 1
h5 Mean 7.52e+02 5.44e+02 2.60e+01 5.03e+02 7.43e+02 2.23e+001 7.23e+02 2.69e+001 1.95e+001 2.22e+001 1.08e+001
Std 3.80e+01 1.53e+02 4.76e+00 1.00e+00 1.59e+01 5.68e+000 2.33e+01 4.47e+000 1.15e− 001 1.03e+000 3.61e− 005
Rank 11 8 5 7 10 4 9 6 2 3 1
h6 Mean 6.58e+02 6.08e+02 1.67e− 13 6.00e+02 6.25e+02 1.90e− 013 6.22e+02 0.00e+000 0.00e+000 2.89e− 021 0.00e+000
Std 9.87e+00 1.93e+02 8.91e− 14 2.66e− 07 2.73e+00 3.42e− 013 4.02e+00 0.00e+000 0.00e+000 7.52e− 017 0.00e+000
Rank 9 6 3 5 8 4 7 1 1 2 1
h7 Mean 9.52e+02 7.64e+02 5.47e+01 7.13e+02 1.07e+03 4.73e+001 9.60e+02 5.66e+001 3.95e+001 4.21e+001 2.50e+001
Std 1.65e+01 3.72e+01 3.54e+00 1.23e+02 1.91e+01 4.05e+000 3.72e+01 5.02e+000 3.43e+000 3.94e+000 1.95e+000
Rank 9 8 5 7 11 4 10 6 2 3 1
h8 Mean 9.99e+02 8.46e+02 2.74e+01 8.03e+02 1.04e+03 2.15e+001 9.79e+02 2.85e+001 2.01e+001 2.33e+001 1.84e+001
Std 2.51e+01 3.76e+01 3.58e+00 1.27e+02 1.51e+01 6.61e+000 1.66e+01 5.98e+000 2.38e+000 4.84e+000 6.49e− 001

Rank 10 8 5 7 11 3 9 6 2 4 1
Table 16  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed algorithms


PSO DE JADE SHADE BLPSO HEPSO HGWO PSOJADE aDE aPSO haDEPSO

h9 Mean 6.77e+03 1.08e+03 1.96e− 02 9.00e+02 2.68e+03 5.26e− 023 2.34e+03 1.33e− 001 5.87e− 016 3.62e− 013 8.86e− 021
Std 1.72e+03 4.85e+02 8.96e− 02 0.00e+00 3.89e+02 9.13e− 023 5.53e+02 4.27e− 001 2.52e− 024 1.58e− 009 5.17e− 030
Rank 11 8 5 7 10 1 9 6 3 4 2
h10 Mean 6.57e+03 2.50e+03 1.92e+03 1.193e+02 8.56e+03 2.14e+003 6.41e+03 1.11e+003 9.64e+001 1.50e+003 1.49e+001
Std 5.96e+02 3.79e+02 2.20e+02 8.47e+01 2.37e+02 2.09e+002 5.95e+02 2.89e+002 6.85e+001 1.96e+002 2.31e+001
Rank 11 8 6 3 10 7 9 4 2 5 1
h11 Mean 1.32e+03 1.19e+03 3.13e+01 1.10e+03 1.86e+03 5.72e+000 4.01e+03 1.87e+001 4.97e− 001 0.89e+000 1.50e− 005
Std 4.90e+01 5.36e+01 2.45e+01 1.36e+00 1.63e+02 3.05e+000 1.36e+03 2.13e+001 7.61e− 001 2.82e+000 7.15e− 002
Rank 9 8 6 7 10 4 11 5 2 3 1
h12 Mean 4.07e+07 1.69e+07 1.16e+03 1.32e+03 1.65e+08 6.39e+004 3.64e+08 1.02e+003 1.26e+003 1.31e+003 1.04e+003

Std 1.89e+07 4.45e+09 4.33e+02 1.02e+02 4.98e+07 1.00e+005 1.71e+08 4.01e+002 3.78e+002 4.32e+002 1.17e+002
Rank 9 8 3 6 10 7 11 1 4 5 2
h13 Mean 8.19e+06 1.33e+04 1.92e+02 1.30e+03 2.74e+07 5.44e+003 1.86e+08 5.83e+001 2.99e+001 3.07e+001 1.87e+001
Std 2.31e+06 4.97e+07 1.09e+03 0.71e+00 1.32e+07 3.95e+003 1.48e+08 3.52e+001 0.53e+000 1.30e+000 1.83e− 001
Rank 9 8 5 6 10 7 11 4 2 3 1
h14 Mean 3.20e+04 1.50e+03 4.58e+03 1.41e+03 1.97e+05 2.14e+003 9.51e+05 1.61e+001 5.42e+001 8.12e+001 1.75e+001
Std 3.10e+04 4.50e+06 8.11e+03 9.21e+00 1.08e+05 2.89e+003 5.60e+05 1.13e+001 3.77e+002 6.85e+002 1.38e+001
Rank 9 6 8 5 10 7 11 1 3 4 2
h15 Mean 9.72e+05 1.75e+03 1.13e+03 1.50e+03 2.81e+06 2.80e+002 1.71e+07 6.09e+001 4.97e+001 3.22e+001 1.15e+001
Std 3.95e+05 2.58e+06 2.97e+03 0.36e+00 2.50e+06 2.51e+002 2.60e+07 6.97e+001 0.28e+000 1.38e+000 6.26e− 001
Rank 9 8 6 7 10 5 11 4 3 2 1

Table 16  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed algorithms

PSO DE JADE SHADE BLPSO HEPSO HGWO PSOJADE aDE aPSO haDEPSO

h16 Mean 3.08e+03 1.77e+03 4.30e+02 1.60e+03 3.53e+03 3.88e+002 3.22e+03 4.26e+002 3.65e+002 4.18e+002 1.84e+002
Std 2.45e+02 4.26e+02 1.25e+02 2.19e+00 2.51e+02 9.93e+001 2.95e+02 1.51e+002 5.61e+001 8.63e+001 0.20e+001
Rank 9 8 6 7 11 4 10 5 2 3 1
h17 Mean 2.40e+03 1.78e+03 7.36e+01 1.71e+03 2.25e+03 4.10e+001 2.28e+03 4.51e+001 5.74e+001 7.26e+001 4.56e+001
Std 2.26e+02 5.47e+02 2.76e+01 5.96e+00 1.57e+02 2.43e+001 1.42e+02 4.87e+001 4.84e+000 3.04e+000 0.47e+001
Rank 11 8 6 7 9 1 10 2 4 5 3
h18 Mean 6.02e+05 3.29e+04 5.19e+01 1.89e+03 3.99e+06 1.17e+004 3.97e+06 3.55e+001 2.67e+001 0.34e+002 1.89e+001
Std 4.07e+05 5.79e+06 5.14e+01 9.46e+00 2.26e+06 8.80e+003 4.85e+06 4.36e+001 5.12e+000 0.74e+001 0.49e+001
Rank 9 8 5 6 11 7 10 4 2 3 1
h19 Mean 2.98e+06 2.10e+03 1.95e+03 1.90e+03 4.53e+06 1.16e+003 1.12e+07 2.68e+001 0.36e+001 9.93e+000 2.91e+000
Std 1.34e+06 1.85e+07 4.44e+03 0.28e+00 3.79e+06 1.21e+003 1.75e+07 2.32e+001 0.23e+000 1.28e+000 0.19e− 001
Rank 9 8 7 6 10 5 11 4 2 3 1
h20 Mean 2.66e+03 2.07e+03 9.76e+01 2.02e+03 2.59e+03 1.22e+002 2.63e+03 7.87e+001 0.65e+002 3.32e+001 1.78e+001
Std 1.61e+02 1.38e+02 4.96e+01 0.00e+00 1.17e+02 4.99e+001 1.07e+02 5.92e+001 3.41e+001 0.48e+002 0.00e+000
Rank 11 8 5 7 9 6 10 4 3 2 1
h21 Mean 2.54e+03 2.34e+03 2.27e+02 2.28e+03 2.53e+03 1.77e+002 2.46e+03 2.30e+002 1.63e+002 2.13e+002 9.89e+001
Std 3.61e+01 8.42e+01 4.17e+00 4.26e+01 1.42e+01 6.49e+001 2.80e+01 8.01e+000 6.31e+000 7.79e+000 0.37e+001
Rank 11 8 5 7 10 3 9 6 2 4 1
h22 Mean 4.71e+03 2.33e+03 1.01e+02 2.29e+03 2.64e+03 1.00e+002 3.21e+03 1.00e+002 0.15e+003 1.89e+002 1.00e+002
Std 2.76e+03 1.23e+03 4.48e+00 1.61e+01 4.45e+01 2.00e− 012 1.03e+03 0.00e+000 5.89e− 015 4.78e− 013 0.00e+000
Rank 9 6 2 5 7 1 8 1 3 4 1
Table 16  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed algorithms


PSO DE JADE SHADE BLPSO HEPSO HGWO PSOJADE aDE aPSO haDEPSO

h23 Mean 3.15e+03 2.64e+03 3.73e+02 2.60e+03 2.91e+03 3.00e+002 2.83e+03 3.70e+002 3.29e+002 3.75e+002 3.12e+002
Std 1.79e+02 5.43e+01 5.50e+00 1.71e+00 2.40e+01 1.40e+002 3.30e+01 8.21e+000 1.70e+000 0.19e+001 0.14e+001
Rank 11 8 5 7 10 1 9 4 3 6 2
h24 Mean 3.26e+03 2.77e+03 4.42e+02 2.72e+03 3.08e+03 2.49e+ 002 2.98e+03 4.49e+002 2.02e+002 0.24e+003 1.15e+002
Std 8.28e+01 3.15e+01 4.80e+00 3.10e+01 1.79e+01 1.32e+002 2.56e+01 7.44e+000 0.22e+001 0.38e+001 0.00e+000
Rank 11 9 6 8 10 4 5 7 2 3 1
h25 Mean 2.93e+03 2.96e+03 3.87e+02 2.91e+03 3.07e+03 3.88e+002 3.04e+03 3.88e+002 3.05e+002 3.87e+002 2.87e+002
Std 2.27e+01 4.08e+01 1.32e− 01 2.29e+01 3.08e+01 2.19e+000 4.96e+01 3.04e+000 0.92e− 001 0.19e+001 1.16e− 001
Rank 6 7 3 5 9 4 8 4 2 3 1
h26 Mean 5.41e+03 3.17e+03 1.17e+03 2.90e+03 6.17e+03 2.44e+002 5.35e+03 4.25e+002 3.56e+002 4.15e+002 2.89e+002

Std 2.29e+03 2.70e+01 6.60e+01 3.49e+01 4.36e+02 2.44e+002 3.94e+02 4.02e+002 0.48e+002 6.25e+001 0.36e+002
Rank 10 8 6 7 11 1 9 5 3 4 2
h27 Mean 3.21e+03 3.10e+03 5.03e+02 3.07e+03 3.36e+03 5.98e+002 3.28e+03 5.03e+002 4.25e+002 0.49e+003 3.98e+002
Std 8.81e+01 7.47e+01 8.17e+00 0.78e+00 2.03e+01 1.15e+001 2.70e+01 8.02e+000 7.78e− 002 5.20e− 001 5.60e− 003
Rank 8 7 4 6 10 5 9 4 2 3 1
h28 Mean 3.27e+03 3.38e+03 3.36e+02 3.26e+03 3.48e+03 3.00e+002 3.53e+03 3.14e+002 2.69e+002 3.01e+002 1.82e+002
Std 1.93e+01 6.86e+02 5.23e+01 2.22e+01 3.71e+01 7.82e− 013 4.60e+01 3.66e+001 7.18e− 015 4.45e− 011 3.48e− 019
Rank 8 9 6 7 10 3 11 5 2 4 1
h29 Mean 4.38e+03 3.29e+03 4.84e+02 3.14e+03 4.42e+03 6.36e+002 4.35e+03 4.39e+002 4.18e+002 4.33e+002 3.46e+002
Std 2.05e+02 5.57e+00 2.77e+01 1.29e+01 1.48e+02 3.44e+001 1.86e+02 3.29e+001 0.11e+002 1.23e+001 0.98e+001
Rank 10 8 5 7 11 6 9 4 2 3 1

Table 16  (continued)
Function Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrids variants Proposed algorithms

PSO DE JADE SHADE BLPSO HEPSO HGWO PSOJADE aDE aPSO haDEPSO

h30 Mean 5.77e+06 1.53e+06 2.15e+03 3.20e+02 8.55e+06 9.07e+003 6.08e+07 5.63e+002 0.43e+003 5.46e+002 3.56e+002
Std 2.39e+06 3.76e+03 1.44e+02 0.31e+00 3.38e+06 1.95e+003 3.64e+07 9.81e+001 4.56e− 001 9.70e− 001 0.35e+000
Rank 9 8 6 1 10 7 11 5 3 4 2
Sum of rank 278 231 155 183 295 128 284 120 69 101 39
Average 9.3 7.7 5.16 6.1 9.8 4.26 9.5 4 2.3 3.36 1.3
Overall rank 9 8 6 7 11 5 10 4 2 3 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms

From Tables 15 and 16, it should be noted that the mean error values of the proposed aDE, aPSO and haDEPSO algorithms are better than or comparable to those of all compared algorithms in both cases. Moreover, as per the experimental results shown in Table 15, the following comparisons (against the non-proposed algorithms) are summarized for the 30D TS-2 case. (1) Unimodal functions (g1–g3): the proposed aDE, aPSO and haDEPSO obtained better results on all three functions (g1–g3), tying with the NDE algorithm on functions g2 and g3. (2) Multimodal functions (g4–g16): the proposed haDEPSO exhibits the best performance on all multimodal functions (g4–g16). The suggested component aDE gives the best result on four functions (g4, g6, g7 and g8) and performs similarly or marginally worse on the remaining functions, whereas aPSO achieves the best result on two functions (g4 and g7). (3) Hybrid functions (g17–g22): the proposed haDEPSO performs better on four functions (g18, g19, g20 and g22), whereas on g17 and g21 it is marginally inferior only to NDE. The anticipated components of the proposed haDEPSO, i.e. aDE and aPSO, perform either better than or marginally worse than the others. (4) Composition functions (g23–g30): the proposed haDEPSO exhibits the best performance on all composition functions, while aDE and aPSO obtained marginally better or equal results compared to the other algorithms.
Similarly, as per the experimental results shown in Table 16, the following comparisons are summarized for the 30D TS-3 case. (1) Unimodal functions (h1–h3): the proposed haDEPSO exhibits the best performance on all three functions (h1–h3), whereas aDE and aPSO show the best performance on two functions (h1 and h2) and are slightly better than the others on h3. (2) Multimodal functions (h4–h10): the proposed haDEPSO outperforms the others on all functions, tying with PSOJADE only on h6; meanwhile, aDE and aPSO perform equal to or marginally worse than the other algorithms. (3) Hybrid functions (h11–h20): the proposed haDEPSO achieves better results on seven functions (h11, h13, h15, h16, h18, h19 and h20) and is marginally inferior to PSOJADE (on functions h12 and h14) and to HEPSO (on function h17). At the same time, the suggested aDE and aPSO obtained slightly better or equal results compared with the others. (4) Composition functions (h21–h30): the proposed haDEPSO exhibits the best performance on six functions (h21, h24, h25, h27, h28 and h29), ties with HEPSO and PSOJADE on function h22, and is marginally inferior to HEPSO (on functions h23 and h26) and to SHADE (on function h30). Meanwhile, aDE and aPSO obtained marginally better, similar or slightly inferior results on a few functions compared with the other algorithms.
Moreover, all algorithms are individually ranked in Tables 15 and 16 based on their mean error values. From these tables, it can be concluded that haDEPSO, aDE and aPSO ranked 1st, 2nd and 5th (for TS-2) and 1st, 2nd and 3rd (for TS-3), respectively. Also, the average and overall ranks of the proposed algorithms versus the others are presented in Tables 15 and 16. It is clear from these rankings that the performance of the proposed algorithms is superior to that of the others. Finally, the proposed aDE, aPSO and haDEPSO produce lower standard deviations on most of the functions in both cases, which indicates their stability.
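The per-function ranking underlying Tables 15 and 16 can be sketched as follows. Treating tied means as sharing one rank, with the next distinct value receiving the following integer (dense ranking), is an assumption inferred from the tied entries in the tables; the numeric values in the example are illustrative:

```python
def dense_ranks(mean_errors):
    """Rank algorithms by mean error; equal means share a rank and the next
    distinct value gets the following integer (dense ranking, an assumption
    inferred from the tied ranks in Tables 15-16)."""
    distinct = sorted(set(mean_errors))
    rank_of = {v: i + 1 for i, v in enumerate(distinct)}
    return [rank_of[v] for v in mean_errors]

def average_ranks(rank_rows):
    """Average the per-function ranks column-wise to obtain each
    algorithm's mean rank, as in the 'Average' row of the tables."""
    n_funcs = len(rank_rows)
    return [sum(col) / n_funcs for col in zip(*rank_rows)]

# Illustrative usage: one row of mean errors for five algorithms.
ranks = dense_ranks([1.7e3, 1.4e2, 0.0, 3e-14, 0.0])  # -> [4, 3, 1, 2, 1]
```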
The convergence speed of the proposed aDE, aPSO and haDEPSO algorithms relative to the others is analysed on the 30, 50 and 100-D TS-2 and TS-3. For this, one function is taken from each category of TS-2 (g1, g6, g22 and g30) and TS-3 (h3, h9, h20 and h29). The convergence graphs for all these functions are visualized in Figs. 5a–l and 6a–l for TS-2 and TS-3, respectively. From these figures, it can be observed that aDE, aPSO and haDEPSO converge faster than the other algorithms in both cases.
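A convergence curve of the kind shown in Figs. 5 and 6 plots the best error found so far against the iteration count. A minimal sketch of recording such a trace (the error sequence below is illustrative, not measured data):

```python
def best_so_far_trace(errors_per_iteration):
    """Convert raw per-iteration error values into a monotone
    best-so-far trace, the quantity a convergence curve plots."""
    trace, best = [], float("inf")
    for e in errors_per_iteration:
        best = min(best, e)
        trace.append(best)
    return trace

# Illustrative usage with a toy error sequence.
trace = best_so_far_trace([5.0, 3.0, 4.0, 1.0])  # -> [5.0, 3.0, 3.0, 1.0]
```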
Apart from this, according to the guidelines of the CEC test suites, the algorithm complexity has been investigated on the 30, 50 and 100-D TS-2 and TS-3. Firstly, T0 (the time in seconds consumed by the compiler to calculate a specific function) is measured by executing the subsequent program.



Fig. 5  a–l Convergence curves for test suite (TS)-2 on different dimension (D)



Fig. 6  a–l Convergence curves for test suite (TS)-3 on different dimension (D)


Table 17  Algorithm complexity results on test suite (TS)-2 and test suite (TS)-3
on TS-2 on TS-3
Algorithm 30D 50D 100D Algorithm 30D 50D 100D

T0 0.3213 0.3213 0.3213 T0 0.0216 0.0216 0.0216


T1 0.5975 0.7854 1.2541 T1 0.5831 0.8971 1.3630
PSO 156.23 160.25 180.23 PSO 140.25 148.52 177.12
DE 140.23 145.23 156.42 DE 135.25 142.32 165.32
NDE 106.94 110.58 113.39 JADE 90.25 95.23 98.45
CIPDE 160.52 170.25 180.12 SHADE 99.44 110.25 120.32
NDLPSO 101.94 107.07 115.92 HEPSO 130.25 150.25 170.15
EPSO 167.01 190.25 190.23 BLPSO 167.01 190.25 190.23
SCA-PSO 120.53 135.23 155.23 HWGO 140.25 150.25 160.23
MBDE 121.32 112.12 138.32 PSOJADE 130.55 140.85 155.21
aPSO 85.72 98.56 105.23 aPSO 75.12 77.25 82.12
aDE 55.115 65.75 81.59 aDE 48.12 52.12 53.45
haDEPSO 33.23 40.01 52.45 haDEPSO 33.23 38.22 40.22

D: dimension, T0: time consumed by the compiler to calculate a specific function, T1: time consumed by the compiler to calculate a specific function for completing 200,000 evaluations

Fig. 7  Algorithm complexity for test suite (TS)-2 on different dimension (D)


Fig. 8  Algorithm complexity for test suite (TS)-3 on different dimension (D)

Fig. 9  a–c Average running time of different algorithms for test suite (TS)-2


Fig. 10  a–c Average running time of different algorithms for test suite (TS)-3

Table 18  Parameter setting for real world problems (RWPs)
Algorithms Reference Control parameter (Term; Values) Population size Stopping criterion Run
Traditional algorithms
PSO Kennedy and Eberhart (1995) w, C1 and C2 [0.543254–0.33362], 1.9460 and 1.8180 50 1000 30
DE Storn and Price (1997) F and CR 0.12470 and 0.58143 50 1000 30
ABC Karaboga and Basturk (2007) – – 50 1000 30
DE variants
JADE Zhang and Sanderson (2009) F and CR 0.50 and 5 30 10,000 100
SaDE Qin et al. (2009) F and CR N (0.5, 0.3) and N(CRm, 0.1) 30 10,000 100
CoDE Wang et al. (2011) F and CR 1.0 and 0.1 30 10,000 100
PSO variants
SLPSO Li et al. (2012) w 0.9–0.5 30 10,000 100
HCLPSO Lynn and Suganthan (2015) w and C 0.9–0.4 and 1.49445 30 10,000 100
MSPSO Xuewen et al. (2018) C1 and C2 1.49445 and 1.49445 30 10,000 100

Hybrid variants
DEPSO-2S Dor et al. (2012) w, C1, C2 & CR 0.72, 1.19, 1.19 and 0.5 – – 30
DPD Das and Parouha (2015b) FA, FC, CRA and CRC 0.5, 0.9, 0.9 and 0.9 100 200 30
MBDE Parouha and Das (2016a) Use of memory, swarm mutation and swarm crossover – 100 5000 50
Proposed algorithms
aDE Proposed – – 30 1000 30
aPSO wi , wf , c1i , c1f , c2i and c2f 0.4, 0.9, 0.5, 2.5, 2.5 and 0.5 30 1000 30
haDEPSO – – 30 1000 30

List of variables- w: inertia weight, C1: cognitive acceleration coefficient, C2: social acceleration coefficient, F: scaling vector, CR: crossover rate, C: acceleration coefficient,
FA and FC: mutation factor for group A and C, CRA and CRC: crossover weight for group A and C, wi and wf : minimum and maximum values of inertia weight, c1i and c1f :
minimum and maximum values of personal learning factors, c2i and c2f : minimum and maximum values of social learning factors.

Table 19  Simulation results on real world problems (RWPs)

RWPs Criteria Algorithm

Traditional algorithms DE variants PSO variants Hybrid variants Proposed algorithms

PSO DE ABC JADE SaDE CoDE SLPSO HCLPSO MSPSO DEPSO-2S DPD MBDE aDE aPSO haDEPSO

RWP-1 best 2.22e−06 1.78e−09 1.33e−10 1.05e−13 0.00e+00 0.00e+00 4.74e−11 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+000 0.00e+000 0.00e+000
worst 1.35e−06 9.09e−07 4.70e−07 6.34e−05 7.24e−06 7.94e−04 1.17e−01 1.18e−02 3.85e−02 7.17e−03 5.73e−03 4.30e−03 4.78e−014 6.18e−010 0.24e−016
mean 7.17e−03 5.73e−03 4.30e−03 1.87e−01 0.00e+00 1.02e+00 9.66e+00 3.32e+00 7.25e+00 2.07e+00 1.85e+00 2.06e+00 0.00e+000 0.00e+000 0.00e+000
std 2.06e−03 1.65e−03 1.23e−03 2.06e−03 4.16e−03 5.23e−02 3.56e−03 7.11e−02 6.91e−03 1.67e+000 3.11e–01 1.88e−05 1.69e−009 1.25e−007 2.06e−011
p value 0.9031 0.0527 0.0438 0.0527 0.4094 0.4570 0.0384 0.0582 0.0351 0.0351 0.0342 0.0489 0.0417 0.0327 0.0338
rank 11 10 9 12 1 2 8 6 7 5 3 4 1 1 1
RWP-2 best 1.61e−02 8.13e−03 2.50e−02 0.00e+00 2.39e−29 0.00e+00 0.00e+00 4.75e−21 0.00e+00 1.98e−29 0.00e+00 0.00e+00 0.00e+000 0.00e+000 0.00e+000
worst 1.17e−01 1.18e−02 3.85e−02 1.44e+05 8.14e−04 3.45e−02 8.13e+05 1.88e−07 1.93e−09 3.18e−02 4.65e−02 7.48e+05 1.18e−007 3.18e−006 1.23e−009
mean 1.17e−02 7.96e−03 2.21e−02 2.39e−17 7.67e−18 1.94e−17 1.86e−29 1.49e−15 0.00e+00 1.39e−10 4.55e− 12 4.21e+00 0.00e+000 3.17e−030 0.00e+000
std 4.43e+05 7.94e−04 3.14e−02 6.33e−06 7.89e−07 6.45e−06 7.16e+00 5.62e+00 1.39e−03 2.65e− 10 7.75e− 13 2.86e− 10 2.66e− 010 2.65e−009 8.65e− 016
p value 0.0342 0.0412 0.0327 0.0582 0.0527 0.0327 0.0419 0.0327 0.0518 0.0327 0.0497 0.0477 0.0518 0.0327 0.0321
rank 11 10 12 6 4 5 3 7 1 9 8 13 1 2 1
RWP-3 best 1.95e+00 1.88e+00 1.62e+00 1.02e+00 1.11e+00 5.00e+01 1.07e+00 7.59e−01 8.79e−01 1.56e+00 2.36e+05 3.14e+00 0.00e+000 0.00e+000 0.00e+000
worst 2.89e+00 2.77e+00 2.88e+00 1.44e+05 8.14e+00 3.45e+00 8.13e+05 2.23e+00 3.75e+00 3.80e+00 1.07e+00 2.57e+00 1.03e+000 1.17e+000 1.57e−005
mean 2.42e+00 2.44e+00 2.34e+00 1.17e+00 1.31e+00 6.73e+01 1.26e+00 1.08e+00 1.21e+00 3.00e− 01 1.05e− 01 1.04e− 01 0.00e+000 1.59e− 001 0.00e+000
std 0.20e+00 0.22e+00 0.24e+00 1.65e+00 2.54e+00 4.76e+01 1.97e+00 7.59e−01 2.13e+00 7.07e−002 2.15e−02 1.25e− 04 8.98e− 008 1.87e− 007 2.25e− 009
p value 0.0351 0.0342 0.0412 0.0518 0.0327 0.0627 0.0127 0.0517 0.0489 0.0417 0.0582 0.0327 0.0498 0.0419 0.0107
rank 13 13 10 7 9 14 8 6 7 5 3 2 1 4 1

The overall best values in each table are highlighted with boldface letters of the corresponding algorithms
std: standard deviation, RWPs: real world problems

Fig. 11  a–c Convergence graph for real world problems (RWPs)

for i = 1:1,000,000 do.

After this, T1, the time consumed by the compiler to calculate function g18 (TS-2) and h18 (TS-3) over 200,000 function evaluations, is computed. Further, T2 (the time consumed by each algorithm to complete 200,000 function evaluations on both functions) is evaluated five times and its mean value, denoted T̂2, is stored. Thereafter, (T̂2 − T1)/T0 is calculated for each algorithm and reported in Table 17. Also, the complexity charts of the proposed algorithms and the others are presented in Figs. 7 (for TS-2) and 8 (for TS-3). It can be observed from Table 17 and Figs. 7 and 8 that the time complexity of the proposed algorithms is significantly less than that of the other comparative algorithms.
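The complexity measurement described above can be sketched as follows. The operation sequence inside the T0 loop is reproduced from the commonly published CEC test-suite guidelines and should be treated as an assumption here, since the paper truncates the listing; the timing values in the usage example are illustrative:

```python
import math
import time

def measure_t0():
    """T0: wall-clock time for the reference loop prescribed by the
    CEC guidelines (operation sequence assumed from the CEC reports)."""
    start = time.perf_counter()
    for i in range(1, 1_000_001):
        x = 0.55 + i
        x = x + x; x = x / 2.0; x = x * x
        x = math.sqrt(x); x = math.log(x); x = math.exp(x)
        x = x / (x + 2.0)
    return time.perf_counter() - start

def algorithm_complexity(t0, t1, run_times):
    """(T̂2 − T1)/T0, where T̂2 is the mean of the five timed runs
    of 200,000 function evaluations."""
    t2_hat = sum(run_times) / len(run_times)
    return (t2_hat - t1) / t0

# Illustrative usage with the TS-2 constants from Table 17.
score = algorithm_complexity(0.3213, 0.5975, [50.0] * 5)
```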


Additionally, the average execution times of the proposed and compared algorithms over 30 independent runs on the 30, 50 and 100-D TS-2 and TS-3 are reported in Figs. 9a–c and 10a–c, respectively, through box plots drawn between the algorithms and the average execution time (in seconds). Each box plot shows the outliers and the maximum, minimum, mean and median execution times taken to solve TS-2 and TS-3. From these figures, it is clearly visible that the average execution time of the proposed algorithms is less than that of the others.
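The quantities such a box plot displays can be computed directly from the recorded run times. A minimal sketch using quartile cut points (the run-time values in the example are illustrative, not measured data):

```python
import statistics

def box_plot_summary(times):
    """Min, first quartile, median, third quartile, max and mean of the
    recorded run times, i.e. the quantities a box plot displays."""
    q1, med, q3 = statistics.quantiles(times, n=4)  # three quartile cut points
    return {
        "min": min(times), "q1": q1, "median": med,
        "q3": q3, "max": max(times), "mean": statistics.mean(times),
    }

# Illustrative usage with five toy run times (seconds).
summary = box_plot_summary([10.0, 12.0, 11.0, 13.0, 14.0])
```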
All in all, from all of the above numerical, statistical and graphical analyses it can be proclaimed that the proposed aDE, aPSO and haDEPSO perform competitively with, and often equal to or better than, the other compared algorithms. However, among the anticipated algorithms the proposed haDEPSO has the greatest efficiency.
(ii) On RWPs: Real world problems
The results of the proposed hybrid haDEPSO and its suggested components aDE and aPSO on 3 RWPs are compared with traditional algorithms: PSO (Kennedy and Eberhart 1995), DE (Storn and Price 1997) and ABC (Karaboga and Basturk 2007); DE variants: JADE (Zhang and Sanderson 2009), SaDE (Qin et al. 2009) and CoDE (Wang et al. 2011); PSO variants: SLPSO (Li et al. 2012), HCLPSO (Lynn and Suganthan 2015) and MSPSO (Xuewen et al. 2018); and hybrid variants: DEPSO-2S (Dor et al. 2012), DPD (Das and Parouha 2015b) and MBDE (Parouha and Das 2016a). The parameters of all comparative and proposed algorithms are quoted in Table 18.
The comparative results, in terms of the best, worst, mean, std., p-values and average ranking of the objective function values, are presented in Table 19.
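The best, worst, mean and std. entries of Table 19 aggregate the final objective values of the independent runs. A minimal sketch of that aggregation (the run values are illustrative; the p-values would additionally require a statistical test not shown here):

```python
import statistics

def summarize_runs(final_errors):
    """Best, worst, mean and sample standard deviation of the final
    objective values over independent runs, as reported per RWP."""
    return {
        "best": min(final_errors),
        "worst": max(final_errors),
        "mean": statistics.mean(final_errors),
        "std": statistics.stdev(final_errors),
    }

# Illustrative usage with three toy run results.
stats = summarize_runs([1.0, 3.0, 2.0])
```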
From Table 19, the proposed aDE, aPSO and haDEPSO produce slightly better and/or
equal results on all RWPs. In particular, aPSO obtains the best result on RWP-1 and
marginally inferior or equal results on the others. Additionally, the smaller std.
and small p-values of the proposed algorithms on most RWPs indicate their better
stability and reliability, respectively. Also, in order to analyse overall
performance, all algorithms are ranked by their mean values in Table 19. From this
table, it can be seen that haDEPSO and aDE rank 1st on all RWPs, whereas aPSO secures
the 1st, 2nd and 4th ranks on RWP-1, RWP-2 and RWP-3, respectively. Hence, based on
the ranking results, the proposed algorithms perform better than the others.
Moreover, the convergence graphs of the proposed and compared algorithms on the RWPs
are plotted in Fig. 11a–c. These figures clearly show that the proposed algorithms
converge faster than the others and are therefore computationally efficient.
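The mean-value ranking used in this comparison is straightforward to compute. The sketch below (using hypothetical mean values, not those of Table 19) ranks each algorithm on every problem, lower mean first, and then averages the ranks across problems:

```python
from statistics import fmean

def average_ranks(means):
    """means: {algorithm: list of mean objective values, one per problem}.
    Rank 1 = smallest mean on that problem; ties share the smaller rank.
    The average rank summarises performance across all problems."""
    algos = list(means)
    n_problems = len(next(iter(means.values())))
    per_problem = {a: [] for a in algos}
    for p in range(n_problems):
        # rank of each algorithm on problem p, by sorted position of its mean
        column = sorted(means[a][p] for a in algos)
        for a in algos:
            per_problem[a].append(column.index(means[a][p]) + 1)
    return {a: (per_problem[a], fmean(per_problem[a])) for a in algos}

# hypothetical mean objective values on three minimisation problems
table = {
    "haDEPSO": [0.10, 0.20, 0.30],
    "aDE":     [0.10, 0.20, 0.30],
    "aPSO":    [0.10, 0.25, 0.60],
    "PSO":     [0.50, 0.40, 0.50],
}
ranking = average_ranks(table)
```

With these illustrative numbers, haDEPSO and aDE share rank 1 on every problem (average rank 1.0), mirroring the kind of ordering reported in Table 19.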
In general, from all the above result analyses it can be stated that the proposed
aDE, aPSO and haDEPSO perform better than or equally to the others, with haDEPSO the
most capable of the three. Hence the proposed algorithms can effectively solve not
only benchmark problems but also real-world optimization problems.

7 Conclusion with future perspectives

In this paper, an advanced hybrid algorithm (haDEPSO) is designed to solve optimization
problems. It integrates the suggested advanced DE (aDE) and PSO (aPSO) to deal with the
premature convergence of classic DE and the stagnation of classic PSO, respectively.
The presented mutation strategy and crossover probability, along with the slightly
changed selection scheme of aDE, and the novel gradually varying inertia weight and
acceleration coefficient parameters of aPSO, consistently counteract these weaknesses.
At each iteration, the individual populations of haDEPSO are merged in a pre-defined
manner, which improves the global and local search ability and brings solutions of
higher quality. Additionally, in haDEPSO a particle can learn not only from the
globally best particle but also from the best particle of each component population,
due to the proper interplay of aDE and aPSO. As a whole, the mechanisms of
'maintaining diversity quality by aDE' and 'memorizing the best achieved value by
aPSO' make haDEPSO stronger, and the associated parameters of aDE and aPSO further
strengthen its effectiveness.
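The overall mechanism, a DE-style update and a PSO-style update acting on a shared population each iteration, can be illustrated with a minimal sketch. The code below uses the classic DE/rand/1/bin operators and a global-best PSO update with a linearly decreasing inertia weight purely as stand-ins; the paper's actual aDE mutation/selection rules and aPSO parameter schedules differ and are not reproduced here.

```python
import random

def sphere(x):
    """Simple benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def hybrid_de_pso_sketch(f, dim=10, pop=20, iters=200, seed=1):
    """Schematic DE+PSO hybrid on one shared population (not the authors' exact haDEPSO)."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                       # personal best positions
    g = min(X, key=f)[:]                        # global best position
    F, CR, c1, c2 = 0.5, 0.9, 1.5, 1.5          # illustrative parameter settings
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters               # inertia weight decreasing 0.9 -> 0.4
        for i in range(pop):
            # DE step: rand/1 mutation + binomial crossover + greedy selection
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)
            trial = [X[a][j] + F * (X[b][j] - X[c][j])
                     if (rng.random() < CR or j == jr) else X[i][j]
                     for j in range(dim)]
            if f(trial) < f(X[i]):
                X[i] = trial
            # PSO step: velocity/position update toward personal and global bests
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (g[j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
            if f(P[i]) < f(g):
                g = P[i][:]
    return g, f(g)

best, best_val = hybrid_de_pso_sketch(sphere)
```

Even this crude combination converges on the sphere function far beyond what random initialization achieves, which is the intuition behind pairing a diversity-preserving DE component with a memory-keeping PSO component.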
The proposed haDEPSO and its integrated components aDE and aPSO have been tested on 23
basic, 30 IEEE CEC 2014 and 30 IEEE CEC 2017 unconstrained benchmark functions plus 3
real-world optimization problems. The performance of the proposed algorithms was
compared with traditional DE and PSO, their existing variants and hybrids, and other
state-of-the-art algorithms. On all benchmark functions and real-world optimization
problems, the proposed algorithms show better or nearly similar capability in terms of
the minimum, mean and standard deviation of the results. Therefore, it can be concluded
that the presented algorithms enhance the exploration and exploitation capacities of
the basic DE and PSO algorithms, and that they give results competitive with those
obtained by the other state-of-the-art algorithms. Moreover, in view of its higher
solution accuracy, faster convergence speed and more stable scalability among all
compared and suggested algorithms, the hybrid haDEPSO is particularly competitive.
The investigation of the effects of parameters and the incorporation of newly developed
DE and PSO variants into the proposed hybrid framework will be considered in future
work. Applying algorithms based on the proposed framework to other complex real-world
problems is also a promising research direction. Further, the framework may be extended
to constrained and multiobjective optimization problems from academia and industry.

Acknowledgments The authors extend heartfelt thanks to the Editor and the Reviewers for their thoughtful
comments and constructive suggestions, which led to an improvement of the work.

Compliance with ethical standards


Conflict of interest The authors declare that they have no conflict of interest with respect to the authorship
or publication of this article.

References
Abderazek H, Sait S, Yildiz AR (2019a) Mechanical engineering design optimisation using novel adaptive
differential evolution algorithm. Int J Veh Des 80(2/3/4):285–329
Abderazek H, Sait SM, Yildiz AR (2019b) Optimal design of planetary gear train for automotive transmis-
sions using advanced meta-heuristics. Int J Veh Des 80(2/3/4):121–136
Abderazek H, Yıldız A, Mirjalili S (2020a) Comparison of recent optimization algorithms for design opti-
mization of a cam-follower mechanism. Knowl Based Syst 191:105237
Abderazek H, Yıldız BS, Yıldız AR, Albak EI, Sait SM, Bureerat S (2020b) Butterfly optimization algo-
rithm for optimum shape design of automobile suspension components. Mater Test 62(4):365–370

R. P. Parouha, P. Verma

Abualigah LMQ (2019) Feature selection and enhanced krill herd algorithm for text document clustering.
In: Studies in computational intelligence, vol 816. Springer, Boston, MA, USA, pp 1–7
Abualigah L (2020a) Multi-verse optimizer algorithm: a comprehensive survey of its results, variants, and
applications. Neural Comput Appl 32:12381–12401
Abualigah L (2020b) Group search optimizer: a nature-inspired meta-heuristic optimization algorithm with
its results, variants, and applications. Neural Comput Appl. https://doi.org/10.1007/s00521-020-05107-y
Abualigah L, Diabat A (2020a) A novel hybrid antlion optimization algorithm for multi-objective task
scheduling problems in cloud computing environments. Clust Comput. https://doi.org/10.1007/s10586-020-03075-5
Abualigah L, Diabat A (2020b) A comprehensive survey of the Grasshopper optimization algorithm: results,
variants, and applications. Neural Comput Appl 32:15533–15556
Abualigah LMQ, Hanandeh ES (2015) Applying genetic algorithms to information retrieval using vector
space model. Int J Comput Sci Eng Appl 5(1):19–28
Abualigah LM, Khader AT (2017) Unsupervised text feature selection technique based on hybrid par-
ticle swarm optimization algorithm with genetic operators for the text clustering. J Supercomput
73(11):4773–4795
Abualigah LM, Khader AT, Hanandeh ES (2017a) A new feature selection method to improve the document
clustering using particle swarm optimization algorithm. J Comput Sci 25:456–466
Abualigah LM, Khader AT, Hanandeh ES, Gandomi AH (2017b) A novel hybridization strategy for krill
herd algorithm applied to clustering techniques. Appl Soft Comput 60:423–435
Abualigah LM, Khader AT, Hanandeh ES (2018a) A combination of objective functions and hybrid krill
herd algorithm for text document clustering analysis. Eng Appl Artif Intell 73:111–125
Abualigah LM, Khader AT, Hanandeh ES (2018b) Hybrid clustering analysis using improved krill herd
algorithm. Appl Intell 48:4047–4071
Ali M (2007) Differential Evolution with preferential crossover. Eur J Oper Res 181(3):1137–1147
Amjady N, Sharifzadeh H (2010) Solution of non-convex economic dispatch problem considering valve
loading effect by a new modified differential evolution algorithm. Int J Electr Power Energy Syst
32(8):893–903
Ang KM, Lim WH, Isa NAM, Tiang SS, Wong CH (2020) A constrained multi-swarm particle swarm opti-
mization without velocity for constrained optimization problems. Expert Syst Appl 140:1–23
Awad N, Ali M, Liang J, Qu B, Suganthan P (2016) Problem definitions and evaluation criteria for the
CEC 2017 special session and competition on single objective real-parameter numerical optimization,
Technical report
Aye CM, Pholdee N, Yildiz AR, Bureerat S, Sait SM (2019) Multi-surrogate-assisted metaheuristics for
crashworthiness optimization. Int J Veh Des 80(2–4):223–240
Azadani EN, Hosseinian S, Moradzadeh B (2010) Generation and reserve dispatch in a competitive market
using constrained particle swarm optimization. Int J Electr Power Energy Syst 32(1):79–86
Bansal JC, Sharma H, Jadon SS, Clerc M (2014) Spider monkey optimization algorithm for numerical
optimization. Memet Comput 6(1):31–47
Ben GN (2020) An accelerated differential evolution algorithm with new operators for multi-damage detec-
tion in plate-like structures. Appl Math Model 80:366–383
Brest J, Greiner S, Boskovic B, Mernik M, Zumer V (2006) Self-adapting control parameters in differ-
ential evolution: a comparative study on numerical benchmark problems. IEEE Trans Evol Comput
10(6):646–657
Cai Y, Wang J (2013) Differential evolution with neighborhood and direction information for numerical
optimization. IEEE Trans Cybern 43(6):2202–2215
Cai XJ, Cui Y, Tan Y (2009) Predicted modified PSO with time varying accelerator coefficients. Int J Bioin-
spir Comput 1(1/2):50–60
Caponio A, Neri F, Tirronen V (2009) Superfit control adaption in memetic differential evolution frame-
works. Soft Comput 13(8–9):811–831
Champasak P, Panagant N, Pholdee N, Bureerata S, Yildiz A (2020) Self-adaptive many objective meta-
heuristic based on decomposition for many-objective conceptual design of a fixed wing unmanned
aerial vehicle. Aerosp Sci Technol 100:105783
Chegini SN, Bagheri A, Najafi F (2018) A new hybrid PSO based on sine cosine algorithm and Levy flight
for solving optimization problems. Appl Soft Comput 73:697–726
Chen X, Tianfield H, Mei C, Du W, Liu G (2017) Biogeography-based learning particle swarm optimiza-
tion. Soft Comput 21:7519–7541
Chen Y, Li L, Peng H, Xiao J, Wu Q (2018a) Dynamic multi-swarm differential learning particle swarm
optimizer. Swarm Evolut Comput 39:209–221


Chen Y, Li L, Xiao J, Yang Y, Liang J, Li T (2018b) Particle swarm optimizer with crossover operation. Eng
Appl Artif Intell 70:159–169
Cuevas E, Cienfuegos M, Zaldívar D, Pérez-Cisneros M (2013) A swarm optimization algorithm inspired in
the behavior of the social-spider. Expert Syst Appl 40(16):6374–6384
Das KN, Parouha RP (2015) An ideal tri-population approach for unconstrained optimization and applica-
tions. Appl Math Comput 256:666–701
Dash J, Dam B, Swain R (2020) Design and implementation of sharp edge FIR filters using hybrid differen-
tial evolution particle swarm optimization. AEU Int J Electron Commun 114:153019
de Castro LN, Von Zuben FJ (2000) The clonal selection algorithm with engineering applications. Proc
GECCO 2000:36–39
Do DTT, Lee S, Lee J (2016) A modified differential evolution algorithm for tensegrity structures. Compos
Struct 158:11–19
Dor AE, Clerc M, Siarry P (2012) Hybridization of differential evolution and particle swarm optimization in
a new algorithm DEPSO-2S. Swarm Evolut Comput 7269:57–65
Du SY, Liu ZG (2020) Hybridizing particle swarm optimization with JADE for continuous optimization.
Multimed Tools Appl 79:4619–4636
Du H, Wu X, Zhuang J (2006) Small-world optimization algorithm for function optimization. In: Jao L et al
(eds) Advances in natural computation. Springer, Heidelberg, pp 264–273
Epitropakis MG, Plagianakos VP, Vrahatis MN (2012) Evolving cognitive and social experience in particle
swarm optimization through differential evolution: a hybrid approach. Inf Sci 216:50–92
Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm—a novel metaheuris-
tic optimization method for solving constrained engineering optimization problems. Comput Struct
110–111:151–166
Espitia HE, Sofrony JI (2018) Statistical analysis for vortex particle swarm optimization. Appl Soft Comput
67:370–386
Eusuff M, Lansey KE (2003) Optimization of water distribution network design using the shuffled frog leap-
ing algorithm. J Water Resour Plann Manag 129(3):210–225
Famelis IT, Alexandridis A, Tsitouras C (2017) A highly accurate differential evolution–particle swarm opti-
mization algorithm for the construction of initial value problem solvers. Eng Optim 50(8):1364–1379
Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2019) Equilibrium optimizer: a novel optimization
algorithm. Knowl Based Syst 191:1–34
Fu H, Ouyang D, Xu J (2011) A self-adaptive differential evolution algorithm for binary CSPs. Comput
Math Appl 62(7):2712–2718
Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear
Sci Numer Simul 17(12):4831–4845
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Sim-
ulation 76(2):60–68
Ghosh A, Das S, Chowdhury A, Giri R (2011) An improved differential evolution algorithm with fitness-
based adaptation of the control parameters. Inf Sci 181:3749–3765
Goldberg DE, Holland JH (1988) Genetic algorithms and machine learning. Mach Learn 3:95–99
Gong W, Cai Z (2013) Differential evolution with ranking-based mutation operators. IEEE Trans Cybern
43(6):2066–2081
Gui L, Xia X, Yu F, Wu H, Wu R, Wei B, He G (2019) A multi-role based differential evolution. Swarm
Evolut Comput 50:1–15
Guo SM, Yang CC (2015) Enhancing differential evolution utilizing eigenvector-based crossover operator.
IEEE Trans Evol Comput 19(1):31–49
Hakli H, Uguz H (2014) A novel particle swarm optimization algorithm with levy flight. Appl Soft Comput
23:333–345
Hamza F, Abderazek H, Lakhdar S, Ferhat D, Yıldız AR (2018) Optimum design of cam-roller follower
mechanism using a new evolutionary algorithm. Int J Adv Manuf Technol 99(5–8):1267–1282
Hao Z-F, Gua G-H, Huang H (2007) A particle swarm optimization algorithm with differential evolution.
In: Proceedings of sixth international conference on machine learning and cybernetics. pp 1031–1035
Havens TC, Spain CJ, Salmon NG, Keller JM (2008) Roach infestation optimization. In: Proceedings of the
IEEE swarm intelligence symposium. pp 1–7
He Q, Han C (2006) An improved particle swarm optimization algorithm with disturbance term. Comput
Intell Bioinform 4115:100–108
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algo-
rithm and applications. Future Gener Comput Syst 97:849–872


Hendtlass T (2001) A Combined Swarm differential evolution algorithm for optimization problems. In: Pro-
ceedings of 14th international conference on industrial and engineering applications of artificial intel-
ligence and expert systems. Lecture notes in computer science, vol 2070. pp 11–18
Hosseini SA, Hajipour A, Tavakoli H (2019) Design and optimization of a CMOS power amplifier using
innovative fractional-order particle swarm optimization. Appl Soft Comput 85:1–10
Hu L, Hua W, Lei W, Xiantian Z (2020) A modified Boltzmann annealing differential evolution algo-
rithm for inversion of directional resistivity logging-while-drilling measurements. J Petrol Sci Eng
180:106916
Huang H, Jiang L, Yu X, Xie D (2018) Hypercube-based crowding differential evolution with neighborhood
mutation for multimodal optimization. Int J Swarm Intell Res 9(2):15–27
Isiet M, Gadala M (2019) Self-adapting control parameters in particle swarm optimization. Appl Soft Com-
put 83:1–24
Islam SM, Das S, Ghosh S, Roy S, Suganthan PN (2012) An adaptive differential evolution algorithm with
novel mutation and crossover strategies for global numerical optimization. IEEE Trans Syst Man
Cybern Syst 42(2):482–500
Jana ND, Sil J (2016) Interleaving of particle swarm optimization and differential evolution algorithm for
global optimization. Int J Comput Appl 38(2–3):116–133
Jie J, Zeng J, Han C, Wang Q (2008) Knowledge-based cooperative particle swarm optimization. Appl Math
Comput 205(2):861–873
Jordehi AR (2015) Enhanced leader PSO: a new PSO variant for solving global optimisation problems.
Appl Soft Comput 26:401–417
Kang Q, He H (2011) A novel discrete particle swarm optimization algorithm for meta-task assignment in
heterogeneous computing systems. Microprocess Microsyst 35(1):10–17
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: arti-
ficial bee colony (abc) algorithm. J Global Optim 39(3):459–471
Karen I, Yildiz AR, Kaya N, Ozturk N, Ozturk F (2006) Hybrid approach for genetic algorithm and Tagu-
chi’s method based design optimization in the automotive industry. Int J Prod Res 44(22):4897–4914
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the 1995 IEEE international
conference on neural networks, vol 4. IEEE, pp 1942–1948
Khajeh A, Ghasemi MR, Arab HG (2019) Modified particle swarm optimization with novel population ini-
tialization. J Inf Optim Sci 40(6):1167–1179
Kiran MS (2017) Particle swarm optimization with a new update mechanism. Appl Soft Comput
60:670–678
Kohler M, Vellasco MMBR, Tanscheit R (2019) PSO+: A new particle swarm optimization algorithm for
constrained problems. Appl Soft Comput 85:1–26
Lanlan K, Ruey SC, Wenliang C, Yeh C (2020) Non-inertial opposition-based particle swarm optimization
and its theoretical analysis for deep learning applications. Appl Soft Comput 88:1–10
Li X, Yin M (2014) Modified differential evolution with self-adaptive parameters method. J Combin Optim
31(2):546–576
Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization prob-
lems. IEEE Trans Syst Man Cybern 42(3):627–646
Li S, Gu Q, Gong W, Ning B (2020) An enhanced adaptive differential evolution algorithm for parameter
extraction of photovoltaic models. Energy Convers Manag 205:1–16
Liang J, Qu B, Suganthan P (2013) Problem definitions and evaluation criteria for the CEC 2014 special
session and competition on single objective real-parameter numerical optimization. Technical report,
Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang
Technological University, Singapore
Liu G, Guo Z (2016) A clustering-based differential evolution with random-based sampling and Gaussian
sampling. Neurocomputing 205:229–246
Liu P, Liu J (2017) Multi-leader PSO: a new PSO variant for solving global optimization problems. Appl
Soft Comput 61:256–263
Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for con-
strained numerical and engineering optimization. Appl Soft Comput 10(2):629–640
Liu Z-G, Ji X-H, Yang Y (2019) Hierarchical differential evolution algorithm combined with multi-cross
operation. Expert Syst Appl 130:276–292
Lynn N, Suganthan P (2015) Heterogeneous comprehensive learning particle swarm optimization with
enhanced exploration and exploitation. Swarm Evolut Comput 24:11–24
Lynn N, Suganthan PN (2017) Ensemble particle swarm optimizer. Appl Soft Comput 55:533–548
Mahmoodabadi MJ, Mottaghi ZS, Bagheri A (2014) High exploration particle swarm optimization. J Inf Sci
273:101–111


Mallipeddi R, Lee M (2015) An evolving surrogate model-based differential evolution algorithm. Appl Soft
Comput 34:770–787
Mao B, Xie Z, Wang Y, Handroos H, Wu H (2018) A hybrid strategy of differential evolution and modi-
fied particle swarm optimization for numerical solution of a parallel manipulator. Math Probl Eng
2018:9815469
Marzbali AG (2020) A novel nature-inspired meta-heuristic algorithm for optimization: bear smell search
algorithm. Soft Comput 24:13003–13035
Mehrabian AR, Lucas C (2006) A novel numerical optimization algorithm inspired from weed colonization.
Ecol Inform 1(4):355–366
Meng Z, Li G, Wang X, Sait SM, Yıldız AR (2020) A comparative study of metaheuristic algorithms for
reliability-based design optimization problems. Arch Comput Methods Eng. https://doi.org/10.1007/s11831-020-09443-z
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl
Based Syst 89:228–249
Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-
objective, discrete and multi-objective problems. Neural Comput Appl 27(4):1053–1073
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014a) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mirjalili SA, Lewis A, Sadiq AS (2014b) Autonomous particles groups for particle swarm optimization.
Arab J Sci Eng 39:4683–4697
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: bio-
inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Mishra KK, Bisht H, Singh T, Chang V (2018) A direction aware particle swarm optimization with sensitive
swarm leader. Big Data Res 14:57–67
Mohamed AW (2015) An improved differential evolution algorithm with triangular mutation for global
numerical optimization. Comput Ind Eng 85:359–375
Murase H, Wadano A (1998) Photosynthetic algorithm for machine learning and TSP. IFAC Proc Vol
31(12):19–24
Nasir M, Das S, Maity D, Sengupta S, Halder U, Suganthan PN (2012) A dynamic neighborhood learning
based particle swarm optimizer for global numerical optimization. Inf Sci 209:16–36
Nenavath H, Jatoth RK, Das S (2018) A synergy of the sine-cosine algorithm and particle swarm optimizer
for improved global optimization and object tracking. Swarm Evolut Comput 43:1–30
Ngoa TT, Sadollahb A, Kima JH (2016) A cooperative particle swarm optimizer with stochastic movements
for computationally expensive numerical optimization problems. J Comput Sci 13:68–82
Niu B, Li L (2008) A novel PSO-DE-based hybrid algorithm for global optimization. Lect Notes Comput
Sci 5227:156–163
Nwankwor E, Nagar AK, Reid DC (2013) Hybrid differential evolution and particle swarm optimization for
optimal well placement. Comput Geosci 17(2):249–268
Ozkaya H, Yıldız M, Yıldız AR, Bureerat S, Yıldız BS, Sait SM (2020) The equilibrium optimization algo-
rithm and the response surface-based metamodel for optimal structural design of vehicle components.
Mater Test 62(5):492–496
Panagant N, Pholdee N, Wansasueb K, Bureerat S, Yildiz AR, Sait S (2019) Comparison of recent algo-
rithms for many-objective optimisation of an automotive floor-frame. Int J Veh Des 80(2/3/4):176–208
Panagant N, Pholdee N, Bureerat S, Yıldız AR, Sait SM (2020) Seagull optimization algorithm for solving
real-world design optimization problems. Mater Test 6(62):640–644
Pant M, Thangaraj R, Abraham A (2011) A new hybrid meta-heuristic for solving global optimization prob-
lems. New Math Nat Comput 7(3):363–381
Parouha RP, Das KN (2015) An efficient hybrid technique for numerical optimization and applications.
Comput Ind Eng 83:193–216
Parouha RP, Das KN (2016a) A robust memory based hybrid differential evolution for continuous optimiza-
tion problem. Knowl Based Syst 103:118–131
Parouha RP, Das KN (2016b) DPD: an intelligent parallel hybrid algorithm for economic load dispatch
problems with various practical constraints. Expert Syst Appl 63:295–309
Patel VK, Savsani VJ (2015) Heat transfers search a novel optimization algorithm. Inf Sci 324:217–246
Pierezan J, Coelho LDS (2018) Coyote optimization algorithm: a new metaheuristic for global optimiza-
tion problems. In: 2018 IEEE congress on evolutionary computation (CEC). pp 1–8
Pinto P, Runkler TA, Sousa JM (2005) Wasp swarm optimization of logistic systems. In: Ribeiro B,
Albrecht RF, Dobnikar A, Pearson DW, Steele NC (eds) Adaptive and natural computing algo-
rithms. Springer, Vienna, pp 264–267


Prabha S, Yadav R (2019) Differential evolution with biological-based mutation operator. Eng Sci Tech-
nol Int J 23(2):253–263
Qin AK, Suganthan PN (2005) Self-adaptive differential evolution algorithm for numerical optimization.
IEEE Congr Evolut Comput 1782:1785–1791
Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algorithm with strategy adaptation for
global numerical optimization. IEEE Trans Evol Comput 13(2):398–417
Qiu X, Tan KC, Xu J-X (2017) Multiple exponential recombination for differential evolution. IEEE
Trans Cybern 47(4):995–1006
Qiu X, Xu J-X, Xu Y, Tan KC (2018) A new differential evolution algorithm for minimax optimization
in robust design. IEEE Trans Cybern 48(5):1355–1368
Rahnamayan S, Tizhoosh H, Salama M (2008) Opposition-based differential evolution. IEEE Trans Evol
Comput 12(1):64–79
Rao RV, Savsani VJ, Vakharia DP (2011) Teaching-learning-based optimization: a novel method for
constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci
179(13):2232–2248
Sahu BK, Pati S, Panda S (2014) Hybrid differential evolution particle swarm optimisation optimised
fuzzy proportional–integral derivative controller for automatic generation control of intercon-
nected power system. IET Gener Transm Distrib 8(11):1789–1800
Salehpour M, Jamali A, Bagheri A, Nariman-zadeh N (2017) A new adaptive differential evolution opti-
mization algorithm based on fuzzy inference system. Eng Sci Technol 20(2):587–597
Sarangkum R, Wansasueb K, Panagant N, Pholdee N, Bureerat S, Yildiz AR, Sait SM (2019) Automated
design of aircraft fuselage stiffeners using multiobjective evolutionary optimisation. Int J Veh Des
80(2/3/4):162–175
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv
Eng Softw 105:30–47
Seyedmahmoudian M, Rahmani R, Mekhilef S, Than Oo AM, Stojcevski A, Soon TK, Ghandhari AS
(2015) Simulation and hardware implementation of new maximum power point tracking technique
for partially shaded PV system using hybrid DEPSO method. Trans Sustain Energy 6(3):850–862
Shabani A, Asgarian B, Gharebaghi SA, Salido MA, Giret A (2019) A new optimization algorithm
based on search and rescue operations. Math Probl Eng 2019:2482543
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Simpson AR, Dandy GC, Murphy LJ (1994) Genetic algorithms compared to other techniques for pipe
optimization. J Water Resour Plann Manag 120(4):423–443
Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization
over continuous spaces. J Glob Optim 11:341–359
Sun J, Fang W, Wu X, Palade V, Xu W (2012) Quantum behaved particle swarm optimization: analysis
of individual particle behavior and parameter selection. Evolut Comput 20(3):349–393
Talbi H, Batouche M (2004) Hybrid particle swarm with differential evolution for multimodal image
registration. Proc IEEE Int Conf Ind Technol 3:1567–1573
Tanabe R, Fukunaga A (2013) Success-history based parameter adaptation for differential evolution. In:
IEEE congress on evolutionary computation. pp 71–78
Tang B, Zhu Z, Luo J (2016) Hybridizing particle swarm optimization and differential evolution for the
mobile robot global path planning. Int J Adv Rob Syst 13(3):1–17
Tang B, Xiang K, Pang M (2018) An integrated particle swarm optimization approach hybridizing a new
self-adaptive particle swarm optimization with a modified differential evolution. Neural Comput
Appl 32:4849–4883
Tanweer MR, Suresh S, Sundararajan N (2016) Dynamic mentoring and self-regulation based particle
swarm optimization algorithm for solving complex real-world optimization problems. Inf Sci
326:1–24
Tatsumi K, Ibuki T, Tanino T (2013) A chaotic particle swarm optimization exploiting a virtual quar-
tic objective function based on the personal and global best solutions. Appl Math Comput
219(17):8991–9011
Tian MN, Gao XB (2019) Differential evolution with neighborhood-based adaptive evolution mecha-
nism for numerical optimization. Inf Sci 478:422–448
Too J, Abdullah AR, Saad NM (2019) Hybrid binary particle swarm optimization differential evolution-
based feature selection for EMG signals classification. Axioms 8(3):79
Wang Y, Cai Z (2009) A hybrid multi-swarm particle swarm optimization to solve constrained optimization
problems. Front Comput Sci 3:38–52


Wang Y, Cai ZZ, Zhang QF (2011) Differential evolution with composite trial vector generation strategies
and control parameters. IEEE Trans Evol Comput 15(1):55–66
Wedde HF, Farooq M, Zhang Y (2004) BeeHive: An efficient fault-tolerant routing algorithm inspired by
honey bee behavior. In: Dorigo M, Birattari M, Blum C, Gambardella LM, Mondada F, Stützle T
(eds) Ant colony optimization and swarm intelligence, vol 3172. Springer, Berlin, Heidelberg, pp
83–94
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput
1(1):67–82
Xia X, Gui L, He G, Xie C, Wei B, Xing Y, Tang Y (2018) A hybrid optimizer based on firefly algorithm
and particle swarm optimization algorithm. J Comput Sci 26:488–500
Xin B, Chen J, Peng Z, Pan F (2010) An adaptive hybrid optimizer based on particle swarm and differential
evolution for global optimization. Sci China Inf Sci 53(5):980–989
Xiong H, Qiu B, Liu J (2020) An improved multi-swarm particle swarm optimizer for optimizing the elec-
tric field distribution of multichannel transcranial magnetic stimulation. Artif Intell Med 104:101790
Xuewen X, Ling G, Hui ZZ (2018) A multi-swarm particle swarm optimization algorithm based on dynami-
cal topology and purposeful. Appl Soft Comput 67:126–140
Yan B, Zhao Z, Zhou Y, Yuan W, Li J, Wu J, Cheng D (2017) A particle swarm optimization algorithm with
random learning mechanism and levy flight for optimization of atomic clusters. Comput Phys Com-
mun 219:79–86
Yang X-S (2009) Firefly algorithms for multimodal optimization. In: Watanabe O, Zeugmann T (eds)
Stochastic algorithms: foundations and applications. Lecture notes in computer science, vol 5792.
Springer, Berlin, pp 169–178
Yang X-S (2010) A new metaheuristic bat-inspired algorithm. In: González J, Pelta D, Cruz C, Terrazas G,
Krasnogor N (eds) Nature inspired cooperative strategies for optimization (NICSO 2010), studies in
computational intelligence, vol 284. Springer, Berlin Heidelberg, pp 65–74
Yang XS, Deb S (2009) Cuckoo search via lévy flights. In: IEEE world congress on nature and biologically
inspired computing 2009 (NaBIC 2009). pp 210–214
Yang X, Yuan J, Mao H (2007) A modified particle swarm optimizer with dynamic adaptation. Appl Math
Comput 189:1205–1213
Yang M, Li C, Cai Z, Guan J (2015) Differential evolution with auto-enhanced population diversity. IEEE
Trans Cybern 45(2):302–315
Yang X, Li J, Peng X (2019) An improved differential evolution algorithm for learning high-fidelity quan-
tum controls. Sci Bull 64(19):1402–1408
Yıldız BS (2017a) A comparative investigation of eight recent population-based optimisation algorithms for
mechanical and structural design problems. Int J Veh Des 73(1):208–218
Yıldız BS (2017b) Natural frequency optimization of vehicle components using the interior search algo-
rithm. Mater Test 59(5):456–458
Yıldız AR (2018) Comparison of grey wolf, whale, water cycle, ant lion and sine-cosine algorithms for the
optimization of a vehicle engine connecting rod. Mater Test 60(3):311–315
Yıldız AR (2019) A novel hybrid whale nelder mead algorithm for optimization of design and manufactur-
ing problems. Int J Adv Manuf Technol 105:5091–5104
Yıldız BS (2020a) The spotted hyena optimization algorithm for weight-reduction of automobile brake com-
ponents. Mater Test 62(4):383–388
Yıldız BS (2020b) The mine blast algorithm for the structural optimization of electrical vehicle compo-
nents. Mater Test 62(5):497–501
Yıldız BS (2020c) Optimal design of automobile structures using moth-flame optimization algorithm and
response surface methodology. Mater Test 62(4):371–377
Yıldız AR, Yıldız BS (2019) The Harris hawks optimization algorithm, salp swarm algorithm, grasshopper
optimization algorithm and dragonfly algorithm for structural design optimization of vehicle compo-
nents. Mater Test 8(61):744–748
Yıldız AR, Mirjalili S, Yıldız BS, Sait SM, Bureerata S, Pholdee N (2019a) A new hybrid harris hawks
Nelder–Mead optimization algorithm for solving design and manufacturing problems. Mater Test
8(61):735–743
Yıldız AR, Mirjalili S, Yıldız BS, Sait SM, Li X (2019b) The Harris hawks, grasshopper and multi-verse
optimization algorithms for the selection of optimal machining parameters in manufacturing opera-
tions. Mater Test 61(8):725–733
Yıldız AR, Abderazek H, Mirjalili S (2020a) A comparative study of recent non-traditional methods for
mechanical design optimization. Arch Comput Methods Eng 27:1031–1048
R. P. Parouha, P. Verma
Yıldız AR, Bureerat S, Kurtulus E, Sadiq S (2020b) A novel hybrid Harris hawks-simulated annealing
algorithm and RBF-based metamodel for design optimization of highway guardrails. Mater Test
62(3):251–260
Yıldız AR, Pholdee N, Bureerat S, Sadiq S (2020c) Sine-cosine optimization algorithm for the conceptual design of automobile components. Mater Test 62(7):744–748
Yıldız BS, Yıldız AR, Pholdee N, Bureerat S, Sait SM, Patel V (2020d) The Henry gas solubility optimization algorithm for optimum structural design of automobile brake components. Mater Test 62(3):261–264
Yu X, Cao J, Shan H, Zhu L, Guo J (2014) An adaptive hybrid algorithm based on particle swarm optimiza-
tion and differential evolution for global optimization. Sci World J 2014:215472
Yu H, Tan Y, Zeng J, Sun C, Jin Y (2018) Surrogate-assisted hierarchical particle swarm optimization. Inf
Sci 454–455:59–72
Zhang H, Li X (2018) Enhanced differential evolution with modified parent selection technique for numeri-
cal optimization. Int J Comput Sci Eng 17(1):98
Zhang J, Sanderson AC (2009) JADE: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
Zhang WJ, Xie XF (2003) DEPSO: hybrid particle swarm with differential evolution operator. In: Proceed-
ings of the IEEE international conference on systems, man and cybernetics, Washington DC, USA. pp
3816–3821
Zhang W, Ma D, Wei J-J, Liang H-F (2014) A parameter selection strategy for particle swarm optimization
based on particle positions. Expert Syst Appl 41(7):3576–3584
Zhao X, Zhang Z, Xie Y, Meng J (2020) Economic-environmental dispatch of microgrid based on improved
quantum particle swarm optimization. Energy 195:117014
Zheng YJ (2015) Water wave optimization: a new nature-inspired metaheuristic. Comput Oper Res 55:1–11
Zheng LM, Zhang SX, Tang KS, Zheng SY (2017) Differential evolution powered by collective informa-
tion. Inf Sci 399:13–29
Zhu A, Xu C, Li Z, Wu J, Liu Z (2015) Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J Syst Eng Electron 26:317–328

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
