Article
A Reward Population-Based Differential Genetic Harmony
Search Algorithm
Yang Zhang 1,∗ , Jiacheng Li 2 and Lei Li 1
1 Department of Applied Informatics, Faculty of Science and Engineering, Hosei University,
Tokyo 184-8584, Japan; [email protected]
2 Department of Electrical, Electronics and Information Engineering, Faculty of Engineering,
Kanagawa University, Yokohama 221-8686, Japan; [email protected]
* Correspondence: [email protected]; Tel.: +81-70-4134-6022
Abstract: To overcome the shortcomings of the harmony search algorithm, such as its slow con-
vergence rate and poor global search ability, a reward population-based differential genetic har-
mony search algorithm is proposed. In this algorithm, a population is divided into four ordinary
sub-populations and one reward sub-population, for each of which the evolution strategy of the
differential genetic harmony search is used. After the evolution, the population with the optimal
average fitness is combined with the reward population to produce a new reward population. In the experiments, tests were first conducted to determine the values of the harmony memory size (HMS) and the harmony memory consideration rate (HMCR), followed by an analysis of the effect of these values on the performance of the proposed algorithm. Then, six benchmark functions were selected for the experiments, and a comparison was made among the results of the standard harmony search algorithm, the reward population harmony search algorithm, the differential genetic harmony search algorithm, and the reward population-based differential genetic harmony search algorithm.
The result suggests that the reward population-based differential genetic harmony search algorithm
has the merits of a strong global search ability, high solving accuracy, and satisfactory stability.
Keywords: harmony search algorithm; reward population; differential evolution algorithm; mutation strategy; genetic algorithm
1. Introduction
With the development of big data, cloud computing, artificial intelligence, and other technologies, the data size of networks has witnessed fast growth, and as a result, it has become more common to solve optimization problems related to traffic networks, such as vehicle route planning, spacecraft design, and wireless sensor layouts [1]. Usually, these optimization problems can be expressed in mathematical programming form. For general, simple optimization problems, mathematical programming and iterative algorithms may be used; for complex, large optimization problems, however, it is quite difficult to find a global or approximately optimal solution within a reasonable time with these traditional methods. Therefore, to solve complex, large optimization problems smoothly, heuristic algorithms were proposed and have received increasing attention in recent decades [2].
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity [3,4]. Common metaheuristic algorithms include the genetic algorithm, the simulated annealing algorithm, and the particle swarm optimization algorithm [5,6]. Unlike metaheuristics, which are independent of the problem, heuristic algorithms depend on a specific problem. Heuristic algorithms are essentially trial-and-error methods [7]: in the process of seeking the optimal solution, they can change their search paths according to individual or global experience. When it becomes impossible or difficult to find the optimal solution of a problem, a heuristic algorithm
is an efficient method to obtain the feasible solution [8]. Harmony search is a heuristic
global search algorithm introduced in 2001 by Zong Woo Geem, Joong Hoon Kim, and
G. V. Loganathan [9]. Inspired by the improvisation of musicians, it simulates musicians’
improvisation process to achieve subtle harmonies. The HS algorithm is characterized by
merits such as a simple structure, few parameters, a high convergence rate, and strong robustness, which facilitate its application in optimizing a system's continuous
and discrete variables [10]. We hope that the HS algorithm can effectively solve some
optimization problems. In recent years, scholars have constantly improved the algorithm
and made some progress [11–14].
Mahdavi et al. used a new method for generating new solution vectors and set the
pitch adjustment rate and distance bandwidth as dynamic changes, improving the accuracy
and convergence rate of the HS algorithm [15]. Khalili et al. changed all key parameters
of the HS algorithm into a dynamic mode without predefining any parameters. This
algorithm is referred to as the global dynamic harmony search, and such modification
imbued the algorithm with outstanding performance on unimodal functions and multi-
modal functions [16]. In view of the drawbacks of HS such as a low convergence rate and
solving accuracy in solving complex problems, Ouyang et al. proposed an adaptive global
modified harmony search (MHS) that fuses local searches [17]. Pan et al. proposed the
concepts of the maximum harmony memory consideration rate, the minimum harmony
memory consideration rate, the maximum regulation bandwidth, the minimum regulation
bandwidth, etc. on the basis of the standard HS algorithm, and they achieved an automatic
regulation mechanism for the relevant parameters of the harmony search algorithm by
improving the current harmony memory consideration rate for each iteration and changing
the bandwidth with the number of iterations [18]. To improve the efficiency of the HS algo-
rithm and make up for its drawback of easily getting into local search, Zhang Kangli et al.
improved the generation of the initial solution vector’s harmony memory and proposed
an improved harmony algorithm, ALHS [19]. Given the low evolution efficiency of single
populations and fewer studies on multi-population improvement, S. Z. Zhao combined
dynamic multi-swarm particle swarm optimization (DMS-PSO) with the HS algorithm and
simplified it into the dynamic multi-swarm particle swarm optimization harmony search
(DMS-PSO-HS). These sub-populations are frequently re-combined, and there is informa-
tion exchange among particles in the entire population. Compared with DMS-PSO and HS, DMS-PSO-HS is improved in terms of multi-modal and composition test problems [20].
As to the problems with the HS algorithm in solving high-dimensional multi-objective
optimization problems, Zhang proposed an improved differential evolution harmony search
algorithm. In this algorithm, mutation and crossover are adopted to substitute the original
pitch adjustment in the HS optimization algorithm, thus improving the global search ability
of the algorithm [21]. To effectively solve integrated process planning and scheduling (IPPS)
problems, Wu et al. proposed a nested method for single-objective IPPS problems. On the
external layer of the method, the HS algorithm was used to determine the manufacturing
feature processing sub-path, and static and dynamic scoring methods were proposed for
the sub-path to guide the search on the external layer. Meanwhile, a genetic algorithm was
adopted on the internal layer to determine machine allocation and operation sequences. Upon
combining the two algorithms, the validity of the proposed method was proved through a
test of benchmark instances [22].
As the work above suggests, improvements to the HS algorithm mainly focus on parameter tuning and fusion with other algorithms. In this paper, to solve the main problems with the algorithm, such as its poor global search ability and poor solution accuracy, a reward population-based differential genetic harmony search algorithm is proposed. The highlights of the proposed algorithm are as follows. (a) We hope that by using the multiple-population reward mechanism, the population diversity can be increased and the algorithm can avoid falling into local optima as much as possible. (b) In
the step of generating a new harmony, excellent individuals in existing populations are
utilized for optimization, thus improving the convergence rate of the algorithm; at the same
time, on the basis of differential evolution and genetic operations, the difference among
individuals is utilized flexibly for search guidance, thus improving the search ability of
the algorithm. Finally, in a numerical experiment, the validity of the algorithm is verified
through comparison.
The optimization problem solved by the HS algorithm can be expressed as

min f(x), s.t. x_i ∈ X_i, i = 1, 2, ..., N, (1)

where f(x) is the objective function, x is the set of the decision variables x_i, N is the number of decision variables, and X_i is the set of possible values of each decision variable; the upper bound of each decision variable is UB(i) and the lower bound is LB(i), so that LB(i) ≤ x_i ≤ UB(i).
In Section 2.1, it was introduced that five parameters are involved in harmony search.
At the very beginning of the algorithm, these values should be set properly.
(2) Initialize harmony memory
A harmony memory with a size of HMS is generated on the basis of the solution space,
as follows.
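As a point of reference, the standard HS initialization draws each variable uniformly from its bounds, x_i = LB(i) + r × (UB(i) − LB(i)) with r ∈ [0, 1). The Python sketch below illustrates this; the function and parameter names are our own, not the paper's implementation.

```python
import random

def init_harmony_memory(hms, lb, ub):
    """Create HMS harmony vectors, sampling each variable uniformly in [LB(i), UB(i)]."""
    n = len(lb)  # number of decision variables N
    return [[lb[i] + random.random() * (ub[i] - lb[i]) for i in range(n)]
            for _ in range(hms)]
```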
Algorithm 1: HS Algorithm
1: Initialize the algorithm parameters HMS, HMCR, PAR, BW, Tmax
2: Initialize the harmony memory
3: Repeat
4:   Improvise a new harmony as:
5:   for all i, do
6:     x_i^new ← memory consideration, with probability HMCR; random selection, with probability 1 − HMCR
7:     if x_i^new ∈ HM, then
8:       x_i′^new = x_i^new ± r · BW, with probability PAR; x_i^new, with probability 1 − PAR
9:     end if
10:  end for
11:  if the new harmony vector is better than the worst one in the original HM, then
12:    update the harmony memory
13:  end if
14: Until Tmax is fulfilled
15: Return best harmony
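To make Algorithm 1 concrete, here is a minimal Python sketch of one improvisation plus the memory update, assuming a minimization objective; all names are illustrative rather than the paper's own implementation.

```python
import random

def improvise(hm, lb, ub, hmcr, par, bw):
    """Steps 4-10 of Algorithm 1: build one new harmony variable by variable."""
    new = []
    for i in range(len(lb)):
        if random.random() < hmcr:                   # memory consideration
            x = random.choice(hm)[i]
            if random.random() < par:                # pitch adjustment: x ± r * BW
                x += random.uniform(-1.0, 1.0) * bw
        else:                                        # random selection
            x = lb[i] + random.random() * (ub[i] - lb[i])
        new.append(min(max(x, lb[i]), ub[i]))        # clamp to the search range
    return new

def hs_iteration(hm, f, lb, ub, hmcr, par, bw):
    """Steps 11-13: replace the worst harmony if the new one is better."""
    new = improvise(hm, lb, ub, hmcr, par, bw)
    worst = max(range(len(hm)), key=lambda k: f(hm[k]))
    if f(new) < f(hm[worst]):
        hm[worst] = new
```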
∑ λ_i = 1, (6)

where Pop and Pop_i refer to the total harmony memory and the sub-harmony memories, respectively; Popsize and Popsize_i refer to the sizes of the total harmony memory and the sub-harmony memories, respectively; λ_i refers to the proportion of each sub-harmony memory; and i indexes the sub-harmony memories.
Each harmony memory evolves separately. Furthermore, after each evolution, the sub-
harmony memory with the optimal average fitness is combined with the reward harmony
memory to form a new reward harmony memory. At the same time, poor harmonies are eliminated to keep the size of the newly generated harmony memory stable.
In this paper, four sub-harmony memories and one reward memory are set.
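A hedged Python sketch of this reward mechanism follows: the memory is split according to the proportions λ_i, and after each evolution round the sub-memory with the best average fitness is merged into the reward memory, with the worst harmonies dropped to keep its size fixed. Function names are illustrative assumptions.

```python
def split_memory(pop, proportions):
    """Partition the total harmony memory into sub-memories by the proportions
    λ_i, which sum to 1 as in Equation (6)."""
    subs, start = [], 0
    for lam in proportions:
        size = round(lam * len(pop))
        subs.append(pop[start:start + size])
        start += size
    return subs

def update_reward_memory(subs, reward, f):
    """Merge the sub-memory with the best (lowest) average fitness into the reward
    memory, then discard the worst harmonies so the reward memory size is stable."""
    best_sub = min(subs, key=lambda s: sum(f(x) for x in s) / len(s))
    merged = sorted(reward + best_sub, key=f)  # ascending fitness (minimization)
    return merged[:len(reward)]
```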
The operators of both are selection, crossover, and mutation. For the genetic algorithm,
crossover goes before mutation. For the DE algorithm, however, mutation goes before
crossover. Furthermore, with regard to the specific definition of these operations, there is a
difference between the two algorithms.
Let us assume that a population consists of NP N-dimensional vector individuals.
Then, each population can be expressed as: Pop = ( x1 , x2 , · · · , x NP ). Generally, after the
initialization of a population, the differential mutation operation will generate mutation
vectors. The mutation strategy of the DE algorithm is advantageous in overcoming pre-
mature convergence. Therefore, it was considered that the mutation operation of the DE
algorithm be introduced into the harmony search algorithm to overcome the premature
convergence of the harmony search algorithm, thus improving the convergence rate.
There are many forms of mutation operations, among which the common forms are DE/rand/1/bin, DE/best/1/bin, DE/rand-to-best/1/bin, DE/best/2/bin, and so on. These names encode the DE algorithm selected, the base vector selection method, the number of difference vectors, and the crossover method, and are generally expressed in the form DE/base/num/cross. If "best" is selected for the base part, the optimal individual of the population is selected as the base vector; if "rand" is selected, the base vector is selected randomly. If "bin" is selected for the cross part, binomial crossover is adopted. The results show that selecting "rand" for the base part is favorable for maintaining diversity in the population, while selecting "best" stresses the convergence rate [27].
For each individual xi in a population, a mutation vector of the differential evolution
algorithm is generated as per the following formula.
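The common strategies can be written compactly: DE/rand/1 generates v_i = x_{r1} + F(x_{r2} − x_{r3}), and DE/best/1 generates v_i = x_best + F(x_{r1} − x_{r2}), where F is the scaling factor, which is consistent with the pseudocode later in this paper. The Python sketch below illustrates both standard forms; the index handling and default value of F are our assumptions.

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(len(pop[i]))]

def de_best_1(pop, i, f, F=0.5):
    """DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2), stressing the convergence rate."""
    best = min(pop, key=f)
    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
    return [best[j] + F * (pop[r1][j] - pop[r2][j]) for j in range(len(pop[i]))]
```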
The Partheno-genetic operation reproduces new individuals from a single parent chromosome, decreasing the requirement for diversity in the initial population and avoiding "premature convergence" [29].

(1) Permutation Operator.

The permutation operator is a process where genes at specific locations of a chromosome are exchanged at a certain probability, and the locations of the exchanged genes are random. Permutation operators may be divided into single-point permutation and multi-point permutation. In single-point permutation, two genes on a chromosome are exchanged, while in multi-point permutation, multiple genes on a chromosome are exchanged. The process of the two permutation operators is shown in Figure 1, where B and B′ are the single-point permutation and multi-point permutation results for Chromosome A, respectively.

A = (c_1, c_2, c_3, ..., c_{i−1}, c_i, c_{i+1}, ..., c_{j−1}, c_j, c_{j+1}, ..., c_N)
⇓
B = (c_1, c_2, c_3, ..., c_{i−1}, c_j, c_{i+1}, ..., c_{j−1}, c_i, c_{j+1}, ..., c_N)
B′ = (c_1, c_2, c_3, ..., c_{i−1}, c_j, c_{i+1}, ..., c_N, c_i, c_{j+1}, ..., c_{j−1})

Figure 1. Permutation operator operation.

(2) Shift Operator.

The shift operator is a process in which the genes in some substrings of a chromosome are shifted backward successively, while the last gene in the substring is shifted to the foremost location. The substrings whose genes are shifted in a chromosome and their lengths are selected randomly. The gene shift can be divided into single-point shift and multi-point shift. In single-point shift, only one substring of a chromosome is selected for gene shift, while in multi-point shift, multiple substrings of a chromosome are selected for gene shift. The process of the shift operation is shown in Figure 2. H is a chromosome including multiple genes k_i (i ∈ 1, 2, ..., N). I and I′ are the chromosomes obtained by implementing single-point shift and multi-point shift operations on H, respectively.

H = (k_1, k_2, k_3, k_4, k_5, ..., k_{i−2}, k_{i−1}, k_i, k_{i+1}, k_{i+2}, ..., k_N)
⇓
I = (k_4, k_1, k_2, k_3, k_5, ..., k_{i−2}, k_{i−1}, k_i, k_{i+1}, k_{i+2}, ..., k_N)
I′ = (k_4, k_1, k_2, k_3, k_5, ..., k_{i−2}, k_{i+2}, k_{i−1}, k_i, k_{i+1}, ..., k_N)

Figure 2. Shift operator operation.

(3) Inversion Operator.

The inversion operator is a process where the genes in some substrings of a chromosome are inversed successively at a specific probability. The substrings for gene inversion in a chromosome and their lengths are selected randomly. The gene inversion can also be divided into single-point inversion and multi-point inversion. In single-point inversion, only one substring of a chromosome is selected for gene inversion, while in multi-point inversion, multiple substrings of a chromosome are selected for gene inversion. The operation is shown in Figure 3. We assume O is a chromosome including multiple genes l_i (i ∈ 1, 2, ..., N). On the basis of the single-point and multi-point inversion operations for chromosome O, we can obtain the chromosomes R and R′.

O = (l_1, l_2, l_3, l_4, l_5, ..., l_{i−1}, l_i, l_{i+1}, ..., l_{j−1}, l_j, l_{j+1}, ..., l_N)
⇓
R = (l_4, l_3, l_2, l_1, l_5, ..., l_{i−1}, l_i, l_{i+1}, ..., l_{j−1}, l_j, l_{j+1}, ..., l_N)
R′ = (l_4, l_3, l_2, l_1, l_5, ..., l_{i−1}, l_j, l_{j−1}, ..., l_{i+1}, l_i, l_{j+1}, ..., l_N)

Figure 3. Inversion operator operation.
The multi-point genetic operator is usually used when the chromosome length is large, while the single-point genetic operator is used when the chromosome length is small. Therefore, single-point permutation, single-point shift, and single-point inversion are used as the reproduction operators here and applied to the random selection step of the general harmony search algorithm, as sketched below.
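A minimal Python sketch of the three single-point operators follows, matching Figures 1-3; the helper names are our own.

```python
import random

def single_point_permutation(chrom):
    """Exchange two randomly chosen genes (Chromosome A -> B in Figure 1)."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def single_point_shift(chrom):
    """Shift the genes of a random substring backward by one position; the last
    gene of the substring moves to its front (H -> I in Figure 2)."""
    c = chrom[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = [c[j]] + c[i:j]
    return c

def single_point_inversion(chrom):
    """Reverse the genes of a random substring (O -> R in Figure 3)."""
    c = chrom[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = c[i:j + 1][::-1]
    return c
```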
8:   For all j, do
9:     x_j^new = x_j^best + F(x_j^k − x_j^l), with probability HMCR; random selection, with probability 1 − HMCR
10:    If memory consideration was used (probability HMCR), then
11:      x_j′^new = x_j^rand + F(x_j^k − x_j^l), with probability PAR; x_j^new, with probability 1 − PAR
12:    Else
13:      x_j′^new = Partheno-genetic operation, with probability PAR; x_j^new, with probability 1 − PAR
14:    End If
15:  End For
16:  If the new harmony vector is better than the worst one in the original HM, then
17:    update the harmony memory
18:  End If
19: End while
20: Update the reward harmony memory
21: End While
22: End For
23: Return best harmony
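As a hedged illustration of steps 8-15 (assuming minimization), the sketch below combines the DE-style mutation under memory consideration with a Partheno-genetic refinement of randomly selected harmonies. It reuses the single-point operators sketched earlier; all names, the operator choice, and the value of F are our assumptions rather than the paper's exact implementation.

```python
import random

def improvise_rdega(hm, f, lb, ub, hmcr, par, F=0.5):
    """Build one new harmony: memory consideration uses x_best + F*(x_k - x_l),
    optionally re-based on a random harmony; random selections may afterwards
    be refined by a Partheno-genetic operation."""
    best = min(hm, key=f)
    new, used_random = [], False
    for j in range(len(lb)):
        k, l = random.sample(range(len(hm)), 2)
        if random.random() < hmcr:                    # memory consideration
            x = best[j] + F * (hm[k][j] - hm[l][j])
            if random.random() < par:                 # re-base on a random harmony
                x = random.choice(hm)[j] + F * (hm[k][j] - hm[l][j])
        else:                                         # random selection
            x = lb[j] + random.random() * (ub[j] - lb[j])
            used_random = True
        new.append(min(max(x, lb[j]), ub[j]))         # clamp to the search range
    if used_random and random.random() < par:         # Partheno-genetic operation
        new = single_point_permutation(new)           # defined in the sketch above
    return new
```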
4. Experiments
To study the performance of RDEGA-HS in depth, two experiments were carried out: one to determine the influence of the values of HMS and HMCR on the performance of the algorithm, and one to compare the HS algorithm with the three proposed HS variants. The function name, formulation, and search
range are listed in Table 1. In this paper, we used six benchmark functions to verify the
ability of the RDEGA-HS algorithm to find the optimal solution, so as to judge the advantages and disadvantages of the improved algorithm. Then, a comparison was made with the results of HS, RHS, and DEGA-HS.
The Ackley function is a widely used multi-modal test function. The Griewank function has many widely distributed local minima, although its overall shape resembles a convex bowl. The Rastrigin function is a high-dimensional multi-modal function; its frequent local minima are produced by the cosine modulation term, and their locations are generally scattered. In addition, benchmark functions such as the Rosenbrock, Sphere, and Schwefel functions are also used in the experimental steps.
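For reference, the commonly used forms of three of these benchmarks can be written directly in Python; the exact constants and search ranges used in this paper are those of Table 1, which may differ from the defaults assumed here.

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Ackley: multi-modal, global minimum f(0) = 0."""
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(c * xi) for xi in x) / n
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def griewank(x):
    """Griewank: many widespread local minima, global minimum f(0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1.0)) for i, xi in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    """Rastrigin: local minima from the cosine modulation term, f(0) = 0."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x)
```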
Table 2. Optimization results of the proposed algorithm under different HMS values.

| Function | HMS = 100 | HMS = 80 | HMS = 50 | HMS = 20 | HMS = 10 | HMS = 5 |
|----------|-----------|----------|----------|----------|----------|---------|
| Ackley | 9.26 × 10^−3 | 5.44 × 10^−3 | 1.24 × 10^−2 | 5.24 × 10^−1 | 1.02 × 10^−1 | 8.84 |
| | 1.20 × 10^−2 | 5.57 × 10^−3 | 8.45 × 10^−2 | 1.12 × 10^−1 | 9.54 | 2.84 × 10^1 |
| Griewank | 5.16 × 10^−10 | 1.35 × 10^−11 | 2.46 × 10^−8 | 9.54 × 10^−6 | 8.89 × 10^−5 | 5.12 × 10^−2 |
| | 9.25 | 1.68 × 10^−1 | 5.12 | 1.26 × 10^1 | 5.84 × 10^1 | 6.25 × 10^1 |
| Rastrigin | 1.91 × 10^−7 | 1.91 × 10^−8 | 1.91 × 10^−6 | 1.91 × 10^−4 | 1.91 × 10^−2 | 1.91 |
| | 3.21 × 10^−3 | 5.57 × 10^−3 | 1.05 × 10^−3 | 5.41 × 10^−2 | 4.89 × 10^−1 | 6.27 |
| Rosenbrock | 9.99 × 10^−2 | 8.68 × 10^−2 | 1.38 × 10^−1 | 7.54 × 10^−1 | 3.63 | 8.74 |
| | 7.45 × 10^−1 | 1.12 × 10^−1 | 4.21 | 2.84 × 10^1 | 9.79 × 10^1 | 1.84 × 10^2 |
| Sphere | 3.25 × 10^−37 | 1.03 × 10^−40 | 5.84 × 10^−35 | 6.84 × 10^−34 | 7.54 × 10^−32 | 8.47 × 10^−30 |
| | 6.45 × 10^−29 | 5.14 × 10^−30 | 7.48 × 10^−29 | 6.42 × 10^−26 | 3.41 × 10^−24 | 5.49 × 10^−20 |
| Schwefel | 5.84 × 10^−34 | 2.51 × 10^−35 | 4.98 × 10^−32 | 7.69 × 10^−31 | 3.98 × 10^−30 | 4.16 × 10^−28 |
| | 5.68 × 10^−30 | 4.87 × 10^−31 | 7.98 × 10^−28 | 1.36 × 10^−27 | 8.54 × 10^−26 | 2.65 × 10^−24 |
It can be seen from Table 3 that the value of HMCR obviously influenced the optimization result of the proposed algorithm. A greater HMCR obviously improves the local search ability and thus the performance of the algorithm, while a smaller HMCR degrades the performance. When HMCR was 0.9, the proposed algorithm demonstrated a higher convergence rate. Therefore, in this paper, the HMCR value was chosen to be 0.9.
Table 3. Optimization results of the proposed algorithm under different HMCR values.

| Function | HMCR = 0.9 | HMCR = 0.8 | HMCR = 0.6 | HMCR = 0.5 | HMCR = 0.3 | HMCR = 0.1 |
|----------|------------|------------|------------|------------|------------|------------|
| Ackley | 4.26 × 10^−3 | 8.54 × 10^−3 | 9.98 × 10^−3 | 3.41 × 10^−2 | 9.88 × 10^−2 | 6.54 |
| | 9.25 × 10^−4 | 4.51 × 10^−3 | 5.89 × 10^−2 | 5.68 × 10^−1 | 1.25 | 6.66 × 10^1 |
| Griewank | 2.54 × 10^−10 | 7.84 × 10^−8 | 1.54 × 10^−6 | 5.65 × 10^−5 | 6.58 × 10^−4 | 2.58 × 10^−3 |
| | 7.33 × 10^−2 | 9.98 × 10^−2 | 1.54 × 10^−1 | 6.85 | 8.98 | 4.65 × 10^1 |
| Rastrigin | 9.91 × 10^−9 | 7.54 × 10^−8 | 8.54 × 10^−7 | 3.69 × 10^−5 | 9.58 × 10^−4 | 7.54 × 10^−1 |
| | 2.54 × 10^−4 | 8.96 × 10^−4 | 2.56 × 10^−3 | 6.66 × 10^−3 | 2.58 × 10^−2 | 8.95 × 10^−2 |
| Rosenbrock | 7.56 × 10^−2 | 8.75 × 10^−2 | 2.45 × 10^−1 | 9.87 × 10^−1 | 2.6 | 8.74 × 10^1 |
| | 1.58 × 10^−1 | 6.87 × 10^−1 | 8.95 | 6.54 × 10^1 | 1.26 × 10^2 | 9.68 × 10^2 |
| Sphere | 2.54 × 10^−39 | 8.54 × 10^−38 | 5.64 × 10^−35 | 9.87 × 10^−34 | 6.84 × 10^−31 | 5.36 × 10^−29 |
| | 7.84 × 10^−30 | 5.42 × 10^−28 | 6.46 × 10^−28 | 3.38 × 10^−25 | 5.10 × 10^−23 | 3.86 × 10^−21 |
| Schwefel | 5.98 × 10^−36 | 6.04 × 10^−33 | 9.40 × 10^−31 | 3.20 × 10^−30 | 1.65 × 10^−28 | 9.69 × 10^−27 |
| | 2.31 × 10^−30 | 3.94 × 10^−29 | 4.48 × 10^−27 | 9.45 × 10^−25 | 4.31 × 10^−23 | 1.41 × 10^−22 |
The results show that the RDEGA-HS algorithm has a strong optimization ability. Upon a general review of each algorithm's optimization process for the above six functions, the dimensions of the functions had a significant influence on the optimal value. As the dimensions increased, the optimal value found by each algorithm also changed. For example, for the Ackley function, the variable dimensions had a significant influence on each algorithm. For a low-dimensional function, RDEGA-HS and the other algorithms were all able to achieve the global optimum; however, for a high-dimensional function, the RDEGA-HS algorithm still achieved a satisfactory result.
Figure 4. Change in average optimal fitness value of Ackley function with number of iterations.
Figure 5. Change in average optimal fitness value of Griewank function with number of iterations.
Figure 6. Change in average optimal fitness value of Rastrigin function with number of iterations.
Figure 7. Change in average optimal fitness value of Rosenbrock function with number of iterations.
Figure 8. Change in average optimal fitness value of Sphere function with number of iterations.
Figure 9. Change in average optimal fitness value of the Schwefel function with the number of iterations.
For a comparison of the algorithms' stability, the experiments were repeated 100 times. In addition to the optimization ability, which has received a lot of attention, stability is also an index worth attention when evaluating the performance of algorithms. After 100 repetitions, the variance in the experimental results was computed to check the fluctuation in each algorithm's optimal values. For the six functions, the RDEGA-HS algorithm maintained satisfactory stability, while the rest of the algorithms were quite unstable.

Since five sub-harmony memories are designed and computed in RDEGA-HS, its running duration is longer than that of an algorithm with a single harmony memory. However, as the diversity of harmony memories is increased, local optimal solutions are avoided, thus improving the solving accuracy.
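A short sketch of this stability check, under the assumption that each run returns its best fitness value, might look as follows; the names are illustrative.

```python
import statistics

def stability_report(run_once, repetitions=100):
    """Repeat an optimizer run and report the mean and variance of its best
    fitness values; a smaller variance indicates a more stable algorithm."""
    results = [run_once() for _ in range(repetitions)]
    return statistics.mean(results), statistics.variance(results)
```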
5. Conclusions
In this paper, on the basis of the standard harmony search (HS) algorithm, a reward
population mechanism, differential mutation, and Partheno-genetic operation were used to
improve the basic harmony search algorithm, whereby the harmony vector diversity and
accuracy were improved. With the introduction of the reward population mechanism, the
harmony memory was divided into four common sub-harmony memories and one reward
harmony memory, each of which adopts the evolution strategy of the differential genetic
harmony search. After each iteration, the sub-harmony memory with the optimal average fitness was combined with the reward harmony memory. The mutation strategy of the differential evolution (DE) algorithm is able to overcome premature convergence; by introducing it into the HS algorithm, the proposed algorithm overcomes premature convergence and improves the convergence rate. Meanwhile, the Partheno-genetic algorithm was introduced into the generation of new harmony vectors. Finally, the six
frequently used test functions of Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, and
Schwefel were introduced to verify the validity and convergence of the algorithms. Then, a
comparison was made on the calculation results of the standard harmony search algorithm,
reward population-based harmony search algorithm, differential genetic harmony search
algorithm, and reward population-based differential genetic harmony search algorithm.
The result suggests that the reward population-based differential genetic harmony search
algorithm has advantages such as a strong global search ability, high solving accuracy, and
satisfactory stability.
Author Contributions: Conceptualization, Y.Z.; methodology, Y.Z.; software, Y.Z. and J.L.; validation,
Y.Z. and J.L.; investigation, Y.Z.; resources, L.L.; data curation, Y.Z.; writing—original draft prepara-
tion, Y.Z.; writing—review and editing, J.L.; supervision, L.L. All authors have read and agreed to the
published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Snyman, J.A. Practical Mathematical Optimization; Springer: Boston, MA, USA, 2005; pp. 113–167.
2. Blum, C.; Puchinger, J.; Raidl, G.R.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput.
2011, 11, 4135–4151. [CrossRef]
3. Balamurugan, R.; Natarajan, A.M.; Premalatha, K. Stellar-mass black hole optimization for biclustering microarray gene expression
data. Appl. Artif. Intell. 2015, 29, 353–381. [CrossRef]
4. Bianchi, L.; Dorigo, M.; Gambardella, L.M.; Gutjahr, W.J. A survey on metaheuristics for stochastic combinatorial optimization.
Nat. Comput. 2009, 8, 239–287. [CrossRef]
5. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2010; pp. 4–5.
6. Beheshti, Z.; Shamsuddin, S.M. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5,
1–35.
7. Yang, X.S.; Chien, S.F.; Ting, T.O. Bio-inspired computation and optimization: An overview. In Bio-Inspired Computation in
Telecommunications; Yang, X.S., Chien, S.F., Ting, T.O., Eds.; Elsevier Inc.: New York, NY, USA, 2015; pp. 1–21.
8. Halim, A.H.; Ismail, I. Combinatorial optimization: Comparison of heuristic algorithms in travelling salesman problem. Arch.
Computat. Methods Eng. 2019, 26, 367–380. [CrossRef]
9. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
[CrossRef]
10. Zhao, F.Q.; Qin, S.; Yang, G.Q.; Ma, W.M.; Zhang, C.; Song, H.B. A differential-based harmony search algorithm with variable
neighborhood search for job shop scheduling problem and its runtime analysis. IEEE Access 2018, 6, 76313–76330. [CrossRef]
11. Geem, Z.W. Optimal cost design of water distribution networks using harmony search. Eng. Optimiz. 2006, 38, 259–277. [CrossRef]
12. Geem, Z.W. Harmony search applications in industry. In Soft Computing Applications in Industry. Studies in Fuzziness and Soft
Computing; Prasad, B., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 226, pp. 117–134.
13. Hasanipanah, M.; Keshtegar, B.; Thai, D.K.; Troung, N.T. An ANN-adaptive dynamical harmony search algorithm to approximate
the flyrock resulting from blasting. Eng. Comput. 2020, 1–13. [CrossRef]
14. Al-Omoush, A.A.; Alsewari, A.A.; Alamri, H.S.; Zamli, K.Z. Comprehensive review of the development of the harmony search
algorithm and its applications. IEEE Access 2019, 7, 14233–14245. [CrossRef]
15. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl.
Math. Comput. 2007, 188, 1567–1579. [CrossRef]
16. Khalili, M.; Kharrat, R.; Salahshoor, K.; Sefat, M.H. Global dynamic harmony search algorithm: GDHS. Appl. Math. Comput. 2014,
228, 195–219. [CrossRef]
17. Ouyang, H.B.; Gao, L.Q.; Kong, X.Y.; Guo, L. Application of MHS algorithm to structural design problems. J. Northeast. Univ.
2013, 34, 1687–1690. [CrossRef]
18. Pan, Q.K.; Suganthan, P.N.; Tasgetiren, M.F.; Liang, J.J. A self-adaptive global best harmony search algorithm for continuous
optimization problems. Appl. Math. Comput. 2010, 216, 830–848. [CrossRef]
19. Zhang, K.L.; Chen, S.Y.; Shao, Z.Z. The improvement of harmony search algorithm. Artif. Intell. Rob. Res. 2015, 4, 32–39.
[CrossRef]
20. Zhao, S.Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with harmony search.
Expert Syst. Appl. 2011, 38, 3735–3742. [CrossRef]
21. Zhang, T.; Xu, X.Q.; Li, Z.H.; Abu-Siada, A.; Guo, Y.T. Optimum location and parameter setting of STATCOM based on improved
differential evolution harmony search algorithm. IEEE Access 2020, 8, 87810–87819. [CrossRef]
22. Wu, X.L.; Li, J. Two layered approaches integrating harmony search with genetic algorithm for the integrated process planning
and scheduling problem. Comput. Ind. Eng. 2021, 155, 107194. [CrossRef]
23. Moh’d Alia, O.; Mandava, R. The variants of the harmony search algorithm: An overview. Artif. Intell. Rev. 2011, 36, 49–68.
[CrossRef]
24. Zhang, Y.; Li, J.C.; Li, L. An improved clustering-based harmony search algorithm (IC-HS). In Proceedings of the SAI Intelligent
Systems Conference, Amsterdam, The Netherlands, 2–3 September 2021; Arai, K., Ed.; Springer: Cham, Switzerland, 2021; Volume 295,
pp. 115–124.
25. Manjarres, D.; Landa-Torres, I.; Gil-Lopez, S.; Del Ser, J.; Bilbao, M.N.; Salcedo-Sanz, S.; Geem, Z.W. A survey on applications of
the harmony search algorithm. Eng. Appl. Artif. Intel. 2013, 26, 1818–1831. [CrossRef]
26. Deng, W.; Shang, S.F.; Cai, X.; Zhao, H.M.; Song, Y.J.; Xu, J.J. An improved differential evolution algorithm and its application in
optimization problem. Soft Comput. 2021, 25, 5277–5298. [CrossRef]
27. Jain, S.; Sharma, V.K.; Kumar, S. Robot path planning using differential evolution. In Proceedings of the Advances in Com-
puting and Intelligent Systems, Beijing, China, 14–16 December 2019; Sharma, H., Govindan, K., Poonia, R., Kumar, S.,
El-Medany, W., Eds.; Springer: Singapore, 2020; pp. 531–537.
28. Li, M.J.; Tong, T.S. A partheno-genetic algorithm and analysis on its global convergence. Acta Autom. Sin. 1999, 25, 68–72.
[CrossRef]
29. Yang, Z.; Li, J.C.; Li, L. Time-dependent theme park routing problem by Partheno-genetic algorithm. Mathematics 2020, 8, 2193.
[CrossRef]