Research Article
An Elite Decision Making Harmony Search
Algorithm for Optimization Problem
Copyright © 2012 Lipu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following some probability rule. The newly generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive with state-of-the-art harmony search variants in finding solutions.
1. Introduction
In 2001, Geem et al. [1] proposed a new metaheuristic algorithm, the harmony search (HS) algorithm, which imitates the music improvisation process. In that algorithm, the harmony in music is analogous to the optimization solution vector, and the musicians' improvisations are analogous to local and global search schemes in optimization techniques. The HS algorithm does not require initial values for the decision variables. Furthermore, instead of a gradient search, the HS algorithm uses a stochastic random search based on the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), so that derivative information is unnecessary. These features increase the flexibility of the HS algorithm and have led to its application to optimization problems in different areas, including music composition [2], Sudoku puzzle solving [3], structural design [4, 5], ecological conservation [6],
and aquifer parameter identification [7]. Interested readers may refer to the review papers [8–10] and the references therein for further understanding.
The HS algorithm is good at identifying high-performance regions of the solution space in a reasonable time but runs into trouble when performing local search in numerical applications. In order to improve the fine-tuning characteristic of the HS algorithm, Mahdavi et al. [11] discussed the impacts of constant parameters on the HS algorithm and presented a new strategy for tuning these parameters. Wang and Huang [12] used the harmony memory (HM) set of solution vectors to automatically adjust parameter values. Fesanghary et al. [13] used a sequential quadratic programming technique to speed up local search and improve the precision of the HS algorithm's solutions. Omran and Mahdavi [14] proposed the so-called global-best HS algorithm, in which concepts from swarm intelligence are borrowed to enhance the performance of the HS algorithm such that the new harmony can mimic the best harmony in the HM. Also, Geem [15] proposed a stochastic derivative, based on an HS algorithm, for problems with discrete variables and problems in which the mathematical derivative of the function cannot be analytically obtained. Pan et al. [16] used the good information captured in the current global best solution to generate new harmonies. Jaberipour and Khorram [17] described two HS algorithms based on a parameter-adjusting technique. Yadav et al. [18] designed an HS algorithm that maintains a proper balance between diversification and intensification throughout the search process by automatically selecting the proper pitch adjustment strategy based on its HM. Pan et al. [19] divided the whole HM into many small-sized sub-HMs and performed the evolution in each sub-HM independently, thus presenting a local-best harmony search algorithm with dynamic subpopulations. Later on, the mutation and crossover strategies used in [19] were adopted by Islam et al. [20] in designing an adaptive differential evolution algorithm, which obtained excellent results for global numerical optimization.
In political science and sociology, a small minority elite always holds the most power in making decisions; this is known as elite decision making. One could imagine that the good information captured in the current elite harmonies can be well utilized to generate new harmonies. Thus, in our elite decision making HS (EDMHS) algorithm, the new harmony is randomly generated between the best and the second-best harmonies in the historic HM, following some probability rule. The generated harmony vector replaces the worst harmony in the HM only if its fitness, measured in terms of the objective function, is better than that of the worst harmony. These generating and updating procedures repeat until a near-optimal solution vector is obtained. To demonstrate the effectiveness and robustness of the proposed algorithm, various benchmark optimization problems are used, including continuous design variable and integer variable minimization problems. Numerical results reveal that the proposed new algorithm is very effective.
This paper is organized as follows. In Section 2, the general harmony search algorithm and its recently developed variants are reviewed. Section 3 introduces our method, which has the "elite decision making" property. Section 4 presents numerical results for some well-known benchmark problems. Finally, conclusions are given in the last section.
2. Harmony Search Algorithm and Its Variants

The HS algorithm solves optimization problems of the form

\min f(x) \quad \text{subject to} \quad x_i \in X_i, \; i = 1, 2, \ldots, N, (2.1)

where f(x) is an objective function, x is the set of decision variables x_i, N is the number of decision variables, and X_i is the set of possible values for each decision variable, that is, x_i^L \le x_i \le x_i^U, where x_i^L and x_i^U are the lower and upper bounds of each decision variable, respectively.
Remarks. HMCR, PAR, and the bandwidth bw are very important factors for the high efficiency of HS methods and can be useful in adjusting the convergence rate of the algorithm toward the optimal solution. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm.
The procedure for a harmony search consists of Steps 1–4.
Step 1. Create and randomly initialize an HM of size HMS (harmony memory size). The HM matrix is initially filled with as many solution vectors as the HMS. Each component of a solution vector is generated using a uniformly distributed random number between the lower and upper bounds of the corresponding decision variable, [x_i^L, x_i^U], where i \in \{1, \ldots, N\}.
The HM with the size of HMS can be represented by the matrix

HM = \begin{pmatrix} x_1^1 & x_2^1 & \cdots & x_N^1 \\ x_1^2 & x_2^2 & \cdots & x_N^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{HMS} & x_2^{HMS} & \cdots & x_N^{HMS} \end{pmatrix}. (2.2)
Step 2. Improvise a new harmony from the HM or from the entire possible range. After defining the HM, the improvisation is performed by generating a new harmony vector x' = (x_1', x_2', \ldots, x_N'). Each component of the new harmony vector is generated according to

x_i' \leftarrow \begin{cases} x_i' \in \text{HM}(:, i) & \text{with probability HMCR}, \\ x_i' \in X_i & \text{with probability } 1 - \text{HMCR}, \end{cases} (2.3)

where HMCR is defined as the probability of selecting a component from the HM members, and 1 − HMCR is, therefore, the probability of generating a component randomly from the possible range of values. Every x_i' obtained from the HM is examined to determine whether it
should be pitch adjusted. This operation uses the PAR parameter, which is the rate of pitch adjustment, as follows:

x_i' \leftarrow \begin{cases} x_i' \pm \text{rand}(0, 1) \times bw & \text{with probability PAR}, \\ x_i' & \text{with probability } 1 - \text{PAR}. \end{cases} (2.4)
Step 3. Update the HM. If the new harmony is better than the worst harmony in the HM,
include the new harmony into the HM and exclude the worst harmony from the HM.
Step 4. Repeat Steps 2 and 3 until the maximum number of searches is reached.
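To make Steps 1–4 concrete, the following Python sketch assembles them into a complete loop. It is a minimal illustration under fixed HMCR, PAR, and bw; all names and default values are ours, not the authors' implementation.

```python
import random

def harmony_search(f, lower, upper, hms=20, hmcr=0.9, par=0.3, bw=0.01,
                   max_iter=50000):
    """Minimal sketch of the basic HS algorithm (Steps 1-4)."""
    n = len(lower)
    # Step 1: fill the harmony memory (HM) with HMS random solution vectors.
    hm = [[random.uniform(lower[i], upper[i]) for i in range(n)]
          for _ in range(hms)]
    fitness = [f(x) for x in hm]
    for _ in range(max_iter):
        # Step 2: improvise a new harmony component by component, cf. (2.3)-(2.4).
        x_new = []
        for i in range(n):
            if random.random() < hmcr:
                xi = random.choice(hm)[i]                # memory consideration
                if random.random() < par:                # pitch adjustment
                    xi += random.uniform(-1.0, 1.0) * bw
                    xi = min(max(xi, lower[i]), upper[i])
            else:
                xi = random.uniform(lower[i], upper[i])  # random selection
            x_new.append(xi)
        # Step 3: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda j: fitness[j])
        f_new = f(x_new)
        if f_new < fitness[worst]:
            hm[worst], fitness[worst] = x_new, f_new
    # Step 4 is the loop bound above; return the best harmony found.
    best = min(range(hms), key=lambda j: fitness[j])
    return hm[best], fitness[best]
```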
In the improved HS (IHS) algorithm of Mahdavi et al. [11], PAR and bw vary dynamically with the generation number. The pitch adjusting rate increases linearly,

\text{PAR}(gn) = \text{PAR}_{\min} + \frac{\text{PAR}_{\max} - \text{PAR}_{\min}}{\text{MaxItr}} \times gn, (2.5)

where PAR(gn) is the pitch adjusting rate for generation gn, PAR_min is the minimum pitch adjusting rate, PAR_max is the maximum pitch adjusting rate, and MaxItr and gn are the maximum and current search numbers, respectively. The bandwidth decreases exponentially,

bw(gn) = bw_{\max} \exp(c \times gn), (2.6)

where

c = \frac{\log(bw_{\min}/bw_{\max})}{\text{MaxItr}}. (2.7)
Numerical results reveal that the HS algorithm with variable parameters can find better solutions than HS and other heuristic or deterministic methods and is a powerful search algorithm for various engineering optimization problems; see [11].
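As a quick illustration, the schedules (2.5)–(2.7) translate directly into code; the helper below uses our own naming and the parameter values quoted later in Section 4.

```python
import math

def ihs_parameters(gn, max_iter, par_min=0.4, par_max=0.9,
                   bw_min=0.0001, bw_max=1.0):
    """PAR grows linearly per (2.5); bw decays exponentially per (2.6)-(2.7)."""
    par = par_min + (par_max - par_min) / max_iter * gn
    c = math.log(bw_min / bw_max) / max_iter   # (2.7)
    bw = bw_max * math.exp(c * gn)             # (2.6)
    return par, bw
```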
In the global-best HS (GHS) algorithm of Omran and Mahdavi [14], the pitch adjustment step is modified so that the new harmony mimics the best harmony in the HM, with x_i' = x_k^{\text{best}}, where k is a random integer between 1 and N. The performance of the GHS was investigated and compared with HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems.
3. The Elite Decision Making Harmony Search Algorithm

In the EDMHS algorithm, the memory consideration step is guided by the two best harmonies in the historic HM. Each component of the new harmony vector is generated according to

x_i' \leftarrow \begin{cases} x_i' \in [\text{HM}(s, i), \text{HM}(b, i)] & \text{with probability HMCR}, \\ x_i' \in X_i & \text{with probability } 1 - \text{HMCR}, \end{cases} (3.1)

where HM(s, i) and HM(b, i) are the ith elements of the second-best harmony and the best harmony, respectively.
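A minimal Python sketch of the improvisation rule (3.1) follows. It assumes that fitness values are tracked alongside the HM and that the usual pitch adjustment is still applied after memory consideration; all identifiers are illustrative.

```python
import random

def edmhs_improvise(hm, fitness, lower, upper, hmcr, par, bw):
    """Sketch of the EDMHS improvisation step per (3.1): with probability
    HMCR, each component is drawn between the second-best and best harmonies."""
    order = sorted(range(len(hm)), key=lambda j: fitness[j])
    best, second = hm[order[0]], hm[order[1]]
    x_new = []
    for i in range(len(lower)):
        if random.random() < hmcr:
            lo, hi = sorted((second[i], best[i]))
            xi = random.uniform(lo, hi)                  # elite interval
            if random.random() < par:                    # pitch adjustment
                xi += random.uniform(-1.0, 1.0) * bw
                xi = min(max(xi, lower[i]), upper[i])
        else:
            xi = random.uniform(lower[i], upper[i])      # whole feasible range
        x_new.append(xi)
    return x_new
```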
In the EDMHS algorithm for integer programming, we generate integer solution vectors in both the initialization step and the improvisation step; that is, each component of the new harmony vector is generated according to

x_i' \leftarrow \begin{cases} x_i' \in \text{round}([\text{HM}(s, i), \text{HM}(b, i)]) & \text{with probability HMCR}, \\ x_i' \in X_i & \text{with probability } 1 - \text{HMCR}, \end{cases} (3.2)

where round(∗) means rounding off ∗. The pitch adjustment is operated as follows:

x_i' \leftarrow \begin{cases} x_i' \pm 1 & \text{with probability PAR}, \\ x_i' & \text{with probability } 1 - \text{PAR}. \end{cases} (3.3)
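The integer rules (3.2)–(3.3) change only two operations in the continuous sketch above: the elite draw is rounded, and the pitch adjustment becomes a unit step. A sketch under the same assumptions (integer bounds are required here):

```python
import random

def edmhs_improvise_int(hm, fitness, lower, upper, hmcr, par):
    """Integer EDMHS improvisation per (3.2)-(3.3)."""
    order = sorted(range(len(hm)), key=lambda j: fitness[j])
    best, second = hm[order[0]], hm[order[1]]
    x_new = []
    for i in range(len(lower)):
        if random.random() < hmcr:
            lo, hi = sorted((second[i], best[i]))
            xi = round(random.uniform(lo, hi))       # rounded elite draw, cf. (3.2)
            if random.random() < par:
                xi += random.choice((-1, 1))         # unit pitch step, cf. (3.3)
        else:
            xi = random.randint(lower[i], upper[i])  # random integer in range
        x_new.append(min(max(xi, lower[i]), upper[i]))
    return x_new
```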
4. Numerical Examples

This section examines the performance of the EDMHS algorithm on examples with continuous and integer variables. Several examples taken from the optimization literature are used to show the validity and effectiveness of the proposed algorithm. The parameters for all the algorithms are given as follows: HMS = 20, HMCR = 0.90, PAR_min = 0.4, PAR_max = 0.9, bw_min = 0.0001, and bw_max = 1.0. During the execution of the algorithm, PAR and bw are generated according to (2.5) and (2.6), respectively.
f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2. (4.1)
Due to the long, narrow, curved valley present in the function, the Rosenbrock function [4, 22] is probably the best-known test case. The minimum of the function is located at x* = (1.0, 1.0) with a corresponding objective function value of f(x*) = 0.0. The four algorithms were applied to the Rosenbrock function using bounds between −10.0 and 10.0 for the two design variables x_1 and x_2. After 50,000 searches, we arrived at Table 1.
f(x) = \left[1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)\right]. (4.2)
The Goldstein and Price function I [4, 13, 23] is an eighth-order polynomial in two variables. The function has four local minima, one of which is global: f(1.2, 0.8) = 840.0, f(1.8, 0.2) = 84.0, f(−0.6, −0.4) = 30, and f(0.0, −1.0) = 3.0 (the global minimum). In this example, the bounds for the two design variables x_1 and x_2 were set between −5.0 and 5.0. After 8000 searches, we arrived at Table 2.
f(x) = \frac{1}{10}\left[12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4}\right]. (4.3)
This function [4, 24] poses a minimization problem for the inertia of a gear train. The minimum of the function is located at x* = (1.7435, 2.0297) with a corresponding objective function value of f(x*) = 1.744152006740573. The four algorithms were applied to the gear train inertia function problem using bounds between 0.0 and 10.0 for the two design variables x_1 and x_2. After 800 searches, we arrived at Table 3.
Table 3: Four HS algorithms for Eason and Fenton’s gear train inertia function.
f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2 + 90(x_4 - x_3^2)^2 + (1 - x_3)^2 + 10.1\left[(x_2 - 1)^2 + (x_4 - 1)^2\right] + 19.8(x_2 - 1)(x_4 - 1). (4.4)
The Wood function [4, 25] is a fourth-degree polynomial that is a particularly good test of convergence criteria and simulates a feature of many physical problems quite well. The minimum of the function is obtained at x* = (1, 1, 1, 1)^T, and the corresponding objective function value is f(x*) = 0.0. When applying the four algorithms to the function, the four design variables x_1, x_2, x_3, x_4 were initially given random values bounded between −5.0 and 5.0. After 70,000 searches, we arrived at Table 4.
The Powell quartic function [4, 26] is

f(x) = (x_1 + 10x_2)^2 + 5(x_3 - x_4)^2 + (x_2 - 2x_3)^4 + 10(x_1 - x_4)^4. (4.5)

Since the second derivative of the function becomes singular at the minimum point, it is quite difficult to obtain the minimum solution, that is, f(0, 0, 0, 0) = 0.0, using gradient-based algorithms. When applying the EDMHS algorithm to the function, the four design variables x_1, x_2, x_3, x_4 were initially given random values bounded between −5.0 and 5.0. After 50,000 searches, we arrived at Table 5.
It can be seen from Tables 1–5 that, compared with the IHS [11], GHS [14], and SGHS [16] algorithms, the EDMHS produces much better results for the test functions.
[Figures 1–4: typical convergence histories (objective function value, on a logarithmic scale, versus number of iterations) of the IHS, GHS, SGHS, and EDMHS algorithms on the Rosenbrock, Goldstein and Price, Eason and Fenton, and Wood functions.]
Figures 1–5 present typical solution history graphs along the iterations for the five functions, respectively. It can be observed that the evolution curves of the EDMHS algorithm reach a lower level than those of the other compared algorithms. Thus, it can be concluded that overall the EDMHS algorithm outperforms the other methods on the above examples.
[Figure 5: typical convergence history of the four algorithms on the Powell quartic function.]
The EDMHS algorithm was further compared with the other HS variants on the following six standard benchmark functions:

f(x) = \sum_{i=1}^{n} x_i^2, (4.6)

f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right), (4.7)

f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, (4.8)

f(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right], (4.9)

f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{30}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{30}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e, (4.10)

f(x) = \sum_{i=1}^{n-1} \left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]. (4.11)
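For reference, the six benchmarks (4.6)–(4.11) can be written compactly with NumPy. This is a sketch with our own function names; the text fixes the dimension at n = 30, while these helpers accept any dimension.

```python
import numpy as np

def sphere(x):                                                       # (4.6)
    return np.sum(x**2)

def schwefel(x):                                                     # (4.7)
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

def griewank(x):                                                     # (4.8)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def rastrigin(x):                                                    # (4.9)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):                                                       # (4.10)
    n = x.size  # the paper uses n = 30
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def rosenbrock(x):                                                   # (4.11)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)
```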
The parameters for the IHS algorithm were HMS = 5, HMCR = 0.9, bw_max = (x_j^U − x_j^L)/20, bw_min = 0.0001, PAR_min = 0.01, and PAR_max = 0.99; for the GHS algorithm, HMS = 5, HMCR = 0.9, PAR_min = 0.01, and PAR_max = 0.99.
Table 6 presents the average error (AE) values and standard deviations (SD) over 30 runs of the compared HS algorithms on the 6 test functions with dimension equal to 30.
For the integer programming tests, problems from [21, 28] were used, including

f_1(x) = (9x_1^2 + 2x_2^2 - 11)^2 + (3x_1^2 + 4x_2^2 - 7)^2, (4.12)

where

x_1^* = (4, -2)^T, \quad x_2^* = (3, -2)^T, \quad x_3^* = (2, -1)^T, (4.15)

f_4(x) = x^T x, (4.16)

where x^* = (0, 11, 22, 16, 6)^T and x^* = (10, 12, 23, 17, 6)^T with f_5(x^*) = −737; see [21, 28].
5. Conclusion
This paper presented an EDMHS algorithm for solving continuous and integer optimization problems. The proposed EDMHS algorithm applies a newly designed scheme to generate candidate solutions so as to benefit from the good information inherent in the best and second-best solutions in the historic HM.
Further work is still needed to investigate the effectiveness of EDMHS and to adapt this strategy to real-world optimization problems.
Acknowledgments
This research is supported by a grant from the National Natural Science Foundation of China (no. 11171373) and a grant from the Natural Science Foundation of Zhejiang Province (no. LQ12A01024).
References
[1] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, no. 2, pp. 60–68, 2001.
[2] Z. W. Geem and J. Y. Choi, "Music composition using harmony search algorithm," in Proceedings of the Applications of Evolutionary Computing, pp. 593–600, April 2007.
[3] Z. Geem, "Harmony search algorithm for solving Sudoku," in Knowledge-Based Intelligent Information and Engineering Systems, pp. 371–378, Springer.
[4] K. S. Lee and Z. W. Geem, "A new structural optimization method based on the harmony search algorithm," Computers and Structures, vol. 82, no. 9-10, pp. 781–798, 2004.
[5] M. P. Saka, "Optimum geometry design of geodesic domes using harmony search algorithm," Advances in Structural Engineering, vol. 10, no. 6, pp. 595–606, 2007.
[6] Z. Geem and J. Williams, "Ecological optimization using harmony search," in Proceedings of the American Conference on Applied Mathematics, pp. 24–26, 2008.
[7] M. T. Ayvaz, "Simultaneous determination of aquifer parameters and zone structures with fuzzy c-means clustering and meta-heuristic harmony search algorithm," Advances in Water Resources, vol. 30, no. 11, pp. 2326–2338, 2007.
[8] Z. W. Geem, "Harmony search applications in industry," Soft Computing Applications in Industry, vol. 226, pp. 117–134, 2008.
[9] Z. Geem, Music-Inspired Harmony Search Algorithm: Theory and Applications, vol. 191, Springer, 2009.
[10] G. Ingram and T. Zhang, "Overview of applications and developments in the harmony search algorithm," Music-Inspired Harmony Search Algorithm, vol. 191, pp. 15–37, 2009.
[11] M. Mahdavi, M. Fesanghary, and E. Damangir, "An improved harmony search algorithm for solving optimization problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
[12] C. M. Wang and Y. F. Huang, "Self-adaptive harmony search algorithm for optimization," Expert Systems with Applications, vol. 37, no. 4, pp. 2826–2837, 2010.
[13] M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, and Y. Alizadeh, "Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems," Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 33-40, pp. 3080–3091, 2008.
[14] M. G. H. Omran and M. Mahdavi, "Global-best harmony search," Applied Mathematics and Computation, vol. 198, no. 2, pp. 643–656, 2008.
[15] Z. W. Geem, "Novel derivative of harmony search algorithm for discrete design variables," Applied Mathematics and Computation, vol. 199, no. 1, pp. 223–230, 2008.
[16] Q.-K. Pan, P. N. Suganthan, M. F. Tasgetiren, and J. J. Liang, "A self-adaptive global best harmony search algorithm for continuous optimization problems," Applied Mathematics and Computation, vol. 216, no. 3, pp. 830–848, 2010.
[17] M. Jaberipour and E. Khorram, "Two improved harmony search algorithms for solving engineering optimization problems," Communications in Nonlinear Science and Numerical Simulation, vol. 15, pp. 3316–3331, 2010.
[18] P. Yadav, R. Kumar, S. Panda, and C. Chang, "An intelligent tuned harmony search algorithm for optimisation," Information Sciences, vol. 196, pp. 47–72, 2012, http://dx.doi.org/10.1016/j.ins.2011.12.035.
[19] Q. Pan, P. Suganthan, J. Liang, and M. Tasgetiren, "A local-best harmony search algorithm with dynamic subpopulations," Engineering Optimization, vol. 42, pp. 101–117, 2010.
[20] S. Islam, S. Das, S. Ghosh, S. Roy, and P. Suganthan, "An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, vol. 42, no. 2, pp. 482–500, 2012.
[21] E. Laskari, K. Parsopoulos, and M. Vrahatis, "Particle swarm optimization for integer programming," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, pp. 1582–1587.
[22] H. H. Rosenbrock, "An automatic method for finding the greatest or least value of a function," The Computer Journal, vol. 3, pp. 175–184, 1960.
[23] A. A. Goldstein and J. F. Price, "On descent from local minima," Mathematics of Computation, vol. 25, pp. 569–574, 1971.
[24] E. D. Eason and R. G. Fenton, "A comparison of numerical optimization methods for engineering design," Journal of Engineering for Industry, vol. 96, no. 1, pp. 196–200, 1974.
[25] A. Colville, A Comparative Study on Nonlinear Programming Codes, IBM Corporation, Philadelphia Scientific Center, 1970.
[26] A. Conn, K. Scheinberg, and P. Toint, "On the convergence of derivative-free methods for unconstrained optimization," in Approximation Theory and Optimization: Tributes to M. J. D. Powell, pp. 83–108, 1997.
[27] P. Suganthan, N. Hansen, J. Liang et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Tech. Rep. 2005005, Nanyang Technological University, Singapore, 2005.
[28] A. Glankwahmdee, S. Judith, and L. Gary, "Unconstrained discrete nonlinear programming," Engineering Optimization, vol. 4, no. 2, pp. 95–107, 1979.
[29] S. S. Rao, Engineering Optimization: Theory and Practice, John Wiley & Sons, Hoboken, NJ, USA, 2009.
[30] G. Rudolph, "An evolutionary algorithm for integer programming," in Proceedings of the 3rd Conference on Parallel Problem Solving from Nature (PPSN '94), pp. 139–148, Jerusalem, Israel, October 1994.