
Hindawi Publishing Corporation

Journal of Applied Mathematics


Volume 2012, Article ID 860681, 15 pages
doi:10.1155/2012/860681

Research Article
An Elite Decision Making Harmony Search
Algorithm for Optimization Problem

Lipu Zhang,1 Yinghong Xu,2 and Yousong Liu3


1 Department of Mathematics, Zhejiang A&F University, Zhejiang 311300, China
2 Department of Mathematics, Zhejiang Sci-Tech University, Zhejiang 310018, China
3 State Key Laboratory of Software Engineering, Wuhan University, Hubei 430072, China

Correspondence should be addressed to Lipu Zhang, [email protected]

Received 5 April 2012; Revised 26 May 2012; Accepted 10 June 2012

Academic Editor: Ricardo Perera

Copyright © 2012 Lipu Zhang et al. This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.

This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following a probability rule. The generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including continuous-variable and integer-variable minimization problems from the literature. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants in finding solutions.

1. Introduction
In 2001, Geem et al. [1] proposed a new metaheuristic algorithm, the harmony search (HS) algorithm, which imitates the music improvisation process. In that algorithm, the harmony in music is analogous to the optimization solution vector, and the musicians' improvisations are analogous to local and global search schemes in optimization techniques. The HS algorithm does not require initial values for the decision variables. Furthermore, instead of a gradient search, the HS algorithm uses a stochastic random search based on the harmony memory considering rate and the pitch adjusting rate, so that derivative information is unnecessary. These features increase the flexibility of the HS algorithm and have led to its application to optimization problems in different areas, including music composition [2], Sudoku puzzle solving [3], structural design [4, 5], ecological conservation [6], and aquifer parameter identification [7]. The interested reader may refer to the review papers [8–10] and the references therein for further background.
The HS algorithm is good at identifying high-performance regions of the solution space in a reasonable time but has trouble performing local search in numerical applications. In order to improve the fine-tuning characteristic of the HS algorithm, Mahdavi et al. [11] discussed the impact of constant parameters on the HS algorithm and presented a new strategy for tuning these parameters. Wang and Huang [12] used the harmony memory (HM) set of solution vectors to automatically adjust parameter values. Fesanghary et al. [13] used a sequential quadratic programming technique to speed up local search and improve the precision of the HS algorithm's solutions. Omran and Mahdavi [14] proposed the so-called global-best HS algorithm, in which concepts from swarm intelligence are borrowed to enhance the performance of the HS algorithm, such that the new harmony can mimic the best harmony in the HM. Geem [15] proposed a stochastic derivative for discrete variables based on the HS algorithm to optimize problems with discrete variables and problems in which the mathematical derivative of the function cannot be obtained analytically. Pan et al. [16] used the good information captured in the current global best solution to generate new harmonies. Jaberipour and Khorram [17] described two HS algorithms based on a parameter-adjusting technique. Yadav et al. [18] designed an HS algorithm that maintains a proper balance between diversification and intensification throughout the search process by automatically selecting the proper pitch adjustment strategy based on its HM. Pan et al. [19] divided the whole HM into many small-sized sub-HMs and performed the evolution in each sub-HM independently, thus presenting a local-best harmony search algorithm with dynamic subpopulations. Later on, the mutation and crossover strategies used in [19] were adopted by Islam et al. [20] in designing a differential evolution algorithm that obtained excellent results for global numerical optimization.
In political science and sociology, a small minority elite always holds the most power in making decisions; this is elite decision making. One could imagine that the good information captured in the current elite harmonies can be well utilized to generate new harmonies. Thus, in our elite decision making HS (EDMHS) algorithm, the new harmony is randomly generated between the best and the second-best harmonies in the historic HM, following a probability rule. The generated harmony vector replaces the worst harmony in the HM only if its fitness, measured in terms of the objective function, is better than that of the worst harmony. These generating and updating procedures repeat until a near-optimal solution vector is obtained. To demonstrate the effectiveness and robustness of the proposed algorithm, various benchmark optimization problems are used, including continuous-variable and integer-variable minimization problems. Numerical results reveal that the proposed algorithm is very effective.
This paper is organized as follows. In Section 2, the general harmony search algorithm and its recently developed variants are reviewed. Section 3 introduces our method, which has the elite-decision-making property. Section 4 presents numerical results for some well-known benchmark problems. Finally, conclusions are given in the last section.

2. Harmony Search Algorithm


Throughout this paper, the optimization problem is specified as follows:

    \text{Minimize } f(x), \quad \text{subject to } x_i \in X_i, \; i = 1, 2, \ldots, N, \tag{2.1}



where f(x) is the objective function, x is the vector of decision variables x_i, N is the number of decision variables, and X_i is the range of possible values for each decision variable, that is, x_i^L \le x_i \le x_i^U, where x_i^L and x_i^U are the lower and upper bounds of each decision variable, respectively.

2.1. The General HS Algorithm


The general HS algorithm requires several parameters as follows:

HMS: harmony memory size,

HMCR: harmony memory considering rate,

PAR: pitch adjusting rate,

bw: bandwidth vector.

Remarks. HMCR, PAR, and bw are very important factors for the high efficiency of HS methods and are useful in adjusting the convergence rate of the algorithm toward optimal solutions. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm.
The procedure for harmony search consists of Steps 1–4, as follows.

Step 1. Create and randomly initialize an HM of size HMS. The HM matrix is initially filled with as many solution vectors as the HMS. Each component of a solution vector is generated using a uniformly distributed random number between the lower and upper bounds of the corresponding decision variable, [x_i^L, x_i^U], where i \in \{1, \ldots, N\}.
The HM with the size of HMS can be represented by the matrix

    \mathrm{HM} = \begin{pmatrix}
    x_1^1 & x_2^1 & \cdots & x_N^1 \\
    x_1^2 & x_2^2 & \cdots & x_N^2 \\
    \vdots & \vdots & \ddots & \vdots \\
    x_1^{\mathrm{HMS}} & x_2^{\mathrm{HMS}} & \cdots & x_N^{\mathrm{HMS}}
    \end{pmatrix}. \tag{2.2}

Step 2. Improvise a new harmony from the HM or from the entire possible range. After defining the HM, improvisation is performed by generating a new harmony vector x' = (x'_1, x'_2, \ldots, x'_N). Each component of the new harmony vector is generated according to

    x'_i \leftarrow \begin{cases}
    x'_i \in \mathrm{HM}(:, i) & \text{with probability HMCR}, \\
    x'_i \in X_i & \text{with probability } 1 - \mathrm{HMCR},
    \end{cases} \tag{2.3}

where HMCR is defined as the probability of selecting a component from the HM members, and 1 − HMCR is, therefore, the probability of generating a component randomly from the possible range of values. Every x'_i obtained from the HM is examined to determine whether it should be pitch adjusted. This operation uses the parameter PAR, the rate of pitch adjustment, as follows:

    x'_i \leftarrow \begin{cases}
    x'_i \pm \mathrm{rand}(0,1) \times bw & \text{with probability PAR}, \\
    x'_i & \text{with probability } 1 - \mathrm{PAR},
    \end{cases} \tag{2.4}

where rand(0,1) is a randomly generated number between 0 and 1.

Step 3. Update the HM. If the new harmony is better than the worst harmony in the HM,
include the new harmony into the HM and exclude the worst harmony from the HM.

Step 4. Repeat Steps 2 and 3 until the maximum number of searches is reached.
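The four steps above can be sketched as a short program. The following is a minimal Python sketch of the general HS loop (function and parameter names are our own illustrative choices, not the paper's; fixed PAR and bw values are used for brevity):

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   max_iter=5000):
    """Minimal sketch of the general HS algorithm (Steps 1-4).

    f      -- objective function to minimize
    bounds -- list of (lower, upper) pairs, one per decision variable
    """
    # Step 1: fill the harmony memory (HM) with HMS random solutions.
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fitness = [f(x) for x in hm]
    for _ in range(max_iter):
        # Step 2: improvise a new harmony, Eqs. (2.3)-(2.4).
        new = []
        for i, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:              # memory consideration
                xi = hm[random.randrange(hms)][i]
                if random.random() < par:           # pitch adjustment
                    xi += random.uniform(-1.0, 1.0) * bw
                    xi = min(max(xi, lo), hi)
            else:                                   # random selection
                xi = random.uniform(lo, hi)
            new.append(xi)
        # Step 3: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda k: fitness[k])
        fx = f(new)
        if fx < fitness[worst]:
            hm[worst], fitness[worst] = new, fx
    # Step 4 is the loop bound above; return the best harmony found.
    best = min(range(hms), key=lambda k: fitness[k])
    return hm[best], fitness[best]
```

On a simple 2-D sphere function, a few thousand improvisations of this sketch are typically enough to drive the best fitness close to zero.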

2.2. The Improved HS Algorithm


To improve the performance of the HS algorithm and eliminate the drawbacks associated with fixed values of PAR and bw, Mahdavi et al. [11] proposed an improved harmony search (IHS) algorithm that uses variable PAR and bw in the improvisation step. In their method, PAR and bw change dynamically with the generation number, as expressed below:

    \mathrm{PAR}(gn) = \mathrm{PAR}_{\min} + \frac{\mathrm{PAR}_{\max} - \mathrm{PAR}_{\min}}{\mathrm{MaxItr}} \times gn, \tag{2.5}

where PAR(gn) is the pitch adjusting rate for each generation, PAR_min is the minimum pitch adjusting rate, PAR_max is the maximum pitch adjusting rate, and MaxItr and gn are the maximum and current search numbers, respectively. We have

    bw(gn) = bw_{\max} \, e^{c \times gn}, \tag{2.6}

where

    c = \frac{\ln(bw_{\min}/bw_{\max})}{\mathrm{MaxItr}}. \tag{2.7}

Numerical results reveal that the HS algorithm with variable parameters can find better solutions than HS and other heuristic or deterministic methods and is a powerful search algorithm for various engineering optimization problems; see [11].
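The parameter schedules (2.5)–(2.7) are simple to compute directly. A small Python sketch (the function name is our own; the default values follow the parameter settings used later in Section 4):

```python
import math

def ihs_schedules(gn, max_itr, par_min=0.4, par_max=0.9,
                  bw_min=0.0001, bw_max=1.0):
    """Return (PAR, bw) for generation gn per Eqs. (2.5)-(2.7).

    PAR grows linearly from par_min to par_max, while bw decays
    exponentially from bw_max down to bw_min over max_itr searches.
    """
    par = par_min + (par_max - par_min) / max_itr * gn  # Eq. (2.5)
    c = math.log(bw_min / bw_max) / max_itr             # Eq. (2.7)
    bw = bw_max * math.exp(c * gn)                      # Eq. (2.6)
    return par, bw
```

At gn = 0 this returns (PAR_min, bw_max); at gn = MaxItr it returns (PAR_max, bw_min).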

2.3. Global Best Harmony Search (GHS) Algorithm


In 2008, Omran and Mahdavi [14] presented the GHS algorithm by modifying the pitch adjustment rule. Unlike the basic HS algorithm, the GHS algorithm generates a new harmony vector x' by making use of the best harmony vector x^{best} = (x_1^{best}, x_2^{best}, \ldots, x_n^{best}) in the HM. The pitch adjustment rule is given as follows:

    x'_j = x_k^{best}, \tag{2.8}

where k is a random integer between 1 and n. The performance of the GHS was investigated and compared with HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems.

2.4. A Self-Adaptive Global Best HS (SGHS) Algorithm


In 2010, Pan et al. [16] presented the SGHS algorithm for solving continuous optimization problems. In that algorithm, a new improvisation scheme is developed so that the good information captured in the current global best solution can be well utilized to generate new harmonies. The pitch adjustment rule is given as follows:

    x'_j = x_j^{best}, \tag{2.9}

where j = 1, \ldots, n. Numerical experiments based on benchmark problems showed that the proposed SGHS algorithm was more effective in finding better solutions than the existing HS, IHS, and GHS algorithms.

3. An Elite Decision Making HS Algorithm


The key differences between the proposed EDMHS algorithm and IHS, GHS, and SGHS lie in the way the new harmony is improvised.

3.1. EDMHS Algorithm for Continuous Design Variables Problems


The EDMHS has exactly the same steps as the IHS, with the exception that Step 3 is modified as follows.
In this step, a new harmony vector x' = (x'_1, x'_2, \ldots, x'_N)^T is generated from

    x'_i \leftarrow \begin{cases}
    x'_i \in [\mathrm{HM}(s, i), \mathrm{HM}(b, i)] & \text{with probability HMCR}, \\
    x'_i \in X_i & \text{with probability } 1 - \mathrm{HMCR},
    \end{cases} \tag{3.1}

where HM(s, i) and HM(b, i) are the ith elements of the second-best harmony and the best harmony, respectively.
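A sketch of this rule in Python (names such as hm_best and hm_second are our own; we read Eq. (3.1) as drawing each memory-considered component uniformly from the interval between the best and second-best values):

```python
import random

def edmhs_improvise(hm_best, hm_second, bounds, hmcr=0.9):
    """Sketch of the EDMHS improvisation rule, Eq. (3.1).

    With probability HMCR, each component is drawn uniformly from the
    interval spanned by the best and second-best harmonies; otherwise
    it is drawn from the variable's full range.
    """
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            a, b = sorted((hm_second[i], hm_best[i]))
            new.append(random.uniform(a, b))    # elite interval
        else:
            new.append(random.uniform(lo, hi))  # full range
    return new
```

For example, with best harmony (1, 1) and second-best (0, 0), every memory-considered component falls in [0, 1].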

3.2. EDMHS Algorithm for Integer Variables Problems


Many real-world applications require the variables to be integers. Methods developed for continuous variables can be used to solve such problems by rounding off the real optimum values to the nearest integers [14, 21]. However, in many cases, the rounding-off approach may result in an infeasible solution or a poor suboptimal solution and may omit alternative solutions.

In the EDMHS algorithm for integer programming, we generate an integer solution vector in the initialization step and the improvisation step; that is, each component of the new harmony vector is generated according to

    x'_i \leftarrow \begin{cases}
    x'_i \in \mathrm{round}([\mathrm{HM}(s, i), \mathrm{HM}(b, i)]) & \text{with probability HMCR}, \\
    x'_i \in X_i & \text{with probability } 1 - \mathrm{HMCR},
    \end{cases} \tag{3.2}

where round(*) denotes rounding off of *. The pitch adjustment is operated as follows:

    x'_i \leftarrow \begin{cases}
    x'_i \pm 1 & \text{with probability PAR}, \\
    x'_i & \text{with probability } 1 - \mathrm{PAR}.
    \end{cases} \tag{3.3}
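The integer rules (3.2)–(3.3) can be sketched the same way (illustrative names again; clamping the ±1 pitch adjustment back into the variable's range is our own assumption, since the paper does not spell this out):

```python
import random

def edmhs_improvise_int(hm_best, hm_second, bounds, hmcr=0.9, par=0.4):
    """Sketch of the integer EDMHS improvisation, Eqs. (3.2)-(3.3)."""
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            # Draw an integer between the rounded elite values, Eq. (3.2).
            a, b = sorted((round(hm_second[i]), round(hm_best[i])))
            xi = random.randint(a, b)
            if random.random() < par:          # pitch adjustment, Eq. (3.3)
                xi += random.choice((-1, 1))
        else:
            xi = random.randint(lo, hi)        # full integer range
        new.append(max(lo, min(hi, xi)))       # clamp (our assumption)
    return new
```

With best harmony (2, 2) and second-best (0, 0), every memory-considered component is an integer in {0, 1, 2} before pitch adjustment.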

4. Numerical Examples
This section examines the performance of the EDMHS algorithm on continuous- and integer-variable examples. Several examples taken from the optimization literature are used to show the validity and effectiveness of the proposed algorithm. The parameters for all the algorithms are given as follows: HMS = 20, HMCR = 0.90, PAR_min = 0.4, PAR_max = 0.9, bw_min = 0.0001, and bw_max = 1.0. During the run of the algorithm, PAR and bw are generated according to (2.5) and (2.6), respectively.

4.1. Some Simple Continuous Variables Examples


For the following five examples, we adopt the same variable ranges as presented in [4]. Each problem is run for 5 independent replications, and the mean fitness of the solutions for the four HS algorithm variants, IHS, GHS, SGHS, and EDMHS, is presented in the tables.

4.1.1. Rosenbrock Function


Consider the following:

    f(x) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2. \tag{4.1}

Due to the long, narrow, curved valley present in the function, the Rosenbrock function [4, 22] is probably the best-known test case. The minimum of the function is located at x* = (1.0, 1.0), with a corresponding objective function value of f(x*) = 0.0. The four algorithms were applied to the Rosenbrock function using bounds between −10.0 and 10.0 for the two design variables x_1 and x_2. After 50,000 searches, we arrived at Table 1.

Table 1: Four HS algorithms for the Rosenbrock function.

Variable   IHS                   GHS                   SGHS                  EDMHS
x_1        1.0000028617324386    0.9913653798835682    1.0000082201314386    0.9999992918896516
x_2        1.0000062226347253    0.9837656861940776    1.0000169034081148    0.9999985841159521
f(x)       0.0000000000331057    0.0001667876726056    0.0000000000890147    0.0000000000005014

Table 2: Four HS algorithms for Goldstein and Price function I.

Variable   IHS                    GHS                    SGHS                   EDMHS
x_1         0.0000043109765698    −0.0108343859912985    −0.0000010647548017    −0.0000022210968748
x_2        −0.9999978894568922    −1.0091267108154769    −1.0000037827893109    −1.0000008657021768
f(x)        3.0000000046422932     3.0447058568657721     3.0000000055974083     3.0000000011515664

4.1.2. Goldstein and Price Function I (with Four Local Minima)


Consider the following:

    f(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2 \right) \right]
           \times \left[ 30 + (2x_1 - 3x_2)^2 \left( 18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2 \right) \right]. \tag{4.2}

Goldstein and Price function I [4, 13, 23] is an eighth-order polynomial in two variables. The function has four local minima, one of which is global, as follows: f(1.2, 0.8) = 840.0, f(1.8, 0.2) = 84.0, f(−0.6, −0.4) = 30, and f*(0.0, −1.0) = 3.0 (the global minimum). In this example, the bounds for the two design variables x_1 and x_2 were set between −5.0 and 5.0. After 8000 searches, we arrived at Table 2.

4.1.3. Eason and Fenton’s Gear Train Inertia Function


Consider the following:

    f(x) = \frac{1}{10} \left( 12 + x_1^2 + \frac{1 + x_2^2}{x_1^2} + \frac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4} \right). \tag{4.3}

This function [4, 24] describes a minimization problem for the inertia of a gear train. The minimum of the function is located at x* = (1.7435, 2.0297), with a corresponding objective function value of f(x*) = 1.744152006740573. The four algorithms were applied to the gear train inertia function using bounds between 0.0 and 10.0 for the two design variables x_1 and x_2. After 800 searches, we arrived at Table 3.

Table 3: Four HS algorithms for Eason and Fenton's gear train inertia function.

Variable   IHS                   GHS                   SGHS                  EDMHS
x_1        1.7434541913586368    1.7131403370902785    1.7434648607226395    1.7434544417399731
x_2        2.0296978640691021    2.0700437540873073    2.0296831598594332    2.0296925490097708
f(x)       1.7441520055927637    1.7447448145676987    1.7441520056712445    1.7441520055905921

Table 4: Four HS algorithms for the Wood function.

Variable   IHS                   GHS                   SGHS                  EDMHS
x_1        0.9367413185752959    0.9993702652662329    0.9917327966129160    1.0001567183702584
x_2        0.8772781982936317    0.9987850979456709    0.9835814785067265    1.0003039053776117
x_3        1.0596918740170123    0.9993702652662329    1.0081526992384837    0.9998357549633209
x_4        1.1230215213184420    0.9987850979456709    1.0164353912102084    0.9996725376532794
f(x)       0.0136094062872233    0.0000602033138483    0.0002433431550602    0.0000001061706105

4.1.4. Wood Function


Consider the following:

    f(x) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2 + 90\,(x_4 - x_3^2)^2 + (1 - x_3)^2
           + 10.1 \left[ (x_2 - 1)^2 + (x_4 - 1)^2 \right] + 19.8\,(x_2 - 1)(x_4 - 1). \tag{4.4}

The Wood function [4, 25] is a fourth-degree polynomial that is a particularly good test of convergence criteria and simulates features of many physical problems quite well. The minimum of the function is obtained at x* = (1, 1, 1, 1)^T, and the corresponding objective function value is f(x*) = 0.0. When applying the four algorithms to the function, the four design variables, x_1, x_2, x_3, x_4, were initially structured with random values bounded between −5.0 and 5.0. After 70,000 searches, we arrived at Table 4.

4.1.5. Powell Quartic Function


Consider the following:

    f(x) = (x_1 + 10x_2)^2 + 5\,(x_3 - x_4)^2 + (x_2 - 2x_3)^4 + 10\,(x_1 - x_4)^4. \tag{4.5}

Because the second derivative of the Powell quartic function [4, 26] becomes singular at the minimum point, it is quite difficult to obtain the minimum solution, i.e., f*(0, 0, 0, 0) = 0.0, using gradient-based algorithms. When applying the EDMHS algorithm to the function, the four design variables, x_1, x_2, x_3, x_4, were initially structured with random values bounded between −5.0 and 5.0. After 50,000 searches, we arrived at Table 5.
It can be seen from Tables 1–5 that, compared with the IHS, GHS, and SGHS algorithms, the EDMHS produces much better results on these test functions. Figures 1–5 present

Table 5: Four HS algorithms for the Powell quartic function.

Variable   IHS                    GHS                    SGHS                   EDMHS
x_1        −0.0383028653671760    −0.0256621703960072     0.0334641210434073    −0.0232662093056917
x_2         0.0038093414837046     0.0023707007810820    −0.0033373644857512     0.0023226342970439
x_3        −0.0195750968208506    −0.0199247989791340     0.0159748222727847    −0.0107227792768697
x_4        −0.0195676609811871    −0.0199247989791340     0.0160018633328343    −0.0107574107951817
f(x)        0.0000046821615160     0.0000070109937353     0.0000024921236096     0.0000005715572753

Figure 1: Convergence of Rosenbrock function. [Plot: objective value (log scale) versus number of iterations, up to 5 × 10^4, for IHS, GHS, SGHS, and EDMHS.]

Figure 2: Convergence of Goldstein and Price function I. [Plot: objective value (log scale) versus number of iterations, up to 8000, for IHS, GHS, SGHS, and EDMHS.]



Figure 3: Convergence of Eason and Fenton function. [Plot: objective value (log scale) versus number of iterations, up to 800, for IHS, GHS, SGHS, and EDMHS.]

Figure 4: Convergence of Wood function. [Plot: objective value (log scale) versus number of iterations, up to 7 × 10^4, for IHS, GHS, SGHS, and EDMHS.]

a typical solution history along the iterations for each of the five functions. It can be observed that the evolution curves of the EDMHS algorithm reach a lower level than those of the other compared algorithms. Thus, it can be concluded that, overall, the EDMHS algorithm outperforms the other methods on the above examples.

Figure 5: Convergence of Powell quartic function. [Plot: objective value (log scale) versus number of iterations, up to 5 × 10^4, for IHS, GHS, SGHS, and EDMHS.]

4.2. More Benchmark Problems with 30 Dimensions


To test the performance of the proposed EDMHS algorithm more extensively, we proceed to evaluate and compare the IHS, GHS, SGHS, and EDMHS algorithms on the following 6 benchmark optimization problems listed in CEC2005 [27], with 30 dimensions.

(1) Sphere function:

    f(x) = \sum_{i=1}^{n} x_i^2, \tag{4.6}

where the global optimum is x* = 0 and f(x*) = 0, for −100 ≤ x_i ≤ 100.


(2) Schwefel problem:

    f(x) = -\sum_{i=1}^{n} x_i \sin\left( \sqrt{|x_i|} \right), \tag{4.7}

where the global optimum is x* = (420.9687, \ldots, 420.9687) and f(x*) = −12569.5, for −500 ≤ x_i ≤ 500.
(3) Griewank function:

    f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1, \tag{4.8}

where the global optimum is x* = 0 and f(x*) = 0, for −600 ≤ x_i ≤ 600.



Table 6: AE and SD generated by the compared algorithms.

Problem      IHS AE     IHS SD     GHS AE     GHS SD     SGHS AE    SGHS SD    EDMHS AE   EDMHS SD
Sphere       1.27e−07   1.79e−07   1.41e−02   2.36e−02   5.30e−07   1.27e−06   1.07e−07   1.31e−07
Schwefel     4.83e−01   7.06e−01   2.11e−02   3.01e−02   7.70e−01   1.15e+00   7.03e−01   1.61e+00
Griewank     1.18e−01   1.87e−01   8.83e−02   1.57e−01   8.02e−03   1.05e−02   1.02e−02   1.51e−02
Rastrigin    9.72e−01   1.18e+00   1.09e−02   2.05e−02   1.12e+00   1.43e+00   1.48e+00   1.93e+00
Ackley       5.11e−01   6.06e−01   2.05e−02   2.83e−02   2.13e−01   2.98e−01   3.34e−01   3.85e−01
Rosenbrock   3.37e+01   4.08e+01   6.77e+01   8.97e+01   3.46e+01   3.80e+01   3.17e+01   4.02e+01

(4) Rastrigin function:

    f(x) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right), \tag{4.9}

where the global optimum is x* = 0 and f(x*) = 0, for −5.12 ≤ x_i ≤ 5.12.


(5) Ackley's function:

    f(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{30} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{30} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e, \tag{4.10}

where the global optimum is x* = 0 and f(x*) = 0, for −32 ≤ x_i ≤ 32.


(6) Rosenbrock's function:

    f(x) = \sum_{i=1}^{n-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \tag{4.11}

where the global optimum is x* = (1, \ldots, 1) and f(x*) = 0, for −5 ≤ x_i ≤ 10.
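The six benchmarks (4.6)–(4.11) translate directly into code. A Python transcription (the 1/30 factors in Ackley's function are written as 1/n so the definitions work in any dimension; math.prod requires Python 3.8+):

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def schwefel(x):
    return -sum(v * math.sin(math.sqrt(abs(v))) for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / n)
            + 20.0 + math.e)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```

Evaluating each function at its stated optimum reproduces the stated value, e.g. rosenbrock([1.0] * 30) == 0.0 and schwefel([420.9687] * 30) ≈ −12569.5.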

The parameters for the IHS algorithm are HMS = 5, HMCR = 0.9, bw_max = (x_j^U − x_j^L)/20, bw_min = 0.0001, PAR_min = 0.01, and PAR_max = 0.99; for the GHS algorithm, HMS = 5, HMCR = 0.9, PAR_min = 0.01, and PAR_max = 0.99.
Table 6 presents the average error (AE) values and standard deviations (SD) over 30 runs of the compared HS algorithms on the 6 test functions with dimension equal to 30.

4.3. Integer Variables Examples


Six commonly used integer programming benchmark problems are chosen to investigate the performance of the integer EDMHS algorithm. For all the examples, the design variables x_i, i = 1, \ldots, N, are initially structured with random integer values bounded between −100 and 100. Each problem is run for 5 independent replications, each with approximately 800 searches, and the optimal solution vectors are obtained in all cases.

4.3.1. Test Problem 1


Consider the following:

    f_1(x) = \left( 9x_1^2 + 2x_2^2 - 11 \right)^2 + \left( 3x_1^2 + 4x_2^2 - 7 \right)^2, \tag{4.12}

where x* = (1, 1)^T and f_1(x*) = 0; see [14, 21, 28].

4.3.2. Test Problem 2


Consider the following:

    f_2(x) = (x_1 + 10x_2)^2 + 5\,(x_3 - x_4)^2 + (x_2 - x_3)^4 + 10\,(x_3 - x_4)^4, \tag{4.13}

where x* = (0, 0, 0, 0)^T and f_2(x*) = 0; see [14, 21, 28].

4.3.3. Test Problem 3


Consider the following:

    f_3(x) = 2x_1^2 + 3x_2^2 + 4x_1 x_2 - 6x_1 - 3x_2, \tag{4.14}

where

    x_1^* = (4, -2)^T, \quad x_2^* = (3, -2)^T, \quad x_3^* = (2, -1)^T \tag{4.15}

and f_3(x*) = −6; see [14, 21, 29].

4.3.4. Test Problem 4


Consider the following:

    f_4(x) = x^T x, \tag{4.16}

where x* = (0, 0, 0, 0, 0)^T and f_4(x*) = 0; see [14, 21, 30].



4.3.5. Test Problem 5


Consider the following:

    f_5(x) = -(15, 27, 36, 18, 12)\,x + x^T
    \begin{pmatrix}
     35 & -20 & -10 &  32 & -10 \\
    -20 &  40 &  -6 & -31 &  32 \\
    -10 &  -6 &  11 &  -6 & -10 \\
     32 & -31 &  -6 &  38 & -20 \\
    -10 &  32 & -10 & -20 &  31
    \end{pmatrix} x, \tag{4.17}

where x* = (0, 11, 22, 16, 6)^T and x* = (0, 12, 23, 17, 6)^T, with f_5(x*) = −737; see [21, 28].

4.3.6. Test Problem 6


Consider the following:

    f_6(x) = -3803.84 - 138.08x_1 - 232.92x_2 + 123.08x_1^2 + 203.64x_2^2 + 182.25x_1 x_2, \tag{4.18}

where x* = (0, 1)^T and f_6(x*) = −3833.12; see [21, 28].
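The six test problems above transcribe directly into code. A Python sketch of Eqs. (4.12)–(4.18) (A5 and c5 are our own names for the matrix and linear-term vector of Eq. (4.17); coefficients are copied from the equations above):

```python
# Matrix and linear-term vector of Eq. (4.17).
A5 = [[ 35, -20, -10,  32, -10],
      [-20,  40,  -6, -31,  32],
      [-10,  -6,  11,  -6, -10],
      [ 32, -31,  -6,  38, -20],
      [-10,  32, -10, -20,  31]]
c5 = [15, 27, 36, 18, 12]

def f1(x):  # Eq. (4.12)
    return (9 * x[0]**2 + 2 * x[1]**2 - 11)**2 + (3 * x[0]**2 + 4 * x[1]**2 - 7)**2

def f2(x):  # Eq. (4.13)
    return ((x[0] + 10 * x[1])**2 + 5 * (x[2] - x[3])**2
            + (x[1] - x[2])**4 + 10 * (x[2] - x[3])**4)

def f3(x):  # Eq. (4.14)
    return 2 * x[0]**2 + 3 * x[1]**2 + 4 * x[0] * x[1] - 6 * x[0] - 3 * x[1]

def f4(x):  # Eq. (4.16)
    return sum(v * v for v in x)

def f5(x):  # Eq. (4.17): -c5.x + x^T A5 x
    lin = -sum(ci * xi for ci, xi in zip(c5, x))
    quad = sum(x[i] * A5[i][j] * x[j] for i in range(5) for j in range(5))
    return lin + quad

def f6(x):  # Eq. (4.18)
    return (-3803.84 - 138.08 * x[0] - 232.92 * x[1]
            + 123.08 * x[0]**2 + 203.64 * x[1]**2 + 182.25 * x[0] * x[1])
```

For instance, f1((1, 1)) returns 0, and f5 evaluates to −737 at either minimizer listed above.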

5. Conclusion
This paper presented the EDMHS algorithm for solving continuous and integer optimization problems. The proposed EDMHS algorithm applies a newly designed scheme to generate candidate solutions so as to benefit from the good information inherent in the best and second-best solutions in the historic HM.
Further work is still needed to investigate the effectiveness of EDMHS and to adapt this strategy to real-world optimization problems.

Acknowledgments
This research is supported by a grant from the National Natural Science Foundation of China (no. 11171373) and a grant from the Natural Science Foundation of Zhejiang Province (no. LQ12A01024).

References
[1] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, no. 2, pp. 60–68, 2001.
[2] Z. W. Geem and J. Y. Choi, "Music composition using harmony search algorithm," in Proceedings of the Applications of Evolutionary Computing, pp. 593–600, April 2007.
[3] Z. Geem, "Harmony search algorithm for solving sudoku," in Knowledge-Based Intelligent Information and Engineering Systems, pp. 371–378, Springer.
[4] K. S. Lee and Z. W. Geem, "A new structural optimization method based on the harmony search algorithm," Computers and Structures, vol. 82, no. 9-10, pp. 781–798, 2004.
[5] M. P. Saka, "Optimum geometry design of geodesic domes using harmony search algorithm," Advances in Structural Engineering, vol. 10, no. 6, pp. 595–606, 2007.
[6] Z. Geem and J. Williams, "Ecological optimization using harmony search," in Proceedings of the American Conference on Applied Mathematics, pp. 24–26, 2008.
[7] M. T. Ayvaz, "Simultaneous determination of aquifer parameters and zone structures with fuzzy c-means clustering and meta-heuristic harmony search algorithm," Advances in Water Resources, vol. 30, no. 11, pp. 2326–2338, 2007.
[8] Z. W. Geem, "Harmony search applications in industry," Soft Computing Applications in Industry, vol. 226, pp. 117–134, 2008.
[9] Z. Geem, Music-Inspired Harmony Search Algorithm: Theory and Applications, vol. 191, Springer, 2009.
[10] G. Ingram and T. Zhang, "Overview of applications and developments in the harmony search algorithm," Music-Inspired Harmony Search Algorithm, vol. 191, pp. 15–37, 2009.
[11] M. Mahdavi, M. Fesanghary, and E. Damangir, "An improved harmony search algorithm for solving optimization problems," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, 2007.
[12] C. M. Wang and Y. F. Huang, "Self-adaptive harmony search algorithm for optimization," Expert Systems with Applications, vol. 37, no. 4, pp. 2826–2837, 2010.
[13] M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, and Y. Alizadeh, "Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems," Computer Methods in Applied Mechanics and Engineering, vol. 197, no. 33-40, pp. 3080–3091, 2008.
[14] M. G. H. Omran and M. Mahdavi, "Global-best harmony search," Applied Mathematics and Computation, vol. 198, no. 2, pp. 643–656, 2008.
[15] Z. W. Geem, "Novel derivative of harmony search algorithm for discrete design variables," Applied Mathematics and Computation, vol. 199, no. 1, pp. 223–230, 2008.
[16] Q.-K. Pan, P. N. Suganthan, M. F. Tasgetiren, and J. J. Liang, "A self-adaptive global best harmony search algorithm for continuous optimization problems," Applied Mathematics and Computation, vol. 216, no. 3, pp. 830–848, 2010.
[17] M. Jaberipour and E. Khorram, "Two improved harmony search algorithms for solving engineering optimization problems," Communications in Nonlinear Science and Numerical Simulation, vol. 15, pp. 3316–3331, 2010.
[18] P. Yadav, R. Kumar, S. Panda, and C. Chang, "An intelligent tuned harmony search algorithm for optimisation," Information Sciences, vol. 196, pp. 47–72, 2012.
[19] Q. Pan, P. Suganthan, J. Liang, and M. Tasgetiren, "A local-best harmony search algorithm with dynamic subpopulations," Engineering Optimization, vol. 42, pp. 101–117, 2010.
[20] S. Islam, S. Das, S. Ghosh, S. Roy, and P. Suganthan, "An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, vol. 42, no. 2, pp. 482–500, 2012.
[21] E. Laskari, K. Parsopoulos, and M. Vrahatis, "Particle swarm optimization for integer programming," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, pp. 1582–1587.
[22] H. H. Rosenbrock, "An automatic method for finding the greatest or least value of a function," The Computer Journal, vol. 3, pp. 175–184, 1960.
[23] A. A. Goldstein and J. F. Price, "On descent from local minima," Mathematics of Computation, vol. 25, pp. 569–574, 1971.
[24] E. D. Eason and R. G. Fenton, "A comparison of numerical optimization methods for engineering design," Journal of Engineering for Industry, vol. 96, no. 1, pp. 196–200, 1974.
[25] A. Colville, A Comparative Study on Nonlinear Programming Codes, IBM Corporation, Philadelphia Scientific Center, 1970.
[26] A. Conn, K. Scheinberg, and P. Toint, "On the convergence of derivative-free methods for unconstrained optimization," in Approximation Theory and Optimization: Tributes to M. J. D. Powell, pp. 83–108, 1997.
[27] P. Suganthan, N. Hansen, J. Liang et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Tech. Rep. 2005005, Nanyang Technological University, Singapore, 2005.
[28] A. Glankwahmdee, S. Judith, and L. Gary, "Unconstrained discrete nonlinear programming," Engineering Optimization, vol. 4, no. 2, pp. 95–107, 1979.
[29] S. S. Rao, Engineering Optimization: Theory and Practice, John Wiley & Sons, Hoboken, NJ, USA, 2009.
[30] G. Rudolph, "An evolutionary algorithm for integer programming," in Proceedings of the 3rd Conference on Parallel Problem Solving from Nature (PPSN '94), pp. 139–148, Jerusalem, Israel, October 1994.
