A Modified Augmented Lagrangian With Improved Grey Wolf Optimization To Constrained Optimization Problems
ORIGINAL ARTICLE
Received: 30 November 2015 / Accepted: 17 May 2016 / Published online: 28 May 2016
© The Natural Computing Applications Forum 2016
Abstract  This paper presents a novel constrained optimization algorithm named MAL-IGWO, which integrates the capability of the improved grey wolf optimization (IGWO) for discovering the global optimum with the modified augmented Lagrangian (MAL) multiplier method for handling constraints. In the proposed MAL-IGWO algorithm, the MAL method effectively converts a constrained problem into an unconstrained problem, and the IGWO algorithm is applied to deal with the unconstrained problem. The algorithm is tested on 24 well-known benchmark problems and 3 engineering applications and compared with other state-of-the-art algorithms. Experimental results demonstrate that the proposed algorithm shows better performance in comparison to other approaches.

Keywords  Evolutionary computation-based algorithm · Constrained optimization problems · Augmented Lagrangian function · Grey wolf optimization

Correspondence: Ximing Liang, [email protected]
1 Key Laboratory of Economics System Simulation, Guizhou University of Finance and Economics, Guizhou, China
2 School of Science, Beijing University of Civil Engineering and Architecture, Beijing, China

1 Introduction

Constrained optimization problems are inevitable in many real-world applications such as engineering design, structural optimization, economics, and allocation and location problems. A general constrained optimization problem is formulated as:

\min f(\vec{x})    (1)
\text{s.t.}\; g_j(\vec{x}) = 0, \quad j = 1, 2, \ldots, p    (2)
g_j(\vec{x}) \le 0, \quad j = p+1, p+2, \ldots, m    (3)
l_i \le x_i \le u_i, \quad i = 1, 2, \ldots, n    (4)

where f(\vec{x}) is a nonlinear objective function minimized with respect to the design variables \vec{x} = (x_1, x_2, \ldots, x_n), g_j(\vec{x}) = 0 are the nonlinear equality constraints, g_j(\vec{x}) \le 0 are the inequality constraints, and l_i and u_i are the lower and upper bounds of x_i, respectively.

Evolutionary computation-based algorithms have many advantages over conventional nonlinear programming techniques: the gradients of the cost function and constraint functions are not required, performance is reliable and robust, implementation is easy, and the chance of being trapped in a local minimum is lower. Due to these advantages, evolutionary computation-based algorithms have recently been applied successfully and broadly to constrained optimization problems. Ali and Zhu [1] presented a novel differential evolution-based algorithm for solving constrained global optimization problems; the adaptive nature of its penalty function makes the results largely insensitive to low values of the penalty parameter. Rao et al. [2] proposed an efficient teaching–learning-based optimization (TLBO) algorithm for constrained mechanical design optimization problems. The TLBO algorithm works on the effect of the influence of a teacher on learners. The process of TLBO is divided into two parts: the first part consists of the "Teacher Phase" and the second part consists of the "Learner Phase"; "Teacher Phase" means learning from the teacher, and "Learner Phase" means learning by the interaction between learners. Deb and Srivastava [3] presented a new genetic algorithm-based
augmented Lagrangian algorithm for solving constrained optimization problems. In this algorithm, a classical optimization method is used to improve the solution obtained by the genetic algorithm; in addition, an effective strategy is proposed to update critical parameters in an adaptive manner based on population statistics. Long et al. [4] proposed a new hybrid algorithm for solving constrained numerical and engineering optimization problems. The basic steps of that algorithm comprise an outer iteration, in which the Lagrangian multipliers and various penalty parameters are updated using a first-order update scheme, and an inner iteration, in which a nonlinear optimization of the modified augmented Lagrangian function with simple bound constraints is carried out by a modified differential evolution algorithm. Wang et al. [5] developed an adaptive trade-off model (ATM) for solving constrained optimization problems. The ATM designs different trade-off schemes during different stages of the search process to obtain an appropriate trade-off between the objective function and constraint violations; a simple evolutionary strategy (ES) is used as the search engine. Tuba and Bacanin [6] presented a novel improved seeker optimization algorithm hybridized with the firefly algorithm for constrained optimization problems. In this method, a modified seeker optimization algorithm is introduced to control the exploitation/exploration balance and is hybridized with elements of the firefly algorithm that improve its exploitation capabilities. Gandomi et al. [7] carried out extensive global optimization of constrained problems using the eagle strategy in combination with efficient differential evolution; their algorithm was applied to thirteen classical constrained numerical optimization problems and three constrained engineering optimization problems reported in the literature. Long et al. [8] developed an effective hybrid cuckoo search algorithm based on the Solis and Wets local search method for constrained global optimization problems, relying on a modified augmented Lagrangian function for constraint handling. Brajevic [9] described a novel artificial bee colony (ABC) algorithm for constrained optimization problems in which two different modified ABC search operators are used in the employed and onlooker phases and a crossover operator is used in the scout phase instead of random search; in addition, a dynamic tolerance for handling equality constraints and an improved boundary constraint-handling method are employed. Sadollah et al. [10] proposed a novel modified water cycle algorithm (WCA) based on evaporation rate for solving constrained optimization problems; a new concept of evaporation rate for different rivers and streams is defined (the so-called evaporation-rate-based WCA), which improves the search. Wang and Cai [11] developed a novel method, called CMODE, which combines multi-objective optimization with differential evolution to deal with constrained optimization problems. In the CMODE algorithm, differential evolution serves as the search engine, and the comparison of individuals is based on multi-objective optimization techniques; in addition, a novel infeasible-solution replacement mechanism based on multi-objective optimization is proposed, with the purpose of guiding the population towards promising solutions and the feasible region simultaneously. Niu et al. [12] presented an improved bacterial foraging optimization (BFO) algorithm for solving constrained optimization problems; to further improve the performance of the original BFO, they proposed two modified BFOs, i.e. BFO with a linearly decreasing chemotaxis step and BFO with a nonlinearly decreasing chemotaxis step.

Grey wolf optimization (GWO) is a new evolutionary computation-based method developed by Mirjalili et al. [13] in 2014. This algorithm mimics the leadership hierarchy and hunting behaviour of grey wolves in nature. Preliminary studies show that the GWO algorithm is very promising and can outperform existing population-based algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and the gravitational search algorithm (GSA) [13]. Like other population-based algorithms, GWO does not require the gradient of the function in its optimization process. The most important advantages of the GWO algorithm, compared to other population-based algorithms, are that it is easy to implement and has few parameters to adjust. Due to its simplicity and ease of implementation, GWO has attracted much attention and has been applied to many numerical and practical optimization problems since its invention, such as global optimization problems [14, 15], feature selection [16], the unit commitment problem [17], the combined heat and power dispatch problem [18], multi-input multi-output system optimization [19], the flow shop scheduling problem [20], parameter estimation in surface waves [21], training multi-layer perceptrons [22], optimal control of DC motors [23], optimal power flow [24], the non-convex economic load dispatch problem [25], and the reactive power dispatch problem [26].

Mirjalili et al. [13] established a performance comparison of four evolutionary computation-based algorithms on benchmark test functions: grey wolf optimization (GWO), differential evolution (DE), particle swarm optimization (PSO), and the gravitational search algorithm (GSA). The overall results indicate that GWO is the most competitive among the compared algorithms for this set of test functions. To our limited knowledge, the present GWO and its variants with the augmented Lagrangian
multiplier method have not yet been applied to constrained global optimization problems. In this paper, a novel constrained optimization algorithm based on the MAL method and an improved GWO algorithm is proposed to solve constrained optimization problems. This paper presents the first effort to integrate the GWO algorithm with the augmented Lagrangian method. Furthermore, the performance of the proposed algorithm is investigated on 24 well-known constrained optimization test problems and 3 engineering optimization problems and compared with other state-of-the-art evolutionary computation-based algorithms.

This paper is organized as follows. In Sect. 2, the grey wolf optimization algorithm is briefly introduced. The proposed algorithm (MAL-IGWO) is described in Sect. 3. In Sect. 4, the experimental results and the analysis of the proposed algorithm are provided. Finally, we end the paper with some conclusions and comments for further research in Sect. 5.

2 Classical grey wolf optimization algorithms

2.1 Social hierarchy of grey wolves

The grey wolf optimization algorithm is a novel meta-heuristic approach inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature [13]. Grey wolves mostly prefer to live in a pack, whose size is 5–12 on average. Of particular interest is that they have a very strict social dominance hierarchy, as shown in Fig. 1.

The leader of a grey wolf pack is called the Alpha, which is mostly responsible for making decisions about hunting, sleeping place, time to wake, and so on [27]. The second level in the hierarchy is the Beta, which helps the Alpha in decision-making and other pack activities; the Beta is also the best candidate in case the Alpha passes away or grows old. The lowest-ranking grey wolf is the Omega, which plays the role of scapegoat; the Omega helps satisfy the whole group and maintain its dominance structure. The third level is the Delta, which must submit to the Alpha and Beta but dominates the Omega; Deltas scout to protect and guarantee the safety of the group.

[Fig. 1 Hierarchy of grey wolves (dominance decreases from top to bottom: α, β, δ, ω)]

2.2 Hunting behaviour and mathematical model

The social hierarchy of grey wolves is a special characteristic, and group hunting is another interesting social behaviour of grey wolves. Grey wolf hunting has three main phases [28]: first, the wolves track, chase, and approach the prey; second, they pursue, encircle, and harass the prey until it stops moving; finally, they attack the prey.

Mathematical models of the social hierarchy, tracking, encircling, and attacking of prey are provided below. In order to mathematically model the encircling behaviour, the following equations are put forward [13]:

\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|    (5)
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}    (6)

where \vec{X} indicates the position vector of a grey wolf, \vec{X}_p is the position vector of the prey, \vec{A} and \vec{C} are coefficient vectors, and t indicates the current iteration. The coefficient vectors \vec{A} and \vec{C} are calculated as follows:

\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}    (7)
\vec{C} = 2\vec{r}_2    (8)

where \vec{r}_1 and \vec{r}_2 are random vectors in [0, 1].

Grey wolves first find the position of the prey and then encircle it. However, the position of the optimal prey is unknown in a search space. In order to mathematically simulate the hunting behaviour of grey wolves, we suppose that the Alpha (the best candidate solution), Beta, and Delta have better knowledge about the potential location of the prey. Therefore, the first three best solutions obtained so far are stored, and the other members of the pack must update their positions in light of these three best solutions. Such behaviour can be formulated as follows [13]:

\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|    (9)
\vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|    (10)
\vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|    (11)
\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha    (12)
\vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta    (13)
\vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta    (14)
\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}    (15)

As mentioned above, the grey wolves finish the hunt by attacking the prey when it stops moving.
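Concretely, Eqs. (7)–(15) amount to the following per-iteration position update. This is a minimal Python/NumPy sketch with illustrative names, not code from the paper:

```python
import numpy as np

def gwo_position_update(X, X_alpha, X_beta, X_delta, a):
    """One position update of the pack, following Eqs. (5)-(15).

    X is an (N, n) array of wolf positions; X_alpha, X_beta, X_delta
    are the three best solutions found so far; a is the current value
    of the control parameter. Names are illustrative only.
    """
    N, n = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        candidates = []
        for leader in (X_alpha, X_beta, X_delta):
            r1, r2 = np.random.rand(n), np.random.rand(n)
            A = 2.0 * a * r1 - a                 # Eq. (7)
            C = 2.0 * r2                         # Eq. (8)
            D = np.abs(C * leader - X[i])        # Eqs. (9)-(11)
            candidates.append(leader - A * D)    # Eqs. (12)-(14)
        X_new[i] = np.mean(candidates, axis=0)   # Eq. (15)
    return X_new
```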
In order to mathematically model approaching the prey, we decrease the value of \vec{a}:

\vec{a}(t) = 2 - \frac{2t}{Max\_iter}    (16)

where t is the current iteration number and Max_iter is the maximum number of iterations. Therefore, \vec{a}(t) is decreased linearly from 2 to 0 over the course of the iterations.

2.3 Grey wolf optimization algorithm

In the grey wolf optimization algorithm, we consider the fittest solution to be the Alpha; consequently, the second and third best solutions are named Beta and Delta, respectively. The rest of the candidate solutions are assumed to be Omegas. To sum up, the search process starts by creating a random population of grey wolves (candidate solutions). Over the course of the iterations, the Alpha, Beta, and Delta wolves estimate the probable position of the prey, and each candidate solution updates its distance from the prey. The parameter \vec{a} is decreased from 2 to 0 in order to emphasize exploration and exploitation, respectively. Candidate solutions tend to diverge from the prey when |\vec{A}| > 1 and converge towards the prey when |\vec{A}| < 1. The pseudo code of the classical grey wolf optimization algorithm is presented in Fig. 2.
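The loop that Fig. 2 summarizes can be sketched as follows; this is a minimal, illustrative Python version reusing gwo_position_update() from the previous sketch, with the population size and iteration budget taken from the settings later reported in Sect. 4.1 (all other names are assumptions):

```python
import numpy as np

def gwo_minimize(f, lb, ub, n_wolves=60, max_iter=1000):
    """Minimal sketch of the classical GWO loop of Fig. 2.

    f maps an n-vector to a scalar; lb and ub are bound vectors.
    """
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + np.random.rand(n_wolves, lb.size) * (ub - lb)  # random pack
    for t in range(max_iter):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]     # three leaders
        a = 2.0 - 2.0 * t / max_iter                        # Eq. (16)
        X = np.clip(gwo_position_update(X, alpha, beta, delta, a), lb, ub)
    fitness = np.apply_along_axis(f, 1, X)
    i = int(np.argmin(fitness))
    return X[i], float(fitness[i])
```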
3 The proposed MAL-IGWO algorithm

3.1 Modified augmented Lagrangian method

Evolutionary computation-based algorithms, in their original form, have no explicit mechanism for solving constrained optimization problems. As a result, a variety of evolutionary computation-based constraint-handling techniques have been developed for constrained optimization problems, which can be grouped as follows [29, 30]: (1) methods based on penalty functions; (2) methods based on preserving the feasibility of solutions; (3) methods based on the superiority of feasible solutions over infeasible solutions; and (4) other methods.

Penalty function methods are the most common constraint-handling technique. The augmented Lagrangian multiplier method is an interesting penalty approach that avoids the side effects associated with the ill-conditioning of simpler penalty and barrier functions [31]. Therefore, this paper uses the modified augmented Lagrangian method of [32] to deal with constraints.

If the simple bounds (4) are not present, one can use the modified augmented Lagrangian multiplier method to solve problem (1)–(3). For a given Lagrangian multiplier vector \lambda_k and penalty parameter vector \sigma_k, the unconstrained penalty sub-problem at the k-th step of this method is

\min\; P(x, \lambda_k, \sigma_k)    (17)

where P(x, \lambda, \sigma) is the following modified augmented Lagrangian function:

P(x, \lambda, \sigma) = f(x) - \sum_{j=1}^{p} \left[ \lambda_j g_j(x) - \frac{1}{2}\sigma_j (g_j(x))^2 \right] + \sum_{j=p+1}^{m} \tilde{P}_j(x, \lambda, \sigma)    (18)

with the terms \tilde{P}_j(x, \lambda, \sigma) handling the inequality constraints as defined in [32]. When the simple bounds (4) are present, minimizing (18) subject to those bounds gives the bound-constrained sub-problem (20) that is solved at each outer step over
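To make Eq. (18) concrete, here is a sketch of how the penalty function could be evaluated. The equality part transcribes Eq. (18) directly; since the paper defers the definition of the inequality terms P̃_j to [32], the sketch substitutes the standard Rockafellar augmented-Lagrangian term for g_j(x) ≤ 0 constraints, which should be read as an assumption rather than the authors' exact formula:

```python
def modified_augmented_lagrangian(f, eq_cons, ineq_cons, lam, sigma):
    """Build P(x, lambda, sigma) per Eq. (18).

    eq_cons / ineq_cons are lists of functions g_j with g_j(x) = 0 and
    g_j(x) <= 0 respectively; lam and sigma are flat vectors indexed
    first by equalities, then by inequalities.
    """
    def P(x):
        val = f(x)
        k = 0
        for g in eq_cons:                     # equality part of Eq. (18)
            gx = g(x)
            val -= lam[k] * gx - 0.5 * sigma[k] * gx ** 2
            k += 1
        for g in ineq_cons:                   # assumed form of P~_j
            gx, l, s = g(x), lam[k], sigma[k]
            if gx >= -l / s:                  # constraint active or violated
                val += l * gx + 0.5 * s * gx ** 2
            else:                             # far inside feasible region
                val -= l ** 2 / (2.0 * s)
            k += 1
        return val
    return P
```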
the variables within their bounds (4). The solution x* of sub-problem (20) can be obtained by searching the search space if \lambda^* is known and \sigma is large enough. We choose the improved grey wolf optimization algorithm for the global search in (20).

3.2 Improved grey wolf optimization algorithm

Population-based stochastic algorithms must strike a good balance between exploration and exploitation to achieve both efficient global and local searches. Exploration means finding new regions of the search space, where population diversity plays an important role. Exploitation, which aims at high-precision solutions, refers to the algorithm's ability to use the information it has already collected. In the classical GWO algorithm, exploration and exploitation are governed by the adaptive values of the parameters \vec{a} and \vec{A}. A larger \vec{a} value facilitates the global search, while a smaller \vec{a} value facilitates the local search, so a suitable schedule for \vec{a} can balance the global and local exploration abilities. However, according to Eq. (16), the value of \vec{a} decreases linearly from 2 to 0 over the course of the iterations.

Since the search process of GWO is nonlinear and highly complicated, a linearly decreasing parameter \vec{a} cannot truly reflect the actual search process. As a result, in order to balance the exploration and exploitation abilities of the GWO algorithm, this paper presents an improved GWO algorithm in which the value of the parameter \vec{a} is adjusted according to the following equation:

\vec{a}(t) = \frac{1 - (iter/iter_{max})}{1 - \mu\,(iter/iter_{max})}    (21)

where iter is the current iteration number, iter_max is the maximum number of iterations, and \mu \in (0, 3) is the nonlinear modulation index. According to Eq. (21), the value of the parameter \vec{a} varies nonlinearly with time over the course of the iterations. In this paper, \mu is set to 1.1. We apply Eq. (21) instead of Eq. (16) of the classical GWO algorithm.
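For reference, here are the two schedules side by side as a small sketch. The printed form of Eq. (21) is only partially legible in the source, so the nonlinear variant below, including the defensive pole guard and clipping to the usual [0, 2] range, is a best-effort reading rather than a verified reproduction:

```python
def a_linear(t, t_max):
    """Eq. (16): a decreases linearly from 2 to 0."""
    return 2.0 - 2.0 * t / t_max

def a_nonlinear(t, t_max, mu=1.1):
    """Eq. (21) as reconstructed: a(t) = (1 - r) / (1 - mu * r), r = t/t_max."""
    r = t / t_max
    denom = 1.0 - mu * r
    if abs(denom) < 1e-12:          # guard the reconstructed formula's pole
        return 0.0
    return min(max((1.0 - r) / denom, 0.0), 2.0)
```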
3.3 The flow chart of the proposed MAL-IGWO algorithm

Based on the above explanation, the flow chart of the proposed MAL-IGWO algorithm is presented in Fig. 3.

[Fig. 3 Flow chart of the proposed MAL-IGWO algorithm. Outline: initialize the GWO parameters and the Lagrange multiplier and penalty parameter vectors; formulate the bound-constrained sub-problem (20) from the modified augmented Lagrangian function (18); solve sub-problem (20) with the IGWO algorithm, repeatedly calculating the fitness of each grey wolf, updating the first, second, and third best solutions, updating positions by Eq. (15), and updating the parameters a, A, and C by Eqs. (21), (7), and (8) until the IGWO termination criterion is satisfied; update the Lagrange multiplier and penalty parameter vectors; if the stopping criterion of MAL-IGWO is not satisfied, repeat; otherwise output the optimal value.]
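Putting the pieces together, the outer loop of Fig. 3 can be outlined as follows. This is a sketch under stated assumptions: the paper takes its exact multiplier and penalty update rules from [32], so a generic first-order augmented-Lagrangian scheme is substituted here, parameterized with the values reported in Sect. 4.1 (γ = 10, ζ = 0.25, ε = 10⁻⁸, σ_max = 10¹⁰):

```python
import numpy as np

def mal_igwo(f, eq_cons, ineq_cons, lb, ub, max_outer=30, eps=1e-8,
             gamma=10.0, zeta=0.25, sigma_max=1e10):
    """Assumption-laden outline of the MAL-IGWO outer loop of Fig. 3."""
    p = len(eq_cons)
    m = p + len(ineq_cons)
    lam, sigma = np.ones(m), np.full(m, 10.0)   # lambda^0, sigma^0 of Sect. 4.1
    x, feas_prev = None, np.inf
    for k in range(max_outer):
        P = modified_augmented_lagrangian(f, eq_cons, ineq_cons, lam, sigma)
        x, _ = gwo_minimize(P, lb, ub)          # inner IGWO solve of (20)
        g_eq = np.array([g(x) for g in eq_cons])
        g_in = np.array([g(x) for g in ineq_cons])
        feas = np.linalg.norm(np.concatenate([g_eq, np.maximum(g_in, 0.0)]))
        if feas <= eps:                         # feasible enough: stop
            break
        if feas <= zeta * feas_prev:            # good progress: update multipliers
            lam[:p] -= sigma[:p] * g_eq
            lam[p:] = np.maximum(0.0, lam[p:] + sigma[p:] * g_in)
        else:                                   # slow progress: raise penalties
            sigma = np.minimum(gamma * sigma, sigma_max)
        feas_prev = min(feas, feas_prev)
    return x, f(x)
```

The multiplier updates are chosen to be consistent with the signs of Eq. (18) and the inequality term assumed in the earlier sketch; they are a standard choice, not a quotation from the paper.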
4 Numerical results and analysis

In this section, the results of the proposed MAL-IGWO algorithm are compared with those of some state-of-the-art algorithms for solving constrained optimization problems. All approaches were applied to 24 well-known benchmark constrained optimization test problems of CEC2006 [33] and 3 benchmark engineering design optimization problems.

4.1 Benchmark functions and parameter setting

In this work, in order to evaluate the performance of the proposed MAL-IGWO algorithm, 24 benchmark constrained optimization problems were considered. This set contains difficult constrained optimization problems with very distinct properties [33]. The characteristics of these problems are summarized in Table 1, which lists the number of decision variables (n), the type of objective function f(x), the feasibility ratio (ρ), the number of linear inequality constraints (LI), the number of nonlinear equality constraints (NE), the number of nonlinear inequality constraints (NI), the number of linear equality constraints (LE), the number of active constraints (a), and the optimal solution of each problem. In Table 1, the feasibility ratio ρ is the ratio between the size of the feasible search space and that of the entire search space. Note that in this work the maximization problems are transformed into minimization problems using -f(x).

The following parameters were established experimentally for the best performance of MAL-IGWO: the population size of GWO was set to 60 and the maximum iteration number to 1000 (120,000 evaluations were carried out per independent run). In the modified augmented Lagrangian method, the initial Lagrangian multiplier vector λ^0 = (1, 1, ..., 1), the initial penalty parameter vector σ^0 = (10, 10, ..., 10), the user-required tolerance ε = 10⁻⁸, the maximum allowed penalty parameter σ_u = 10¹⁰, the penalty parameter increasing factor γ = 10, and the reduction factor for the feasibility norm ζ = 0.25. For each benchmark problem, 30 independent runs were performed in MATLAB 7.0.

4.2 Experimental results of MAL-IGWO and MAL-GWO algorithm

The experimental results obtained by MAL-IGWO and MAL-GWO are presented in Table 2, where the best-known optimal solution and the best value, the mean value, the worst value, and the standard deviation of the obtained objective function values over 30 runs are listed under the given parameter settings. Note that a result in boldface indicates the best result. NF means that no feasible solutions were found. When all algorithms had equal results, none was emphasized in bold.
As shown in Table 2, the proposed MAL-IGWO algorithm is able to find the best-known optimal solution consistently over 30 runs on all test problems except g03, g05, g11, g13, g20, and g22. With respect to test problems g03, g05, g11, and g13, although the optimal solutions are not found consistently, the best results achieved by MAL-IGWO are very close to the global optimal solutions. For problems g20 and g22, the MAL-IGWO algorithm cannot find feasible solutions in any run. For test problem g10, a typical result found by MAL-IGWO is \vec{x} = (579.30621, 1359.96533, 5109.96515, 182.01860, 295.60139, 271.98140, 286.41721, 395.60139) with f(\vec{x}) = 7049.23670, which is the "best" result reported so far. Furthermore, the standard deviations over 30 runs for the entire set of test problems in Table 2 are relatively small, which reflects that MAL-IGWO is stable and robust for solving the constrained optimization problems of Table 1. In particular, for problems g08, g12, g15, and g24, the standard deviations of the objective function are equal to 0.

Compared with MAL-GWO, the proposed MAL-IGWO algorithm finds better results on ten test problems (g02, g07, g09, g10, g13, g14, g16, g17, g18, g19) and similar results on seven problems (g03, g04, g08, g11, g12, g15, g24). For test problems g01, g05, g06, g21, and g23, the two algorithms obtained similar "best" solutions; however, the MAL-IGWO algorithm found better "mean", "worst", and standard deviation values. The convergence graphs of the function values over the number of iterations at the median run are plotted in Fig. 4. The above results and analysis validate that the proposed MAL-IGWO is an effective and promising approach for constrained optimization problems.

4.3 Comparison between MAL-IGWO and other EAs with the augmented Lagrangian method

To further verify the performance of the proposed MAL-IGWO algorithm, comparisons are carried out with four evolutionary algorithms with the augmented Lagrangian method from the literature: the genetic algorithm-based augmented Lagrangian (GAAL) [3], the augmented Lagrangian fish swarm-based method (ALFS) [34], the augmented Lagrangian ant colony optimization-based method (ALACO) [35], and the hybrid genetic pattern search augmented Lagrangian method (HGPSAL) [31]. Table 3 lists the best obtained function value after the runs, the average
of the obtained function values, and the average number of function evaluations (Aver. FEs). The character "-" means that the result is not available in the corresponding paper. Solutions to problems g03, g05, g11, and g13 by GAAL are not given in [3] and are therefore not included in Table 3. It is necessary to emphasize that the experimental results of GAAL, ALFS, ALACO, and HGPSAL reported in Table 3 were taken directly from the respective papers to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface.

As shown in Table 3, an interesting observation is that all five algorithms reliably found the global optimal solution of problem g08; this problem may be relatively easy to solve. Compared with the GAAL algorithm, the MAL-IGWO algorithm obtained similar "best" and "mean" results on five test problems (g01, g04, g06, g07, and g08); however, better "Aver. FEs" values were provided by GAAL on five test problems. On three test problems (g09, g10, and g12), MAL-IGWO finds better "best" and "mean" results than GAAL. On test problem g02, similar "best" results were obtained by GAAL and MAL-IGWO; regarding the "mean" value, GAAL found the better result on g02.

With respect to the ALFS algorithm, MAL-IGWO provided better "best" and "mean" results on ten test problems (g01, g02, g03, g04, g05, g06, g07, g09, g10, and g13). For test problems g08 and g11, similar "best" and "mean" results were obtained by the two algorithms. On test problem g12, the two algorithms provided similar "best" values; however, the better "mean" value was obtained by MAL-IGWO.

In comparison with the ALACO algorithm, the proposed MAL-IGWO found better results on five test problems (g06, g07, g09, g10, and g13) and similar results on six test problems (g01, g03, g04, g05, g08, and g12). For test problem g02, MAL-IGWO provided the better "best" value; however, the better "mean" result was obtained by ALACO. On test problem g11, MAL-IGWO provided a better "mean" value and a similar "best" result compared with ALACO.
[Table 2 Results obtained by MAL-IGWO and MAL-GWO with 30 independent runs on 24 problems. Columns: Function, Optimal value, Algorithm, Best, Mean, Worst, Standard deviation.]
[Fig. 4 Convergence graphs of the function values over the number of iterations (0–1000) at the median run for problems g01, g02, g07, g14, g19, and g24]
Compared with the HGPSAL algorithm, it can be observed from Table 3 that MAL-IGWO obtained better "mean" results on seven test problems (g02, g04, g06, g07, g09, g10, and g13) and similar results on five test problems (g03, g05, g08, g11, and g12). For test problem g01, MAL-IGWO found a better "mean" value and a similar "best" value compared with HGPSAL.

As a general remark on the comparison above, the proposed MAL-IGWO algorithm performs competitively or better with respect to ALFS, ALACO, and HGPSAL in terms of the efficiency, quality, and robustness of the search on most benchmark test problems.

4.4 Comparison with other evolutionary algorithms

In order to evaluate the effectiveness and efficiency of MAL-IGWO further, we report the solutions obtained by MAL-IGWO as well as those obtained by the modified artificial bee colony algorithm (MABC) [36], the hybrid evolutionary algorithm (HEA) [37], the simple
Table 3 Comparison between MAL-IGWO and GAAL, ALFS, ALACO, HGPSAL on 13 test problems

Function              GAAL [3]        ALFS [34]    ALACO [35]      HGPSAL [31]   MAL-IGWO
g01   Best            -15             -14.9994     -15.000000      -15.00000     -15.000000
      Mean            -15             -14.8818     -15.000000      -14.99998     -15.000000
      Aver. FEs       15,870          339,900      345,185         87,927        79,392
g02   Best            -0.803619       -0.59393     -0.803519       -0.611330     -0.803619
      Mean            -0.803619       -0.47966     -0.797772       -0.556323     -0.754524
      Aver. FEs       38,937          889,416      445,443         227,247       120,000
g03   Best            -               -0.99858     -1.000006       -1.000000     -1.000000
      Mean            -               -0.99192     -1.000002       -1.000000     -1.000000
      Aver. FEs       -               233,264      409,566         113,890       71,596
g04   Best            -30,665.53867   -30,665.54   -30,665.53867   -30,665.54    -30,665.53868
      Mean            -30,665.53867   -30,665.50   -30,665.53867   -30,665.54    -30,665.53867
      Aver. FEs       3611            53,773       197,483         106,602       99,608
g05   Best            -               5126.681     5126.4981       5126.498      5126.4981
      Mean            -               5134.745     5126.4981       5126.498      5126.4981
      Aver. FEs       -               45,196       149,363         199,439       46,810
g06   Best            -6961.813876    -6961.640    -6961.813725    -6961.814     -6961.813876
      Mean            -6961.813876    -6851.709    -6961.813444    -6961.809     -6961.813876
      Aver. FEs       2755            17,080       276,641         77,547        30,798
g07   Best            24.306209       24.3065      24.310908       24.30621      24.306209
      Mean            24.306209       24.3109      24.330793       24.30621      24.306209
      Aver. FEs       20,219          218,989      318,348         81,060        120,000
g08   Best            -0.095825       -0.095825    -0.095825       -0.095825     -0.095825
      Mean            -0.095825       -0.095825    -0.095825       -0.095825     -0.095825
      Aver. FEs       100,606         13,987       34,504          39,381        120,000
g09   Best            680.630057      680.630      680.630209      680.6301      680.630037
      Mean            680.630057      680.631      680.631189      680.6301      680.630052
      Aver. FEs       6450            99,198       398,211         56,564        118,612
g10   Best            7049.24802      7055.47      7049.50541      7049.247      7049.23670
      Mean            7049.24802      7134.54      7139.70173      7049.248      7049.23678
      Aver. FEs       10,578          140,395      400,336         150,676       93,190
g11   Best            -               0.74999      0.749999        0.750000      0.749999
      Mean            -               0.74999      0.750009        0.750000      0.750000
      Aver. FEs       -               22,220       362,949         17,948        19,606
g12   Best            -0.999375       -1.00000     -1.000000       -1.000000     -1.000000
      Mean            -0.999375       -0.99808     -1.000000       -1.000000     -1.000000
      Aver. FEs       1705            2204         362,949         61,344        18,000
g13   Best            -               0.05395      0.053949        0.053950      0.0539498
      Mean            -               0.05885      0.304071        0.349041      0.0539498
      Aver. FEs       -               58,204       234,223         31,269        73,220
evolutionary strategy (ES) [5], the genetic algorithm (GA) [38], and the hybrid differential evolution algorithm (HDE) [7]. These evolutionary computation-based algorithms use different approaches to handle constraints: MABC uses the feasibility-based rule of [39]; HEA uses a multi-objective optimization method; ES uses an adaptive constraint-handling technique; GA uses a rough penalty function method; and HDE uses a penalty function. It should be noted that the MAL-IGWO algorithm is thus being compared with stochastic algorithms that use different constraint-handling techniques. The comparative results are shown in Tables 4, 5, and 6. It is necessary to emphasize that the experimental results of MABC, HEA, ES, GA, and HDE reported in Tables 4, 5, and 6 were taken directly from the respective papers to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface. Solutions to problems g12 and g13 by GA are not given in [38] and are therefore not included in Tables 4, 5, and 6. The character "-" means that the result is not available in the corresponding paper.

As shown in Tables 4, 5, and 6, almost all six algorithms obtain similar results on six test problems (g01, g03, g04, g06, g08, and g12), but the results found by MAL-IGWO for g10 are closer to the optimal solution than those of all the other algorithms, although the other algorithms also obtained results of good quality in practice. For test problem g02, as seen from Table 4, the proposed MAL-IGWO algorithm can find the global optimal solution (-0.803619), while the other five algorithms are unable to do so; however, as seen from Tables 5 and 6, HEA obtained better "mean" and "worst" results for problem g02. For test problems g05, g07, and g09, the proposed MAL-IGWO algorithm found better "best", "mean", and "worst" results than MABC, ES, GA, and HDE, while the results of MAL-IGWO and HEA are nearly the same on these three test problems. For test problem g13, as seen from Tables 4, 5, and 6, MAL-IGWO obtained better results than the MABC, ES, and HDE algorithms. As far as the computational cost (the number of function evaluations) is concerned, as seen from Table 7, the HDE algorithm has the minimum computational cost (10,000 FEs) for most test problems, while GA has a considerable computational cost (350,000 FEs) for all test problems. In general, the cost required by our algorithm is moderate among the constrained optimization approaches in this comparison. Based on the above results, MAL-IGWO shows a very competitive performance with respect to MABC, HEA, GA, ES, and HDE, which are state-of-the-art algorithms in the field of constrained optimization.

4.5 Discussion of parameter μ

The main purpose of this section is to investigate the effect of the parameter setting on the performance of MAL-IGWO. For all experiments in this section, 30 independent runs were executed for the 24 constrained test problems. In Eq. (21) there is a nonlinear modulation index parameter (i.e. μ). As mentioned previously, this parameter has the capability of adjusting the exploration and exploitation of the population over the course of the iterations. In order to investigate the sensitivity of the performance of MAL-IGWO to the parameter μ, a set of experiments was performed. The parameter settings were kept unchanged except where new settings were adopted for the purpose of the parameter study. We tested MAL-IGWO with different values of μ: 0.1, 0.5, 1.1, 1.5, 2.1, and 2.5. The means of the objective function values are summarized in Table 8. For clarity, the results of the best settings are marked in boldface.
[Table 6 Worst solutions obtained by MABC, HEA, ES, GA, HDE, and MAL-IGWO on 13 test problems. Columns: Function, MABC [36], HEA [37], ES [5], GA [38], HDE [7], MAL-IGWO.]

As shown in Table 8, for test problems g08, g12, and g16, an interesting result is that there is no significant negative effect from decreasing or increasing the parameter μ. In the case of μ = 2.5, the results are of much worse quality than those of the other settings except on test problems g08, g12, and g16. In the cases of μ = 0.1 and μ = 0.5, the quality of the results is better than that provided by μ = 2.5; however, these settings still show poor performance on nineteen test problems (g01, g02, g03, g04, g05, g06, g07, g09, g10, g11, g13, g14, g15, g17, g18, g19, g21, g23, and g24). In the case of μ = 2.1, the results are worse than those of μ = 1.1 and μ = 1.5 except on test problems g08, g11, g12, g16, and g24. In the case of μ = 1.5, the quality of the results is better than that provided by μ = 0.1, μ = 0.5, μ = 2.1, and μ = 2.5; however, it still obtains worse results than μ = 1.1 on twelve test problems (g02, g05, g07, g09, g10, g13, g14, g17, g18, g19, g21, and g23). Based on the analysis above, we can see that μ = 1.1 is an appropriate setting for the proposed MAL-IGWO.

4.6 Experiments on engineering design problems

In this section, to study the performance of our algorithm on real-world constrained optimization problems, three well-studied engineering design problems that are widely used in the literature are solved.

4.6.1 Tension/compression spring design problem

The tension/compression spring design problem (shown in Fig. 5) is described in Belegundu [40]. The objective of this problem is to minimize the weight f(x) of a spring subject to three nonlinear and one linear inequality
constraints, with three continuous design variables: the wire diameter d (x_1), the mean coil diameter D (x_2), and the number of active coils P (x_3). The mathematical formulation of the spring design problem is given as follows:

Minimize f(\vec{x}) = (x_3 + 2)\, x_2 x_1^2    (22)

Subject to
g_1(\vec{x}) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0
g_2(\vec{x}) = \frac{4x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0
g_3(\vec{x}) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0
g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0    (23)

where 0.25 \le x_1 \le 1.3, 0.05 \le x_2 \le 2.0, and 2 \le x_3 \le 15.
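A direct transcription of Eqs. (22) and (23), e.g. for passing the problem to an optimizer such as the MAL-IGWO sketch above (the function names are illustrative):

```python
def spring_objective(x):
    """Spring weight f(x) from Eq. (22); x = (x1, x2, x3) as in the text."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Inequality constraints g1-g4 of Eq. (23), each required to be <= 0."""
    x1, x2, x3 = x
    return [
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
```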
The spring design problem is a practical design problem that has often been used as a benchmark for testing different optimization methods: the genetic algorithm with dominance tournament selection (GADTS) [41], co-evolutionary particle swarm optimization (CPSO) [42], the improved group search optimizer (IGSO) [43], the accelerating adaptive trade-off model with evolutionary algorithm (AATMEA) [44], and co-evolutionary differential evolution (CEDE) [45]. The best solution obtained by the MAL-IGWO algorithm in this study for the tension/compression spring design problem is compared in Table 9 with the five best solutions reported in those papers, and the statistical results of the five algorithms and MAL-IGWO are given in Table 10. It is necessary to emphasize that the results of GADTS, CPSO, IGSO, AATMEA, and CEDE reported in Table 9 were taken directly from the respective papers to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface.

As shown in Table 9, it is evident that the best solution obtained by the MAL-IGWO algorithm is better than those of the other stochastic algorithms except IGSO; compared with the IGSO algorithm, MAL-IGWO finds a similar "best" result for the spring design problem. From Table 10, with respect to GADTS, CPSO, AATMEA, and CEDE, it is clear that the proposed MAL-IGWO provides better "best", "mean", "worst", and standard deviation results for the tension/compression spring design problem. Compared with the IGSO algorithm, MAL-IGWO obtains a similar "best" value and better "mean", "worst", and standard deviation values.

4.6.2 Pressure vessel design problem

The pressure vessel design problem (shown in Fig. 6) was first proposed by Sandgren [46]. The objective of this problem is to minimize the total cost of a pressure vessel, considering the cost of material, forming, and welding. This problem has one nonlinear and three linear inequality constraints, two continuous design variables (the inner radius R (x_3) and the length of the cylindrical section of the vessel L (x_4)), and two discrete design variables (the thickness of the shell T_s (x_1) and the thickness of the head T_h (x_2)). The pressure vessel design problem is stated as follows:

Minimize f(\vec{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3    (24)
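Eq. (24) transcribes directly into code. The constraint set that accompanies it does not survive above, so the four g_j(x) ≤ 0 conditions in this sketch follow the formulation commonly used in the literature for this problem (see e.g. [46]) and should be treated as an assumption, not a quotation:

```python
import math

def vessel_objective(x):
    """Total cost f(x) from Eq. (24); x = (x1, x2, x3, x4) = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """Assumed constraint set (standard in the literature), each <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                   # shell thickness
        -x2 + 0.00954 * x3,                                  # head thickness
        -math.pi * x3 ** 2 * x4
            - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,   # volume requirement
        x4 - 240.0,                                          # length limit
    ]
```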
[Table 8 Mean results obtained by MAL-IGWO with varying μ over 30 independent runs. Columns: Function, μ = 0.1, μ = 0.5, μ = 1.1, μ = 1.5, μ = 2.1, μ = 2.5.]

[Table 9 Comparison of the best solution for the spring design problem obtained by six methods: GADTS [41], CPSO [42], IGSO [43], AATMEA [44], CEDE [45], and MAL-IGWO.]
5 Conclusions

[...] benchmark problems and to tune the parameters of the algorithm.

Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant No. 61463009, the Humanities and Social Sciences Planning Foundation of the Ministry of Education of China under Grant No. 12XJA910001, the Beijing Natural Science Foundation under Grant No. 4122022, the 125 Special Major Science and Technology Program of the Department of Education of Guizhou Province under Grant No. [2012]011, and the Science and Technology Foundation of Guizhou Province under Grant No. [2016]2082.

References
1. Ali MM, Zhu WX (2013) A penalty function-based differential evolution algorithm for constrained global optimization. Comput Optim Appl 54(3):707–739
2. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
3. Deb K, Srivastava S (2012) A genetic algorithm based augmented Lagrangian method for constrained optimization. Comput Optim Appl 51(3):869–902
4. Long W, Liang XM, Huang YF, Chen YX (2013) A hybrid differential evolution augmented Lagrangian method for constrained numerical and engineering optimization. Comput Aided Des 45(12):1562–1574
5. Wang Y, Cai Z, Zhou Y, Zeng W (2008) An adaptive tradeoff model for constrained evolutionary optimization. IEEE Trans Evol Comput 12(1):80–92
6. Tuba M, Bacanin N (2014) Improved seeker optimization algorithm hybridized with firefly algorithm for constrained optimization problems. Neurocomputing 143:197–207
7. Gandomi AH, Yang XS, Talatahari S, Deb S (2012) Coupled eagle strategy and differential evolution for unconstrained and constrained global optimization. Comput Math Appl 63(1):191–200
8. Long W, Liang X, Huang Y, Chen Y (2014) An effective hybrid cuckoo search algorithm for constrained global optimization. Neural Comput Appl 25(3–4):911–926
9. Brajevic I (2015) Crossover-based artificial bee colony algorithm for constrained optimization problems. Neural Comput Appl 26(7):1587–1601
10. Sadollah A, Eskandar H, Bahreininejad A, Kim JH (2015) Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems. Appl Soft Comput 30:58–71
11. Wang Y, Cai Z (2012) Combining multiobjective optimization with differential evolution to solve constrained optimization problems. IEEE Trans Evol Comput 16(1):117–134
12. Niu B, Wang JW, Wang H (2014) Bacterial-inspired algorithm for solving constrained optimization problems. Neurocomputing 148:54–62
13. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69(3):46–61
14. Zhu A, Xu C, Li Z, Wu J, Liu Z (2015) Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J Syst Eng Electron 26(2):317–328
15. Saremi S, Mirjalili SZ, Mirjalili SM (2015) Evolutionary population dynamics and grey wolf optimizer. Neural Comput Appl 26(5):1257–1263
16. Emary E, Zawbaa HM, Hassanien AE (2016) Binary grey wolf optimization approaches for feature selection. Neurocomputing 172:371–381
17. Kamboj VK (2015) A novel hybrid PSO–GWO approach for unit commitment problem. Neural Comput Appl. doi:10.1007/s00521-015-1962-4
18. Jayakumar N, Subramanian S, Ganesan S, Elanchezhian EB (2016) Grey wolf optimization for combined heat and power dispatch with cogeneration systems. Electr Power Energy Syst 74:252–264
19. El-Gaafary AAM, Mohamed YS, Hemeida AM, Mohamed AA (2015) Grey wolf optimization for multi input multi output system. Univers J Commun Netw 3(1):1–6
20. Komaki GM, Kayvanfar V (2015) Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time. J Comput Sci 8:109–120
21. Song X, Tang L, Zhao S, Zhang X, Li L, Huang J, Cai W (2015) Grey wolf optimizer for parameter estimation in surface waves. Soil Dyn Earthq Eng 75:147–157
22. Mirjalili S (2015) How effective is the grey wolf optimizer in training multi-layer perceptrons. Appl Intell 43(1):150–161
23. Madadi A, Motlagh MM (2014) Optimal control of DC motor using grey wolf optimizer algorithm. Tech J Eng Appl Sci 4(4):373–379
24. El-Fergany AA, Hasanien HM (2015) Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms. Electr Power Compon Syst 43(13):1548–1559
25. Kamboj VK, Bath SK, Dhillon JS (2015) Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comput Appl. doi:10.1007/s00521-015-1934-8
26. Sulaiman MH, Mustaffa Z, Mohamed MR, Aliman O (2015) Using the gray wolf optimizer for solving optimal reactive power dispatch problem. Appl Soft Comput 32:286–292
27. Metz MC, Vucetich JA, Smith DW, Stahler DR, Peterson RO (2011) Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate. PLoS ONE 6(3):1–10
28. Muro C, Escobedo R, Spector L, Coppinger RP (2011) Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav Process 88(3):192–197
29. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithms for constrained parameter optimization problems. Evol Comput 4(1):1–32
30. Mezura-Montes E, Coello CAC (2005) Self-adaptive fitness formulation for constrained optimization. IEEE Trans Evol Comput 9(1):1–17
31. Costa L, Santo IACPE, Fernandes EMGP (2012) A hybrid genetic pattern search augmented Lagrangian method for constrained global optimization. Appl Math Comput 218(18):9415–9426
32. Liang XM, Hu JB, Zhong WT, Qian JX (2001) A modified augmented Lagrange multiplier method for large-scale optimization. Chin J Chem Eng 9(2):167–172
33. Liang J, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan P, Coello CC, Deb K (2006) Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J Appl Mech 41:1–8
34. Rocha AMA, Martins TF, Fernandes EM (2011) An augmented Lagrangian fish swarm based method for global optimization. J Comput Appl Math 235(16):4611–4620
35. Mahdavi A, Shiri ME (2015) An augmented Lagrangian ant colony based method for constrained optimization. Comput Optim Appl 60(1):263–276
36. Mezura-Montes E, Cetina-Dominguez O (2012) Empirical analysis of a modified artificial bee colony for constrained numerical optimization. Appl Math Comput 218(22):10943–10973
37. Wang Y, Cai ZX, Guo GQ, Zhou YR (2007) Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems. IEEE Trans Syst Man Cybern 37(3):560–575
38. Lin CH (2013) A rough penalty genetic algorithm for constrained optimization. Inf Sci 241:119–137
39. Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Meth Appl Mech Eng 186(2–4):311–338
40. Belegundu AD (1982) A study of mathematical programming methods for structural optimization. Ph.D. thesis, Department of Civil and Environmental Engineering, University of Iowa, Iowa
41. Mezura-Montes E, Coello CAC (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
42. He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
43. Shen H, Zhu Y, Niu B, Wu QH (2009) An improved group search optimizer for mechanical design optimization problems. Prog Nat Sci 19(1):91–97
44. Wang Y, Cai ZX, Zhou YR (2009) Accelerating adaptive trade-off model using shrinking space technique for constrained evolutionary optimization. Int J Numer Meth Eng 77(11):1501–1534
45. Huang FZ, Wang L, He Q (2007) An effective co-evolutionary differential evolution for constrained optimization. Appl Math Comput 186(1):340–356
46. Sandgren E (1990) Nonlinear integer and discrete programming in mechanical design optimization. ASME J Mech Des 112(2):223–229
47. Mezura-Montes E, Coello CAC (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. MICAI 2005, Lect Notes Artif Intell 3789:652–662
48. Akhtar S, Tai K, Ray T (2002) A socio-behavioural simulation model for engineering design optimization. Eng Optim 34(4):341–354
49. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
50. Mezura-Montes E, Coello CAC, Ricardo L (2003) Engineering optimization using a simple evolutionary algorithm. In: Proceedings of the International Conference on Tools with Artificial Intelligence, pp 149–156
51. Ray T, Saini P (2001) Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748