


Neural Comput & Applic (2017) 28 (Suppl 1):S421–S438
DOI 10.1007/s00521-016-2357-x

ORIGINAL ARTICLE

A modified augmented Lagrangian with improved grey wolf optimization to constrained optimization problems

Wen Long(1) · Ximing Liang(2) · Shaohong Cai(1) · Jianjun Jiao(1) · Wenzhuan Zhang(1)

Received: 30 November 2015 / Accepted: 17 May 2016 / Published online: 28 May 2016
© The Natural Computing Applications Forum 2016

Abstract This paper presents a novel constrained optimization algorithm named MAL-IGWO, which integrates the capability of the improved grey wolf optimization (IGWO) for discovering the global optimum with the modified augmented Lagrangian (MAL) multiplier method for handling constraints. In the proposed MAL-IGWO algorithm, the MAL method effectively converts a constrained problem into an unconstrained problem, and the IGWO algorithm is applied to solve the unconstrained problem. The algorithm is tested on 24 well-known benchmark problems and 3 engineering applications and compared with other state-of-the-art algorithms. Experimental results demonstrate that the proposed algorithm shows better performance in comparison to the other approaches.

Keywords Evolutionary computation-based algorithm · Constrained optimization problems · Augmented Lagrangian function · Grey wolf optimization

Corresponding author: Ximing Liang, [email protected]
(1) Key Laboratory of Economics System Simulation, Guizhou University of Finance and Economics, Guizhou, China
(2) School of Science, Beijing University of Civil Engineering and Architecture, Beijing, China

1 Introduction

Constrained optimization problems are inevitable in many real-world applications such as engineering design, structural optimization, economics, and allocation and location problems. A general constrained optimization problem is formulated as:

$\min f(\vec{x})$  (1)

$\text{s.t. } g_j(\vec{x}) = 0, \quad j = 1, 2, \ldots, p$  (2)

$g_j(\vec{x}) \le 0, \quad j = p+1, p+2, \ldots, m$  (3)

$l_i \le x_i \le u_i, \quad i = 1, 2, \ldots, n$  (4)

where $f(\vec{x})$ is a nonlinear objective function which is minimized with respect to the design variables $\vec{x} = (x_1, x_2, \ldots, x_n)$, subject to nonlinear equality constraints $g_j(\vec{x}) = 0$ and inequality constraints $g_j(\vec{x}) \le 0$; $l_i$ and $u_i$ are the lower and upper bounds of $x_i$, respectively.

Evolutionary computation-based algorithms have many advantages over conventional nonlinear programming techniques: the gradients of the cost function and constraint functions are not required, performance is reliable and robust, implementation is easy, and the chance of being trapped by a local minimum is lower. Due to these advantages, evolutionary computation-based algorithms have recently been applied successfully and broadly to constrained optimization problems. Ali and Zhu [1] presented a novel differential evolution-based algorithm for solving constrained global optimization problems; the adaptive nature of its penalty function makes the results largely insensitive to low values of the penalty parameter. Rao et al. [2] proposed an efficient teaching–learning-based optimization (TLBO) algorithm for constrained mechanical design optimization problems. The TLBO algorithm models the influence of a teacher on learners, and its process is divided into two parts: the "Teacher Phase" (learning from the teacher) and the "Learner Phase" (learning through interaction between learners).


Deb and Srivastava [3] presented a new genetic algorithm-based augmented Lagrangian algorithm for solving constrained optimization problems, in which a classical optimization method is used to improve the solution obtained by the genetic algorithm, and an effective strategy updates critical parameters adaptively based on population statistics. Long et al. [4] proposed a new hybrid algorithm for solving constrained numerical and engineering optimization problems; its basic steps comprise an outer iteration, in which the Lagrangian multipliers and various penalty parameters are updated using a first-order update scheme, and an inner iteration, in which a nonlinear optimization of the modified augmented Lagrangian function with simple bound constraints is carried out by a modified differential evolution algorithm. Wang et al. [5] developed an adaptive trade-off model (ATM) for solving constrained optimization problems; the ATM designs different trade-off schemes during different stages of the search process to obtain an appropriate trade-off between the objective function and constraint violations, and a simple evolutionary strategy (ES) is used as the search engine. Tuba and Bacanin [6] presented an improved seeker optimization algorithm hybridized with the firefly algorithm for constrained optimization problems: a modified seeker optimization algorithm controls the exploitation/exploration balance, and elements of the firefly algorithm improve its exploitation capabilities. Gandomi et al. [7] carried out extensive global optimization of constrained problems using the eagle strategy in combination with efficient differential evolution, applying the algorithm to thirteen classical constrained numerical optimization problems and three constrained engineering optimization problems reported in the literature. Long et al. [8] developed an effective hybrid cuckoo search algorithm based on the Solis and Wets local search method for constrained global optimization problems, relying on a modified augmented Lagrangian function for constraint handling. Brajevic [9] described a novel artificial bee colony (ABC) algorithm for constrained optimization problems, in which two different modified ABC search operators are used in the employed and onlooker phases and a crossover operator replaces random search in the scout phase; modifications related to a dynamic tolerance for handling equality constraints and an improved boundary constraint-handling method are also employed. Sadollah et al. [10] proposed a modified water cycle algorithm (WCA) based on evaporation rate for solving constrained optimization problems: a new concept of an evaporation rate for different rivers and streams, the so-called evaporation rate-based WCA, offers an improvement in search. Wang and Cai [11] developed a novel method, called CMODE, which combines multi-objective optimization with differential evolution to deal with constrained optimization problems; in CMODE, differential evolution serves as the search engine, the comparison of individuals is based on multi-objective optimization, and a novel infeasible solution replacement mechanism guides the population towards promising solutions and the feasible region simultaneously. Niu et al. [12] presented an improved bacterial foraging optimization (BFO) algorithm for solving constrained optimization problems, introducing two modified BFOs: BFO with a linearly decreasing chemotaxis step and BFO with a nonlinearly decreasing chemotaxis step.

The grey wolf optimization (GWO) algorithm is an evolutionary computation-based method developed by Mirjalili et al. [13] in 2014. This algorithm mimics the leadership hierarchy and hunting behaviour of grey wolves in nature. Preliminary studies show that the GWO algorithm is very promising and can outperform existing population-based algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and the gravitational search algorithm (GSA) [13]. Like other population-based algorithms, GWO does not require the gradient of the function in its optimization process. Its most important advantages, compared to other population-based algorithms, are that it is easy to implement and has few parameters to adjust. Due to this simplicity and ease of implementation, GWO has attracted much attention and has been applied to many numerical and practical optimization problems since its invention, such as global optimization problems [14, 15], feature selection [16], the unit commitment problem [17], the combined heat and power dispatch problem [18], multi-input multi-output system optimization [19], the flow shop scheduling problem [20], parameter estimation in surface waves [21], training multi-layer perceptrons [22], optimal control of DC motors [23], optimal power flow [24], the non-convex economic load dispatch problem [25], and the reactive power dispatch problem [26].

Mirjalili et al. [13] established a performance comparison of four evolutionary computation-based algorithms — GWO, DE, PSO, and GSA — on benchmark test functions. The overall results indicate that GWO is the most competitive among the compared algorithms on that set of test functions.

To our limited knowledge, the present GWO and its variants combined with the augmented Lagrangian multiplier method have not yet been applied to constrained global optimization problems. In this paper, a novel constrained optimization algorithm based on the MAL method and an improved GWO algorithm is proposed; it represents the first effort to integrate the GWO algorithm with the augmented Lagrangian method. Furthermore, the performance of the proposed algorithm is investigated on 24 well-known constrained optimization test problems and 3 engineering optimization problems and compared with other state-of-the-art evolutionary computation-based algorithms.

This paper is organized as follows. In Sect. 2, the grey wolf optimization algorithm is briefly introduced. The proposed algorithm (MAL-IGWO) is described in Sect. 3. In Sect. 4, the experimental results and the analysis of the proposed algorithm are provided. Finally, we end the paper with some conclusions and comments for further research in Sect. 5.

2 Classical grey wolf optimization algorithm

2.1 Social hierarchy of grey wolves

The grey wolf optimization algorithm is a meta-heuristic approach inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature [13]. Grey wolves mostly prefer to live in a pack; the group size is 5–12 on average. Of particular interest is that they have a very strict social dominance hierarchy, as shown in Fig. 1. The leader of a grey wolf pack is called the Alpha, which is mostly responsible for making decisions about hunting, sleeping place, time to wake, and so on [27]. The second level in the hierarchy is the Beta, which helps the Alpha in decision-making and other pack activities; the Beta is also the best candidate to take over should the Alpha pass away or grow old. The lowest-ranking grey wolf is the Omega, which plays the role of scapegoat; the Omega appeases the whole group and helps maintain its dominance structure. The third level is the Delta, which must submit to the Alpha and Beta but dominates the Omega; Deltas scout to protect and guarantee the safety of the group.

Fig. 1 Hierarchy of grey wolves (from top to bottom: α, β, δ, ω)

2.2 Hunting behaviour and mathematical model

The social hierarchy of grey wolves is a special characteristic, and group hunting is another interesting social behaviour of grey wolves. There are three main phases of grey wolf hunting [28]: firstly, they track, chase, and approach the prey; secondly, they pursue, encircle, and harass the prey until it stops moving; finally, they attack the prey.

Mathematical models of the social hierarchy, tracking, encircling, and attacking prey are provided below. In order to model the encircling behaviour mathematically, the following equations are put forward [13]:

$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|$  (5)

$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$  (6)

where $\vec{X}$ indicates the position vector of a grey wolf, $\vec{X}_p$ is the position vector of the prey, $\vec{A}$ and $\vec{C}$ are coefficient vectors, and $t$ indicates the current iteration.

The coefficient vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:

$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$  (7)

$\vec{C} = 2 \cdot \vec{r}_2$  (8)

where $\vec{r}_1$ and $\vec{r}_2$ are random vectors in [0, 1].

Grey wolves first find out the position of the prey and then encircle it. However, the position of the optimal prey is unknown in a search space. In order to simulate the hunting behaviour of grey wolves mathematically, we suppose that the Alpha (the best candidate solution), the Beta, and the Delta have better knowledge about the potential location of prey. Therefore, the first three best solutions found so far are stored, and the other members of the pack must update their positions in the light of these three best solutions. Such behaviour can be formulated as follows [13]:

$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|$  (9)

$\vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|$  (10)

$\vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|$  (11)

$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha$  (12)

$\vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta$  (13)

$\vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$  (14)

$\vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$  (15)

As mentioned above, the grey wolves finish the hunt by attacking the prey when it stops moving.


In order to model the approach to the prey mathematically, we decrease the value of $\vec{a}$:

$\vec{a}(t) = 2 - \dfrac{2t}{\text{Max\_iter}}$  (16)

where $t$ is the current iteration number and Max_iter is the maximum number of iterations. Therefore, $\vec{a}(t)$ is linearly decreased from 2 to 0 over the course of the iterations.

2.3 Grey wolf optimization algorithm

In the grey wolf optimization algorithm, we consider the fittest solution to be the Alpha; consequently, the second and third best solutions are named Beta and Delta, respectively. The rest of the candidate solutions are assumed to be Omegas. To sum up, the search process starts by creating a random population of grey wolves (candidate solutions). Over the course of the iterations, the Alpha, Beta, and Delta wolves estimate the probable position of the prey, and each candidate solution updates its distance from the prey. The parameter $\vec{a}$ is decreased from 2 to 0 in order to emphasize exploration and exploitation, respectively. Candidate solutions tend to diverge from the prey when $|\vec{A}| > 1$ and converge towards the prey when $|\vec{A}| < 1$. The pseudo code of the classical grey wolf optimization algorithm is presented in Fig. 2.

Initialize the grey wolf population X_i (i = 1, 2, ..., n)
Initialize a, A, and C
Calculate the fitness of each search agent
X_alpha = the best search agent
X_beta  = the second best search agent
X_delta = the third best search agent
while (t < max number of iterations)
    for each search agent
        Update the position of the current search agent by Eq. (15)
    end for
    Update a, A, and C
    Calculate the fitness of all search agents
    Update X_alpha, X_beta, and X_delta
    t = t + 1
end while
return X_alpha

Fig. 2 Pseudo code of the classical grey wolf optimization algorithm
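To make the pseudo code of Fig. 2 concrete, the following is a minimal Python sketch of the classical GWO iteration of Eqs. (5)–(16). It is an illustrative reimplementation rather than the authors' code: the function and parameter names, the bound handling by clipping, and the sphere function used in the usage example are all assumptions.

```python
import numpy as np

def gwo(objective, dim, lb, ub, n_wolves=30, max_iter=500, seed=0):
    """Minimal classical GWO (Eqs. 5-16): alpha, beta, delta guide the pack."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))      # random initial pack

    for t in range(max_iter):
        fitness = np.apply_along_axis(objective, 1, X)
        order = np.argsort(fitness)                     # ascending: minimization
        alpha, beta, delta = (X[order[k]].copy() for k in range(3))

        a = 2 - 2 * t / max_iter                        # Eq. (16): 2 -> 0 linearly
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                      # Eq. (7)
                C = 2 * r2                              # Eq. (8)
                D = np.abs(C * leader - X[i])           # Eqs. (9)-(11)
                x_new += leader - A * D                 # Eqs. (12)-(14)
            X[i] = np.clip(x_new / 3, lb, ub)           # Eq. (15), clipped to bounds

    fitness = np.apply_along_axis(objective, 1, X)
    best = X[np.argmin(fitness)]
    return best, float(objective(best))

# Usage: minimize the sphere function on [-10, 10]^5
if __name__ == "__main__":
    print(gwo(lambda x: float(np.sum(x**2)), dim=5, lb=-10.0, ub=10.0))
```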

3 The proposed MAL-IGWO algorithm

3.1 Constraint-handling method

It is necessary to note that evolutionary computation-based algorithms are unconstrained optimization techniques that need an additional mechanism to deal with constraints when solving constrained optimization problems. As a result, a variety of evolutionary computation-based constraint-handling techniques have been developed, which can be grouped as follows [29, 30]: (1) methods based on penalty functions; (2) methods based on preserving the feasibility of solutions; (3) methods based on the superiority of feasible solutions over infeasible solutions; and (4) other methods.

Penalty function methods are the most common constraint-handling technique. The augmented Lagrangian multiplier method is an interesting penalty function that avoids the side effects associated with the ill-conditioning of simpler penalty and barrier functions [31]. Therefore, this paper uses the modified augmented Lagrangian method of [32] to deal with constraints.

If the simple bounds (4) are not present, one can use the modified augmented Lagrangian multiplier method to solve problem (1)–(3). For a given Lagrangian multiplier vector $\lambda^k$ and penalty parameter vector $\sigma^k$, the unconstrained penalty sub-problem at the $k$-th step of this method is:

$\min P(x, \lambda^k, \sigma^k)$  (17)

where $P(x, \lambda, \sigma)$ is the following modified augmented Lagrangian function:

$P(x, \lambda, \sigma) = f(x) - \sum_{j=1}^{p} \left[ \lambda_j g_j(x) - \tfrac{1}{2}\sigma_j (g_j(x))^2 \right] - \sum_{j=p+1}^{m} \tilde{P}_j(x, \lambda, \sigma)$  (18)

and $\tilde{P}_j(x, \lambda, \sigma)$ is defined as follows:

$\tilde{P}_j(x, \lambda, \sigma) = \begin{cases} \lambda_j g_j(x) - \tfrac{1}{2}\sigma_j (g_j(x))^2, & \text{if } \lambda_j - \sigma_j g_j(x) > 0 \\ \tfrac{1}{2}\lambda_j^2 / \sigma_j, & \text{otherwise} \end{cases}$  (19)

It can easily be shown that the Kuhn–Tucker solution $(x^*, \lambda^*)$ of the primal problem (1)–(3) is identical to that of the augmented problem (17).

If the simple bounds (4) are present, the above modified augmented Lagrangian multiplier method needs to be modified. Unlike the modified barrier function methods, we make another modification to deal with the bound constraints. At the $k$-th step, assuming that the Lagrangian multiplier vector $\lambda^k$ and penalty parameter vector $\sigma^k$ are given, we solve the following bound-constrained sub-problem instead of (17):

$\min P(x, \lambda^k, \sigma^k) \quad \text{s.t. } l \le x \le u$  (20)

where $P(x, \lambda, \sigma)$ is the same modified augmented Lagrangian function as in (18). Let $S \subset R^n$ designate the search space, which is defined by the lower and upper bounds (4) of the variables. The solution $x^*$ to sub-problem (20) can be obtained by searching this space if $\lambda^*$ is known and $\sigma$ is large enough. We will choose the improved grey wolf optimization algorithm for the global search in (20).
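As a concrete reading of Eqs. (18)–(19), the following Python sketch evaluates the modified augmented Lagrangian for given multiplier and penalty vectors; the function signature and the ordering of the multipliers (equalities first, then inequalities) are illustrative assumptions.

```python
import numpy as np

def mal_penalty(f, eq_cons, ineq_cons, x, lam, sigma):
    """Modified augmented Lagrangian P(x, lambda, sigma) of Eqs. (18)-(19).

    eq_cons:   list of g_j(x) = 0 constraint functions (j = 1..p)
    ineq_cons: list of g_j(x) <= 0 constraint functions (j = p+1..m)
    lam/sigma: multiplier and penalty vectors, ordered equalities then inequalities
    """
    p = len(eq_cons)
    val = f(x)
    for j, g in enumerate(eq_cons):               # equality terms of Eq. (18)
        gj = g(x)
        val -= lam[j] * gj - 0.5 * sigma[j] * gj**2
    for j, g in enumerate(ineq_cons, start=p):    # inequality terms, Eq. (19)
        gj = g(x)
        if lam[j] - sigma[j] * gj > 0:
            val -= lam[j] * gj - 0.5 * sigma[j] * gj**2
        else:
            val -= 0.5 * lam[j]**2 / sigma[j]
    return val
```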


3.2 Improved grey wolf optimization algorithm

Population-based stochastic algorithms must strike a good balance between exploration and exploitation to achieve both efficient global and local searches. Exploration means finding new regions of the search space, where population diversity plays an important role; exploitation, which aims at obtaining a high-precision solution, means the ability of the algorithm to use the information that has already been collected. In the classical GWO algorithm, exploration and exploitation are governed by the adaptive values of the parameters $\vec{a}$ and $\vec{A}$. A larger $\vec{a}$ value facilitates a global search, while a small $\vec{a}$ value facilitates a local search, so a suitable schedule for $\vec{a}$ can balance the global and local exploration abilities. However, according to Eq. (16), the value of $\vec{a}$ decreases linearly from 2 to 0 over the course of the iterations.

Since the search process of GWO is nonlinear and highly complicated, a linearly decreasing $\vec{a}$ cannot truly reflect the actual search process. Therefore, in order to balance the exploration and exploitation abilities of the GWO algorithm, this paper presents an improved GWO algorithm in which the value of $\vec{a}$ is adjusted according to the following equation:

$\vec{a}(t) = \dfrac{1 - (\text{iter}/\text{iter}_{\max})}{1 - \mu \cdot (\text{iter}/\text{iter}_{\max})}$  (21)

where iter is the current iteration number, iter_max is the maximum number of iterations, and $\mu \in (0, 3)$ is the nonlinear modulation index. According to Eq. (21), the value of $\vec{a}$ is nonlinearly time-varying over the course of the iterations. In this paper, $\mu$ is set to 1.1. We apply Eq. (21) in place of Eq. (16) of the classical GWO algorithm.
3.3 The flow chart of the proposed MAL-IGWO algorithm

Based on the above explanation, the flow chart of the proposed MAL-IGWO algorithm is presented in Fig. 3.

Fig. 3 Flow chart of the proposed MAL-IGWO algorithm (initialize the GWO parameters and the Lagrange multiplier and penalty parameter vectors; formulate the bound-constrained sub-problem (20) with the modified augmented Lagrangian function (18); solve the sub-problem by the IGWO algorithm, updating the three best wolves, the positions by Eq. (15), and the parameters a, A, and C by Eqs. (21), (7), and (8) until the IGWO termination criterion is satisfied; update the Lagrange multiplier and penalty parameter vectors; repeat until the MAL-IGWO stopping criterion is satisfied, then output the optimal value)
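The following Python sketch outlines the outer loop summarized in Fig. 3, reusing the `gwo` and `mal_penalty` sketches given earlier (with `gwo` standing in for the inner IGWO search, which would additionally use the Eq. (21) schedule for its control parameter). The first-order multiplier update and the penalty-growth rule shown here are standard augmented Lagrangian choices assumed for illustration; the paper defers the exact update scheme to [32].

```python
import numpy as np

def mal_igwo(f, eq_cons, ineq_cons, lb, ub, dim,
             outer_iters=20, gamma=10.0, sigma_max=1e10, tol=1e-8):
    """Sketch of the MAL-IGWO outer loop of Fig. 3: alternate between solving
    sub-problem (20) with the (I)GWO search and updating multipliers/penalties."""
    p, m = len(eq_cons), len(eq_cons) + len(ineq_cons)
    lam = np.ones(m)                       # lambda^0 = (1, ..., 1)
    sigma = np.full(m, 10.0)               # sigma^0 = (10, ..., 10)
    x = None
    for _ in range(outer_iters):
        def sub(y):                        # sub-problem (20) objective
            return mal_penalty(f, eq_cons, ineq_cons, y, lam, sigma)
        x, _ = gwo(sub, dim, lb, ub)       # stands in for IGWO (Eq. 21 schedule)

        g = np.array([c(x) for c in list(eq_cons) + list(ineq_cons)])
        lam[:p] = lam[:p] - sigma[:p] * g[:p]                   # first-order update
        lam[p:] = np.maximum(0.0, lam[p:] - sigma[p:] * g[p:])  # keep ineq. lam >= 0

        viol = np.concatenate([np.abs(g[:p]), np.maximum(0.0, g[p:])])
        if viol.size and viol.max() < tol:                      # feasible enough
            break
        sigma = np.minimum(gamma * sigma, sigma_max)            # grow penalties
    return x
```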


4 Numerical results and analysis

In this section, the results of the proposed MAL-IGWO algorithm are compared with those of some state-of-the-art algorithms for solving constrained optimization problems. All approaches were applied to 24 well-known benchmark constrained optimization problems from CEC2006 [33] and 3 benchmark engineering design optimization problems.

4.1 Benchmark functions and parameter setting

In this work, 24 benchmark constrained optimization problems were considered in order to evaluate the performance of the proposed MAL-IGWO algorithm. This set contains difficult constrained optimization problems with very distinct properties [33]. The characteristics of these problems are summarized in Table 1, which lists the number of decision variables (n), the type of objective function f(x), the feasibility ratio (ρ), the number of linear inequality constraints (LI), the number of nonlinear equality constraints (NE), the number of nonlinear inequality constraints (NI), the number of linear equality constraints (LE), the number of active constraints (a), and the optimal solution of each problem. The feasibility ratio ρ is the ratio between the size of the feasible search space and that of the entire search space. Note that maximization problems are transformed into minimization using −f(x).

The following parameters were established experimentally for the best performance of MAL-IGWO: the population size of GWO was set to 60 and the maximum iteration number to 1000 (120,000 evaluations were carried out per independent run). In the modified augmented Lagrangian method, the initial Lagrangian multiplier vector is $\lambda^0 = (1, 1, \ldots, 1)$, the initial penalty parameter vector is $\sigma^0 = (10, 10, \ldots, 10)$, the user-required tolerance is $\epsilon = 10^{-8}$, the maximum allowed penalty parameter is $\sigma_u = 10^{10}$, the penalty parameter increasing factor is $\gamma = 10$, and the reduction factor for the feasibility norm is $\zeta = 0.25$. For each benchmark problem, 30 independent runs were performed in MATLAB 7.0.

4.2 Experimental results of the MAL-IGWO and MAL-GWO algorithms

The experimental results obtained by MAL-IGWO and MAL-GWO are presented in Table 2, where the best-known optimal solution and the best value, the mean value, the worst value, and the standard deviation of the obtained objective function values over 30 runs are listed under the given parameter settings. A result in boldface indicates the best result; NF means that no feasible solutions were found; when all algorithms had equal results, none is emphasized in bold.

As shown in Table 2, the proposed MAL-IGWO algorithm is able to find the best-known optimal solution consistently over 30 runs on all test problems except g03, g05, g11, g13, g20, and g22. With respect to g03, g05, g11, and g13, although the optimal solutions are not consistently found, the best results achieved are very close to the global optimal solutions. For problems g20 and g22, the MAL-IGWO algorithm cannot find feasible solutions in any run. For test problem g10, a typical result found by MAL-IGWO is $\vec{x} = [579.30621, 1359.96533, 5109.96515, 182.01860, 295.60139, 271.98140, 286.41721, 395.60139]$ with $f(\vec{x}) = 7049.23670$, which is the best result reported so far. Furthermore, the standard deviations over 30 runs for the entire set of test problems in Table 2 are relatively small, which reflects that MAL-IGWO is stable and robust for solving the constrained optimization problems of Table 1. In particular, for problems g08, g12, g15, and g24, the standard deviations of the objective function are equal to 0.

Compared with MAL-GWO, the proposed MAL-IGWO algorithm finds better results on ten test problems (g02, g07, g09, g10, g13, g14, g16, g17, g18, g19) and similar results on seven problems (g03, g04, g08, g11, g12, g15, g24). For test problems g01, g05, g06, g21, and g23, the two algorithms obtained similar "best" solutions, but MAL-IGWO found better "mean", "worst", and standard deviation values. The convergence graphs of the function values over the number of iterations at the median run are plotted in Fig. 4. The above results and analysis validate that the proposed MAL-IGWO is an effective and promising approach for constrained optimization problems.

4.3 Comparison between MAL-IGWO and other EAs with the augmented Lagrangian method

To further verify the performance of the proposed MAL-IGWO algorithm, comparisons are carried out with four evolutionary algorithms with the augmented Lagrangian method from the literature: the genetic algorithm-based augmented Lagrangian (GAAL) [3], the augmented Lagrangian fish swarm-based method (ALFS) [34], the augmented Lagrangian ant colony optimization-based method (ALACO) [35], and the hybrid genetic pattern search augmented Lagrangian method (HGPSAL) [31].


Table 1 Details of the 24 benchmark constrained test problems

Problem  n   Type of f(x)  ρ (%)    LI  NE  NI  LE  a   Optimal solution
g01      13  Quadratic     0.0003   9   0   0   0   6   -15.00000000
g02      20  Nonlinear     99.9965  0   0   2   0   1   -0.803619
g03      10  Polynomial    0.0000   0   1   0   0   1   -1.000500
g04      5   Quadratic     52.1230  0   0   6   0   2   -30,665.53867
g05      4   Cubic         0.0000   2   3   0   0   3   5126.4967
g06      2   Cubic         0.0066   0   0   2   0   2   -6961.813876
g07      10  Quadratic     0.0003   3   0   5   0   6   24.306209
g08      2   Nonlinear     0.8560   0   0   2   0   0   -0.095825
g09      7   Polynomial    0.5251   0   0   4   0   2   680.630057
g10      8   Linear        0.0010   3   0   3   0   6   7049.24802
g11      2   Quadratic     0.0000   0   1   0   0   1   0.749900
g12      3   Quadratic     4.7713   0   0   1   0   0   -1.000000
g13      5   Nonlinear     0.0000   0   3   0   0   3   0.0539415
g14      10  Nonlinear     0.0000   0   0   0   3   3   -47.7649
g15      3   Quadratic     0.0000   0   1   0   1   2   961.715
g16      5   Nonlinear     0.0204   4   0   34  0   4   -1.905
g17      6   Nonlinear     0.0000   0   4   0   0   4   8853.53387
g18      9   Quadratic     0.0000   0   0   13  0   6   -0.866025
g19      15  Nonlinear     33.4761  0   0   5   0   0   32.65559
g20      24  Linear        0.0000   0   12  6   2   16  0.0967
g21      7   Linear        0.0000   0   5   1   0   6   193.72451
g22      22  Linear        0.0000   0   11  1   8   19  236.431
g23      9   Linear        0.0000   0   1   2   3   6   -400.0551
g24      2   Linear        79.6556  0   0   2   0   2   -5.50801

Table 3 lists the best obtained function value after the runs, the average of the obtained function values, and the average number of function evaluations (Aver. FEs). The character "–" means that the result is not available in the paper. Solutions to problems g03, g05, g11, and g13 by GAAL are not given in [3], and therefore they are not included in Table 3. It is necessary to emphasize that the experimental results of GAAL, ALFS, ALACO, and HGPSAL reported in Table 3 were taken directly from their literatures to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface.

As shown in Table 3, an interesting observation is that all five algorithms reliably found the global optimal solution of problem g08; this problem may be relatively easy to solve. Compared with the GAAL algorithm, MAL-IGWO obtained similar "best" and "mean" results on five test problems (g01, g04, g06, g07, and g08), although GAAL provided better "Aver. FEs" values on five test problems. On three test problems (g09, g10, and g12), MAL-IGWO finds better "best" and "mean" results than GAAL. On test problem g02, similar "best" results were obtained by GAAL and MAL-IGWO, although GAAL found a better "mean" result.

With respect to the ALFS algorithm, MAL-IGWO provided better "best" and "mean" results on ten test problems (g01, g02, g03, g04, g05, g06, g07, g09, g10, and g13). For test problems g08 and g11, similar "best" and "mean" results were obtained by the two algorithms. On test problem g12, the two algorithms provided similar "best" values, but the better "mean" value was obtained by MAL-IGWO.

In comparison with the ALACO algorithm, the proposed MAL-IGWO found better results on five test problems (g06, g07, g09, g10, and g13) and similar results on six test problems (g01, g03, g04, g05, g08, and g12). For test problem g02, MAL-IGWO provided a better "best" value, while ALACO obtained a better "mean" result. On test problem g11, MAL-IGWO provided a better "mean" value and a similar "best" result compared with ALACO.


Table 2 Results obtained by MAL-IGWO and MAL-GWO over 30 independent runs on the 24 problems

Function  Optimal value   Algorithm  Best            Mean            Worst           Standard deviation
g01       -15.000000      MAL-GWO    -15.000000      -14.500000      -13.000000      8.66E-01
                          MAL-IGWO   -15.000000      -15.000000      -15.000000      9.70E-10
g02       -0.803619       MAL-GWO    -0.794897       -0.732327       -0.622409       6.43E-02
                          MAL-IGWO   -0.803619       -0.754524       -0.699045       3.19E-02
g03       -1.000500       MAL-GWO    -1.000000       -1.000000       -1.000000       3.20E-08
                          MAL-IGWO   -1.000000       -1.000000       -1.000000       1.00E-10
g04       -30,665.53867   MAL-GWO    -30,665.53867   -30,665.53867   -30,665.53867   1.50E-06
                          MAL-IGWO   -30,665.53868   -30,665.53867   -30,665.53867   9.27E-08
g05       5126.4967       MAL-GWO    5126.498        5126.769        5127.280        4.43E-01
                          MAL-IGWO   5126.4981       5126.4981       5126.4981       8.19E-06
g06       -6961.813876    MAL-GWO    -6961.813800    -6961.811800    -6961.809051    2.46E-03
                          MAL-IGWO   -6961.813876    -6961.813876    -6961.813876    1.21E-05
g07       24.306209       MAL-GWO    24.301176       24.318385       24.329451       9.95E-03
                          MAL-IGWO   24.306209       24.306209       24.306209       2.79E-08
g08       -0.095825       MAL-GWO    -0.095825       -0.095825       -0.095825       1.39E-17
                          MAL-IGWO   -0.095825       -0.095825       -0.095825       0.00E+00
g09       680.630057      MAL-GWO    680.630693      680.638267      680.649309      9.78E-03
                          MAL-IGWO   680.630037      680.630052      680.630057      8.46E-06
g10       7049.24802      MAL-GWO    7051.79723      7054.71543      7056.66315      2.45E+00
                          MAL-IGWO   7049.23670      7049.23678      7049.23693      8.77E-05
g11       0.749900        MAL-GWO    0.750000        0.750000        0.750000        2.39E-09
                          MAL-IGWO   0.749999        0.750000        0.750000        5.90E-09
g12       -1.000000       MAL-GWO    -1.000000       -1.000000       -1.000000       7.76E-11
                          MAL-IGWO   -1.000000       -1.000000       -1.000000       0.00E+00
g13       0.0539415       MAL-GWO    0.0539500       0.0551215       0.0572651       1.40E-03
                          MAL-IGWO   0.0539498       0.0539498       0.0539498       1.10E-12
g14       -47.7649        MAL-GWO    -47.7611        -47.5813        -47.5173        1.04E-01
                          MAL-IGWO   -47.7649        -47.7649        -47.7649        7.11E-10
g15       961.715         MAL-GWO    961.715         961.715         961.715         3.63E-07
                          MAL-IGWO   961.715         961.715         961.715         0.00E+00
g16       -1.905155       MAL-GWO    -1.905154       -1.905154       -1.905153       3.68E-07
                          MAL-IGWO   -1.905155       -1.905155       -1.905155       8.46E-12
g17       8853.53387      MAL-GWO    8853.53989      8872.05436      8927.59773      3.21E+01
                          MAL-IGWO   8853.53387      8853.62034      8854.32665      1.29E-05
g18       -0.866025       MAL-GWO    -0.864534       -0.860159       -0.854840       4.92E-04
                          MAL-IGWO   -0.866025       -0.866025       -0.866025       2.89E-08
g19       32.65559        MAL-GWO    32.65585        32.70819        32.79447        7.53E-02
                          MAL-IGWO   32.65559        32.65559        32.65559        5.02E-10
g20       0.0967          MAL-GWO    NF              NF              NF              NF
                          MAL-IGWO   NF              NF              NF              NF
g21       193.72451       MAL-GWO    193.72451       237.38395       324.70284       7.56E+01
                          MAL-IGWO   193.72451       193.72451       193.72451       3.93E-08
g22       236.431         MAL-GWO    NF              NF              NF              NF
                          MAL-IGWO   NF              NF              NF              NF
g23       -400.0551       MAL-GWO    -400.0551       -398.9140       -396.6321       1.98E+00
                          MAL-IGWO   -400.0551       -400.0551       -400.0551       4.42E-11
g24       -5.50801        MAL-GWO    -5.50801        -5.50801        -5.50801        1.85E-06
                          MAL-IGWO   -5.50801        -5.50801        -5.50801        0.00E+00


Fig. 4 Convergence graphs of the function values over the number of iterations at the median run for problems g01, g02, g07, g14, g19, and g24

Compared with the HGPSAL algorithm, it can be observed from Table 3 that MAL-IGWO obtained better "mean" results on seven test problems (g02, g04, g06, g07, g09, g10, and g13) and similar results on five test problems (g03, g05, g08, g11, and g12). For test problem g01, the MAL-IGWO algorithm found a better "mean" value and a similar "best" value compared with HGPSAL.

As a general remark on the comparison above, the proposed MAL-IGWO algorithm performs competitively or better with respect to ALFS, ALACO, and HGPSAL in terms of the efficiency, quality, and robustness of the search on most of the benchmark test problems.

4.4 Comparison with other evolutionary algorithms


Table 3 Comparison between MAL-IGWO and GAAL, ALFS, ALACO, HGPSAL on 13 test problems

Function        GAAL [3]        ALFS [34]     ALACO [35]      HGPSAL [31]   MAL-IGWO
g01
  Best          -15             -14.9994      -15.000000      -15.00000     -15.000000
  Mean          -15             -14.8818      -15.000000      -14.99998     -15.000000
  Aver. FEs     15,870          339,900       345,185         87,927        79,392
g02
  Best          -0.803619       -0.59393      -0.803519       -0.611330     -0.803619
  Mean          -0.803619       -0.47966      -0.797772       -0.556323     -0.754524
  Aver. FEs     38,937          889,416       445,443         227,247       120,000
g03
  Best          –               -0.99858      -1.000006       -1.000000     -1.000000
  Mean          –               -0.99192      -1.000002       -1.000000     -1.000000
  Aver. FEs     –               233,264       409,566         113,890       71,596
g04
  Best          -30,665.53867   -30,665.54    -30,665.53867   -30,665.54    -30,665.53868
  Mean          -30,665.53867   -30,665.50    -30,665.53867   -30,665.54    -30,665.53867
  Aver. FEs     3611            53,773        197,483         106,602       99,608
g05
  Best          –               5126.681      5126.4981       5126.498      5126.4981
  Mean          –               5134.745      5126.4981       5126.498      5126.4981
  Aver. FEs     –               45,196        149,363         199,439       46,810
g06
  Best          -6961.813876    -6961.640     -6961.813725    -6961.814     -6961.813876
  Mean          -6961.813876    -6851.709     -6961.813444    -6961.809     -6961.813876
  Aver. FEs     2755            17,080        276,641         77,547        30,798
g07
  Best          24.306209       24.3065       24.310908       24.30621      24.306209
  Mean          24.306209       24.3109       24.330793       24.30621      24.306209
  Aver. FEs     20,219          218,989       318,348         81,060        120,000
g08
  Best          -0.095825       -0.095825     -0.095825       -0.095825     -0.095825
  Mean          -0.095825       -0.095825     -0.095825       -0.095825     -0.095825
  Aver. FEs     100,606         13,987        34,504          39,381        120,000
g09
  Best          680.630057      680.630       680.630209      680.6301      680.630037
  Mean          680.630057      680.631       680.631189      680.6301      680.630052
  Aver. FEs     6450            99,198        398,211         56,564        118,612
g10
  Best          7049.24802      7055.47       7049.50541      7049.247      7049.23670
  Mean          7049.24802      7134.54       7139.70173      7049.248      7049.23678
  Aver. FEs     10,578          140,395       400,336         150,676       93,190
g11
  Best          –               0.74999       0.749999        0.750000      0.749999
  Mean          –               0.74999       0.750009        0.750000      0.750000
  Aver. FEs     –               22,220        362,949         17,948        19,606
g12
  Best          -0.999375       -1.00000      -1.000000       -1.000000     -1.000000
  Mean          -0.999375       -0.99808      -1.000000       -1.000000     -1.000000
  Aver. FEs     1705            2204          362,949         61,344        18,000
g13
  Best          –               0.05395       0.053949        0.053950      0.0539498
  Mean          –               0.05885       0.304071        0.349041      0.0539498
  Aver. FEs     –               58,204        234,223         31,269        73,220


In order to further evaluate the effectiveness and efficiency of MAL-IGWO, we report the solutions obtained by MAL-IGWO, as well as those obtained by the modified artificial bee colony algorithm (MABC) [36], the hybrid evolutionary algorithm (HEA) [37], the simple evolutionary strategy (ES) [5], the genetic algorithm (GA) [38], and the hybrid differential evolution algorithm (HDE) [7]. These evolutionary computation-based algorithms use different approaches to handle constraints: MABC uses the feasibility-based rule of [39]; HEA uses a multi-objective optimization method; ES uses an adaptive constraint-handling technique; GA uses a rough penalty function method; and HDE uses a penalty function. It should be noted that the MAL-IGWO algorithm is thus being compared with stochastic algorithms that use different constraint-handling techniques. The comparative results are shown in Tables 4, 5, and 6. It is necessary to emphasize that the experimental results of MABC, HEA, ES, GA, and HDE reported in Tables 4, 5, and 6 were taken directly from their literatures to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface. Solutions to problems g12 and g13 by GA are not given in [38], and therefore they are not included in Tables 4, 5, and 6. The character "–" means that the result is not available in the paper.

As shown in Tables 4, 5, and 6, almost all six algorithms obtain similar results on six test problems (g01, g03, g04, g06, g08, and g12), while the results found by MAL-IGWO for g10 are closer to the optimal solution than those of all the other algorithms, although the others obtained practically good results as well. For test problem g02, as seen from Table 4, the proposed MAL-IGWO algorithm finds the global optimal solution (−0.803619), while the other five algorithms are unable to do so; however, as seen from Tables 5 and 6, HEA obtained better "mean" and "worst" results for g02. For test problems g05, g07, and g09, the proposed MAL-IGWO algorithm found better "best", "mean", and "worst" results than MABC, ES, GA, and HDE, and the results of MAL-IGWO and HEA are nearly the same on these three problems. For test problem g13, MAL-IGWO obtained better results than the MABC, ES, and HDE algorithms. As far as the computational cost (the number of function evaluations) is concerned, Table 7 shows that the HDE algorithm has the minimum computational cost (10,000 FEs) for most test problems, while GA has a considerable computational cost (350,000 FEs) for all test problems; in general, the cost required by our algorithm is moderate among the constrained optimization approaches compared. Based on the above results, MAL-IGWO shows a very competitive performance against MABC, HEA, GA, ES, and HDE, which are state-of-the-art algorithms in the field of constrained optimization.

4.5 Discussion of parameter μ

The main purpose of this section is to investigate the effect of the parameter setting on the performance of MAL-IGWO. For all experiments in this section, 30 independent runs were executed on the 24 constrained test problems. In Eq. (21), there is a nonlinear modulation index parameter μ. As mentioned previously, this parameter adjusts the exploration and exploitation of the population over the course of the iterations. In order to investigate the sensitivity of MAL-IGWO to the parameter μ, a set of experiments was performed, with all other parameter settings kept unchanged. We tested MAL-IGWO with different values of μ: 0.1, 0.5, 1.1, 1.5, 2.1, and 2.5, and summarize the mean objective function values in Table 8. For clarity, the results of the best algorithms are marked in boldface.

Table 4 Best solution obtained by the MABC, HEA, ES, GA, HDE, and MAL-IGWO on 13 test problems

Function  MABC [36]    HEA [37]     ES [5]       GA [38]      HDE [7]     MAL-IGWO
g01       -15.000      -15.000      -15.000      -15.000      -15.000     -15.000000
g02       -0.803615    -0.803241    -0.803388    -0.803612    -0.80       -0.803619
g03       -1.000       -1.000       -1.000       -1.000       -1.00       -1.000000
g04       -30,665.539  -30,665.539  -30,665.539  -30,665.539  -30,665.54  -30,665.53868
g05       5126.736     5126.498     5126.498     5126.544     5126.50     5126.4981
g06       -6961.814    -6961.814    -6961.814    -6961.814    -6961.81    -6961.813876
g07       24.315       24.306       24.306       24.333       24.31       24.306209
g08       -0.095825    -0.095825    -0.095825    -0.095825    -0.10       -0.095825
g09       680.632      680.630      680.630      680.631      680.63      680.630037
g10       7051.706     7049.287     7052.253     7049.861     7049.25     7049.23670
g11       0.75         0.750        0.75         0.749        0.75        0.749999
g12       -1.000       -1.000       -1.000       –            -1.00       -1.000000
g13       0.053985     0.0539498    0.05394      –            0.05        0.0539498


Table 5 Mean solution obtained by the MABC, HEA, ES, GA, HDE, and MAL-IGWO on 13 test problems

Function  MABC [36]    HEA [37]     ES [5]       GA [38]      HDE [7]     MAL-IGWO
g01       -15.000      -15.000      -15.000      -15.000      -14.851     -15.000000
g02       -0.799336    -0.801258    -0.790148    -0.794453    -0.74       -0.754524
g03       -1.000       -1.000       -1.000       -1.000       -1.00       -1.000000
g04       -30,665.539  -30,665.539  -30,665.539  -30,665.539  -30,665.54  -30,665.53867
g05       5178.139     5126.498     5127.648     5352.188     5127.29     5126.4981
g06       -6961.814    -6961.814    -6961.814    -6961.814    -6961.81    -6961.813876
g07       24.415       24.307       24.316       24.387       24.31       24.306209
g08       -0.095825    -0.095825    -0.095825    -0.095825    -0.10       -0.095825
g09       680.647      680.630      680.639      680.634      680.63      680.630052
g10       7233.882     7049.525     7250.437     7131.084     7049.34     7049.23678
g11       0.75         0.750        0.75         0.749        0.75        0.750000
g12       -1.000       -1.000       -1.000       –            -1.00       -1.000000
g13       0.158552     0.0539498    0.053959     –            0.05        0.0539498

Table 6 Worst solution obtained by the MABC, HEA, ES, GA, HDE, and MAL-IGWO on 13 test problems

Function  MABC [36]    HEA [37]     ES [5]       GA [38]      HDE [7]     MAL-IGWO
g01       -15.000      -15.000      -15.000      -15.000      -13.000     -15.000000
g02       -0.777438    -0.792363    -0.756986    -0.780826    -0.53       -0.699045
g03       -1.000       -1.000       -1.000       -1.000       -1.00       -1.000000
g04       -30,665.539  -30,665.539  -30,665.539  -30,665.539  -30,665.54  -30,665.53867
g05       5317.197     5126.498     5135.256     5888.510     5129.42     5126.4981
g06       -6961.814    -6961.814    -6961.814    -6961.814    -6961.81    -6961.813876
g07       24.854       24.309       24.359       24.427       24.31       24.306209
g08       -0.095825    -0.095825    -0.095825    -0.095825    -0.10       -0.095825
g09       680.691      680.630      680.673      680.637      680.63      680.630057
g10       7473.109     7049.984     7560.224     7263.461     7050.23     7049.23693
g11       0.75         0.750        0.75         0.749        0.75        0.750000
g12       -1.000       -1.000       -0.994       –            -1.00       -1.000000
g13       0.442905     0.0539498    0.053999     –            0.05        0.0539498

As shown in Table 8, for test problems g08, g12, and g16, an interesting result is that decreasing or increasing the parameter μ has no significant negative effect. In the case of μ = 2.5, the results are of much worse quality than those of the other settings, except on test problems g08, g12, and g16. In the cases of μ = 0.1 and μ = 0.5, the quality of the results is better than that provided by μ = 2.5, but still poor on nineteen test problems (g01, g02, g03, g04, g05, g06, g07, g09, g10, g11, g13, g14, g15, g17, g18, g19, g21, g23, and g24). In the case of μ = 2.1, the results are worse than those of μ = 1.1 and μ = 1.5 except on test problems g08, g11, g12, g16, and g24. In the case of μ = 1.5, the quality of the results is better than that provided by μ = 0.1, 0.5, 2.1, and 2.5, but still worse than that of μ = 1.1 on twelve test problems (g02, g05, g07, g09, g10, g13, g14, g17, g18, g19, g21, and g23). Based on this analysis, μ = 1.1 is an appropriate setting for the proposed MAL-IGWO.

4.6 Experiment on engineering design problems

In this section, to study the performance of our algorithm on real-world constrained optimization problems, three well-studied engineering design problems that are widely used in the literature are solved.

4.6.1 Tension/compression spring design problem

The tension/compression spring design problem (shown in Fig. 5) is described in Belegundu [40]. The objective is to minimize the weight f(x) of a spring subject to three nonlinear and one linear inequality constraints, with three continuous design variables: the wire diameter d(x1), the mean coil diameter D(x2), and the number of active coils P(x3).


Table 7 Average number of function evaluations obtained by the MABC, HEA, ES, GA, HDE, and MAL-IGWO on 13 test problems

Function  MABC [36]  HEA [37]  ES [5]    GA [38]   HDE [7]   MAL-IGWO
g01       20,500     240,000   240,000   350,000   10,000    79,392
g02       83,500     240,000   240,000   350,000   100,000   120,000
g03       189,000    240,000   240,000   350,000   10,000    71,596
g04       76,400     240,000   240,000   350,000   10,000    99,608
g05       –          240,000   240,000   350,000   10,000    46,810
g06       107,000    240,000   240,000   350,000   10,000    30,798
g07       –          240,000   240,000   350,000   10,000    120,000
g08       1550       240,000   240,000   350,000   10,000    120,000
g09       –          240,000   240,000   350,000   10,000    118,612
g10       –          240,000   240,000   350,000   10,000    93,190
g11       189,000    240,000   240,000   350,000   10,000    19,606
g12       1350       240,000   240,000   –         10,000    18,000
g13       189,000    240,000   240,000   –         10,000    73,220

The mathematical formulation of the spring design problem is given as follows:

Minimize
$f(\vec{x}) = (x_3 + 2) x_2 x_1^2$  (22)

Subject to
$g_1(\vec{x}) = 1 - \dfrac{x_2^3 x_3}{71785 x_1^4} \le 0$

$g_2(\vec{x}) = \dfrac{4x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \dfrac{1}{5108 x_1^2} - 1 \le 0$

$g_3(\vec{x}) = 1 - \dfrac{140.45 x_1}{x_2^2 x_3} \le 0$

$g_4(\vec{x}) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0$  (23)

where $0.05 \le x_1 \le 2.0$, $0.25 \le x_2 \le 1.3$, and $2 \le x_3 \le 15$.

Fig. 5 Schematic view of the tension/compression spring design problem
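A minimal Python encoding of the spring design problem of Eqs. (22)–(23) is given below; the names are illustrative assumptions. Plugging in the MAL-IGWO solution from Table 9 should reproduce an objective value of about 0.012665 and satisfy all four constraints to within the rounding of the tabulated digits.

```python
import numpy as np

def spring_objective(x):
    """Spring weight, Eq. (22): x = (d, D, P)."""
    d, D, P = x
    return (P + 2) * D * d**2

def spring_constraints(x):
    """Inequality constraints g1-g4 of Eq. (23); feasible when all <= 0."""
    d, D, P = x
    return np.array([
        1 - D**3 * P / (71785 * d**4),
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4)) + 1 / (5108 * d**2) - 1,
        1 - 140.45 * d / (D**2 * P),
        (d + D) / 1.5 - 1,
    ])

# Check the MAL-IGWO solution reported in Table 9
x_best = np.array([0.0517030133, 0.3570705348, 11.269309420])
print(spring_objective(x_best))                       # about 0.012665
print(np.all(spring_constraints(x_best) <= 1e-3))     # tolerance for rounded digits
```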
The spring design problem is a practical design problem that has often been used as a benchmark for testing different optimization methods: the genetic algorithm with dominance tournament selection (GADTS) [41], co-evolutionary particle swarm optimization (CPSO) [42], the improved group search optimizer (IGSO) [43], the accelerating adaptive trade-off model with evolutionary algorithm (AATMEA) [44], and co-evolutionary differential evolution (CEDE) [45]. The best solution obtained by the MAL-IGWO algorithm in this study is compared with the five best solutions reported in these literatures in Table 9, and the statistical results of the five algorithms and MAL-IGWO are given in Table 10. It is necessary to emphasize that the results of GADTS, CPSO, IGSO, AATMEA, and CEDE reported in Table 9 were taken directly from their literatures to ensure a fair comparison. For clarity, the results of the best algorithms are marked in boldface.

As shown in Table 9, it is evident that the best solution obtained by the MAL-IGWO algorithm is better than those of the other stochastic algorithms except for IGSO; compared with IGSO, MAL-IGWO finds a similar "best" result. From Table 10, with respect to GADTS, CPSO, AATMEA, and CEDE, it is clear that the proposed MAL-IGWO provides better "best", "mean", "worst", and "standard deviation" results for the tension/compression spring design problem; compared to IGSO, MAL-IGWO obtains similar "best" values and better "mean", "worst", and "standard deviation" values.

4.6.2 Pressure vessel design problem

The pressure vessel design problem (shown in Fig. 6) is described in Sandgren [46], who first proposed this problem. The objective is to minimize the total cost of the pressure vessel, considering the cost of material, forming, and welding. This problem has one nonlinear and three linear inequality constraints, two continuous design variables [the inner radius R(x3) and the length of the cylindrical section of the vessel L(x4)], and two discrete design variables [the thickness of the shell Ts(x1) and the thickness of the head Th(x2)].


Table 8 Mean results obtained by MAL-IGWO with varying μ over 30 independent runs

Function  μ = 0.1        μ = 0.5        μ = 1.1        μ = 1.5        μ = 2.1        μ = 2.5
g01       -14.933620     -14.999638     -15.000000     -15.000000     -14.999956     -14.908063
g02       -0.673356      -0.695209      -0.754524      -0.726812      -0.717223      -0.662088
g03       -0.999910      -0.999985      -1.000000      -1.000000      -0.999993      -0.999862
g04       -30,664.72109  -30,665.82135  -30,665.53867  -30,665.53867  -30,665.54288  -30,664.25011
g05       5298.3306      5237.1280      5126.4981      5127.3586      5185.1390      5338.2210
g06       -6960.65028    -6961.23711    -6961.813876   -6961.813876   -6961.810126   -6960.26634
g07       24.763097      24.540238      24.306209      24.307260      24.425036      25.108320
g08       -0.095825      -0.095825      -0.095825      -0.095825      -0.095825      -0.095825
g09       680.688092     680.662508     680.630052     680.637238     680.649328     680.712033
g10       7285.40923     7251.33679     7049.23678     7051.33691     7186.22893     7365.81022
g11       0.750008       0.750003       0.750000       0.750000       0.750000       0.750012
g12       -1.000000      -1.000000      -1.000000      -1.000000      -1.000000      -1.000000
g13       0.180236       0.092344       0.0539498      0.0539852      0.0553267      0.592163
g14       -46.8623       -47.6911       -47.7649       -47.7593       -47.7194       -46.5249
g15       961.788        961.742        961.715        961.715        961.721        961.793
g16       -1.905155      -1.905155      -1.905155      -1.905155      -1.905155      -1.905155
g17       9133.76138     8957.48042     8853.62034     8898.61827     8936.75435     9196.51245
g18       -0.795295      -0.803698      -0.866025      -0.846937      -0.837034      -0.787560
g19       34.88302       34.54395       32.65559       33.51324       34.14988       35.45822
g20       NF             NF             NF             NF             NF             NF
g21       213.07758      199.99288      193.72451      193.72686      194.97840      290.48750
g22       NF             NF             NF             NF             NF             NF
g23       -398.9719      -399.1380      -400.0551      -400.0466      -400.0437      -396.6077
g24       -4.41421       -4.87454       -5.50801       -5.50801       -5.50801       -4.10778

NF means that no feasible solutions were found

The pressure vessel design problem is stated as follows:

Minimize
$f(\vec{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$  (24)

Subject to
$g_1(\vec{x}) = -x_1 + 0.0193 x_3 \le 0$

$g_2(\vec{x}) = -x_2 + 0.00954 x_3 \le 0$

$g_3(\vec{x}) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0$

$g_4(\vec{x}) = x_4 - 240 \le 0$  (25)

where $1 \le x_1 \le 99$, $1 \le x_2 \le 99$, $10 \le x_3 \le 200$, and $10 \le x_4 \le 200$.

Fig. 6 Schematic view of the pressure vessel design problem

This problem has already been solved in several research works: Mezura-Montes and Coello [41] proposed a GA-based method for solving the pressure vessel design problem, He and Wang [42] used a co-evolutionary PSO algorithm, Shen et al. [43] applied an improved GSO method, an accelerating ATM with an evolutionary algorithm was proposed by Wang et al. [44], and Huang et al. [45] solved this problem using a co-evolutionary DE algorithm. Table 11 provides the optimal solutions of the pressure vessel design problem determined by the GADTS, CPSO, IGSO, AATMEA, and CEDE algorithms, as well as by the proposed MAL-IGWO, and their statistical results are given in Table 12. The results of the compared algorithms are all taken directly from their corresponding references. For clarity, the results of the best algorithms are marked in boldface.

As can be seen in Tables 11 and 12, with respect to the GADTS, CPSO, AATMEA, and CEDE algorithms, the proposed MAL-IGWO algorithm provided better "best", "mean", "worst", and "standard deviation" results for the pressure vessel design problem. Compared with IGSO, MAL-IGWO found better "mean", "worst", and "standard deviation" results, although IGSO obtains a better "best" result.
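Analogously, the pressure vessel problem of Eqs. (24)–(25) can be encoded and the MAL-IGWO solution of Table 11 checked directly; the sketch below (names assumed) should yield a cost of about 6059.71, with a small tolerance to absorb the rounding of the tabulated digits.

```python
import numpy as np

def vessel_cost(x):
    """Pressure vessel cost, Eq. (24): x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x):
    """Inequality constraints g1-g4 of Eq. (25); feasible when all <= 0."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000,
        x4 - 240,
    ])

# Check the MAL-IGWO solution reported in Table 11
x_best = np.array([0.8125, 0.4375, 42.098445596, 176.63659584])
print(vessel_cost(x_best))                            # about 6059.71
print(np.all(vessel_constraints(x_best) <= 1e-3))     # tolerance for rounded digits
```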


Table 9 Comparison of the best solution for the spring design problem obtained by six methods

          GADTS [41]  CPSO [42]   IGSO [43]   AATMEA [44]    CEDE [45]  MAL-IGWO
x1(d)     0.051989    0.0517280   0.051691    0.0518130955   0.051609   0.0517030133
x2(D)     0.363965    0.3576440   0.356765    0.3596904119   0.354714   0.3570705348
x3(P)     10.890522   11.244543   11.286172   11.1192526803  11.410831  11.269309420
g1(x)     -1.30E-05   -8.45E-04   -2.095E-11  -1.62E-04      –          -6.70E-09
g2(x)     -2.10E-05   -1.26E-05   -1.302E-11  -4.20E-05      –          -4.73E-09
g3(x)     -1.061338   -4.051300   -4.053880   -4.058572      –          -4.0544448
g4(x)     -0.722698   -0.727090   -0.727696   -0.725664      –          -0.7274957
f(x)      0.012681    0.0126747   0.012665    0.012668262    0.0126702  0.01266523

Table 10 Statistical results of six methods for the spring design problem

                    GADTS [41]  CPSO [42]   IGSO [43]  AATMEA [44]  CEDE [45]  MAL-IGWO
Best                0.012681    0.0126747   0.012665   0.012668262  0.0126702  0.01266523
Mean                0.012742    0.0127300   0.012708   0.012708075  0.0126703  0.01266880
Worst               0.012973    0.0129240   0.012994   0.012861375  0.0126790  0.01268105
Standard deviation  5.90E-03    5.1985E-04  5.10E-05   4.50E-05     2.70E-05   6.94E-06

4.6.3 Speed reducer design problem

The speed reducer design problem (shown in Fig. 7) is described in Mezura-Montes [46], who proposed this problem. The objective is to minimize the total weight of the speed reducer subject to eleven inequality constraints, with seven continuous design variables. The constraints involve limitations on the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts.

The mathematical formulation of the speed reducer design problem can be summarized as follows:

Minimize
$f(\vec{x}) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)$  (26)

Subject to
$g_1(\vec{x}) = \dfrac{27}{x_1 x_2^2 x_3} - 1 \le 0$

$g_2(\vec{x}) = \dfrac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0$

$g_3(\vec{x}) = \dfrac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0$

$g_4(\vec{x}) = \dfrac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0$

$g_5(\vec{x}) = \dfrac{[(745 x_4 / (x_2 x_3))^2 + 16.9 \times 10^6]^{1/2}}{110.0 x_6^3} - 1 \le 0$

$g_6(\vec{x}) = \dfrac{[(745 x_5 / (x_2 x_3))^2 + 157.5 \times 10^6]^{1/2}}{85.0 x_7^3} - 1 \le 0$

$g_7(\vec{x}) = \dfrac{x_2 x_3}{40} - 1 \le 0$

$g_8(\vec{x}) = \dfrac{5 x_2}{x_1} - 1 \le 0$

$g_9(\vec{x}) = \dfrac{x_1}{12 x_2} - 1 \le 0$

$g_{10}(\vec{x}) = \dfrac{1.5 x_6 + 1.9}{x_4} - 1 \le 0$

$g_{11}(\vec{x}) = \dfrac{1.1 x_7 + 1.9}{x_5} - 1 \le 0$  (27)

where $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, and …
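For completeness, here is a sketch of the speed reducer formulation of Eqs. (26)–(27) in the same style (names assumed); it can be paired with the `mal_penalty` and outer-loop sketches above to reproduce the kind of experiment reported in this section.

```python
import numpy as np

def reducer_weight(x):
    """Speed reducer weight, Eq. (26): x = (x1, ..., x7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def reducer_constraints(x):
    """Inequality constraints g1-g11 of Eq. (27); feasible when all <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        27.0 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x6**4 * x3) - 1,
        1.93 * x5**3 / (x2 * x7**4 * x3) - 1,
        np.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1,
        np.sqrt((745 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1,
        x2 * x3 / 40.0 - 1,
        5.0 * x2 / x1 - 1,
        x1 / (12.0 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ])
```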

Table 11 Comparison of the best solution for the pressure vessel design problem obtained by six methods

        GADTS [41]  CPSO [42]  IGSO [43]   AATMEA [44]  CEDE [45]  MAL-IGWO
x1(Ts)  0.8125      0.8125     0.8125      0.8125       0.8125     0.8125
x2(Th)  0.4375      0.4375     0.4375      0.4375       0.4375     0.4375
x3(R)   42.0974     42.0913    42.098446   42.0984      42.0984    42.098445596
x4(L)   176.6540    176.7465   176.636596  176.6375     176.6376   176.63659584
g1(x)   -2.01E-03   -1.37E-06  -3.40E-10   -1.13E-06    -6.67E-03  1.443290E-15
g2(x)   -3.58E-02   -3.59E-04  -3.59E-02   -0.035881    -3.58E-02  -0.035880829
g3(x)   -24.7593    -118.7687  -2.90E-02   -0.857920    -3.705123  -2.96859E-09
g4(x)   -63.3460    -63.2535   -63.363404  -63.362471   -63.3623   -63.36340416
f(x)    6059.9463   6061.0777  6059.7140   6059.72558   6059.7340  6059.7143

Table 12 Statistical results of six methods for the pressure vessel design problem

                     GADTS [41]   CPSO [42]   IGSO [43]   AATMEA [44]   CEDE [45]   MAL-IGWO
Best                 6059.9463    6061.0777   6059.7140   6059.7255     6059.7340   6059.7143
Mean                 6177.2533    6147.1332   6238.801    6061.9878     6085.2303   6059.7143
Worst                6469.3220    6363.8041   6820.410    6090.8022     6371.0455   6059.7143
Standard deviation   130.9297     86.45       194.315     4.70          43.013      4.55E-13
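The "Best/Mean/Worst/Standard deviation" rows in Tables 10 and 12 are summary statistics of the final objective values over repeated independent runs. A minimal sketch of that bookkeeping is below; the run results shown are illustrative placeholders, not the paper's data.

```python
import statistics

def summarize(run_results):
    """Best/mean/worst/std of final objective values over independent runs
    of a minimization algorithm."""
    return {
        "best": min(run_results),
        "mean": statistics.mean(run_results),
        "worst": max(run_results),
        "std": statistics.stdev(run_results),
    }

# e.g. final f-values of a few hypothetical runs on the pressure vessel problem
print(summarize([6059.7143, 6059.7143, 6059.7151, 6059.7143]))
```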


Fig. 7 Schematic view of speed reducer design problem

The mathematical formulation of the speed reducer design problem can be summarized as follows:

$$
\begin{aligned}
\text{Minimize } f(\vec{x}) = \; & 0.7854x_1x_2^2(3.3333x_3^2 + 14.9334x_3 - 43.0934) \\
& - 1.508x_1(x_6^2 + x_7^2) + 7.4777(x_6^3 + x_7^3) \\
& + 0.7854(x_4x_6^2 + x_5x_7^2)
\end{aligned}
\tag{26}
$$

$$
\text{Subject to}\quad
\begin{aligned}
& g_1(\vec{x}) = \frac{27}{x_1x_2^2x_3} - 1 \le 0 \\
& g_2(\vec{x}) = \frac{397.5}{x_1x_2^2x_3^2} - 1 \le 0 \\
& g_3(\vec{x}) = \frac{1.93x_4^3}{x_2x_6^4x_3} - 1 \le 0 \\
& g_4(\vec{x}) = \frac{1.93x_5^3}{x_2x_7^4x_3} - 1 \le 0 \\
& g_5(\vec{x}) = \frac{[(745x_4/(x_2x_3))^2 + 16.9\times10^6]^{1/2}}{110.0x_6^3} - 1 \le 0 \\
& g_6(\vec{x}) = \frac{[(745x_5/(x_2x_3))^2 + 157.5\times10^6]^{1/2}}{85.0x_7^3} - 1 \le 0 \\
& g_7(\vec{x}) = \frac{x_2x_3}{40} - 1 \le 0 \\
& g_8(\vec{x}) = \frac{5x_2}{x_1} - 1 \le 0 \\
& g_9(\vec{x}) = \frac{x_1}{12x_2} - 1 \le 0 \\
& g_{10}(\vec{x}) = \frac{1.5x_6 + 1.9}{x_4} - 1 \le 0 \\
& g_{11}(\vec{x}) = \frac{1.1x_7 + 1.9}{x_5} - 1 \le 0
\end{aligned}
\tag{27}
$$

where 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, and 5.0 ≤ x7 ≤ 5.5.
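A direct transcription of Eqs. (26)-(27) into Python is sketched below; evaluating it at the best MAL-IGWO design from Table 13 should return a weight close to the reported 2994.4711, with all eleven constraints non-positive.

```python
import math

def speed_reducer(x):
    """Objective (26) and constraints (27) of the speed reducer problem."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
         - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
         + 7.4777 * (x6 ** 3 + x7 ** 3)
         + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))
    g = [
        27.0 / (x1 * x2 ** 2 * x3) - 1.0,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,
        1.93 * x4 ** 3 / (x2 * x6 ** 4 * x3) - 1.0,
        1.93 * x5 ** 3 / (x2 * x7 ** 4 * x3) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110.0 * x6 ** 3) - 1.0,
        math.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85.0 * x7 ** 3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]
    return f, g

# MAL-IGWO best design from Table 13
f, g = speed_reducer((3.5, 0.7, 17.0, 7.3, 7.7153199115,
                      3.3502146661, 5.2866544650))
print(f, max(g))  # f ~ 2994.47; max(g) <= 0 indicates feasibility
```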
The available approaches for solving the speed reducer design problem include useful infeasible solutions with an evolutionary algorithm (abbreviated as ISEA) [47], a socio-behavioural simulation model (denoted as SSM) [48], the cuckoo search algorithm (abbreviated as CSA) [49], a simple evolutionary algorithm (denoted as SEA) [50], and a swarm with intelligent information sharing (denoted as SIIS) [51]. The best solutions obtained by these algorithms as well as the proposed MAL-IGWO are listed in Table 13, and their statistical results are shown in Table 14. The results of the compared algorithms in Tables 13 and 14 are all taken directly from their corresponding references. The character "–" means that the result is not available in the corresponding paper.

As shown in Table 13, the proposed MAL-IGWO algorithm obtained a better best solution for the speed reducer design problem than the ISEA, SSM, CSA, and SEA algorithms. Although the best solution reported for SIIS is lower than that of MAL-IGWO, it is not feasible: the fifth and sixth constraints (g5(x), g6(x)) are significantly violated in the results of the SIIS algorithm. The best results achieved by ISEA are very close to the solutions obtained by MAL-IGWO. From Table 14, MAL-IGWO found better results for the speed reducer design problem than all other algorithms except SIIS. Based on the aforementioned experimental results and comparisons, it can be concluded that the proposed MAL-IGWO remains competitive in solving constrained numerical optimization and engineering design problems.
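That infeasibility is easy to confirm from Eq. (27): substituting the SIIS design variables reported in Table 13 into g5 and g6 reproduces the positive (violated) values flagged there.

```python
import math

# SIIS design variables as reported in Table 13
x2, x3, x4, x5, x6, x7 = 0.700005, 17.0, 7.497343, 7.8346, 2.9018, 5.0022

g5 = math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110.0 * x6 ** 3) - 1.0
g6 = math.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85.0 * x7 ** 3) - 1.0
print(g5, g6)  # ~0.5395 and ~0.1805: both > 0, so both constraints are violated
```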


Table 13 Comparison of the best solution for the speed reducer design problem obtained by six methods

         ISEA [47]   SSM [48]   CSA [49]    SEA [50]   SIIS [51]   MAL-IGWO
x1(b)    3.49999     3.506122   3.5015      3.506163   3.514185    3.5000000000
x2(m)    0.69999     0.700006   0.7000      0.700831   0.700005    0.7000000000
x3(z)    17          17         17.000      17         17          17.000000000
x4(l1)   7.3000      7.549126   7.6050      7.460181   7.497343    7.3000000000
x5(l2)   7.8000      7.859330   7.8181      7.962143   7.8346      7.7153199115
x6(d1)   3.350215    3.365576   3.3520      3.3629     2.9018      3.3502146661
x7(d2)   5.286683    5.289773   5.2875      5.3090     5.0022      5.2866544650
g1(x)    -0.0739     -0.0755    -0.0743     -0.0777    -0.0777     -0.073915280
g2(x)    -0.0198     -0.1994    -0.1983     -0.2013    -0.2012     -0.197998527
g3(x)    -0.49917    -0.4562    -0.4349     -0.4741    -0.0360     -0.499172248
g4(x)    -0.9015     -0.8994    -0.9008     -0.8971    -0.8754     -0.904643905
g5(x)    -0.0000     -0.0132    -0.0011     -0.0110    0.5395^a    -3.18E-12
g6(x)    -0.0000     -0.0017    -0.0004     -0.0125    0.1805^a    -1.12E-11
g7(x)    -0.7025     -0.7025    -0.7025     -0.7022    -0.7025     -0.702500000
g8(x)    -0.0000     -0.0017    -0.0004     -0.0006    -0.0040     -0.000000000
g9(x)    -0.5833     -0.5826    -0.5832     -0.5831    -0.5816     -0.583333333
g10(x)   -0.0513     -0.0796    -0.0890     -0.0691    -0.1660     -0.051325753
g11(x)   -0.0109     -0.0179    -0.0130     -0.0279    -0.0552     -0.000000000
f(x)     2996.3481   3008.08    3000.9810   3025.005   2732.9006   2994.4711

^a Marked values indicate that the fifth and sixth constraints are violated

Table 14 Statistical results of six methods for the speed reducer design problem

                     ISEA [47]   SSM [48]   CSA [49]    SEA [50]    SIIS [51]   MAL-IGWO
Best                 2996.3481   3008.08    3000.9810   3025.0050   2732.9006   2994.4711
Mean                 2996.3481   3012.12    3007.1977   3088.7778   2758.8878   2994.4711
Worst                2996.3481   3028.28    3009.0000   3078.5918   2758.3071   2994.4711
Standard deviation   0.00E+00    –          4.9634      –           –           6.01E-13

5 Conclusions

In this work, we have proposed a modified augmented Lagrangian grey wolf optimization algorithm (MAL-IGWO) to deal with constrained optimization problems, which uses the grey wolf optimization algorithm to provide a greater chance of finding better solutions. Each sub-problem in the modified augmented Lagrangian method is optimized by the IGWO algorithm. Because of the many advantages of the GWO algorithm, such as ease of implementation and few parameters, it is able to find optimal or near-optimal solutions effectively. Numerical results for a set of benchmark test problems suggest that the nonlinear adjustment of the parameter $\vec{a}$ provides a more effective trade-off between exploration and exploitation of the search space. Moreover, the simulation results validate the advantages of using the modified augmented Lagrangian method for handling constraints. As future work, we intend to perform comparisons with other evolutionary computation-based algorithms, solve other benchmark problems, and tune the parameters of the algorithm.
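As a schematic recap of the framework, the sketch below shows a generic augmented Lagrangian outer loop around a black-box inner search. The multiplier and penalty updates are textbook placeholders, and a simple random search stands in for IGWO purely so the sketch runs; neither reflects the paper's exact modified update rules, which are defined earlier in the text.

```python
import random

def random_search(f, x, bounds, iters=2000):
    """Stand-in for the IGWO inner solver (for runnability only)."""
    best, best_f = list(x), f(x)
    for _ in range(iters):
        cand = [random.uniform(lo, hi) for lo, hi in bounds]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best

def al_outer_loop(objective, constraints, bounds, outer_iters=15):
    """Schematic augmented Lagrangian outer loop (placeholder updates)."""
    lam = [1.0] * len(constraints)   # Lagrange multiplier estimates
    sigma = 10.0                     # penalty parameter
    x = [(lo + hi) / 2.0 for lo, hi in bounds]
    for _ in range(outer_iters):
        def al(y):
            # standard AL term for inequality constraints g_j(y) <= 0
            val = objective(y)
            for lj, gj in zip(lam, constraints):
                val += (max(0.0, lj + sigma * gj(y)) ** 2 - lj ** 2) / (2.0 * sigma)
            return val
        # inner sub-problem: minimize the AL (IGWO would be used here)
        x = random_search(al, x, bounds)
        # first-order multiplier update, then penalty growth (placeholders)
        lam = [max(0.0, lj + sigma * gj(x)) for lj, gj in zip(lam, constraints)]
        sigma *= 2.0
    return x

# toy usage: minimize x^2 subject to g(x) = 1 - x <= 0 (optimum at x = 1)
print(al_outer_loop(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]], [(-5.0, 5.0)]))
```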
Acknowledgments This work was supported in part by the National Natural Science Foundation of China under Grant No. 61463009, the Humanities and Social Sciences Planning Foundation of Ministry of Education of China under Grant No. 12XJA910001, the Beijing Natural Science Foundation under Grant No. 4122022, the 125 Special Major Science and Technology of Department of Education of Guizhou Province under Grant No. [2012]011, and the Science and Technology Foundation of Guizhou Province under Grant No. [2016]2082.

References

1. Ali MM, Zhu WX (2013) A penalty function-based differential evolution algorithm for constrained global optimization. Comput Optim Appl 54(3):707–739
2. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
3. Deb K, Srivastava S (2012) A genetic algorithm based augmented Lagrangian method for constrained optimization. Comput Optim Appl 51(3):869–902
4. Long W, Liang XM, Huang YF, Chen YX (2013) A hybrid differential evolution augmented Lagrangian method for constrained numerical and engineering optimization. Comput Aided Des 45(12):1562–1574
5. Wang Y, Cai Z, Zhou Y, Zeng W (2008) An adaptive tradeoff model for constrained evolutionary optimization. IEEE Trans Evol Comput 12(1):80–92
6. Tuba M, Bacanin N (2014) Improved seeker optimization algorithm hybridized with firefly algorithm for constrained optimization problems. Neurocomputing 143:197–207
7. Gandomi AH, Yang XS, Talatahari S, Deb S (2012) Couple eagle strategy and differential evolution for unconstrained and constrained global optimization. Comput Math Appl 63(1):191–200
8. Long W, Liang X, Huang Y, Chen Y (2014) An effective hybrid cuckoo search algorithm for constrained global optimization. Neural Comput Appl 25(3–4):911–926
9. Brajevic I (2015) Crossover-based artificial bee colony algorithm for constrained optimization problems. Neural Comput Appl 26(7):1587–1601
10. Sadollah A, Eskandar H, Bahreininejad A, Kim JH (2015) Water cycle algorithm with evaporation rate for solving constrained and unconstrained optimization problems. Appl Soft Comput 30:58–71
11. Wang Y, Cai Z (2012) Combining multiobjective optimization with differential evolution to solve constrained optimization problems. IEEE Trans Evol Comput 16(1):117–134
12. Niu B, Wang JW, Wang H (2014) Bacterial-inspired algorithm for solving constrained optimization problems. Neurocomputing 148:54–62
13. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69(3):46–61
14. Zhu A, Xu C, Li Z, Wu J, Liu Z (2015) Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J Syst Eng Electron 26(2):317–328
15. Saremi S, Mirjalili SZ, Mirjalili SM (2015) Evolutionary population dynamics and grey wolf optimizer. Neural Comput Appl 26(5):1257–1263
16. Emary E, Zawbaa HM, Hassanien AE (2016) Binary grey wolf optimization approaches for feature selection. Neurocomputing 172:371–381
17. Kamboj VK (2015) A novel hybrid PSO-GWO approach for unit commitment problem. Neural Comput Appl. doi:10.1007/s00521-015-1962-4
18. Jayakumar N, Subramanian S, Ganesan S, Elanchezhian EB (2016) Grey wolf optimization for combined heat and power dispatch with cogeneration systems. Electr Power Energy Syst 74:252–264
19. El-Gaafary AAM, Mohamed YS, Hemeida AM, Mohamed AA (2015) Grey wolf optimization for multi input multi output system. Univers J Commun Netw 3(1):1–6
20. Komaki GM, Kayvanfar V (2015) Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time. J Comput Sci 8:109–120
21. Song X, Tang L, Zhao S, Zhang X, Li L, Huang J, Cai W (2015) Grey wolf optimizer for parameter estimation in surface waves. Soil Dyn Earthq Eng 75:147–157
22. Mirjalili S (2015) How effective is the grey wolf optimizer in training multi-layer perceptrons. Appl Intell 43(1):150–161
23. Madadi A, Motlagh MM (2014) Optimal control of DC motor using grey wolf optimizer algorithm. Tech J Eng Appl Sci 4(4):373–379
24. El-Fergany AA, Hasanien HM (2015) Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms. Electr Power Compon Syst 43(13):1548–1559
25. Kamboj VK, Bath SK, Dhillon JS (2015) Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comput Appl. doi:10.1007/s00521-015-1934-8
26. Sulaiman MH, Mustaffa Z, Mohamed MR, Aliman O (2015) Using the gray wolf optimizer for solving optimal reactive power dispatch problem. Appl Soft Comput 32:286–292
27. Metz MC, Vucetich JA, Smith DW, Stahler DR, Peterson RO (2011) Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate. PLoS ONE 6(3):1–10
28. Muro C, Escobedo R, Spector L, Coppinger RP (2011) Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav Process 88(3):192–197
29. Michalewicz Z, Schoenauer M (1996) Evolutionary algorithm for constrained parameter optimization problems. Evol Comput 4(1):1–32
30. Mezura-Montes E, Coello CAC (2005) Self-adaptive fitness formulation for constrained optimization. IEEE Trans Evol Comput 9(1):1–17
31. Costa L, Santo IACPE, Fernandes EMGP (2012) A hybrid genetic pattern search augmented Lagrangian method for constrained global optimization. Appl Math Comput 218(18):9415–9426
32. Liang XM, Hu JB, Zhong WT, Qian JX (2001) A modified augmented Lagrange multiplier method for large-scale optimization. Chin J Chem Eng 9(2):167–172
33. Liang J, Runarsson TP, Mezura-Montes E, Clerc M, Suganthan P, Coello CC, Deb K (2006) Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J Appl Mech 41:1–8
34. Rocha AMA, Martins TF, Fernandes EM (2011) An augmented Lagrangian fish swarm based method for global optimization. J Comput Appl Math 235(16):4611–4620
35. Mahdavi A, Shiri ME (2015) An augmented Lagrangian ant colony based method for constrained optimization. Comput Optim Appl 60(1):263–276
36. Mezura-Montes E, Cetina-Dominguez O (2012) Empirical analysis of a modified artificial bee colony for constrained numerical optimization. Appl Math Comput 218(22):10943–10973
37. Wang Y, Cai ZX, Guo GQ, Zhou YR (2007) Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems. IEEE Trans Syst Man Cybern 37(3):560–575
38. Lin CH (2013) A rough penalty genetic algorithm for constrained optimization. Inf Sci 241:119–137
39. Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Meth Appl Mech Eng 186(2–4):311–338
40. Belegundu AD (1982) A study of mathematical programming methods for structural optimization. Ph.D. thesis, Department of Civil and Environmental Engineering, University of Iowa, Iowa
41. Mezura-Montes E, Coello CAC (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
42. He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problem. Eng Appl Artif Intell 20(1):89–99
43. Shen H, Zhu Y, Niu B, Wu QH (2009) An improved group search optimizer for mechanical design optimization problems. Progress Nat Sci 19(1):91–97
44. Wang Y, Cai ZX, Zhou YR (2009) Accelerating adaptive trade-off model using shrinking space technique for constrained evolutionary optimization. Int J Numer Meth Eng 77(11):1501–1534
45. Huang FZ, Wang L, He Q (2007) An effective co-evolutionary differential evolution for constrained optimization. Appl Math Comput 186(1):340–356
46. Sandgren E (1990) Nonlinear integer and discrete programming in mechanical design optimization. ASME J Mech Des 112(2):223–229
47. Mezura-Montes E, Coello CAC (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithm. MICAI'2005 Lect Notes Artif Int 3789:652–662
48. Akhtar S, Tai K, Ray T (2002) A socio-behavioral simulation model for engineering design optimization. Eng Optim 34(4):341–354
49. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a meta-heuristic approach to solve structural optimization problem. Eng Comput 29(1):17–35
50. Mezura-Montes E, Coello CAC, Ricardo L (2003) Engineering optimization using a simple evolutionary algorithm. In: Proceedings of International Conference on Tools Artificial Intelligence, pp 149–156
51. Ray T, Saini P (2001) Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748
