
Applied Soft Computing 10 (2010) 629–640


Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization

Hui Liu, Zixing Cai, Yong Wang *

School of Information Science and Engineering, Central South University, Changsha 410083, People's Republic of China

A R T I C L E  I N F O

Article history:
Received 11 April 2008
Received in revised form 11 June 2009
Accepted 23 August 2009
Available online 29 August 2009

Keywords:
Particle swarm optimization
Differential evolution
Constrained optimization
PSO-DE

A B S T R A C T

We propose a novel hybrid algorithm, named PSO-DE, which integrates particle swarm optimization (PSO) with differential evolution (DE) to solve constrained numerical and engineering optimization problems. Traditional PSO easily falls into stagnation when no particle discovers a position better than its previous best position for several generations. Because of its strong searching ability, DE is incorporated to update the previous best positions of the particles and force PSO to jump out of stagnation. The hybrid algorithm speeds up convergence and improves the algorithm's performance. We test the presented method on 11 well-known benchmark test functions and five engineering optimization functions. Comparisons show that PSO-DE outperforms or performs similarly to seven state-of-the-art approaches in terms of the quality of the resulting solutions.

© 2009 Elsevier B.V. All rights reserved.

* Corresponding author. Tel.: +86 731 8830583.
E-mail addresses: [email protected] (H. Liu), [email protected] (Z. Cai), [email protected] (Y. Wang).
doi:10.1016/j.asoc.2009.08.031

1. Introduction

Constrained optimization problems (COPs) are inevitable in many science and engineering disciplines. Without loss of generality, the nonlinear programming (NLP) problem can be formulated as

    min f(x),   x = (x_1, x_2, ..., x_n)

where x ∈ Ω ⊆ S, and S is an n-dimensional rectangular space in R^n defined by the parametric constraints

    l(i) ≤ x_i ≤ u(i),   1 ≤ i ≤ n.

Here, the feasible region Ω ⊆ S is defined by a set of m additional linear or nonlinear constraints (m ≥ 0):

    g_j(x) ≤ 0, j = 1, ..., q   and   h_j(x) = 0, j = q+1, ..., m,

where q is the number of inequality constraints and (m - q) is the number of equality constraints. Feasible individuals satisfy all constraints, while infeasible individuals violate at least one constraint.

Evolutionary algorithms (EAs) possess a number of exclusive advantages: generality, reliable and robust performance, little information requirement for the problem to be solved, easy implementation, etc. Due to those advantages, EAs have recently been applied successfully and broadly to solve COPs [1-3]. Consequently, a variety of EA-based constraint-handling techniques have been proposed for real-parameter optimization problems, which can be grouped as [1]: (1) penalty functions; (2) special representations and operators; (3) repair algorithms; (4) separation of objectives and constraints; (5) hybrid methods.

Penalty function methods are by far the most common and simplest approach to handling constraints. By adding (or subtracting) a penalty term to (or from) the objective function, a constrained optimization problem is transformed into an unconstrained one. In common practice, a penalty term G(x) = Σ_{j=1}^{m} G_j(x) is based on the degree of constraint violation of an individual x, where G_j(x) is defined as

    G_j(x) = { max{0, g_j(x)},          1 ≤ j ≤ q
             { max{0, |h_j(x)| - ε},    q+1 ≤ j ≤ m        (1)

where ε is a positive tolerance value for the equality constraints. G(x) represents the distance of the individual x from the boundaries of the feasible set. A remarkable limitation of penalty function methods is that most of them require careful fine-tuning of the penalty parameters to obtain competitive results. A too small penalty parameter results in underpenalization; consequently, the population will have difficulty landing within the feasible region and may converge to an infeasible solution. Conversely, a too large penalty coefficient will result in the loss of some valuable information provided by infeasible individuals. In [4], a self-adaptive penalty function based on a genetic algorithm (SAPF) is proposed. Both the distance value and the penalty are based on the normalized fitness value and the normalized degree of constraint violation. The final fitness value of each individual is calculated by adding the penalty value to the corresponding distance value. In [5], a DE based on a co-evolution mechanism, named CDE, is proposed to solve COPs. Due to the co-evolution, not only the decision solutions but also the penalty factors are adjusted by differential evolution.

Apart from the penalty function method, several novel techniques have been utilized to handle constraints. For special representations and operators, how to determine an appropriate generic representation scheme remains an open issue. The use of special representations and operators is, without doubt, quite useful for the intended application for which they were designed, but their generalization to other (even similar) problems is by no means obvious. When an infeasible solution can be easily (or at least at low computational cost) transformed into a feasible one, repair algorithms are a good choice. However, this is not always possible, and in some cases repair operators may introduce a strong bias into the search, harming the evolutionary process itself. Furthermore, this approach is problem-dependent, since a specific repair algorithm has to be designed for each particular problem. One problem with the separation of constraints and objectives is that when the ratio between the feasible region and the whole search space is too small (for example, when some constraints are very difficult to satisfy), this technique will fail unless a feasible point is introduced into the initial population.

Deb [6] proposed a feasibility-based rule, where pair-wise solutions are compared using the following criteria: (1) any feasible solution is preferred to any infeasible solution; (2) between two feasible solutions, the one with the better objective function value is preferred; (3) between two infeasible solutions, the one with the smaller degree of constraint violation is preferred. However, this technique has problems maintaining diversity in the population. Runarsson and Yao [7] introduced a stochastic ranking method to balance the objective and penalty functions. Given pair-wise adjacent individuals: (1) if both individuals are feasible, their rank is determined according to the objective function value; otherwise (2) the probability of ranking according to the objective function value is P_f, while the probability of ranking according to the constraint violation value is (1 - P_f). Its drawback is the need for the most appropriate value of P_f. Amirjanov [8] investigated an approach named the changing range-based genetic algorithm (CRGA), which adaptively shifts and shrinks the search space toward the feasible region by employing feasible and infeasible solutions in the population to reach the global optimum. In [9], a general variable neighborhood search (VNS) heuristic is developed to solve COPs. VNS defines a set of neighborhood structures to conduct a search through the solution space. It systematically exploits the idea of neighborhood change, both in the descent to local minima and in the escape from the valleys which contain them. Mezura-Montes and Coello [10] proposed a simple multi-membered evolution strategy (SMES). SMES uses a simple diversity mechanism to allow the individual with the lowest amount of constraint violation and the best objective function value to be selected for the next population. By emulating society behavior, Ray and Liew [11] made use of intra- and intersociety interactions within a formal society and civilization model to solve engineering optimization problems. A society corresponds to a cluster of points, while a civilization is the set of all such societies at any given time. Every society has its set of better-performing individuals that help others in the society to improve through an intrasociety information exchange. In [12], a direct extension of ant colony optimization (ACO) is proposed for continuous optimization. Taking advantage of multi-objective optimization techniques, Cai and Wang [13] presented the non-dominated individuals replacement scheme, which selects one non-dominated individual from the offspring population and then applies it to replace one dominated individual in the parent population.

The organization of the remaining paper is as follows. In Sections 2 and 3, PSO and DE are briefly introduced. In Section 4, the hybridization of particle swarm optimization with differential evolution, named PSO-DE, is proposed and explained in detail. Simulation results and comparisons are presented in Section 5, and a discussion is provided in Section 6. Finally, we conclude the paper in Section 7.

2. Basics of PSO

Particle swarm optimization is a stochastic global optimization method inspired by the choreography of a bird flock. PSO relies on the exchange of information between individuals, called particles, of the population, called a swarm. In PSO, each particle adjusts its trajectory stochastically towards the position of its own previous best performance (pbest) and the best previous performance of its neighbors (nbest) or of the whole swarm (gbest). At the t-th iteration, for the i-th particle, the position vector and the velocity vector are X_i^t = (x_{i,1}^t, ..., x_{i,n}^t) and V_i^t = (v_{i,1}^t, ..., v_{i,n}^t). The velocity and position updating rules are given by

    v_{i,j}^{t+1} = ω v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (gbest_j^t - x_{i,j}^t),        (2)

    x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1},        (3)

where j ∈ {1, ..., n}, ω ∈ [0.0, 1.0] is the inertia factor, c_1 and c_2 are positive constants, and r_1 and r_2 are two uniformly distributed random numbers in the range [0, 1]. In this version, the variable V_i^t is limited to the range ±V_max. When a particle discovers a position that is better than any it has found previously, it stores the new position in the corresponding pbest. Clerc and Kennedy [14] introduced the velocity adjustment

    v_{i,j}^{t+1} = χ (v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (gbest_j^t - x_{i,j}^t))        (4)

where χ = 2k / |2 - φ - sqrt(φ² - 4φ)| with φ = c_1 + c_2 > 4. Due to the constriction coefficient χ, the algorithm requires no explicit limit V_max. Krohling and dos Santos Coelho [15] analyzed Eq. (4) and concluded that the interval [0.72, 0.86] is a possible good choice for χ. So, instead of χ, the absolute value of the Gaussian probability distribution with zero mean and unit variance, abs(N(0, 1)), is introduced into the velocity equation:

    v_{i,j}^{t+1} = R_1 (pbest_{i,j}^t - x_{i,j}^t) + R_2 (gbest_j^t - x_{i,j}^t)        (5)

where R_1 and R_2 are generated using abs(N(0, 1)). According to statistical knowledge, the mean of abs(N(0, 1)) is 0.798 and the variance is 0.36. In order to solve COPs, they introduced Lagrange multipliers to transform a COP into a dual or min-max problem. Two populations of independent PSO are evolved in the co-evolutionary particle swarm using Gaussian distribution (CPSO-GD) [15]: the first PSO focuses on evolving the individuals while the vector of Lagrangian multipliers is kept frozen, and the other PSO focuses on evolving the Lagrangian multiplier vector while the individuals are kept frozen. The two PSOs interact with each other through a common fitness evaluation. The first PSO provides the optimum individual of the COP in the end.

A novel multi-strategy ensemble PSO algorithm (MEPSO) [16] introduces two new strategies, Gaussian local search and differential mutation, to one part of its population (part I) and the other part (part II), respectively. In every iteration, each particle of part I has the probability P_ls to perform the Gaussian local search defined
in Eqs. (6) and (3), and the probability (1 - P_ls) to perform the conventional search defined in Eqs. (2) and (3). gbest is the best solution found by all particles, in both part I and part II. For each particle of part II, the differential mutation operator defined in Eq. (7) is performed to change the direction of its velocity:

    v_{i,j}^{t+1} = c_3 R_3        (6)

    v_{i,j}^{t+1} = sgn(r_1 - 0.5)(ω v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t - x_{i,j}^t) + c_2 r_2 (p_a - x_{i,j}^t)),        (7)

where R_3 is generated using N(0, 1), c_3 is a positive constant, and p_a is the best solution found by particle a, which is chosen randomly from part I. The moving peaks benchmark (MPB) and the dynamic Rastrigin function are used to test the performance of MEPSO. dos Santos Coelho and Lee [17] proposed that the random numbers (c_1, c_2, r_1 and r_2) for the velocity updating equation of PSO be generated using the Gaussian probability distribution and/or chaotic sequences in the interval [-1, 1], and then mapped to the interval [0, 1]. Maitra and Chatterjee [18] proposed a hybrid cooperative-comprehensive learning PSO algorithm for multilevel thresholding in histogram-based image segmentation. In [19], Gaussian functions and PSO are used to select and adjust the radial basis function neural networks.

3. Basics of DE

Differential evolution, a stochastic, simple yet powerful evolutionary algorithm, not only possesses the advantage of quite few control variables but also performs well in convergence. DE was introduced for global optimization by Storn and Price [20]. DE creates new candidate solutions by perturbing the parent individual with the weighted difference of several other randomly chosen individuals of the same population. A candidate replaces the parent only if it is better than its parent. Thereafter, DE guides the population towards the vicinity of the global optimum through repeated cycles of mutation, crossover and selection. The main procedure of DE is explained in detail as follows.

Mutation: For each individual X_i^t = {x_{i,1}^t, x_{i,2}^t, ..., x_{i,n}^t} (i ∈ {1, 2, ..., NP}) at generation t, an associated mutant individual Y_i^t = {y_{i,1}^t, y_{i,2}^t, ..., y_{i,n}^t} can be created by using one of the mutation strategies. The most used strategies are:

    rand/1:              y_{i,j}^t = x_{r[1],j}^t + F(x_{r[2],j}^t - x_{r[3],j}^t)
    best/1:              y_{i,j}^t = x_{best,j}^t + F(x_{r[1],j}^t - x_{r[2],j}^t)
    current to best/1:   y_{i,j}^t = x_{i,j}^t + F(x_{best,j}^t - x_{i,j}^t) + F(x_{r[1],j}^t - x_{r[2],j}^t)
    best/2:              y_{i,j}^t = x_{best,j}^t + F(x_{r[1],j}^t - x_{r[2],j}^t) + F(x_{r[3],j}^t - x_{r[4],j}^t)
    rand/2:              y_{i,j}^t = x_{r[1],j}^t + F(x_{r[2],j}^t - x_{r[3],j}^t) + F(x_{r[4],j}^t - x_{r[5],j}^t)

where r[k] (k ∈ {1, 2, ..., 5}) is a uniformly distributed random number in the range [1, NP], j ∈ {1, ..., n}, x_{best,j}^t is the best individual of the population at generation t, and F (F ∈ [0, 2]) is an amplification factor. Salman et al. [21] introduced the self-adapting parameter F as

    F_i = F_{i1} + N(0, 0.5)(F_{i2} - F_{i3})        (8)

where i1, i2 and i3 are uniformly distributed random numbers in the range [0, NP] and i1 ≠ i2 ≠ i3.

Crossover: DE applies a crossover operator on X_i^t and Y_i^t to generate the offspring individual Z_i^t = {z_{i,1}^t, z_{i,2}^t, ..., z_{i,n}^t}. The genes of Z_i^t are inherited from X_i^t or Y_i^t, determined by a parameter called the crossover probability (CR ∈ [0, 1]), as follows:

    z_{i,j}^t = { y_{i,j}^t,   if rand ≤ CR or j = j_rand
                { x_{i,j}^t,   otherwise        (9)

where rand is a uniformly distributed random number in the range [0, 1], and j_rand is a uniformly distributed random integer in the range [1, n].

Selection: The offspring individual Z_i^t competes against the parent individual X_i^t using the greedy criterion, and the survivor enters generation t + 1:

    X_i^{t+1} = { Z_i^t,   if f(Z_i^t) ≤ f(X_i^t)
                { X_i^t,   otherwise        (10)

Different techniques have been integrated into DE to solve COPs. Constraint adaptation by differential evolution (CADE) [22] combines the ideas of constraint adaptation and DE into a versatile design method. CADE utilizes a so-called region of acceptability (ROA) as a selection operator. If Z_i^t lies within the ROA, then X_i^{t+1} = Z_i^t. Otherwise, the procedure of DE is repeated up to several times. If the generated offspring still lies outside the ROA, X_i^{t+1} is set to X_i^t. In [23], a cultural algorithm with a DE population (CULDE) is proposed. The variation operator of differential evolution is influenced by the belief space to generate the offspring population.

4. Proposed method

In this section, PSO-DE is introduced in detail. In order to handle the constraints, we minimize the original objective function f(x) as well as the degree of constraint violation G(x). Two populations of the same size NP are used. In the initial step of the algorithm, a population (denoted by pop) is created randomly, and the replication of pop is denoted as pBest. Note that pBest is utilized to store each particle's pbest. At each generation, pop is sorted according to the degree of constraint violation in descending order. In order to keep a one-to-one mapping between each particle and its pbest, the order of pBest changes when pop is sorted. Only the first half of pop is evolved using Krohling and dos Santos Coelho's PSO [15]. If a variable value x_{i,j}^{t+1} of X_i^{t+1} generated by Krohling and dos Santos Coelho's PSO [15] violates the boundary constraint, the violating variable value is reflected back from the violated boundary using the following rule [24]:

    x_{i,j}^{t+1} = { 0.5(l(j) + x_{i,j}^t),   if x_{i,j}^{t+1} < l(j)
                    { 0.5(u(j) + x_{i,j}^t),   if x_{i,j}^{t+1} > u(j)        (11)
                    { x_{i,j}^{t+1},           otherwise

Algorithm 1: PSO-DE

Input: Population size NP, objective function f, the degree of constraint violation G, upper bounds of the variables U = {u(1), ..., u(n)} and lower bounds of the variables L = {l(1), ..., l(n)}
Output: The best objective function value f_best

Initialize a population pop that contains NP particles with random positions. Note that each particle is clamped within [L, U];
Set the velocity of each particle equal to zero;
Evaluate f and G for all particles;
pBest = pop;
% pBest is used to store each particle's previous best position (pbest) %
gbest = the optimum of pBest according to Deb's feasibility-based rule [6];
foreach Generation do
    Sort pop in descending order according to G, and change the order of pBest when the order of pop changes, to keep the one-to-one map between each particle and its pbest;
    p1 = pop's first half part;
    foreach individual (denoted as a) of p1 do
        Update a's velocity and position by Eqs. (5) and (3);
        if a violates the boundary then
            Modify its variables by Eq. (11);
        end
        Calculate f and G for a;
        Compare a against the corresponding pbest according to Deb's feasibility-based rule [6], and if a wins, it replaces the corresponding pbest;
    end
    pop's first half part = p1;
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    foreach pbest of pBest do
        Generate three offspring by DE's three mutation strategies (rand/1, current to best/1, and rand/2; see Section 3 for details), respectively, and store them in a set B;
        foreach individual (denoted as b) of B do
            if b violates the boundary then
                Modify its variables by Eq. (12);
            end
            Calculate f and G for b;
            Compare b with pbest by Eq. (13), and if b wins, it replaces pbest;
        end
    end
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    f_best = the objective function value of gbest;
end
return f_best

The updated particle is compared with its corresponding pbest in pBest by Deb's selection criterion [6]. If the updated particle wins, it replaces its corresponding pbest and survives into pBest; if not, the corresponding pbest remains. After the PSO evolution, we employ DE to update pBest. Each pbest in pBest produces three offspring by using DE's three mutation strategies: the rand/1 strategy, the current to best/1 strategy, and the rand/2 strategy. If a variable value z_{i,j}^t of an offspring Z_i^t violates the boundary constraint, the violating variable value is reflected back from the violated boundary using the following rule [25]:

    z_{i,j}^t = { 2l(j) - z_{i,j}^t,   if z_{i,j}^t < l(j)
                { 2u(j) - z_{i,j}^t,   if z_{i,j}^t > u(j)        (12)
                { z_{i,j}^t,           otherwise

We use a selection criterion to compare pbest against its offspring. The considered individual is replaced at each generation only if its offspring has a better fitness value and a lower (or equal) degree of constraint violation. The criterion for replacement is described as follows:

    pbest_i^{t+1} = { Z_i^t,       if f(Z_i^t) < f(pbest_i^t) and G(Z_i^t) ≤ G(pbest_i^t)
                    { pbest_i^t,   otherwise        (13)

This process is repeated generation after generation until some specific stopping criterion is met. PSO-DE's main procedure is summarized in Algorithm 1.

4.1. Why only 50% of particles are involved in PSO

Krohling and dos Santos Coelho's PSO [15] stagnates easily, which is caused by its velocity equation (Eq. (5)). Eq. (5) consists of two parts: the first is the randomly weighted difference between the particle and its pbest, and the second is the randomly weighted difference between the particle and gbest. The first part represents the personal experience of each particle, which makes the particle move towards its own best position. The second part represents the collaborative behavior of the particles in finding the global optimal solution, which pulls each particle towards the best position found by its swarm. Eq. (3) provides the new position of the particle by adding its new velocity to its current position. As mentioned above, if a particle stays at the position of gbest, its velocity tends to zero and its position is unchanged. If both (pbest_i^t - x_i^t) and (gbest^t - x_i^t) are small, the particle almost freezes in its track for some generations. If pbest and gbest are too close, some particles become inactive during the evolution process. When the position of a particle equals its pbest, the velocity is only influenced by gbest. Eq. (5) indicates that pbest and gbest play a primordial role in PSO's evolution process.

Based on Deb's feasibility-based rule [6], the lower a particle's degree of constraint violation, the higher the probability that it clusters around gbest. So particles with lower degrees of constraint violation find it very difficult to jump out of gbest's adjacent region. This may cause gbest to stay at the same position for a long time, and the population loses its diversity. In other words, premature convergence may occur in the early evolution stage. Moreover, if pop converges too quickly to a position, which may be a local optimum, particles will also give up attempts at exploration and stagnate for the rest of the evolution process. On the other hand, for a particle with a higher degree of constraint violation, its pbest differs relatively significantly from gbest. Its performance will be improved by extracting meaningful information from its own pbest and from the gbest belonging to the same population, so that it is dragged toward a better-performing point. The updated particle may be better than gbest, and then gbest jumps to a new position that is obviously different from its current one. This replacement may spur PSO to adjust its evolutionary direction and guide particles to fly through a new region that has not been searched before. For the purpose of improving the performance of PSO, only the first half of the individuals are extracted from the population pop after ranking the individuals based on their constraint violations in descending order. Thus, a temporary population p1 of size NP/2 is obtained. Thereafter, p1 is involved in PSO's evolution. To some extent, this mechanism maintains the diversity of pop and slows down the convergence speed to avoid stagnation.

4.2. DE-based search for pBest

In order to compensate for the convergence speed and supply more valuable information to adjust the particles' trajectories, DE is applied to update pBest, which ensures highly preferable positions in pBest and increases the probability of finding a better solution. Only three representatives of the five DE mutation strategies are used, because if the best/1 and best/2 strategies were integrated into PSO-DE, the information carried by gbest would be reutilized in producing new individuals. Under this condition, pBest might easily be trapped in a local optimum. By applying three strategies, namely the rand/1, current to best/1 and rand/2 strategies, to a pbest, its performance might be improved, which in turn leads to a better-performing pBest over time. The DE-based search process motivates the particles to search new regions, including some lesser-explored regions, and enhances the particles' capability to explore the vast search space. In addition, this way
can increase the diversity of pBest and the probability of finding better pbest, so as to enhance the chances of finding the global optimum if it has not yet been determined. As we know, better pbest and gbest guide particles towards the optimum effectively and speed up convergence.

The two populations work separately, but the individuals in these two parts are also interrelated. pBest stores the personal bests of the particles in pop. The best solution found by pBest can be the global attractor of pop (if it is also the best of the entire swarm), which will guide pop to fly to the new best (maybe the changed optimum). PSO can gradually search the neighborhood of the best solution found so far, and DE can avoid convergence to a local optimum. In a word, by hybridizing DE and PSO, PSO-DE is a good trade-off between accuracy and efficiency. PSO-DE can increase the probability of hitting the global optimum and reduce the number of fitness function evaluations (FFEs) required to obtain competitive solutions.

5. Experimental study

Eleven benchmark test functions and five engineering optimization functions are used to validate the proposed PSO-DE. These test cases include various types (linear, nonlinear and quadratic) of objective functions with different numbers of decision variables and a range of types (linear inequalities, nonlinear equalities, and nonlinear inequalities) and numbers of constraints. These 16 problems pose a challenge for constraint-handling methods and are a good measure for testing their ability. All test functions are listed in Appendix A.

5.1. Benchmark test functions

The main characteristics of the 11 benchmark functions are reported in Table 1, where a is the number of constraints active at the optimal solution. In addition, ρ is the ratio between the size of the feasible search space and that of the entire search space, i.e., ρ = |Ω|/|S|, where |S| is the number of solutions randomly generated from S and |Ω| is the number of feasible solutions out of these |S| solutions. In the experimental setup, |S| = 1,000,000.

For each test case, 100 independent runs are performed in VC++ 6.0 (the source code may be obtained from the authors upon request). The parameters used by PSO-DE are the following: NP = 100, and F and CR are randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. For g03, the tolerance value ε is equal to 0.001, while for g11, ε equals 0.000001. The number of FFEs is set as in Table 2 for each test function. In each run, the number of iterations for seven test cases (i.e., g01, g02, g03, g06, g07, g09 and g10) is 800. For g04 and g11, it is 400. In addition, it is 60 for g08 and 100 for g12. Table 2 summarizes the experimental results obtained using the above parameters, showing the best, median, mean, and worst objective function values, and the standard deviations for each test problem. As described in Table 2, the global optima are consistently found by PSO-DE over 100 independent runs in seven test cases (i.e., g01, g04, g06, g07, g09, g10, and g12). For the remaining test cases, the resulting solutions achieved are very close to the global optima. Note that the standard deviations over 100 runs are relatively small for all the problems.

PSO-DE is compared against six aforementioned state-of-the-art approaches: CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15] and SMES [10]. As shown in Tables 3-5, the proposed method outperforms CRGA, SAPF, CDE, CPSO-GD and SMES, and performs similarly to CULDE in terms of the selected performance metrics, such as the best, mean, and worst objective function values. With respect to CULDE, the proposed approach finds better best results for two problems (g10 and g11) and similar best results for the other nine problems (g01, g02, g03, g04, g06, g07, g08, g09 and g12). Also, the proposed technique reaches better mean and worst results for four problems (g02, g03, g10 and g11). Similar mean and worst results are found for seven problems (g01, g04, g06, g07, g08, g09 and g12). As far as the computational cost (i.e., the number of FFEs) is concerned, PSO-DE requires from 10,600 to 140,100 FFEs to obtain the reported results, compared against 500,000 FFEs used by SAPF, 248,000 FFEs by CDE, 100,100 FFEs by CULDE and 240,000 FFEs by SMES. So we can conclude that the computational cost of PSO-DE is less than that of the aforementioned approaches except for CULDE [23].

5.2. Engineering optimization

In order to study its performance on real-world engineering design problems, the proposed method is applied to five well-known engineering design problems. We perform 100 independent runs with the same parameter settings as before: NP = 100, and F and CR randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. The number of FFEs is set as in Table 6 for each test function. We will measure the quality of the results (better best and mean solutions found) and the robustness of PSO-DE (the

Table 1
Summary of the 11 benchmark problems (LI: linear inequalities; NE: nonlinear equalities; NI: nonlinear inequalities; a: constraints active at the optimum).

Function   n    Type of f    ρ          LI   NE   NI   a
g01        13   Quadratic    0.0003%    9    0    0    6
g02        20   Nonlinear    99.9965%   1    0    1    1
g03        10   Nonlinear    0.0000%    0    1    0    1
g04        5    Quadratic    26.9356%   0    0    6    2
g06        2    Nonlinear    0.0064%    0    0    2    2
g07        10   Quadratic    0.0003%    3    0    5    6
g08        2    Nonlinear    0.8640%    0    0    2    0
g09        7    Nonlinear    0.5256%    0    0    4    2
g10        8    Linear       0.0005%    3    0    3    3
g11        2    Quadratic    0.0000%    0    1    0    1
g12        3    Quadratic    4.779%     0    1    93   0
Table 2
Experimental results obtained by PSO-DE with 100 independent runs on 11 benchmark functions.

Function  Best          Median        Mean          SD       Worst         FFEs
g01       -15.000000    -15.000000    -15.000000    2.1E-08  -15.000000    140,100
g02       -0.8036145    -0.7620745    -0.7566775    3.3E-02  -0.63679947   140,100
g03       -1.0050100    -1.0050100    -1.0050100    3.8E-12  -1.0050100    140,100
g04       -30665.5387   -30665.5387   -30665.5387   8.3E-10  -30665.5387   70,100
g06       -6961.81388   -6961.81388   -6961.81388   2.3E-09  -6961.81388   140,100
g07       24.3062091    24.3062096    24.3062100    1.3E-06  24.3062172    140,100
g08       -0.09582594   -0.09582594   -0.09582594   1.3E-12  -0.09582594   10,600
g09       680.6300574   680.6300574   680.6300574   4.6E-13  680.6300574   140,100
g10       7049.248021   7049.248028   7049.248038   3.0E-05  7049.248233   140,100
g11       0.749999      0.749999      0.749999      2.5E-07  0.750001      70,100
g12       -1.000000     -1.000000     -1.000000     0.0E+00  -1.000000     17,600
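The per-function statistics reported in Table 2 (best, median, mean, SD and worst over 100 independent runs) are plain order statistics of the final objective values. A minimal helper showing how such a summary can be computed (the function name is ours, not the paper's):

```python
import statistics

def summarize_runs(values):
    """Best/median/mean/SD/worst summary of per-run final objective
    values, assuming minimization (best = smallest, worst = largest)."""
    return {
        "best": min(values),
        "median": statistics.median(values),
        "mean": statistics.fmean(values),
        "sd": statistics.pstdev(values),
        "worst": max(values),
    }

# toy run results for a function whose optimum is -15 (cf. g01)
runs = [-15.0, -15.0, -14.999999, -15.0]
s = summarize_runs(runs)
```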
634 H. Liu et al. / Applied Soft Computing 10 (2010) 629–640

Table 3
Comparison of the best results of PSO-DE with respect to six other state-of-the-art algorithms. "NA" means not available.

Function  PSO-DE       CRGA [8]     SAPF [4]     CDE [5]      CULDE [23]   CPSO-GD [15]  SMES [10]
g01       -15.000000   -14.9977     -15.000      -15.0000     -15.000000   -15.0         -15.000
g02       -0.8036145   -0.802959    -0.803202    -0.794669    -0.803619    NA            -0.803601
g03       -1.0050100   -0.9997      -1.000       NA           -0.995413    NA            -1.000
g04       -30665.539   -30665.520   -30665.401   -30665.539   -30665.539   NA            -30665.539
g06       -6961.8139   -6956.251    -6961.046    -6961.814    -6961.8139   NA            -6961.814
g07       24.306209    24.882       24.838       NA           24.306209    24.711        24.327
g08       -0.095826    -0.095825    -0.095825    NA           -0.095825    NA            -0.095825
g09       680.63006    680.726      680.773      680.771      680.63006    680.678       680.632
g10       7049.2480    7114.743     7069.981     NA           7049.2481    7055.6        7051.903
g11       0.749999     0.750        0.749        NA           0.749900     NA            0.75
g12       -1.000000    -1.000000    -1.000000    -1.000000    -1.000000    NA            -1.000

Table 4
Comparison of the mean results of PSO-DE with respect to six other state-of-the-art algorithms. "NA" means not available.

Function  PSO-DE       CRGA [8]     SAPF [4]     CDE [5]      CULDE [23]   CPSO-GD [15]  SMES [10]
g01       -15.000000   -14.9850     -14.552      -15.0000     -14.999996   -14.997       -15.000
g02       -0.756678    -0.764494    -0.755798    -0.785480    -0.724886    NA            -0.785238
g03       -1.0050100   -0.9972      -0.964       NA           -0.788635    NA            -1.000
g04       -30665.539   -30664.398   -30665.922   -30665.536   -30665.539   NA            -30665.539
g06       -6961.8139   -6740.288    -6953.061    -6960.603    -6961.8139   NA            -6961.284
g07       24.306210    25.746       27.328       NA           24.306210    25.709        24.475
g08       -0.0958259   -0.095819    -0.095635    NA           -0.095825    NA            -0.095825
g09       680.63006    681.347      681.246      681.503      680.63006    680.7810      680.643
g10       7049.2480    8785.149     7238.964     NA           7049.2483    8464.2        7253.047
g11       0.749999     0.752        0.751        NA           0.757995     NA            0.75
g12       -1.000000    -1.000000    -0.99994     -1.000000    -1.000000    NA            -1.000

Table 5
Comparison of the worst results of PSO-DE with respect to six other state-of-the-art algorithms. "NA" means not available.

Function  PSO-DE       CRGA [8]     SAPF [4]     CDE [5]      CULDE [23]   CPSO-GD [15]  SMES [10]
g01       -15.000000   -14.9467     -13.097      -15.0000     -14.999993   -14.994       -15.000
g02       -0.6367995   -0.722109    -0.745712    -0.779837    -0.590908    NA            -0.751322
g03       -1.0050100   -0.9931      -0.887       NA           -0.639920    NA            -1.000
g04       -30665.539   -30660.313   -30656.471   -30665.509   -30665.539   NA            -30665.539
g06       -6961.8139   -6077.123    -6943.304    -6901.285    -6961.8139   NA            -6952.482
g07       24.3062      27.381       33.095       NA           24.3062      27.166        24.843
g08       -0.0958259   -0.095808    -0.092697    NA           -0.095825    NA            -0.095825
g09       680.6301     682.965      682.081      685.144      680.6301     681.371       680.719
g10       7049.2482    10826.09     7489.406     NA           7049.2485    11458         7638.366
g11       0.750001     0.757        0.757        NA           0.796455     NA            0.75
g12       -1.000000    -1.000000    -0.999548    -1.000000    -1.000000    NA            -1.000

standard deviation values). These statistical results are summarized in Table 6.

5.2.1. Welded beam design problem
The best feasible solution found by PSO-DE is f(0.205729640, 3.470488666, 9.036623910, 0.205729640) = 1.724852309. The problem has been solved by a number of researchers, among them Huang et al. [5] and Ray and Liew [11]. A comparison of results is presented in Table 7. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result, obtained using 33,000 FFEs, is better than the result of CDE [5], which is reported using 240,000 FFEs.

5.2.2. Tension/compression spring design problem
This design optimization problem involves three continuous variables and four nonlinear inequality constraints. The best feasible solution found by PSO-DE is f(0.0516888101, 0.3567117001, 11.289319935) = 0.012665232900, which is the best-known result for this problem. The problem has been studied by CDE [5] and Ray and Liew [11]. A comparison of results is presented in Table 8. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result, obtained using 24,950 FFEs, is better than the result of CDE [5], which is reported using 240,000 FFEs.

5.2.3. Pressure vessel design problem
The best feasible solution obtained by PSO-DE is f(0.8125, 0.4375, 42.098445596, 176.636595842) = 6059.714335048. The statistical simulation solutions obtained by CDE [5] and PSO-DE are listed in Table 9. As shown in Table 9, the searching quality of our method is superior to that of CDE [5], and even the worst solution found by PSO-DE is better than the best solution reported

Table 6
Experimental results obtained by PSO-DE with 100 independent runs on five engineering design problems.

Design problem               Best          Mean          SD       Worst         FFEs
Welded beam                  1.724852309   1.724852309   6.7E-16  1.724852309   66,600
Pressure vessel              6059.714335   6059.714335   1.0E-10  6059.714335   42,100
Speed reducer                2996.348165   2996.348165   1.0E-07  2996.348166   70,100
Three-bar truss              263.89584338  263.89584338  1.2E-10  263.89584338  17,600
Tension/compression spring   0.012665233   0.012665233   4.9E-12  0.012665233   42,100
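The DE parameter scheme described above (NP = 100, with F and CR re-drawn uniformly from [0.9, 1.0] and [0.95, 1.0]) can be made concrete with a DE/rand/1/bin trial-vector sketch. This is an illustration under our own naming, not the paper's exact implementation:

```python
import random

def de_trial_vector(pop, i, lo, hi):
    """Build one DE/rand/1/bin trial vector for individual i.

    F and CR are re-drawn uniformly from [0.9, 1.0] and [0.95, 1.0]
    on every call, mirroring the random setting described in the text.
    """
    F = random.uniform(0.9, 1.0)
    CR = random.uniform(0.95, 1.0)
    # three mutually distinct individuals, all different from i
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    dim = len(pop[i])
    jrand = random.randrange(dim)  # guarantees at least one mutated gene
    trial = []
    for j in range(dim):
        if random.random() < CR or j == jrand:
            v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            v = min(max(v, lo[j]), hi[j])  # clamp to the box constraints
        else:
            v = pop[i][j]
        trial.append(v)
    return trial

pop = [[random.uniform(0, 1) for _ in range(5)] for _ in range(10)]
t = de_trial_vector(pop, 0, [0.0] * 5, [1.0] * 5)
```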

Table 7
Comparison of the welded beam design problem results of PSO-DE with respect to two other state-of-the-art algorithms.

Method             Best       Mean       Worst      SD       FFEs
PSO-DE             1.7248531  1.7248579  1.7248811  4.1E-06  33,000
CDE [5]            1.733461   1.768158   1.824105   2.2E-02  240,000
Ray and Liew [11]  2.3854347  3.0025883  6.3996785  9.6E-01  33,095

Table 10
Comparison of the speed reducer design problem results of PSO-DE with respect to the other state-of-the-art algorithm.

Method             Best         Mean         Worst        SD       FFEs
PSO-DE             2996.348167  2996.348174  2996.348204  6.4E-06  54,350
Ray and Liew [11]  2994.744241  3001.758264  3009.964736  4.0E+00  54,456

in [5]. Moreover, the standard deviation of the results of PSO-DE over 100 independent runs for this problem is much smaller than that of CDE [5] over 50 independent runs. The total number of evaluations is 42,100 in PSO-DE, while in CDE [5] the total number of evaluations is 240,000.

Table 11
Comparison of the three-bar truss design problem results of PSO-DE with respect to the other state-of-the-art algorithm.

Method             Best          Mean          Worst         SD       FFEs
PSO-DE             263.89584338  263.89584338  263.89584338  4.5E-10  17,600
Ray and Liew [11]  263.89584654  263.90335672  263.96975638  1.3E-02  17,610

5.2.4. Speed reducer design problem
f(3.5000000, 0.7000000, 17.000000, 7.300000000013, 7.800000000005, 3.350214666097, 5.286683229758) = 2996.348164969 is the best feasible solution found by PSO-DE. Ray and Liew [11] provided a better best objective function value, 2994.744241. However, PSO-DE provides better mean and worst objective function values and a smaller standard deviation than Ray and Liew's technique [11]. Table 10 indicates that PSO-DE is more robust than Ray and Liew [11].

5.2.5. Three-bar truss design problem
The best feasible solution found by PSO-DE is f(0.788675134746, 0.408248290037) = 263.895843376468, which is the reported best-known result for this problem. A comparison of results presented in Table 11 shows that PSO-DE outperforms Ray and Liew [11] in terms of quality and robustness.

These overall results validate that PSO-DE has substantial capability in handling various COPs and that its solution quality is quite stable at a low computational effort. It can therefore be concluded that PSO-DE is a good alternative for constrained optimization.

6. Discussion

In this section, we discuss the effectiveness of the two mechanisms of PSO-DE: evolving only 50% of the particles by PSO, and updating pBest by DE. We use the 11 well-known benchmark functions as examples, with the same parameters as those given in Section 5. The comparison between PSO-DE and HPSO [26] (a hybrid PSO with Deb's feasibility-based rule [6]) shows the effectiveness of the mechanism adopted by PSO-DE.

6.1. Searching efficiency of PSO-DE

Figs. 1 and 2 illustrate a typical evolution process of the objective value of gbest when solving examples g01 and g02, respectively. As shown in Figs. 1 and 2, the performance of PSO-DE is better than that of PSO and DE on the test suite in terms of optimization results. PSO converges to local optima quickly, and the particles give up their attempts at exploration; PSO then stagnates for the rest of the evolution. Thanks to DE, PSO-DE escapes from local optima and converges to the global optimum very quickly. This demonstrates that PSO-DE has an effective and efficient global search ability.
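The division of labor discussed above, half of the swarm flying by PSO while DE refines the pbest set, can be sketched as one generation of a hybrid loop. This is a simplified, unconstrained illustration (plain fitness stands in for the constraint-violation ranking and Deb's rule used by the actual algorithm); all names are ours:

```python
import random

def pso_de_step(swarm, velocity, pbest, gbest, fitness, w=0.72, c1=1.49, c2=1.49):
    """One generation of the hybrid, sketched:
    1) the worse half of the swarm (here ranked by fitness, standing in
       for constraint violation) flies by standard PSO;
    2) every pbest is perturbed by a DE/rand/1 step and replaced if improved.
    """
    n, dim = len(swarm), len(swarm[0])
    order = sorted(range(n), key=lambda i: fitness(swarm[i]), reverse=True)
    for i in order[: n // 2]:                      # worse half moves by PSO
        for j in range(dim):
            velocity[i][j] = (w * velocity[i][j]
                              + c1 * random.random() * (pbest[i][j] - swarm[i][j])
                              + c2 * random.random() * (gbest[j] - swarm[i][j]))
            swarm[i][j] += velocity[i][j]
        if fitness(swarm[i]) < fitness(pbest[i]):
            pbest[i] = swarm[i][:]
    for i in range(n):                             # DE update of the pbest set
        F = random.uniform(0.9, 1.0)
        r1, r2, r3 = random.sample([k for k in range(n) if k != i], 3)
        trial = [pbest[r1][j] + F * (pbest[r2][j] - pbest[r3][j]) for j in range(dim)]
        if fitness(trial) < fitness(pbest[i]):
            pbest[i] = trial
    best = min(pbest, key=fitness)
    return best if fitness(best) < fitness(gbest) else gbest

# toy usage on an unconstrained sphere function
f = lambda x: sum(v * v for v in x)
swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]
vel = [[0.0] * 3 for _ in range(8)]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=f)[:]
g0 = f(gbest)
for _ in range(30):
    gbest = pso_de_step(swarm, vel, pbest, gbest, f)
```

Because gbest is replaced only on improvement, its objective value is monotonically non-increasing over generations, which matches the convergence curves of Figs. 1 and 2.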

Table 8
Comparison of the tension/compression spring design problem results of PSO-DE with respect to three other state-of-the-art algorithms.

Method             Best         Mean         Worst        SD       FFEs
PSO-DE             0.012665233  0.012665244  0.012665304  1.2E-08  24,950
CDE [5]            0.0126702    0.012703     0.012790     2.7E-05  240,000
Ray and Liew [11]  0.012669249  0.012922669  0.016717272  5.9E-04  25,167

6.2. The effectiveness of evolving 50% particles by PSO

For the sake of studying the effectiveness of evolving only 50% of the particles by PSO, we modify PSO-DE to allow all particles to be involved in PSO. Under this condition, the algorithm finds worse mean and worst results in six functions (i.e., g01, g02, g07, g08, g10 and g11). The details are shown in Table 12. Comparing Table 12 against Table 2, we can conclude that particles with a lower degree of constraint violation might cause the population to be trapped in a local optimum.
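Deb's feasibility-based rule [6], which PSO-DE uses to decide whether an updated particle replaces its pbest, reduces to three cases: a feasible solution beats an infeasible one, two feasible solutions compare by objective value, and two infeasible solutions compare by total constraint violation. A sketch (function names ours):

```python
def violation(x, ineq_constraints):
    """Total constraint violation: sum of positive parts of g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in ineq_constraints)

def deb_better(x, y, f, ineq_constraints):
    """True if x is preferred over y under Deb's feasibility-based rule."""
    vx, vy = violation(x, ineq_constraints), violation(y, ineq_constraints)
    if vx == 0.0 and vy == 0.0:
        return f(x) < f(y)      # both feasible: smaller objective wins
    if vx == 0.0 or vy == 0.0:
        return vx == 0.0        # feasible beats infeasible
    return vx < vy              # both infeasible: smaller violation wins

# toy check: minimize f(x) = x^2 subject to g(x) = x - 1 <= 0
f = lambda x: x * x
gs = [lambda x: x - 1.0]
```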
Table 9
Comparison of the pressure vessel design problem results of PSO-DE with respect to the other state-of-the-art algorithm.

Method   Best         Mean         Worst        SD       FFEs
PSO-DE   6059.714335  6059.714335  6059.714335  1.0E-10  42,100
CDE [5]  6059.7340    6085.2303    6371.0455    4.3E+01  240,000

6.3. The effectiveness of DE-based search

In order to identify the effectiveness of updating pBest by DE, we design an experiment in which only the PSO described in Section 4 is run. Table 13 shows the performance of PSO without DE in detail. The

Fig. 1. The objective function value curves of PSO-DE, PSO and DE for test function Fig. 2. The objective function value curves of PSO-DE, PSO and DE for test function
g01. g02.

global optima are found only in three functions (i.e., g06, g11 and g12); what is worse, the standard deviations over 100 runs for all the problems except g02 and g12 increase significantly. This experiment indicates that the use of DE is quite beneficial to improving the performance of PSO. Comparing Table 13 against Table 2, we can conclude that DE adjusts PSO's exploration and exploitation ability to satisfy the requirements of different optimization tasks.

6.4. PSO-DE vs. HPSO

HPSO [26] updates the velocities and positions using Eqs. (2) and (3) and uses Deb's feasibility-based rule [6] to determine whether or not the updated particles replace their corresponding pbests. In contrast to our method, HPSO employs the mechanism of simulated annealing (SA) and the feasibility-based rule, which are fused as a local search for gbest to help the search escape from local optima and strike a good balance between exploration and exploitation. As shown in Table 14, the average searching quality of PSO-DE is superior to that of HPSO. With respect to HPSO, the standard deviations of PSO-DE decrease significantly, and PSO-DE obtains better mean and worst solutions for the welded beam design problem, the pressure vessel design problem and the tension/compression spring design problem. Besides, it needs to be mentioned that the maximum computational cost of PSO-DE is 70,100 FFEs and the minimum is 10,600 FFEs, whereas HPSO performs 81,000 FFEs. So Table 14 indicates

Table 12
Experimental results obtained by PSO-DE which evolves all particles by PSO, with 100 independent runs on 11 benchmark functions.

Function  Best         Median       Mean         SD       Worst        FFEs
g01       -15.000000   -15.000000   -14.862187   5.0E-01  -12.453125   140,100
g02       -0.8036176   -0.759234    -0.7294312   7.7E-02  -0.4451479   140,100
g03       -1.0050100   -1.0050100   -1.0050100   1.1E-15  -1.0050100   140,100
g04       -30665.5387  -30665.5387  -30665.5387  8.7E-09  -30665.5387  70,100
g06       -6961.81388  -6961.81388  -6961.81388  1.8E-12  -6961.81388  140,100
g07       24.3062091   24.3062103   24.3062117   4.8E-06  24.3062488   140,100
g08       -0.09582594  -0.09582594  -0.09449230  9.4E-03  -0.02914408  10,800
g09       680.6300574  680.6300574  680.6300574  4.6E-13  680.6300574  140,100
g10       7049.248021  7049.248060  7049.248131  2.7E-04  7049.250303  140,100
g11       0.749999     0.749999     0.757095     4.0E-02  0.998719     70,100
g12       -1.000000    -1.000000    -1.000000    0.0E+00  -1.000000    17,700

Table 13
Experimental results obtained only by PSO with 100 independent runs on 11 benchmark functions.

Function  Best         Median       Mean         SD       Worst        FFEs
g01       -14.826583   -12.704538   -12.810001   1.4E+00  -9.036253    140,100
g02       -0.6628897   -0.4771309   -0.4840580   7.6E-02  -0.3493741   140,100
g03       -1.0049865   -1.0048991   -1.0048795   1.0E+00  -1.0042690   140,100
g04       -30663.8563  -30601.0847  -30570.9286  8.1E+01  -30252.3258  70,100
g06       -6961.81388  -6961.81388  -6961.81387  6.5E-06  -6961.81381  140,100
g07       24.3338653   26.0181836   27.1373743   3.0E+00  38.4299014   140,100
g08       -0.09582594  -0.09582594  -0.09449230  9.4E-03  -0.02914408  10,600
g09       680.6345517  680.8197237  680.9710606  5.1E-01  684.5289146  140,100
g10       7051.220659  7736.101904  8209.829782  3.0E-05  18527.51823  140,100
g11       0.750000     0.857219     0.860530     8.4E-02  0.998823     70,100
g12       -1.000000    -1.000000    -1.000000    0.0E+00  -1.000000    17,600

Table 14
Comparing PSO-DE with respect to HPSO [26].

Function                    Method     Best         Mean         Worst        SD        FFEs
g04                         PSO-DE     -30665.539   -30665.539   -30665.539   8.4E-10   70,100
                            HPSO [26]  -30665.539   -30665.539   -30665.539   1.7E-06   81,000
g08                         PSO-DE     -0.095826    -0.095826    -0.095826    1.3E-12   10,600
                            HPSO [26]  -0.095825    -0.095825    -0.095825    1.2E-10   81,000
g12                         PSO-DE     -1.000000    -1.000000    -1.000000    0.0E+00   17,600
                            HPSO [26]  -1.000000    -1.000000    -1.000000    1.6E-15   81,000
Tension/compression         PSO-DE     0.0126652    0.0126652    0.0126652    4.9E-12   42,100
spring design               HPSO [26]  0.0126652    0.0127072    0.0127191    1.58E-05  81,000
Pressure vessel design      PSO-DE     6059.7143    6059.7143    6059.7143    1.0E-10   42,100
                            HPSO [26]  6059.7143    6099.9323    6288.6770    8.6E+01   81,000
Welded beam design          PSO-DE     1.724852     1.724852     1.724852     6.7E-16   66,600
                            HPSO [26]  1.724852     1.749040     1.814295     4.0E-02   81,000

the superiority of the mechanism that evolves 50% of the particles by PSO and updates pbest by DE, in terms of stability as well as lower time consumption.

7. Conclusions

A new method named PSO-DE is introduced in this paper, which improves the performance of particle swarm optimization by incorporating differential evolution. PSO-DE allows only half of the particles to be evolved by PSO. Those particles with a higher degree of constraint violation fly throughout the search space according to the information delivered by their pbests and gbest in order to find better positions. Deb's feasibility-based rule [6] is used to compare the updated particle against its corresponding pbest, and the winner survives into pBest. Due to the utilization of DE, each pbest communicates and collaborates with its neighbors belonging to pBest in order to improve its performance. The approach obtains competitive results on 11 well-known benchmark functions adopted for constrained optimization and on five engineering optimization problems at a relatively low computational cost (measured by the number of FFEs). From the comparative study, PSO-DE has shown its potential to handle various COPs, and its performance is much better than that of eight other state-of-the-art COEAs in terms of the selected performance metrics. That is to say, this mechanism does improve the robustness of PSO. Future work will focus on two directions: (i) the application of PSO-DE to real COPs from industry; and (ii) the extension of the method to solve multi-objective problems.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their valuable and constructive comments and suggestions. This research was supported in part by the National Natural Science Foundation of China under Grants 60805027 and 90820302, and in part by the Research Fund for the Doctoral Program of Higher Education under Grant 200805330005.

Appendix A. Benchmark functions

A.1. g01

Minimize

$f(\mathbf{x}) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i$

subject to

$g_1(\mathbf{x}) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0$
$g_2(\mathbf{x}) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0$
$g_3(\mathbf{x}) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0$
$g_4(\mathbf{x}) = -8x_1 + x_{10} \le 0$
$g_5(\mathbf{x}) = -8x_2 + x_{11} \le 0$
$g_6(\mathbf{x}) = -8x_3 + x_{12} \le 0$
$g_7(\mathbf{x}) = -2x_4 - x_5 + x_{10} \le 0$
$g_8(\mathbf{x}) = -2x_6 - x_7 + x_{11} \le 0$
$g_9(\mathbf{x}) = -2x_8 - x_9 + x_{12} \le 0$

where the bounds are $0 \le x_i \le 1$ (i = 1, ..., 9), $0 \le x_i \le 100$ (i = 10, 11, 12) and $0 \le x_{13} \le 1$. The global minimum is at $\mathbf{x}^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)$, where $f(\mathbf{x}^*) = -15$.

A.2. g02

Maximize

$f(\mathbf{x}) = \left| \dfrac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right|$

subject to

$g_1(\mathbf{x}) = 0.75 - \prod_{i=1}^{n} x_i \le 0$
$g_2(\mathbf{x}) = \sum_{i=1}^{n} x_i - 7.5n \le 0$

where n = 20 and $0 \le x_i \le 10$ (i = 1, ..., n). The global maximum is unknown; the best reported solution is $f(\mathbf{x}^*) = 0.803619$.

A.3. g03

Maximize

$f(\mathbf{x}) = (\sqrt{n})^n \prod_{i=1}^{n} x_i$

subject to 1:430574; 1:321644; 9:828726; 8:280092; 8:375927Þ, where


X
n
2
f ð~
xÞ ¼ 24:3062091.
hð~
xÞ ¼ x1¼0
i
i¼1
A.7. g08
where n ¼ 10 and 0  xi  10 ði ¼ 1; . . . ; nÞ. The global maximum
pffiffiffi
is at xi  ¼ 1= nði ¼ 1; . . . ; nÞ, where f ð~
xÞ ¼ 1. Minimize

A.4. g04 sin 3 ð2px1 Þ sin ð2px2 Þ


f ð~
xÞ ¼
x31 ðx1 þ x2 Þ
Minimize
subject to
xÞ ¼ 5:3578547x23 þ 0:8356891x1 x5 þ 37:293239x1
f ð~
xÞ ¼ x21  x2 þ 1  0
g 1 ð~
 40792:141
xÞ ¼ 1  x1 þ ðx2  4Þ2  0
g 2 ð~
subject to where 0  x1  10 and 0  x2  10. The optimum solution is
located at ~
x ¼ ð1:2279713; 4:2453733Þ, where f ð~
xÞ ¼ 0:095825.

g 1 ð~
xÞ ¼ 85:334407 þ 0:0056858x2 x5 þ 0:0006262x1 x4 þ 0:0022053x3 x6  92
g 2 ð~
xÞ ¼ 85:334407  0:0056858x2 x5  0:0006262x1 x4 þ 0:0022053x3 x6  0
xÞ ¼ 80:51249 þ 0:0071317x2 x5 þ 0:0029955x1 x2 þ 0:0021813x23  110  0
g 3 ð~
g 4 ðxÞ ¼ 80:51249  0:0071317x2 x5  0:0029955x1 x2  0:0021813x23 þ 90  0
~
g 5 ð~
xÞ ¼ 9:300961 þ 0:0047026x3 x5 þ 0:0012547x1 x3 þ 0:0019085x3 x4  25  0
g 6 ð~
xÞ ¼ 9:300961  0:0047026x3 x5  0:0012547x1 x3  0:0019085x3 x4 þ 20  0

where 78  x1  102, 33  x2  45 and 27  xi  45 ði ¼ 3; 4; 5Þ.


The optimum solution is ~ x ¼ ð78; 33; 29:995256025682; 45; A.8. g09
36:775812905788Þ, where f ð~
xÞ ¼ 30665:539.
Minimize
A.5. g06
xÞ ¼ ðx1  10Þ2 þ 5ðx2  12Þ2 þ x43 þ 3ðx4  11Þ2 þ 10x65 þ 7x26
f ð~
Minimize
þ x47  4x6 x7  10x6  8x7

xÞ ¼ ðx1  10Þ3 þ ðxx  20Þ3


f ð~ subject to
subject to
xÞ ¼ 127 þ 2x21 þ 3x42 þ x3 þ 4x24 þ 5x5  0
g 1 ð~
xÞ ¼ 282 þ 7x1 þ 3x2 þ 10x23 þ x4  x5  0
g 2 ð~
xÞ ¼ ðx1  5Þ2  ðx2  5Þ2 þ 100  0
g 1 ð~ xÞ ¼ 196 þ 23x1 þ x22 þ 6x26  8x7  0
g 3 ð~
xÞ ¼ ðx1  6Þ2 þ ðx2  5Þ2  82:81  0
g 2 ð~ g 4 ðxÞ ¼ 4x21 þ x22  3x1 x2 þ 2x23 þ 5x6  11x7  0
~

where 13  x1  100 and 0  x2  100. The optimum solution is


where 10  xi  10 for ði ¼ 1; 2; . . . ; 7Þ. The optimum solution is
~
x ¼ ð14:095; 0:84296Þ, where f ð~
xÞ ¼ 6961:81388.
~
x ¼ ð2:330499; 1:951372; 0:4775414; 4:365726; 0:6244870;
1:1038131; 1:594227Þ, where f ð~ xÞ ¼ 680:6300573.
A.6. g07

Minimize A.9. g10

Minimize
xÞ ¼ x21 þ x22 þ x1 x2  14x1  16x2 þ ðx3  10Þ2 þ 4ðx4  5Þ2
f ð~
þ ðx5  3Þ2 þ 2ðx6  1Þ2 þ 5x27 þ 7ðx8  11Þ2 þ 2ðx9  10Þ2 f ð~
xÞ ¼ x1 þ x2 þ x3
þ ðx10  7Þ2 þ 45
subject to
subject to

g 1 ð~
xÞ ¼ 105 þ 4x1 þ 5x2  3x7 þ 9x8  0 g 1 ð~
xÞ ¼ 1 þ 0:0025ðx4 þ x6 Þ  0
g 2 ð~
xÞ ¼ 10x1  8x2  17x7 þ 2x8  0 g 2 ð~
xÞ ¼ 1 þ 0:0025ðx5 þ x7  x4 Þ  0
g 3 ð~
xÞ ¼ 8x1 þ 2x2 þ 5x9  2x10  12  0 g 3 ð~
xÞ ¼ 1 þ 0:01ðx8  x5 Þ  0
xÞ ¼ 3ðx1  2Þ2 þ 4ðx2  3Þ2 þ 2x23  7x4  120 
g 4 ð~ g 4 ð~
xÞ ¼ x1 x6 þ 833:33252x4 þ 100x1  83333:333  0
xÞ ¼ 5x21 þ 8x2 þ ðx3  6Þ2  2x4  40  0
g 5 ð~ g 5 ð~
xÞ ¼ x2 x7 þ 1250x5 þ x2 x4  1250x4  0
g 6 ðxÞ ¼ x21 þ 2ðx2  2Þ2  2x1 x2 þ 14x5  6x6  0
~ g 6 ð~
xÞ ¼ x3 x8 þ 1250000 þ x3 x5  2500x5  0
xÞ ¼ 0:5ðx1  8Þ2 þ 2ðx2  4Þ2 þ 3x25  x6  30  0
g 7 ð~
xÞ ¼ 3x1 þ 6x2 þ 12ðx9  8Þ2  7x10  0
g 8 ð~ where 100  x1  10; 000, 1000  xi  10; 000 ði ¼ 2; 3Þ and
1000  xi  10; 000 ði ¼ 4; . . . ; 8Þ. The optimum solution is ~
x ¼
where 10  xi  10 ði ¼ 1; 2; . . . ; 10Þ. The optimum solution is ð579:3066; 1359:9707; 5109:9707; 182:0177; 295:601; 217:982;
~
x ¼ ð2:171996; 2:363683; 8:773926; 5:095984; 0:9906548; 286:165; 395:6012Þ, where f ð~ xÞ ¼ 7049:248021.
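Definitions such as those above translate directly into code. As an illustration, here is g06 (Appendix A.5) encoded as an objective plus a list of $g_i(\mathbf{x}) \le 0$ constraints, with a simple feasibility check; this is a sketch of our own, not code from the paper:

```python
def g06_objective(x):
    # f(x) = (x1 - 10)^3 + (x2 - 20)^3
    return (x[0] - 10.0) ** 3 + (x[1] - 20.0) ** 3

def g06_constraints(x):
    # constraints in g_i(x) <= 0 form
    g1 = -(x[0] - 5.0) ** 2 - (x[1] - 5.0) ** 2 + 100.0
    g2 = (x[0] - 6.0) ** 2 + (x[1] - 5.0) ** 2 - 82.81
    return [g1, g2]

def is_feasible(x, tol=0.0):
    # a small tolerance is needed at the optimum, where both
    # constraints are active (equal to zero up to rounding)
    return all(g <= tol for g in g06_constraints(x))

x_star = (14.095, 0.84296)  # reported optimum of g06
```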

A.10. g11

Minimize

$f(\mathbf{x}) = x_1^2 + (x_2 - 1)^2$

subject to

$h(\mathbf{x}) = x_2 - x_1^2 = 0$

where $-1 \le x_1 \le 1$ and $-1 \le x_2 \le 1$. The optimum solution is $\mathbf{x}^* = (\pm 1/\sqrt{2}, 1/2)$, where $f(\mathbf{x}^*) = 0.75$.

A.11. g12

Maximize

$f(\mathbf{x}) = \dfrac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}$

subject to

$g(\mathbf{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0$

where $0 \le x_i \le 10$ (i = 1, 2, 3) and $p, q, r = 1, 2, \ldots, 9$. The feasible region of the search space consists of $9^3$ disjoint spheres. A point $(x_1, x_2, x_3)$ is feasible if and only if there exist $p, q, r$ such that the above inequality holds. The optimum solution is located at $\mathbf{x}^* = (5, 5, 5)$, where $f(\mathbf{x}^*) = 1$.

Appendix B. Engineering design examples

B.1. Welded beam design

A welded beam is designed for minimum cost subject to constraints on the shear stress ($\tau$), the bending stress in the beam ($\sigma$), the buckling load on the bar ($P_c$) and the end deflection of the beam ($\delta$). There are four design variables: $h$ ($x_1$), $l$ ($x_2$), $t$ ($x_3$) and $b$ ($x_4$).

Minimize

$f(\mathbf{x}) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14.0 + x_2)$

subject to

$g_1(\mathbf{x}) = \tau(\mathbf{x}) - \tau_{\max} \le 0$
$g_2(\mathbf{x}) = \sigma(\mathbf{x}) - \sigma_{\max} \le 0$
$g_3(\mathbf{x}) = x_1 - x_4 \le 0$
$g_4(\mathbf{x}) = 0.10471x_1^2 + 0.04811x_3x_4(14.0 + x_2) - 5.0 \le 0$
$g_5(\mathbf{x}) = 0.125 - x_1 \le 0$
$g_6(\mathbf{x}) = \delta(\mathbf{x}) - \delta_{\max} \le 0$
$g_7(\mathbf{x}) = P - P_c(\mathbf{x}) \le 0$

The other parameters are defined as follows:

$\tau(\mathbf{x}) = \sqrt{(\tau')^2 + (\tau'')^2 + 2\tau'\tau''\dfrac{x_2}{2R}}, \quad \tau' = \dfrac{P}{\sqrt{2}x_1x_2}$

$\tau'' = \dfrac{MR}{J}, \quad M = P\left(L + \dfrac{x_2}{2}\right), \quad R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}$

$J = 2\left\{\sqrt{2}x_1x_2\left[\dfrac{x_2^2}{12} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}, \quad \sigma(\mathbf{x}) = \dfrac{6PL}{x_4x_3^2}$

$\delta(\mathbf{x}) = \dfrac{4PL^3}{Ex_3^3x_4}, \quad P_c(\mathbf{x}) = \dfrac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$

where P = 6000 lb, L = 14 in., $\delta_{\max}$ = 0.25 in., E = 30 x 10^6 psi, G = 12 x 10^6 psi, $\tau_{\max}$ = 13,600 psi, $\sigma_{\max}$ = 30,000 psi and $0.1 \le x_i \le 10.0$ (i = 1, 2, 3, 4).

B.2. Pressure vessel design

In this problem, the objective is to minimize the total cost ($f(\mathbf{x})$), including the cost of the material, forming and welding. There are four design variables: $T_s$ ($x_1$, thickness of the shell), $T_h$ ($x_2$, thickness of the head), $R$ ($x_3$, inner radius) and $L$ ($x_4$, length of the cylindrical section of the vessel, not including the head). Among the four design variables, $T_s$ and $T_h$ are integer multiples of 0.0625 in., the available thicknesses of rolled steel plates, while $R$ and $L$ are continuous variables.

Minimize

$f(\mathbf{x}) = 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3$

subject to

$g_1(\mathbf{x}) = -x_1 + 0.0193x_3 \le 0$
$g_2(\mathbf{x}) = -x_2 + 0.00954x_3 \le 0$
$g_3(\mathbf{x}) = -\pi x_3^2x_4 - \dfrac{4}{3}\pi x_3^3 + 1296000 \le 0$
$g_4(\mathbf{x}) = x_4 - 240 \le 0$

where $1 \le x_1 \le 99$, $1 \le x_2 \le 99$, $10 \le x_3 \le 200$ and $10 \le x_4 \le 200$.

B.3. Speed reducer design

Minimize

$f(\mathbf{x}) = 0.7854x_1x_2^2(3.3333x_3^2 + 14.9334x_3 - 43.0934) - 1.508x_1(x_6^2 + x_7^2) + 7.4777(x_6^3 + x_7^3) + 0.7854(x_4x_6^2 + x_5x_7^2)$

subject to

$g_1(\mathbf{x}) = \dfrac{27}{x_1x_2^2x_3} - 1 \le 0$
$g_2(\mathbf{x}) = \dfrac{397.5}{x_1x_2^2x_3^2} - 1 \le 0$
$g_3(\mathbf{x}) = \dfrac{1.93x_4^3}{x_2x_6^4x_3} - 1 \le 0$
$g_4(\mathbf{x}) = \dfrac{1.93x_5^3}{x_2x_7^4x_3} - 1 \le 0$
$g_5(\mathbf{x}) = \dfrac{[(745x_4/(x_2x_3))^2 + 16.9 \times 10^6]^{1/2}}{110.0x_6^3} - 1 \le 0$
$g_6(\mathbf{x}) = \dfrac{[(745x_5/(x_2x_3))^2 + 157.5 \times 10^6]^{1/2}}{85.0x_7^3} - 1 \le 0$
$g_7(\mathbf{x}) = \dfrac{x_2x_3}{40} - 1 \le 0$
$g_8(\mathbf{x}) = \dfrac{5x_2}{x_1} - 1 \le 0$
$g_9(\mathbf{x}) = \dfrac{x_1}{12x_2} - 1 \le 0$
$g_{10}(\mathbf{x}) = \dfrac{1.5x_6 + 1.9}{x_4} - 1 \le 0$
$g_{11}(\mathbf{x}) = \dfrac{1.1x_7 + 1.9}{x_5} - 1 \le 0$

where $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4 \le 8.3$, $7.3 \le x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$ and $5.0 \le x_7 \le 5.5$.

B.4. Three-bar truss design

Minimize

$f(\mathbf{x}) = (2\sqrt{2}x_1 + x_2) \cdot l$

subject to

$g_1(\mathbf{x}) = \dfrac{\sqrt{2}x_1 + x_2}{\sqrt{2}x_1^2 + 2x_1x_2}P - \sigma \le 0$
$g_2(\mathbf{x}) = \dfrac{x_2}{\sqrt{2}x_1^2 + 2x_1x_2}P - \sigma \le 0$
$g_3(\mathbf{x}) = \dfrac{1}{\sqrt{2}x_2 + x_1}P - \sigma \le 0$

where $0 \le x_1 \le 1$ and $0 \le x_2 \le 1$; $l$ = 100 cm, $P$ = 2 kN/cm^2, and $\sigma$ = 2 kN/cm^2.

B.5. Tension/compression spring design

This problem requires minimizing the weight ($f(\mathbf{x})$) of a tension/compression spring subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The design variables are the mean coil diameter $D$ ($x_2$), the wire diameter $d$ ($x_1$) and the number of active coils $P$ ($x_3$).

Minimize

$f(\mathbf{x}) = (x_3 + 2)x_2x_1^2$

subject to

$g_1(\mathbf{x}) = 1 - \dfrac{x_2^3x_3}{71785x_1^4} \le 0$
$g_2(\mathbf{x}) = \dfrac{4x_2^2 - x_1x_2}{12566(x_2x_1^3 - x_1^4)} + \dfrac{1}{5108x_1^2} - 1 \le 0$
$g_3(\mathbf{x}) = 1 - \dfrac{140.45x_1}{x_2^2x_3} \le 0$
$g_4(\mathbf{x}) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0$

where $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$ and $2 \le x_3 \le 15$.

References

[1] C.A.C. Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (2002) 1245-1287.
[2] Z. Michalewicz, K. Deb, M. Schmidt, T. Stidsen, Test-case generator for nonlinear continuous parameter optimization techniques, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 187-215.
[3] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1-32.
[4] B. Tessema, G. Yen, A self adaptive penalty function based algorithm for constrained optimization, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 246-253.
[5] F.Z. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation 186 (1) (2007) 340-356.
[6] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2000) 311-338.
[7] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284-294.
[8] A. Amirjanov, The development of a changing range genetic algorithm, Computer Methods in Applied Mechanics and Engineering 195 (2006) 2495-2508.
[9] N. Mladenovic, M. Drazic, V. Kovacevic-Vujcic, M. Cangalovic, General variable neighborhood search for the continuous optimization, European Journal of Operational Research 191 (3) (2008) 753-770.
[10] E. Mezura-Montes, C.A.C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (1) (2005) 1-17.
[11] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on the simulation of social behavior, IEEE Transactions on Evolutionary Computation 7 (4) (2003) 386-396.
[12] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European Journal of Operational Research 185 (3) (2008) 1155-1173.
[13] Z. Cai, Y. Wang, A multiobjective optimization-based evolutionary algorithm for constrained optimization, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 658-675.
[14] M. Clerc, J. Kennedy, The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58-73.
[15] R.A. Krohling, L. dos Santos Coelho, Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 36 (6) (2006) 1407-1416.
[16] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for dynamic optimization, Information Sciences 178 (15) (2008) 3096-3109.
[17] L. dos Santos Coelho, C.-S. Lee, Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches, International Journal of Electrical Power & Energy Systems 30 (5) (2008) 297-307.
[18] M. Maitra, A. Chatterjee, A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding, Expert Systems with Applications 34 (2) (2008) 1341-1350.
[19] F.A. Guerra, L. dos S. Coelho, Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization, Chaos, Solitons & Fractals 35 (5) (2008) 967-979.
[20] R. Storn, K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341-359.
[21] A. Salman, A.P. Engelbrecht, M.G.H. Omran, Empirical analysis of self-adaptive differential evolution, European Journal of Operational Research 183 (2) (2007) 785-804.
[22] R. Storn, System design by constraint adaptation and differential evolution, IEEE Transactions on Evolutionary Computation 3 (1) (1999) 22-34.
[23] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195 (2006) 4303-4322.
[24] K. Zielinski, R. Laur, Constrained single-objective optimization using differential evolution, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 223-230.
[25] S. Kukkonen, J. Lampinen, Constrained real-parameter optimization with generalized differential evolution, in: Proceedings of the 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 207-214.
[26] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2) (2007) 1407-1422.