Liu 2010
ARTICLE INFO

Article history:
Received 11 April 2008
Received in revised form 11 June 2009
Accepted 23 August 2009
Available online 29 August 2009

Keywords:
Particle swarm optimization
Differential evolution
Constrained optimization
PSO-DE

ABSTRACT

We propose a novel hybrid algorithm named PSO-DE, which integrates particle swarm optimization (PSO) with differential evolution (DE) to solve constrained numerical and engineering optimization problems. Traditional PSO easily falls into stagnation when no particle discovers a position better than its previous best for several generations. Because of its strong searching ability, DE is incorporated to update the previous best positions of the particles and force PSO to jump out of stagnation. The hybrid algorithm speeds up convergence and improves the algorithm's performance. We test the presented method on 11 well-known benchmark test functions and five engineering optimization functions. Comparisons show that PSO-DE outperforms or performs similarly to seven state-of-the-art approaches in terms of the quality of the resulting solutions.

© 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.asoc.2009.08.031
630 H. Liu et al. / Applied Soft Computing 10 (2010) 629–640
region and may converge to an infeasible solution. Instead, a too large penalty coefficient will result in the loss of some valuable information provided by infeasible individuals. In [4], a self-adaptive penalty function based on a genetic algorithm (SAPF) is proposed. Both the distance value and the penalty are based on the normalized fitness value and the normalized degree of constraint violation. The final fitness value of each individual is calculated by adding the penalty value to the corresponding distance value. In [5], a DE based on a co-evolution mechanism, named CDE, is proposed to solve COPs. Due to the co-evolution, not only decision solutions but also penalty factors are adjusted by differential evolution.

Apart from the penalty function method, several novel techniques have been utilized to handle constraints. For special representations and operators, how to determine the appropriate generic representation scheme remains an open issue. The use of special representations and operators is, without doubt, quite useful for the intended application for which they were designed, but their generalization to other (even similar) problems is by no means obvious. When an infeasible solution can be easily (or at least at a low computational cost) transformed into a feasible solution, repair algorithms are a good choice. However, this is not always possible, and in some cases repair operators may introduce a strong bias into the search, harming the evolutionary process itself. Furthermore, this approach is problem-dependent, since a specific repair algorithm has to be designed for each particular problem. One problem with the separation of constraints and objectives is that in cases where the ratio between the feasible region and the whole search space is too small (for example, when there are constraints that are very difficult to satisfy), this technique will fail unless a feasible point is introduced in the initial population.

Deb [6] proposed a feasibility-based rule, where pair-wise solutions are compared using the following criteria: (1) any feasible solution is preferred to any infeasible solution; (2) between two feasible solutions, the one with the better objective function value is preferred; (3) between two infeasible solutions, the one with the smaller degree of constraint violation is preferred. However, his technique has problems maintaining diversity in the population. Runarsson and Yao [7] introduced a stochastic ranking method to balance the objective and penalty functions. Given pair-wise adjacent individuals: (1) if both individuals are feasible, their rank is determined according to the objective function value; else (2) the probability of ranking according to the objective function value is P_f, while the probability of ranking according to the constraint violation value is (1 − P_f). Its drawback is the need for the most appropriate value of P_f. Amirjanov [8] investigated an approach named the changing range-based genetic algorithm (CRGA), which adaptively shifts and shrinks the size of the search space of the feasible region by employing feasible and infeasible solutions in the population to reach the global optimum. In [9], a general variable neighborhood search (VNS) heuristic is developed to solve COPs. VNS defines a set of neighborhood structures to conduct a search through the solution space. It systematically exploits the idea of neighborhood change, both in the descent to local minima and in the escape from the valleys which contain them. Mezura-Montes and Coello [10] proposed a simple multi-membered evolution strategy (SMES). SMES uses a simple diversity mechanism to allow the individual with the lowest amount of constraint violation and the best value of the objective function to be selected for the next population. By emulating society behavior, Ray and Liew [11] made use of intra- and intersociety interactions within a formal society and civilization model to solve engineering optimization problems. A society corresponds to a cluster of points, while a civilization is a set of all such societies at any given time. Every society has its set of better-performing individuals that help others in the society to improve through an intrasociety information exchange. In [12], a direct extension of ant colony optimization (ACO) is proposed for continuous optimization. Taking advantage of multi-objective optimization techniques, Cai and Wang [13] presented the non-dominated individuals replacement scheme, which selects one non-dominated individual from the offspring population and then applies it to replace one dominated individual in the parent population.

The organization of the remaining paper is as follows. In Sections 2 and 3, PSO and DE are briefly introduced. In Section 4, the hybridization of particle swarm optimization with differential evolution, named PSO-DE, is proposed and explained in detail. Simulation results and comparisons are presented in Section 5, and the discussion is provided in Section 6. Finally, we conclude the paper in Section 7.

2. Basics of PSO

Particle swarm optimization is a stochastic global optimization method inspired by the choreography of a bird flock. PSO relies on the exchange of information between individuals, called particles, of the population, called a swarm. In PSO, each particle adjusts its trajectory stochastically towards the positions of its own previous best performance (pbest) and the best previous performance of its neighbors (nbest) or of the whole swarm (gbest). At the t-th iteration, for the i-th particle, the position vector and the velocity vector are X_i^t = (x_{i,1}^t, ..., x_{i,n}^t) and V_i^t = (v_{i,1}^t, ..., v_{i,n}^t). The velocity and position updating rules are given by

v_{i,j}^{t+1} = ω v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t − x_{i,j}^t) + c_2 r_2 (gbest_j^t − x_{i,j}^t),   (2)

x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1},   (3)

where j ∈ {1, ..., n}, ω ∈ [0.0, 1.0] is the inertia factor, c_1 and c_2 are positive constants, and r_1 and r_2 are two uniformly distributed random numbers in the range [0, 1]. In this version, the variable V_i^t is limited to the range ±V_max. When a particle discovers a position that is better than any it has found previously, it stores the new position in the corresponding pbest. Clerc and Kennedy [14] introduced the velocity adjustment

v_{i,j}^{t+1} = χ (v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t − x_{i,j}^t) + c_2 r_2 (gbest_j^t − x_{i,j}^t)),   (4)

where χ = 2κ / |2 − φ − sqrt(φ² − 4φ)| with φ = c_1 + c_2 > 4. Due to the constriction coefficient χ, the algorithm requires no explicit limit V_max. Krohling and dos Santos Coelho [15] analyzed Eq. (4) and concluded that the interval [0.72, 0.86] is a possible good choice for χ. So, instead of χ, the absolute value of a Gaussian probability distribution with zero mean and unit variance, abs(N(0, 1)), is introduced into the velocity equation:

v_{i,j}^{t+1} = R_1 (pbest_{i,j}^t − x_{i,j}^t) + R_2 (gbest_j^t − x_{i,j}^t),   (5)

where R_1 and R_2 are generated using abs(N(0, 1)). According to statistical knowledge, the mean of abs(N(0, 1)) is 0.798 and the variance is 0.36. In order to solve COPs, they introduced Lagrange multipliers to transform a COP into a dual or min–max problem. Two populations of independent PSO are evolved in the co-evolutionary particle swarm using Gaussian distribution (CPSO-GD) [15]: the first PSO focuses on evolving the individuals while the vector of Lagrangian multipliers is kept frozen, and the other PSO focuses on evolving the Lagrangian multiplier vector while the individuals are kept frozen. The two PSOs interact with each other through a common fitness evaluation. The first PSO provides the optimum individual of the COP in the end.

A novel multi-strategy ensemble PSO algorithm (MEPSO) [16] introduces two new strategies, Gaussian local search and differential mutation, applied to one part of its population (part I) and to the other part (part II), respectively.
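As a concrete illustration of the update rules in Eqs. (2), (3) and (5) above, here is a minimal NumPy sketch. The inertia weight and acceleration constants shown are illustrative defaults, not values prescribed by the paper, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_inertia(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Eq. (2): inertia term plus randomly weighted pulls toward the
    particle's own pbest and the swarm's gbest (r1, r2 ~ U[0, 1])."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def velocity_gaussian(x, pbest, gbest):
    """Eq. (5), the Krohling-dos Santos Coelho variant: the weights are
    |N(0, 1)| draws, so no inertia weight or V_max clamp is needed."""
    R1 = np.abs(rng.standard_normal(x.size))
    R2 = np.abs(rng.standard_normal(x.size))
    return R1 * (pbest - x) + R2 * (gbest - x)

def step(x, v):
    """Eq. (3): the new position is the old position plus the new velocity."""
    return x + v
```

Note how a particle sitting exactly on both its pbest and gbest gets zero velocity under Eq. (5) and freezes — precisely the stagnation behavior the paper sets out to fix.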
In every iteration, each particle of part I has probability P_ls of performing the Gaussian local search defined in Eqs. (6) and (3), and probability (1 − P_ls) of performing the conventional search defined in Eqs. (2) and (3). gbest is the best solution found by all particles, in both part I and part II. For each particle of part II, the differential mutation operator defined in Eq. (7) is performed to change the direction of its velocity.

v_{i,j}^{t+1} = c_3 R_3,   (6)

v_{i,j}^{t+1} = sgn(r_1 − 0.5) (ω v_{i,j}^t + c_1 r_1 (pbest_{i,j}^t − x_{i,j}^t) + c_2 r_2 (p_a − x_{i,j}^t)),   (7)

where R_3 is generated using N(0, 1), c_3 is a positive constant, and p_a is the best solution found by particle a, which is chosen randomly from part I. The moving peaks benchmark (MPB) and the dynamic Rastrigin function are used to test the performance of MEPSO. dos Santos Coelho and Lee [17] proposed that the random numbers (c_1, c_2, r_1 and r_2) of the velocity updating equation of PSO be generated using the Gaussian probability distribution and/or chaotic sequences in the interval [−1, 1], and then mapped to the interval [0, 1]. Maitra and Chatterjee [18] proposed a hybrid cooperative-comprehensive learning PSO algorithm for multilevel thresholding in histogram-based image segmentation. In [19], Gaussian functions and PSO are used to select and adjust radial basis function neural networks.

3. Basics of DE

Differential evolution, a stochastic, simple yet powerful evolutionary algorithm, not only has the advantage of very few control variables but also performs well in convergence. DE was introduced for global optimization by Storn and Price [20]. DE creates new candidate solutions by perturbing the parent individual with the weighted difference of several other randomly chosen individuals of the same population. A candidate replaces the parent only if it is better than its parent. Thereafter, DE guides the population towards the vicinity of the global optimum through repeated cycles of mutation, crossover and selection. The main procedure of DE is explained in detail as follows.

Mutation: For each individual X_i^t = {x_{i,1}^t, x_{i,2}^t, ..., x_{i,n}^t} (i ∈ {1, 2, ..., NP}) at generation t, an associated mutant individual Y_i^t = {y_{i,1}^t, y_{i,2}^t, ..., y_{i,n}^t} can be created by using one of the mutation strategies. The most used strategies are:

rand/1:             y_{i,j}^t = x_{r[1],j}^t + F (x_{r[2],j}^t − x_{r[3],j}^t)
best/1:             y_{i,j}^t = x_{best,j}^t + F (x_{r[1],j}^t − x_{r[2],j}^t)
current to best/1:  y_{i,j}^t = x_{i,j}^t + F (x_{best,j}^t − x_{i,j}^t) + F (x_{r[1],j}^t − x_{r[2],j}^t)
best/2:             y_{i,j}^t = x_{best,j}^t + F (x_{r[1],j}^t − x_{r[2],j}^t) + F (x_{r[3],j}^t − x_{r[4],j}^t)
rand/2:             y_{i,j}^t = x_{r[1],j}^t + F (x_{r[2],j}^t − x_{r[3],j}^t) + F (x_{r[4],j}^t − x_{r[5],j}^t)

where r[k] (k ∈ {1, 2, ..., 5}) is a uniformly distributed random index in the range [1, NP], j ∈ {1, ..., n}, x_{best,j}^t is the best individual of the population at generation t, and F (F ∈ [0, 2]) is an amplification factor. Salman et al. [21] introduced a self-adapting parameter F:

F_i = F_{i1} + N(0, 0.5) (F_{i2} − F_{i3}),   (8)

where i1, i2 and i3 are uniformly distributed random numbers in the range [0, NP] and i1 ≠ i2 ≠ i3.

Crossover: DE applies a crossover operator on X_i^t and Y_i^t to generate the offspring individual Z_i^t = {z_{i,1}^t, z_{i,2}^t, ..., z_{i,n}^t}. The genes of Z_i^t are inherited from X_i^t or Y_i^t, determined by a parameter called the crossover probability (CR ∈ [0, 1]), as follows:

z_{i,j}^t = y_{i,j}^t,  if rand ≤ CR or j = j_rand
          = x_{i,j}^t,  otherwise                       (9)

where rand is a uniformly distributed random number in the range [0, 1], and j_rand is a uniformly distributed random index in the range [1, n].

Selection: The offspring individual Z_i^t competes against the parent individual X_i^t using the greedy criterion, and the survivor enters generation t + 1:

X_i^{t+1} = Z_i^t,  if f(Z_i^t) ≤ f(X_i^t)
          = X_i^t,  otherwise                          (10)

Different techniques have been integrated into DE to solve COPs. Constraint adaptation by differential evolution (CADE) [22] combines the ideas of constraint adaptation and DE into a versatile design method. CADE utilizes a so-called region of acceptability (ROA) as a selection operator. If Z_i^t lies within the ROA, then X_i^{t+1} = Z_i^t. Otherwise, the procedure of DE is repeated up to several times. If the generated offspring still lies outside the ROA, X_i^{t+1} is set to X_i^t. In [23], a cultural algorithm with a DE population (CULDE) is proposed; the variation operator of differential evolution is influenced by the belief space to generate the offspring population.

4. Proposed method

In this section, PSO-DE is introduced in detail. In order to handle the constraints, we minimize the original objective function f(x) as well as the degree of constraint violation G(x). Two populations of the same size NP are used. In the initial step of the algorithm, a population (denoted by pop) is created randomly, and the replication of pop is denoted as pBest. Note that pBest is utilized to store each particle's pbest. At each generation, pop is sorted according to the degree of constraint violation in descending order. In order to keep a one-to-one mapping between each particle and its pbest, the order of pBest changes when pop is sorted. Only the first half of pop is evolved, using Krohling and dos Santos Coelho's PSO [15]. If a variable value x_{i,j}^{t+1} of an X_i^{t+1} generated by this PSO violates the boundary constraint, the violating value is reflected back from the violated boundary using the following rule [24]:

x_{i,j}^{t+1} = 0.5 (l(j) + x_{i,j}^t),  if x_{i,j}^{t+1} < l(j)
             = 0.5 (u(j) + x_{i,j}^t),  if x_{i,j}^{t+1} > u(j)     (11)
             = x_{i,j}^{t+1},           otherwise

Algorithm 1: PSO-DE

Input: population size NP; objective function f; degree of constraint violation G; upper bounds of the variables U = {u(1), ..., u(n)}; lower bounds of the variables L = {l(1), ..., l(n)}
Output: the best objective function value f_best

Initialize a population pop of NP particles with random positions, each clamped within [L, U];
Set the velocity of each particle to zero;
Evaluate f and G for all particles;
pBest = pop;   % pBest is used to store each particle's previous best position (pbest) %
gbest = the optimum of pBest according to Deb's feasibility-based rule [6];
foreach generation do
    Sort pop in descending order according to G, and change the order of pBest when the order of pop changes, to keep the one-to-one mapping between each particle and its pbest;
    p1 = pop's first half;
    foreach individual a of p1 do
        Update a's velocity and position by Eqs. (5) and (3);
        if a violates the boundary then
            Modify its variables by Eq. (11);
        end
        Calculate f and G for a;
        Compare a against the corresponding pbest according to Deb's feasibility-based rule [6]; if a wins, it replaces that pbest;
    end
    pop's first half = p1;
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    foreach pbest of pBest do
        Generate three offspring by DE's three mutation strategies (rand/1, current to best/1, and rand/2; see Section 3 for details), and store them in a set B;
        foreach individual b of B do
            if b violates the boundary then
                Modify its variables by Eq. (12);
            end
            Calculate f and G for b;
            Compare b with pbest by Eq. (13); if b wins, it replaces pbest;
        end
    end
    gbest = the optimal individual of pBest according to Deb's feasibility-based rule [6];
    f_best = the objective function value of gbest;
end
return f_best

The updated particle is compared with its corresponding pbest in pBest by Deb's selection criterion [6]. If the updated particle wins, it replaces its corresponding pbest and survives into pBest; if not, the corresponding pbest remains. After the PSO evolution, we employ DE to update pBest. Each pbest in pBest produces three offspring by using DE's three mutation strategies: rand/1, current to best/1, and rand/2. If a variable value z_{i,j}^t of an offspring Z_i^t violates the boundary constraint, the violating value is reflected back from the violated boundary using the following rule [25]:

z_{i,j}^t = 2 l(j) − z_{i,j}^t,  if z_{i,j}^t < l(j)
          = 2 u(j) − z_{i,j}^t,  if z_{i,j}^t > u(j)      (12)
          = z_{i,j}^t,           otherwise

We use a selection criterion to compare pbest against its offspring. The considered individual is replaced at each generation only if its offspring has a better fitness value and a no-higher degree of constraint violation. The criterion for replacement is described as follows:

pbest_i^{t+1} = Z_i^t,      if f(Z_i^t) < f(pbest_i^t) and G(Z_i^t) ≤ G(pbest_i^t)
              = pbest_i^t,  otherwise                      (13)

This process is repeated generation after generation until some specific stopping criterion is met. PSO-DE's main procedure is summarized in Algorithm 1.

4.1. Why only 50% of the particles are involved in PSO

Krohling and dos Santos Coelho's PSO [15] stagnates easily, which is caused by its velocity equation, Eq. (5). Eq. (5) consists of two parts: the first is the randomly weighted difference between the particle and its pbest, and the second is the randomly weighted difference between the particle and gbest. The first part represents the personal experience of each particle, which makes the particle move towards its own best position. The second part represents the collaborative behavior of the particles in finding the global optimal solution, which pulls each particle towards the best position found by the swarm. Eq. (3) provides the new position of the particle by adding its new velocity to its current position. As mentioned above, if a particle stays at the position of gbest, its velocity tends to zero and its position is unchanged. If both (pbest_i^t − x_i^t) and (gbest^t − x_i^t) are small, the particle almost freezes in its track for some generations. If pbest and gbest are too close, some particles are inactive during the evolution process. When the position of a particle equals its pbest, the velocity is influenced only by gbest. Eq. (5) thus indicates that pbest and gbest play a primordial role in PSO's evolution process.

Based on Deb's feasibility-based rule [6], the lower a particle's degree of constraint violation, the higher the probability that it clusters around gbest. So particles with lower degrees of constraint violation find it very difficult to jump out of gbest's adjacent region. This may cause gbest to stay at the same position for a long time and the population to lose diversity. In other words, premature convergence may occur in the early evolution stage. Moreover, if pop converges too quickly to a position, which may be a local optimum, particles will also give up attempts at exploration and stagnate for the rest of the evolution process. On the other hand, for a particle with a higher degree of constraint violation, its pbest differs significantly from gbest. Its performance will be improved by extracting meaningful information from its own pbest and from the gbest belonging to the same population, so that it is dragged toward a better-performing point. The updated particle may be better than gbest, and then gbest jumps to a new position that is obviously different from its current one. This replacement may spur PSO to adjust its evolutionary direction and guide particles to fly through a new region that has not been searched before. For the purpose of improving the performance of PSO, only the first half of the individuals are extracted from the population pop after ranking the individuals based on their constraint violations in descending order. Thus, a temporary population p1 of size NP/2 is obtained. Thereafter, p1 is involved in PSO's evolution. To some extent, this mechanism maintains the diversity of pop and slows down the convergence speed to avoid stagnation.

4.2. DE-based search for pBest

In order to compensate for the convergence speed and supply more valuable information to adjust the particles' trajectories, DE is applied to update pBest, which ensures highly preferable positions in pBest and increases the probability of finding a better solution. Only three representatives of the five DE mutation strategies are used, because if both the best/1 and best/2 strategies were integrated into PSO-DE, the information carried by gbest would be reutilized in producing new individuals; under this condition, pBest might easily be trapped in a local optimum. By applying the three strategies rand/1, current to best/1 and rand/2 to a pbest, its performance might be improved, which in turn leads to a better-performing pBest over time. The DE-based search process motivates the particles to search new regions, including some less explored ones, and enhances the particles' capability to explore the vast search space.
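The building blocks of this section — Deb's rule [6], the two boundary-repair rules of Eqs. (11) and (12), the replacement criterion of Eq. (13), and the three mutation strategies applied to pBest — can be sketched as follows. This is an illustration under our own naming; F = 0.95 is merely a sample from the range the experiments report, and the best individual of pBest is used as the "best" vector of the current-to-best/1 strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

def deb_better(f_a, G_a, f_b, G_b):
    """Deb's feasibility-based rule [6]: feasible beats infeasible;
    among feasible, smaller f wins; among infeasible, smaller G wins."""
    if G_a == 0 and G_b == 0:
        return f_a <= f_b
    if G_a == 0 or G_b == 0:
        return G_a == 0
    return G_a <= G_b

def reflect_pso(x_new, x_old, low, up):
    """Eq. (11): a PSO offspring variable outside [l(j), u(j)] becomes
    the midpoint of the violated bound and the previous value."""
    out = np.where(x_new < low, 0.5 * (low + x_old), x_new)
    return np.where(x_new > up, 0.5 * (up + x_old), out)

def reflect_de(z, low, up):
    """Eq. (12): a DE offspring variable is mirrored off the violated bound."""
    z = np.where(z < low, 2.0 * low - z, z)
    return np.where(z > up, 2.0 * up - z, z)

def replace_pbest(f_z, G_z, f_p, G_p):
    """Eq. (13): the offspring replaces pbest only if it has a better
    objective value AND a no-higher degree of constraint violation."""
    return f_z < f_p and G_z <= G_p

def de_offspring(pbest, i, gbest, F=0.95):
    """The three strategies applied to pBest (rand/1, current to best/1,
    rand/2); pbest is an (NP, n) array, gbest the best row of pBest."""
    NP = len(pbest)
    r = rng.choice([k for k in range(NP) if k != i], size=5, replace=False)
    x = pbest[i]
    return [
        pbest[r[0]] + F * (pbest[r[1]] - pbest[r[2]]),             # rand/1
        x + F * (gbest - x) + F * (pbest[r[0]] - pbest[r[1]]),     # current to best/1
        pbest[r[0]] + F * (pbest[r[1]] - pbest[r[2]])
                    + F * (pbest[r[3]] - pbest[r[4]]),             # rand/2
    ]
```

In a full loop, each of the three offspring would be repaired with `reflect_de`, evaluated, and then compared against the stored pbest with `replace_pbest`, exactly as in the second half of Algorithm 1.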
In addition, this way can increase the diversity of pBest and the probability of finding a better pbest, so as to enhance the chances of finding the global optimum if it has not yet been determined. As we know, better pbest and gbest guide particles towards the optimum effectively and speed up the convergence.

The two populations work separately, but individuals in the two parts are also interrelated. pBest stores the personal best positions of the particles in pop. The best solution found by pBest can be the global attractor of pop (if it is also the best of the entire swarm), which will guide pop to fly to the new best (maybe the changed optimum). PSO can gradually search the neighborhood of the best solution found so far, and DE can avoid convergence to a local optimum. In a word, by hybridizing DE and PSO, PSO-DE is a good trade-off between accuracy and efficiency. PSO-DE can increase the probability of hitting the global optimum and reduce the number of fitness function evaluations (FFEs) required to obtain competitive solutions.

5. Experimental study

Eleven benchmark test functions and five engineering optimization functions are used to validate the proposed PSO-DE. These test cases include various types (linear, nonlinear and quadratic) of objective functions with different numbers of decision variables, and a range of types (linear inequalities, nonlinear equalities, and nonlinear inequalities) and numbers of constraints. These 16 problems pose a challenge for constraint-handling methods and are a good measure for testing their ability. All test functions are listed in Appendix A.

5.1. Benchmark test function

The main characteristics of the 11 benchmark functions are reported in Table 1, where a is the number of constraints active at the optimal solution. In addition, ρ is the ratio between the size of the feasible search space and that of the entire search space, i.e., ρ = |V|/|S|, where |S| is the number of solutions randomly generated from S and |V| is the number of feasible solutions out of these |S| solutions. In the experimental setup, |S| = 1,000,000.

Table 1
Summary of 11 benchmark problems (LI: linear inequalities; NE: nonlinear equalities; NI: nonlinear inequalities; a: constraints active at the optimum).

Function   n    Type of f   ρ           LI   NE   NI   a
g01        13   Quadratic    0.0003 %    9    0    0   6
g02        20   Nonlinear   99.9965 %    1    0    1   1
g03        10   Nonlinear    0.0000 %    0    1    0   1
g04         5   Quadratic   26.9356 %    0    0    6   2
g06         2   Nonlinear    0.0064 %    0    0    2   2
g07        10   Quadratic    0.0003 %    3    0    5   6
g08         2   Nonlinear    0.8640 %    0    0    2   0
g09         7   Nonlinear    0.5256 %    0    0    4   2
g10         8   Linear       0.0005 %    3    0    3   3
g11         2   Quadratic    0.0000 %    0    1    0   1
g12         3   Quadratic    4.779 %     0    1   93   0

For each test case, 100 independent runs are performed in VC++ 6.0 (the source code may be obtained from the authors upon request). The parameters used by PSO-DE are the following: NP = 100; F and CR are randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. For g03, a tolerance value ε = 0.001 is used, while for g11, ε = 0.000001. The number of FFEs is set as in Table 2 for each test function. In each run, the number of iterations for seven test cases (i.e., g01, g02, g03, g06, g07, g09 and g10) is 800. For g04 and g11, it is 400. In addition, it is 60 for g08 and 100 for g12. Table 2 summarizes the experimental results using the above parameters, showing the best, median, mean, and worst objective function values, and the standard deviations for each test problem. As described in Table 2, the global optima are consistently found by PSO-DE over the 100 independent runs in seven test cases (i.e., g01, g04, g06, g07, g09, g10, and g12). For the remaining test cases, the resulting solutions achieved are very close to the global optima. Note that the standard deviations over the 100 runs are relatively small for all the problems.

PSO-DE is compared against six aforementioned state-of-the-art approaches: CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15] and SMES [10]. As shown in Tables 3–5, the proposed method outperforms CRGA, SAPF, CDE, CPSO-GD and SMES, and performs similarly to CULDE in terms of the selected performance metrics, such as the best, mean, and worst objective function values. With respect to CULDE, the proposed approach finds better best results in two problems (g10 and g11) and similar best results in the other nine problems (g01, g02, g03, g04, g06, g07, g08, g09 and g12). Also, the proposed technique reaches better mean and worst results in four problems (g02, g03, g10 and g11). Similar mean and worst results are found in seven problems (g01, g04, g06, g07, g08, g09 and g12). As far as the computational cost (i.e., the number of FFEs) is concerned, PSO-DE requires from 10,600 to 140,100 FFEs to obtain the reported results, compared against 500,000 FFEs used by SAPF, 248,000 FFEs by CDE, 100,100 FFEs by CULDE and 240,000 FFEs by SMES. So we can conclude that the computational cost of PSO-DE is less than that of the aforementioned approaches, except for CULDE [23].

5.2. Engineering optimization

In order to study its performance in solving real-world engineering design problems, the proposed method is applied to 5 well-known engineering design problems. We perform 100 independent runs with the same setting of parameters as follows: NP = 100; F and CR are randomly generated within [0.9, 1.0] and [0.95, 1.0], respectively. The number of FFEs is set as in Table 6 for each test function.
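The feasibility ratio ρ = |V|/|S| described above can be estimated with a short Monte Carlo sketch. This is illustrative only: the toy disc constraint below is ours, not one of the benchmark g-functions, and the sample count is reduced from the paper's |S| = 1,000,000 for speed:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_rho(constraints, low, up, n_samples=200_000):
    """Monte Carlo estimate of rho = |V| / |S|: draw |S| uniform samples
    from the box [low, up] and count those satisfying every g_i(x) <= 0."""
    X = rng.uniform(low, up, size=(n_samples, len(low)))
    feasible = np.ones(n_samples, dtype=bool)
    for g in constraints:
        feasible &= g(X) <= 0.0   # each g is vectorized over the rows of X
    return feasible.mean()

# Toy check: the unit disc inside the box [-1, 1]^2 has rho = pi/4 ~ 0.785.
disc = [lambda X: X[:, 0] ** 2 + X[:, 1] ** 2 - 1.0]
rho = estimate_rho(disc, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

The tiny ρ values in Table 1 (e.g. 0.0003 % for g01) mean that a random initial population is almost certainly entirely infeasible, which is exactly why the constraint-handling rule matters.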
Table 2
Experimental results obtained by PSO-DE with 100 independent runs on 11 benchmark functions.

Table 3
Comparison of the best results of PSO-DE with respect to six other state-of-the-art algorithms (CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15], SMES [10]). "NA" means not available.

Table 4
Comparison of the mean results of PSO-DE with respect to six other state-of-the-art algorithms (CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15], SMES [10]). "NA" means not available.

Table 5
Comparison of the worst results of PSO-DE with respect to six other state-of-the-art algorithms (CRGA [8], SAPF [4], CDE [5], CULDE [23], CPSO-GD [15], SMES [10]). "NA" means not available.

We will measure the quality of the results (better best and mean solutions found) and the robustness of PSO-DE (the standard deviation values). These statistical results are summarized in Table 6.

5.2.1. Welded beam design problem
The best feasible solution found by PSO-DE is f(0.205729640, 3.470488666, 9.036623910, 0.205729640) = 1.724852309. The problem has been solved by a number of researchers: Huang et al. [5] and Ray and Liew [11]. A comparison of results is presented in Table 7. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result using 33,000 FFEs is better than the result of CDE [5], which is reported using 240,000 FFEs.

5.2.2. Tension/compression spring design problem
This design optimization problem involves three continuous variables and four nonlinear inequality constraints. The best feasible solution found by PSO-DE is f(0.0516888101, 0.3567117001, 11.289319935) = 0.012665232900, which is the best-known result for this problem. The problem has been studied by CDE [5] and Ray and Liew [11]. A comparison of results is presented in Table 8. As can be seen, our method outperforms the two compared approaches in terms of quality and robustness. It is also interesting to note that our result using 24,950 FFEs is better than the result of CDE [5], which is reported using 240,000 FFEs.

5.2.3. Pressure vessel design problem
The best feasible solution obtained by PSO-DE is f(0.8125, 0.4375, 42.098445596, 176.636595842) = 6059.714335048. The statistical simulation solutions obtained by CDE [5] and PSO-DE are listed in Table 9. As shown in Table 9, the searching quality of our method is superior to that of CDE [5], and even the worst solution found by PSO-DE is better than the best solution reported
in [5]. Moreover, the standard deviation of the results obtained by PSO-DE in 100 independent runs for this problem is much smaller than that of CDE [5] in 50 independent runs. The total number of evaluations is 42,100 in PSO-DE, while in CDE [5] it is 240,000.

Table 6
Experimental results obtained by PSO-DE with 100 independent runs on five engineering design problems.

Table 7
Comparison of the welded beam design problem results of PSO-DE with respect to two other state-of-the-art algorithms.

Method             Best        Mean        Worst       SD        FFEs
PSO-DE             1.7248531   1.7248579   1.7248811   4.1E-06   33,000
CDE [5]            1.733461    1.768158    1.824105    2.2E-02   240,000
Ray and Liew [11]  2.3854347   3.0025883   6.3996785   9.6E-01   33,095

Table 10
Comparison of the speed reducer design problem results of PSO-DE with respect to the other state-of-the-art algorithm.

Method             Best         Mean         Worst        SD        FFEs
PSO-DE             2996.348167  2996.348174  2996.348204  6.4E-06   54,350
Ray and Liew [11]  2994.744241  3001.758264  3009.964736  4.0E+00   54,456

Table 11
Comparison of the three-bar truss design problem results of PSO-DE with respect to the other state-of-the-art algorithm.

Method             Best          Mean          Worst         SD        FFEs
PSO-DE             263.89584338  263.89584338  263.89584338  4.5E-10   17,600
Ray and Liew [11]  263.89584654  263.90335672  263.96975638  1.3E-02   17,610
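The per-problem statistics reported in these tables (best, median, mean, worst and standard deviation over independent runs) can be reproduced with a sketch like the following (the function name and dictionary keys are ours):

```python
import numpy as np

def run_statistics(values):
    """Summarize the objective values of a set of independent runs the
    way Tables 2-11 do: best, median, mean, worst and standard deviation
    (for minimization, best = min and worst = max)."""
    v = np.asarray(values, dtype=float)
    return {
        "best": float(v.min()),
        "median": float(np.median(v)),
        "mean": float(v.mean()),
        "worst": float(v.max()),
        "SD": float(v.std()),
    }
```

Feeding it the 100 final objective values of, say, the welded beam runs would yield the PSO-DE row of Table 7.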
Fig. 1. The objective function value curves of PSO-DE, PSO and DE for test function g01.

Fig. 2. The objective function value curves of PSO-DE, PSO and DE for test function g02.
Table 12
Experimental results over 100 independent runs on 11 benchmark functions for the PSO-DE variant that evolves all particles by PSO.

Table 13
Experimental results over 100 independent runs on 11 benchmark functions for PSO alone.

Table 14
Comparison of PSO-DE with HPSO [26].

global optima are found only in three functions (i.e., g06, g11 and g12); worse still, the standard deviations over 100 runs for all the problems except g02 and g12 increase significantly. This experiment indicates that the use of DE is quite beneficial to the performance of PSO. Comparing Table 13 against Table 2, we can conclude that DE adjusts PSO's exploration and exploitation ability to satisfy the requirements of different optimization tasks.

6.4. PSO-DE vs. HPSO

HPSO [26] updates the velocities and positions using Eqs. (2) and (3) and uses Deb's feasibility-based rule [6] to determine whether the updated particles replace their corresponding pbests or not. In contrast to our method, HPSO fuses a simulated annealing (SA) mechanism with the feasibility-based rule into a local search for gbest, which helps the search escape from local optima and strikes a good balance between exploration and exploitation. As shown in Table 14, the average search quality of PSO-DE is superior to that of HPSO. Compared with HPSO, the standard deviations of PSO-DE decrease significantly, and PSO-DE obtains better mean and worst solutions for the welded beam design problem, the pressure vessel design problem and the tension/compression spring design problem. Besides, it should be mentioned that PSO-DE uses at most 70,100 FFEs and as few as 10,600 FFEs, whereas HPSO performs 81,000 FFEs. So Table 14 indicates
the superiority of the mechanism that evolves 50% of the particles by PSO and updates pbest by DE, in terms of both stability and lower computational cost.

7. Conclusions

A new method named PSO-DE is introduced in this paper, which improves the performance of particle swarm optimization by incorporating differential evolution. PSO-DE allows only half of the particles to be evolved by PSO. Those particles with a higher degree of constraint violation fly through the search space, guided by the information delivered by their pbests and gbest, to search for better positions. Deb's feasibility-based rule [6] is used to compare each updated particle against its corresponding pbest, and the winner survives into pBest. Due to the utilization of DE, each pbest communicates and collaborates with its neighbors belonging to pBest in order to improve itself. The approach obtains competitive results on 11 well-known benchmark functions adopted for constrained optimization and on five engineering optimization problems at a relatively low computational cost (measured by the number of FFEs). The comparative study shows that PSO-DE can handle various COPs, and that its performance is much better than that of eight other state-of-the-art COEAs in terms of the selected performance metrics. That is to say, the mechanism does improve the robustness of PSO. Future work will focus on two directions: (i) the application of PSO-DE to real COPs from industry; and (ii) the extension of the method to multi-objective problems.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their valuable and constructive comments and suggestions. This research was supported in part by the National Natural Science Foundation of China under Grants 60805027 and 90820302, and in part by the Research Fund for the Doctoral Program of Higher Education under Grant 200805330005.

Appendix A. Benchmark functions

A.1. g01

Minimize

f(x) = 5 Σ_{i=1..4} xi - 5 Σ_{i=1..4} xi^2 - Σ_{i=5..13} xi

subject to

g1(x) = 2x1 + 2x2 + x10 + x11 - 10 ≤ 0
g2(x) = 2x1 + 2x3 + x10 + x12 - 10 ≤ 0
g3(x) = 2x2 + 2x3 + x11 + x12 - 10 ≤ 0
g4(x) = -8x1 + x10 ≤ 0
g5(x) = -8x2 + x11 ≤ 0
g6(x) = -8x3 + x12 ≤ 0
g7(x) = -2x4 - x5 + x10 ≤ 0
g8(x) = -2x6 - x7 + x11 ≤ 0
g9(x) = -2x8 - x9 + x12 ≤ 0

where the bounds are 0 ≤ xi ≤ 1 (i = 1, ..., 9), 0 ≤ xi ≤ 100 (i = 10, 11, 12) and 0 ≤ x13 ≤ 1. The global minimum is at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where f(x*) = -15.

A.2. g02

Maximize

f(x) = | (Σ_{i=1..n} cos^4(xi) - 2 Π_{i=1..n} cos^2(xi)) / √(Σ_{i=1..n} i·xi^2) |

subject to

g1(x) = 0.75 - Π_{i=1..n} xi ≤ 0
g2(x) = Σ_{i=1..n} xi - 7.5n ≤ 0

where n = 20 and 0 ≤ xi ≤ 10 (i = 1, ..., n). The global maximum is unknown; the best reported solution is f(x*) = 0.803619.
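The selection mechanism summarized in the conclusions above, Deb's feasibility-based rule [6] deciding which candidate survives into pBest, can be sketched in Python. The helper names and the DE/rand/1 operator shown here are our illustration of the idea, not the authors' exact implementation:

```python
import random

def violation(g_values):
    """Total constraint violation of a point: sum of the positive parts
    of the inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def deb_better(f_a, v_a, f_b, v_b):
    """Deb's feasibility-based rule [6]: candidate a beats candidate b if
    (i) both are feasible and a has the better objective, (ii) a is
    feasible and b is not, or (iii) both are infeasible and a violates
    the constraints less."""
    if v_a == 0.0 and v_b == 0.0:
        return f_a < f_b
    if v_a == 0.0 or v_b == 0.0:
        return v_a == 0.0
    return v_a < v_b

def de_rand_1(pbests, i, scale=0.8):
    """DE/rand/1 mutation over the pbest population: perturb one randomly
    chosen pbest by the scaled difference of two others (illustrative)."""
    r1, r2, r3 = random.sample([j for j in range(len(pbests)) if j != i], 3)
    return [a + scale * (b - c)
            for a, b, c in zip(pbests[r1], pbests[r2], pbests[r3])]
```

In the full algorithm, a trial vector produced by the DE operator would replace pbest_i only when `deb_better` says it wins; this is how each pbest "communicates and collaborates with its neighbors" in pBest.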
A.4. g04

Minimize

f(x) = 5.3578547x3^2 + 0.8356891x1 x5 + 37.293239x1 - 40792.141

subject to

g1(x) = 85.334407 + 0.0056858x2 x5 + 0.0006262x1 x4 - 0.0022053x3 x5 - 92 ≤ 0
g2(x) = -85.334407 - 0.0056858x2 x5 - 0.0006262x1 x4 + 0.0022053x3 x5 ≤ 0
g3(x) = 80.51249 + 0.0071317x2 x5 + 0.0029955x1 x2 + 0.0021813x3^2 - 110 ≤ 0
g4(x) = -80.51249 - 0.0071317x2 x5 - 0.0029955x1 x2 - 0.0021813x3^2 + 90 ≤ 0
g5(x) = 9.300961 + 0.0047026x3 x5 + 0.0012547x1 x3 + 0.0019085x3 x4 - 25 ≤ 0
g6(x) = -9.300961 - 0.0047026x3 x5 - 0.0012547x1 x3 - 0.0019085x3 x4 + 20 ≤ 0

where 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45 and 27 ≤ xi ≤ 45 (i = 3, 4, 5). The optimum solution is x* = (78, 33, 29.995256, 45, 36.775813), where f(x*) = -30665.539.
A.6. g07

Minimize

f(x) = x1^2 + x2^2 + x1 x2 - 14x1 - 16x2 + (x3 - 10)^2 + 4(x4 - 5)^2 + (x5 - 3)^2 + 2(x6 - 1)^2 + 5x7^2 + 7(x8 - 11)^2 + 2(x9 - 10)^2 + (x10 - 7)^2 + 45

subject to

g1(x) = -105 + 4x1 + 5x2 - 3x7 + 9x8 ≤ 0
g2(x) = 10x1 - 8x2 - 17x7 + 2x8 ≤ 0
g3(x) = -8x1 + 2x2 + 5x9 - 2x10 - 12 ≤ 0
g4(x) = 3(x1 - 2)^2 + 4(x2 - 3)^2 + 2x3^2 - 7x4 - 120 ≤ 0
g5(x) = 5x1^2 + 8x2 + (x3 - 6)^2 - 2x4 - 40 ≤ 0
g6(x) = x1^2 + 2(x2 - 2)^2 - 2x1 x2 + 14x5 - 6x6 ≤ 0
g7(x) = 0.5(x1 - 8)^2 + 2(x2 - 4)^2 + 3x5^2 - x6 - 30 ≤ 0
g8(x) = -3x1 + 6x2 + 12(x9 - 8)^2 - 7x10 ≤ 0

where -10 ≤ xi ≤ 10 (i = 1, 2, ..., 10). The optimum solution is x* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927), where f(x*) = 24.3062091.

A.9. g10

Minimize

f(x) = x1 + x2 + x3

subject to

g1(x) = -1 + 0.0025(x4 + x6) ≤ 0
g2(x) = -1 + 0.0025(x5 + x7 - x4) ≤ 0
g3(x) = -1 + 0.01(x8 - x5) ≤ 0
g4(x) = -x1 x6 + 833.33252x4 + 100x1 - 83333.333 ≤ 0
g5(x) = -x2 x7 + 1250x5 + x2 x4 - 1250x4 ≤ 0
g6(x) = -x3 x8 + 1250000 + x3 x5 - 2500x5 ≤ 0

where 100 ≤ x1 ≤ 10,000, 1000 ≤ xi ≤ 10,000 (i = 2, 3) and 10 ≤ xi ≤ 1000 (i = 4, ..., 8). The optimum solution is x* = (579.3066, 1359.9707, 5109.9707, 182.0177, 295.601, 217.982, 286.165, 395.6012), where f(x*) = 7049.248021.
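As a quick check of the g10 definition above, the objective and constraints can be evaluated directly at the reported best solution. The function names are ours; note that the printed solution is rounded, so the tight constraints hold only to printing precision:

```python
def g10_objective(x):
    """Objective of benchmark g10 (to be minimized)."""
    return x[0] + x[1] + x[2]

def g10_constraints(x):
    """The six inequality constraints g_i(x) <= 0 of g10."""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return [
        -1.0 + 0.0025 * (x4 + x6),
        -1.0 + 0.0025 * (x5 + x7 - x4),
        -1.0 + 0.01 * (x8 - x5),
        -x1 * x6 + 833.33252 * x4 + 100.0 * x1 - 83333.333,
        -x2 * x7 + 1250.0 * x5 + x2 * x4 - 1250.0 * x4,
        -x3 * x8 + 1250000.0 + x3 * x5 - 2500.0 * x5,
    ]

# Best solution reported in the appendix (rounded in print):
x_best = (579.3066, 1359.9707, 5109.9707, 182.0177,
          295.601, 217.982, 286.165, 395.6012)
```

The objective at `x_best` reproduces the reported optimum 7049.248021 to the printed precision, and the first constraint is active there (its value is of the order of -1E-06), which is typical of this heat-exchanger problem.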
A.10. g11

Minimize

f(x) = x1^2 + (x2 - 1)^2

subject to

h(x) = x2 - x1^2 = 0

where -1 ≤ x1 ≤ 1 and -1 ≤ x2 ≤ 1. The optimum solution is x* = (±1/√2, 1/2), where f(x*) = 0.75.

A.11. g12

Maximize

f(x) = (100 - (x1 - 5)^2 - (x2 - 5)^2 - (x3 - 5)^2)/100

subject to

g(x) = (x1 - p)^2 + (x2 - q)^2 + (x3 - r)^2 - 0.0625 ≤ 0

where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, 2, ..., 9. A point is feasible if the constraint is satisfied for at least one combination of (p, q, r). The global maximum is at x* = (5, 5, 5), where f(x*) = 1.

Appendix B. Engineering design problems

B.1. Welded beam design

A welded beam is designed for minimum cost subject to constraints on the shear stress (τ), the bending stress in the beam (σ), the buckling load on the bar (Pc) and the end deflection of the beam (δ). There are four design variables: h(x1), l(x2), t(x3) and b(x4).

Minimize

f(x) = 1.10471x1^2 x2 + 0.04811x3 x4 (14.0 + x2)

subject to

g1(x) = τ(x) - τmax ≤ 0
g2(x) = σ(x) - σmax ≤ 0
g3(x) = x1 - x4 ≤ 0
g4(x) = 0.10471x1^2 + 0.04811x3 x4 (14.0 + x2) - 5.0 ≤ 0
g5(x) = 0.125 - x1 ≤ 0
g6(x) = δ(x) - δmax ≤ 0
g7(x) = P - Pc(x) ≤ 0

where P = 6000 lb, L = 14 in., δmax = 0.25 in., E = 30 × 10^6 psi, G = 12 × 10^6 psi, τmax = 13,600 psi, σmax = 30,000 psi and 0.1 ≤ xi ≤ 10.0 (i = 1, 2, 3, 4).

B.2. Pressure vessel design

In this problem, the objective is to minimize the total cost f(x), including the cost of the material, forming and welding. There are four design variables: Ts (x1, the thickness of the shell), Th (x2, the thickness of the head), R (x3, the inner radius) and L (x4, the length of the cylindrical section of the vessel, not including the head). Ts and Th are integer multiples of 0.0625 in., the available thicknesses of rolled steel plates; R and L are continuous variables.

Minimize

f(x) = 0.6224x1 x3 x4 + 1.7781x2 x3^2 + 3.1661x1^2 x4 + 19.84x1^2 x3

subject to

g1(x) = -x1 + 0.0193x3 ≤ 0
g2(x) = -x2 + 0.00954x3 ≤ 0
g3(x) = -π x3^2 x4 - (4/3)π x3^3 + 1,296,000 ≤ 0
g4(x) = x4 - 240 ≤ 0

B.3. Speed reducer design

The weight of the speed reducer is to be minimized; there are seven design variables.

Minimize

f(x) = 0.7854x1 x2^2 (3.3333x3^2 + 14.9334x3 - 43.0934) - 1.508x1 (x6^2 + x7^2) + 7.4777(x6^3 + x7^3) + 0.7854(x4 x6^2 + x5 x7^2)

subject to

g1(x) = 27/(x1 x2^2 x3) - 1 ≤ 0
g2(x) = 397.5/(x1 x2^2 x3^2) - 1 ≤ 0
g3(x) = 1.93x4^3/(x2 x3 x6^4) - 1 ≤ 0
g4(x) = 1.93x5^3/(x2 x3 x7^4) - 1 ≤ 0
g5(x) = √((745x4/(x2 x3))^2 + 16.9 × 10^6)/(110x6^3) - 1 ≤ 0
g6(x) = √((745x5/(x2 x3))^2 + 157.5 × 10^6)/(85x7^3) - 1 ≤ 0
g7(x) = x2 x3/40 - 1 ≤ 0
g8(x) = 5x2/x1 - 1 ≤ 0
g9(x) = x1/(12x2) - 1 ≤ 0
g10(x) = (1.5x6 + 1.9)/x4 - 1 ≤ 0
g11(x) = (1.1x7 + 1.9)/x5 - 1 ≤ 0

where 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9 and 5.0 ≤ x7 ≤ 5.5.
B.4. Three-bar truss design

Minimize

f(x) = (2√2 x1 + x2) · l

subject to

g1(x) = ((√2 x1 + x2)/(√2 x1^2 + 2x1 x2)) P - σ ≤ 0
g2(x) = (x2/(√2 x1^2 + 2x1 x2)) P - σ ≤ 0
g3(x) = (1/(√2 x2 + x1)) P - σ ≤ 0

where 0 ≤ x1 ≤ 1 and 0 ≤ x2 ≤ 1; l = 100 cm, P = 2 kN/cm^2, and σ = 2 kN/cm^2.

B.5. A tension/compression spring design

This problem minimizes the weight f(x) of a tension/compression spring subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The design variables are the mean coil diameter D(x2), the wire diameter d(x1) and the number of active coils P(x3).

Minimize

f(x) = (x3 + 2)x2 x1^2

subject to

g1(x) = 1 - x2^3 x3/(71785x1^4) ≤ 0
g2(x) = (4x2^2 - x1 x2)/(12566(x2 x1^3 - x1^4)) + 1/(5108x1^2) - 1 ≤ 0
g3(x) = 1 - 140.45x1/(x2^2 x3) ≤ 0
g4(x) = (x1 + x2)/1.5 - 1 ≤ 0

where 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, and 2 ≤ x3 ≤ 15.

References

[1] C.A.C. Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (2002) 1245–1287.
[2] Z. Michalewicz, K. Deb, M. Schmidt, T. Stidsen, Test-case generator for nonlinear continuous parameter optimization techniques, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 187–215.
[3] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.
[4] B. Tessema, G. Yen, A self adaptive penalty function based algorithm for constrained optimization, in: Proceedings 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 246–253.
[5] F.Z. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation 186 (1) (2007) 340–356.
[6] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2000) 311–338.
[7] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
[8] A. Amirjanov, The development of a changing range genetic algorithm, Computer Methods in Applied Mechanics and Engineering 195 (2006) 2495–2508.
[9] N. Mladenovic, M. Drazic, V. Kovacevic-Vujcic, M. Cangalovic, General variable neighborhood search for the continuous optimization, European Journal of Operational Research 191 (3) (2008) 753–770.
[10] E. Mezura-Montes, C.A.C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (1) (2005) 1–17.
[11] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on the simulation of social behavior, IEEE Transactions on Evolutionary Computation 7 (4) (2003) 386–396.
[12] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European Journal of Operational Research 185 (3) (2008) 1155–1173.
[13] Z. Cai, Y. Wang, A multiobjective optimization-based evolutionary algorithm for constrained optimization, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 658–675.
[14] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58–73.
[15] R.A. Krohling, L. dos Santos Coelho, Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 36 (6) (2006) 1407–1416.
[16] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for dynamic optimization, Information Sciences 178 (15) (2008) 3096–3109.
[17] L. dos Santos Coelho, C.-S. Lee, Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches, International Journal of Electrical Power & Energy Systems 30 (5) (2008) 297–307.
[18] M. Maitra, A. Chatterjee, A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding, Expert Systems with Applications 34 (2) (2008) 1341–1350.
[19] F.A. Guerra, L. dos S. Coelho, Multi-step ahead nonlinear identification of Lorenz's chaotic system using radial basis neural network with learning by clustering and particle swarm optimization, Chaos, Solitons & Fractals 35 (5) (2008) 967–979.
[20] R. Storn, K. Price, Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[21] A. Salman, A.P. Engelbrecht, M.G.H. Omran, Empirical analysis of self-adaptive differential evolution, European Journal of Operational Research 183 (2) (2007) 785–804.
[22] R. Storn, System design by constraint adaptation and differential evolution, IEEE Transactions on Evolutionary Computation 3 (1) (1999) 22–34.
[23] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195 (2006) 4303–4322.
[24] K. Zielinski, R. Laur, Constrained single-objective optimization using differential evolution, in: Proceedings 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 223–230.
[25] S. Kukkonen, J. Lampinen, Constrained real-parameter optimization with generalized differential evolution, in: Proceedings 2006 IEEE Congress on Evolutionary Computation, 2006, pp. 207–214.
[26] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2) (2007) 1407–1422.
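As a final illustration, the tension/compression spring model of Appendix B.5 translates directly into code. The function names and the trial point are ours; the point is deliberately non-optimal and is chosen only to show a feasible evaluation inside the bounds:

```python
def spring_objective(x):
    """Spring weight f(x) = (x3 + 2) * x2 * x1^2, with x1 = wire diameter d,
    x2 = mean coil diameter D, x3 = number of active coils."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """The four inequality constraints g_i(x) <= 0 from Appendix B.5."""
    d, D, N = x
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]

# A deliberately non-optimal but feasible trial point inside the bounds:
x_trial = (0.06, 0.5, 10.0)
```

All four constraints are satisfied at `x_trial`, so any constraint-handling scheme based on Deb's rule [6] would treat it as a feasible (if poor) candidate.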