Accepted Manuscript

Bound constraints handling in Differential Evolution: An experimental study

Rafał Biedrzycki, Jarosław Arabas, Dariusz Jagodziński

PII: S2210-6502(18)30154-8
DOI: 10.1016/j.swevo.2018.10.004
Reference: SWEVO 453

To appear in: Swarm and Evolutionary Computation

Received Date: 1 March 2018


Revised Date: 9 October 2018
Accepted Date: 11 October 2018

Please cite this article as: R. Biedrzycki, J. Arabas, D. Jagodziński, Bound constraints handling
in Differential Evolution: An experimental study, Swarm and Evolutionary Computation
(2018), doi: https://doi.org/10.1016/j.swevo.2018.10.004.

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to
our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and all
legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT

Bound constraints handling in Differential Evolution:
An experimental study

Rafał Biedrzycki∗, Jarosław Arabas, Dariusz Jagodziński
Institute of Computer Science, Warsaw University of Technology

Abstract
Bound constraints are lower and upper limits for coordinate values of feasible
solutions. This paper is devoted to the issue of handling bound constraints in
Differential Evolution (DE). We overview the majority of popular Bound Constraint
Handling Methods (BCHMs), which can be classified as penalty function
methods, repair methods and specialized mutation methods. We discuss and
empirically verify how BCHMs influence the pattern of individuals generated
by DE. We take 7 different DE algorithms and 17 different BCHMs, and per-
form experimental analysis of their combinations to characterize their efficiency
in exploitation and exploration. As a basis for the experiments, we take a
10-dimensional quadratic function with various locations of the minimum and
the CEC’2017 benchmark set, which defines 30 optimization problems in 10, 30, 50
and 100 dimensions. We observe that DE algorithms differ significantly in the
degree to which their efficiency depends on the choice of particular BCHMs.
We identify which BCHMs usually lead to efficient optimization for the major-
ity of DE algorithms. Similarly, we identify BCHMs which should definitely be
avoided.
Keywords: differential evolution, bound constraints handling
1. Introduction

Like many other optimization algorithms, Differential Evolution (DE) has
been formulated for unconstrained optimization in Rn , under the tacit assump-
tion that the whole space of real vectors is admissible. Yet in practice, it is
typical that certain combinations of parameter values are not admissible. In
particular, it often happens that the design parameter values have lower and
upper bounds, usually imposed by some physical limitations. Such constraints
are called bound constraints or box constraints. Numerous examples
of design problems with constraints can be found in the literature, e.g., in mechanical
and structural design problems [1, 2, 3, 4, 5, 6, 7] or in materials science
and technology [8, 9, 10, 11].

∗ Corresponding author
Email address: [email protected] (Rafal Biedrzycki)

Preprint submitted to Swarm and Evolutionary Computation October 9, 2018
Quite often, the choice of the method to handle such constraints is intuitive, yet there
is evidence [12] that different choices may significantly change the efficiency of
the optimization method.
Bound constraints also play an important, yet undervalued, role in the assessment
of the efficiency of global optimization methods via black-box benchmarking.
In the majority of optimization benchmark sets, including those used
by optimization competitions at the Congress on Evolutionary Computation
(CEC benchmark sets family — see [13] for an archive), Genetic and Evolution-
ary Computation Conference (COCO framework [14]) and others, it is usually
assumed that the set of feasible solutions is a hyperrectangle in Rn .
The competition organizers usually define the methodology of testing quite
precisely, but do not oblige the competition participants to use a particular
Bound Constraint Handling Method (BCHM). On the one hand, this policy is
quite reasonable, since it may easily be predicted that each optimization algorithm
may achieve its highest efficiency when coupled with a specific BCHM. On
the other hand, the competition organizers do not stress the need to publish
precise information about the BCHMs used by algorithms in the competition.
Therefore, reports on competition results usually contain only types and names
of algorithms, with information about the BCHMs used missing. Consequently,
assessment of the superiority of one algorithm over another may be based on
incomplete information about the subjects being compared. This issue has been
illustrated in [12], where the authors recomputed the results of the DE/rand/1/bin
algorithm taking part in the CEC’2005 competition, assuming several different BCHMs.
They have shown that the results obtained by the same algorithm coupled with
different BCHMs are significantly different, so that the DE/rand/1/bin algo-
rithm would have been ranked at different positions in the CEC’2005 competi-
tion when the BCHM differed from that originally used.


Since the first publications [15, 16] which introduced DE/rand and DE/best,
with binomial or exponential crossover, DE has received increasing attention,
thanks to the efficiency of the method and simplicity of the basic idea. Numerous
modifications have been introduced, which have been thoroughly discussed in
several overview papers [17, 18, 19].
In comparison to the original concepts, modern algorithms from the DE
family include many additional mechanisms, which have made them more ef-
ficient in global optimization. These modifications include new mutation and
crossover schemes, adaptation of the scaling factor and crossover rate, usage of
past individuals archives, probabilistic mixing of different mutation schemes and
adapting their probabilities, changes of selection schemes, combinations with lo-
cal search, various hybrids with other metaheuristics, especially CMA-ES [20],
and many others. The development has been reinforced by black box optimiza-
tion competitions whose results have indicated that algorithms from the DE
family are efficient global optimizers. In effect, contemporary DE algorithms
include many modifications to the original DE schemes.
The volume of articles on interactions between DE and constraint handling is
relatively small in comparison to all the literature on DE. Authors usually study
a specific, single representative algorithm from the DE family. Therefore, we
found it necessary to consider a broader spectrum of BCHMs and DE algorithms,
and to investigate the efficiency of their compositions.
The article is organized as follows. Section 2 introduces the con-
strained optimization problem and general ideas on how to handle constraints
in evolutionary computation. Then, we briefly overview previous research on
DE and BCHMs. Among the many papers in which the efficiency of optimiza-
tion algorithms is experimentally verified using bound constrained benchmark
problems, only a few contain a discussion of the choice of BCHM. Other papers use
some well-known methods, among which the Lamarckian repair by projection
is quite frequent due to its simplicity of implementation.
In Section 3 we define the set of 17 BCHMs which are investigated in the
experimental study, and we comment on different concepts of hybridization of
the original DE algorithm with BCHMs. We divide the BCHMs under consideration
into two major groups: methods that modify the distributions of mutants
and methods that modify the fitness function. Their properties are discussed in
Sections 4 and 5. Section 6 illustrates the distribution of individuals generated
by DE/rand/1/bin coupled with different BCHMs for a quadratic fitness function
in 2 dimensions. It can be observed that individuals generated in consecutive
populations group according to very different patterns, depending on the choice
of a particular BCHM. In Section 7 we evaluate the efficiency of 7 DE algo-
rithms combined with 17 BCHMs. We report on the results of two types of
tests. The first type is devoted to the efficiency of local optimization. A family
of optimization tasks is considered, where the objective function is quadratic
and takes its optimum in various places of the feasible area, starting from its
midpoint up to the corner of the feasible area boundary. The second group of
tests is aimed at characterization of the global optimization efficiency and is
based on the CEC’2017 benchmark set from 10 up to 100 dimensions.


Section 8 summarizes the observations and concludes the paper. The article
shows that some widely-used BCHMs are biased and when coupled with an
optimization method, may lead to false general conclusions about the efficiency
of that method. The paper also gives some advice about which BCHM to choose.

2. Optimization with bound constraints in Differential Evolution


Consider a search space X, a subset of the search space F ⊂ X and a fitness
function q : F → R. Constrained optimization is defined as the problem of
searching for a point x∗ such that
x∗ = arg min_{x ∈ F} q(x)        (1)

The set F is called the feasible set. Usually F is defined as the set of all points
from X that satisfy the following feasibility conditions:
gi (x) ≤ 0,   i = 1, . . . , ng        (2)
hi (x) = 0,   i = 1, . . . , nh        (3)


Functions gi and hi are called the inequality constraints and equality constraints,
respectively.
Bound constraints are a special case of inequality constraints. It is assumed
that the search space is the set of n-dimensional real vectors, i.e., X = Rn , and
for each dimension there exist lower and upper bounds for the vector values:

li ≤ xi ≤ ui , i = 1, . . . , n (4)

Hence, the feasible area is the n-dimensional hyperrectangle F = [l, u].
Michalewicz and Schoenauer [21] identify several classes of approaches to
handle constraints in evolutionary computation. They mention methods based
on: a) decoders, b) preservation of individuals’ feasibility, c) penalty functions,
d) distinction between feasible and infeasible solutions and e) hybrid methods.
Eiben and Smith [22] divide constraint handling methods into direct, where
only feasible individuals are evaluated, and indirect, where the fitness function is
expanded over the whole domain so that both feasible and infeasible individ-
uals can be evaluated. In the indirect type of constraint handling, Eiben and
Smith mention only penalty function-based approaches, whereas among the di-
rect constraint handling methods they distinguish techniques based on elimina-
tion, repair, special operators and decoders. A continuously updated literature
overview on constraint handling methods is maintained by Coello Coello [23].
Here we consider three types of approaches:
• Penalty functions: In this approach an expanded fitness function qe :
X → R is explicitly defined for the whole domain X, including infeasible
individuals. A general idea is to define qe in such a way that optimiza-
tion methods are encouraged to navigate into the feasible area. Also, it
is desirable that the fitness functions q and qe take their corresponding
minima, in particular global minima, for the same arguments from F.


In evolutionary computation, the most popular implementation of the
penalty function technique is to define qe as
qe (x) = { q(x),          x ∈ F
         { q(x) + p(x),   x ∉ F        (5)

It is assumed that q is properly defined outside the feasible area. The
function p : X → R, called the penalty function, is usually defined as


p(x) = Σ_{i : gi (x) > 0} φ(gi (x)) + Σ_{i : hi (x) ≠ 0} φ(hi (x))        (6)

A popular choice is to define φ as a quadratic function, i.e., φ(z) = az²,
where a is a parameter.
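For bound constraints, the combination of (5) and (6) with a quadratic φ can be sketched in a few lines of Python; the function name and the default weight a = 100 are our illustrative choices, not the paper's:

```python
import numpy as np

def expanded_fitness(q, x, l, u, a=100.0):
    """Additive quadratic penalty for bound constraints, eqs. (5) and (6):
    phi(z) = a * z**2 is applied to every violated bound."""
    viol_low = np.maximum(l - x, 0.0)   # amount by which lower bounds are violated
    viol_up = np.maximum(x - u, 0.0)    # amount by which upper bounds are violated
    penalty = a * (np.sum(viol_low**2) + np.sum(viol_up**2))
    return q(x) + penalty               # assumes q is defined outside [l, u] as well
```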
• Repair methods: A repair method is a mapping r : X → F which assigns
a feasible individual to an infeasible one. After an infeasible offspring x
has been generated and its repaired version r(x) has been found, two sce-
narios are possible. According to the first scenario, the point r(x) replaces
x in the offspring population and undergoes the selection mechanism. In
evolutionary computation this mode of operation is called the Lamarck-
ian evolution and fits into the “preservation of individuals’ feasibility”
approach to constraint handling. The second scenario — the Darwinian
evolution with the Baldwin effect — assigns the infeasible offspring the
fitness value of the repaired one. This scenario expands the definition
of the fitness function over the whole domain, so it can be regarded as an
example of “indirect constraint handling”, since the algorithm maintains
infeasible solutions. The method implicitly defines the expanded fitness
function qe : X → R in the following fashion:
qe (x) = { q(x),       x ∈ F
         { q(r(x)),    x ∉ F        (7)
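The difference between the two repair scenarios can be illustrated with a small Python sketch (ours; q is the fitness function and repair stands for the mapping r):

```python
def lamarckian_step(o, q, repair):
    """Lamarckian evolution: the repaired point replaces the offspring."""
    o_rep = repair(o)
    return o_rep, q(o_rep)

def darwinian_step(o, q, repair):
    """Darwinian evolution with the Baldwin effect, eq. (7): the infeasible
    offspring is kept, but evaluated at its repaired position."""
    return o, q(repair(o))
```

In both cases the fitness is that of the repaired point; the scenarios differ only in which point survives in the population.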

• Feasibility preserving genetic operators: This technique assumes
that the genetic operators are defined in such a fashion that they will al-
ways produce feasible individuals, provided that the population does con-
tain feasible ones. This effect can be achieved explicitly, e.g., by defining
a mutation distribution whose support is contained in the feasible area,
or implicitly, by a procedure that assigns a feasible point to an infeasible
one. A similar effect can be achieved by application of a repair method to
an infeasible individual, according to the Lamarckian evolution concept.
In this paper we treat as Lamarckian repair all the methods whose result
is uniquely defined by the position of the repaired point and by the constraints.
We shall speak of feasibility-preserving operators
when the result of the procedure that reintroduces feasibility depends also
on the current population.

Historically, the first ideas to handle constraints in DE are based on the
penalty approach. For example, Storn and Price applied an additive linear
penalty [15] and Lampinen and Zelinka [24] applied a static penalty function
in which fitness of infeasible individuals was the sum of the original fitness and
a penalty term. Storn proposed that the feasible area should be expanded by
relaxing the constraints, and that the relaxation would vanish gradually so that
the penalty term would become increasingly restrictive [25].


The penalty approach admits that infeasible individuals may be maintained
in populations. Researchers found it ineffective, since it may happen that the


value of the expanded fitness of an infeasible individual is superior to the fitness
of a feasible one. In other words, expanded fitness function may have minima
outside the feasible area. This gave an incentive to enrich the penalty approach
by introducing modified selection schemes. In [26] Deb proposed a scheme in
which, when the current target individual and the candidate offspring are fea-
sible, selection is based on their fitness values. If one of them is feasible and
the other is not, the feasible one survives. When both are infeasible, selection
prefers the individual which is ‘less infeasible’, e.g., violates a smaller number
of constraints or is less distant from the feasible area. A similar idea was also
proposed by Mezura-Montes [27]. Note that in DE this functionality can be
achieved by a penalty function that will increase with distance from the feasible
area (or with the number of violated constraints) and will guarantee that the
fitness of every infeasible point is greater (in the case of minimization) than any
feasible one. An example of such a penalty function can be found in [28].

An alternative line of research has concentrated on procedures that ensure
feasibility of offspring individuals by repairing them after they go outside the
feasible area. Such procedures are relatively easy to formulate for bound con-
straints since the repairing can be performed coordinate-wise. For example, in
JADE [29] Zhang and Sanderson use a repair method that changes each infea-
sible coordinate value by replacing it with the mean value between coordinates
of the violated constraint and the target individual. Brest et al. [30] repeat
the mutation procedure until a feasible individual is generated.

Several papers summarize possible repair methods and test their efficiency
when coupled with DE. Usually, the comparison is based on experiments with a
set of test functions, either designed by the authors or introduced for the purpose
of competitions in black box benchmarking. Quite often, only one type of DE
algorithm is tested in the comparison, with the unspoken assumption that the rela-
tionship between the efficiency of BCHMs would be preserved when switching
to other types of algorithms from the DE family. Such experimental compar-
ison papers were published for DE/rand/1/bin [12], DE/target-to-best/1/bin
[31] and DE/best/1/bin [32].
Contemporary DE algorithms, e.g. SADE [33] and jDE [34], often use en-
sembles of operators which are selected randomly with probabilities based on


the history of their successful applications. It has been suggested that a similar
approach can be applied to bound constraint handling. For example, Mallipeddi
and Suganthan [35] define a DE algorithm that uses a family of repair methods,
together with probability values to select between them. Probability values are
adapted on the basis of their history of successful repairs, i.e., repairs which
have resulted in the fitness improvement of the offspring. Zhang and Zhang
[36] have introduced an archive of differential vectors that yield a mutant whose
fitness is superior to its parent. These archive vectors are used instead of the
original difference vector when it would lead to an infeasible offspring.
3. Overview of Bound Constraint Handling Methods in Differential


Evolution
In subsequent discussion we will refer to the following general description
of DE, which is presented as pseudocode in Fig. 1. The algorithm maintains
a population P that contains Np individuals which are vectors from Rn , and the
Np -dimensional vector of fitness values. In each iteration, for each individual
xi ∈ P , called the target individual, a candidate offspring oi is created. The
offspring is generated either by a specific set of operators motivated by the
DE, or according to the typical DE pattern. In the latter case,
a sequence of operations applies: a base vector bi is selected and a number
of individuals are picked from P to form the set Di , which is used to define
the difference vector. This vector is scaled and added to bi to produce the

PT
mutant mi . When the mutant is infeasible, it can be repaired according to
the Lamarckian evolution model. The mutant is crossed over with xi to yield
the offspring oi . Instead of Lamarckian repair, the infeasible offspring can be
repaired according to the Darwinian evolution model, or its fitness value can be
computed with the use of the penalty approach.
When the fitness value of the offspring q′i is no greater than the fitness value
of the target individual qi , the target individual is replaced with the offspring
— otherwise it remains unchanged for the next iteration.

Further in the text, we assume that the population P is initialized with ran-
domly generated individuals which are distributed uniformly in the admissible
area. Some other initialization methods, e.g. generation of individuals after pre-
liminary identification of the attraction basins [37], are also possible. It should
be stressed, however, that interactions between BCHMs and DE do not depend
on a particular initialization procedure.

3.1. Repair methods


The literature identifies a number of BCHMs, which are briefly summarized
in the following list. Usually the authors assume that the mapping
r : X → F that reintroduces feasibility changes only those coordinates of the
infeasible point where the bounds have been exceeded. In other words, the re-
pair is made in a coordinate-wise fashion. Examples of this technique are listed
below, assuming that m denotes the mutant. The symbol j used in formulas
stands for the coordinate index and spans the range from 1 to n.
• reinitialization:
r(mj ) = { mj ,   lj ≤ mj ≤ uj
         { ξ,     mj < lj or mj > uj        (8)
where ξ is a random variable used for the initialization of population P .
A typical choice, considered here, is the uniform distribution of ξ in the
interval [lj , uj ].
• projection:

r(mj ) = { mj ,   lj ≤ mj ≤ uj
         { lj ,   mj < lj                   (9)
         { uj ,   mj > uj

• reflection:

r(mj ) = { mj ,         lj ≤ mj ≤ uj
         { 2lj − mj ,   mj < lj             (10)
         { 2uj − mj ,   mj > uj


Initialize parameters: scaling factor F , population size Np , crossover rate Cr
Initialize population P = x1 , . . . , xNp
Initialize vector of fitness values q = [q(x1 ), . . . , q(xNp )]
while stop condition not met do
    for all i ∈ {1, 2, ..., Np } do
        if feasibility preservation then
            mi ← feasibility preserving mutation (P, i)
        else
            bi ← select base vector from P
            Di ← sample points from P
            mi ← differential mutation (bi , Di )
            if Lamarckian repair then
                mi ← r(mi )
            end if
        end if
        oi ← crossover (xi , mi )
        if Darwinian repair then
            q′i ← q(r(oi ))
        else if apply penalty then
            q′i ← qe (oi )
        else
            q′i ← q(oi )
        end if
    end for
    for all i ∈ {1, 2, ..., Np } do
        if q′i ≤ qi then
            xi ← oi , qi ← q′i
        end if
    end for
end while

Figure 1: Pseudocode of Differential Evolution coupled with examined BCHMs
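The loop of Fig. 1 can be rendered as a minimal Python sketch (ours). Only the Lamarckian-repair branch is included, and selection is done in place rather than in the synchronous second loop of the figure:

```python
import numpy as np

def de_rand_1_bin(q, l, u, repair, Np=30, F=0.9, Cr=0.9, max_iter=200, seed=0):
    """Minimal DE/rand/1/bin with a pluggable Lamarckian repair hook.
    The penalty and Darwinian branches of Fig. 1 are omitted for brevity."""
    rng = np.random.default_rng(seed)
    n = l.size
    P = rng.uniform(l, u, size=(Np, n))          # uniform initialization in [l, u]
    fit = np.array([q(x) for x in P])
    for _ in range(max_iter):
        for i in range(Np):
            r1, r2, r3 = rng.choice([k for k in range(Np) if k != i],
                                    size=3, replace=False)
            m = P[r1] + F * (P[r2] - P[r3])      # differential mutation
            m = repair(m, l, u)                  # Lamarckian repair of the mutant
            mask = rng.random(n) < Cr            # binomial crossover ...
            mask[rng.integers(n)] = True         # ... with one forced mutant coordinate
            o = np.where(mask, m, P[i])
            fo = q(o)
            if fo <= fit[i]:                     # greedy one-to-one selection
                P[i], fit[i] = o, fo
    return P[np.argmin(fit)], float(fit.min())
```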


• wrapping:

r(mj ) = { mj ,             lj ≤ mj ≤ uj
         { uj + mj − lj ,   mj < lj         (11)
         { lj + mj − uj ,   mj > uj

Formulas (10) and (11), which define repair by reflection and wrapping, may
produce a value which is still out of bounds. In this case, the appropriate
formula is then repeated until a feasible value is eventually found.
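A combined Python sketch of the four coordinate-wise repairs (8)-(11), including the repetition used by reflection and wrapping, might look as follows (function and parameter names are ours):

```python
import numpy as np

def repair(m, l, u, method="projection", rng=None):
    """Coordinate-wise repair of a mutant m against bounds [l, u], eqs. (8)-(11)."""
    rng = rng or np.random.default_rng()
    m = np.array(m, dtype=float)
    for j in range(m.size):
        if l[j] <= m[j] <= u[j]:
            continue
        if method == "reinitialization":
            m[j] = rng.uniform(l[j], u[j])
        elif method == "projection":
            m[j] = l[j] if m[j] < l[j] else u[j]
        elif method == "reflection":
            while m[j] < l[j] or m[j] > u[j]:   # may overshoot the other bound
                m[j] = 2*l[j] - m[j] if m[j] < l[j] else 2*u[j] - m[j]
        elif method == "wrapping":
            while m[j] < l[j] or m[j] > u[j]:
                m[j] = u[j] + m[j] - l[j] if m[j] < l[j] else l[j] + m[j] - u[j]
    return m
```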
The last repair method in our overview does not operate in a coordinate-wise
fashion:

• projection to midpoint:
The method is inspired by the projection technique suggested by [38] and
in the context of DE it has been considered in [31] under the name “scaled
mutant”. In contrast to the coordinate-wise repair, the procedure here
results in the projection of infeasible individuals on the boundary, when
the projection direction goes towards the midpoint of the feasible area.
r(m) = (1 − α) · (l + u)/2 + α · m (12)

where α ∈ [0, 1] is the largest value for which it holds that, for all j = 1, . . . , n,

lj ≤ r(mj ) ≤ uj        (13)
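The largest admissible α in (12)-(13) can be obtained from per-coordinate ratios, as in the following sketch (ours):

```python
import numpy as np

def project_to_midpoint(m, l, u):
    """Projection of an infeasible mutant towards the midpoint of the feasible
    hyperrectangle, eqs. (12)-(13)."""
    c = (l + u) / 2.0                  # midpoint of the feasible area
    alpha = 1.0
    for j in range(m.size):
        if m[j] > u[j]:
            alpha = min(alpha, (u[j] - c[j]) / (m[j] - c[j]))
        elif m[j] < l[j]:
            alpha = min(alpha, (l[j] - c[j]) / (m[j] - c[j]))
    return (1.0 - alpha) * c + alpha * m
```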

3.2. Penalty functions


Several types of penalty functions have been investigated in the context
of evolutionary computation and DE. Among them, two representatives are the
most popular: the death penalty and the quadratic penalty. The third approach,
which we call here the substitution penalty, has recently been introduced in the
context of the DES algorithm [28]. In the description below o denotes the
infeasible individual and qe is the expanded fitness function.
• Death penalty: the infeasible individual is assigned an arbitrarily large
constant that exceeds the fitness of any feasible individual:

qe (o) = Q   s.t.   ∀y ∈ F : q(y) < Q        (14)


In effect, the infeasible individual will be rejected during the selection
phase. Note that the fitness value is not computed for infeasible points,
which decreases the total number of evaluations.


• Additive quadratic penalty: the fitness of the infeasible individual is a
sum of the fitness of the repaired individual and the squared values of
constraint violations.
 
qe (o) = q(r(o)) + α · [ Σ_{j : oj < lj } (lj − oj )² + Σ_{j : oj > uj } (oj − uj )² ]        (15)


Typically, the penalty function formula uses the original position of the
individual rather than the repaired one, as is defined in (5). Yet the fitness
function may not be properly defined outside the feasible area. Therefore,
it is safer to compute its value for the repaired individual.
The penalty of infeasible individuals increases along with the distance
from the feasible area, which provides information that may be useful to
guide the population towards F. Proper tuning of the parameter α that
weights the penalty term is problem-dependent, which is a limitation of
the approach.
• Substitution quadratic penalty: this approach is a combination of the first
two. For infeasible individuals the original fitness value is not computed,
which saves the number of fitness evaluations. Instead, they are assigned
fitness values which are sums of the squared distance from the exceeded
bounds and a large value which exceeds the fitness of all feasible individuals:

qe (o) = Q + Σ_{j : oj < lj } (lj − oj )² + Σ_{j : oj > uj } (oj − uj )²        (16)

where

∀y ∈ F : q(y) < Q        (17)


Note that this type of penalty function approach acts similarly to the
selection method suggested by Deb [26]: a) an infeasible offspring will be
accepted only if the corresponding target individual is also infeasible and
violates constraints to a higher degree than the offspring; b) a feasible
offspring will be accepted when the corresponding target individual is
infeasible or when it is feasible but its fitness is worse.
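A minimal sketch of the substitution quadratic penalty (16)-(17) in Python (ours; the constant Q = 10^12 is an illustrative choice that is assumed to dominate the fitness of every feasible point):

```python
import numpy as np

def substitution_penalty(q, o, l, u, Q=1e12):
    """Substitution quadratic penalty, eqs. (16)-(17): infeasible points are not
    evaluated; they receive Q plus their squared bound violations."""
    viol_low = np.maximum(l - o, 0.0)
    viol_up = np.maximum(o - u, 0.0)
    if np.any(viol_low > 0.0) or np.any(viol_up > 0.0):
        return Q + float(np.sum(viol_low**2) + np.sum(viol_up**2))
    return q(o)   # q is called only for feasible points
```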
3.3. Feasibility preserving mutations


Bound constraints define the feasible area as an n-dimensional hyperrectan-
gle, which is a convex subset of Rn . For this reason, if the mutant is feasible then
crossover between the mutant and the feasible target vector will always produce
a feasible offspring. Hence, in the context of bound constraints, the discussion
of feasibility-preserving operators can be reduced to feasibility-preserving
mutations.
Literature on DE provides definitions of several specific versions of the dif-
ferential mutation, which guarantee feasibility of the resulting mutant. We use
the notation from Fig. 1 but the index i is omitted for brevity, i.e., with x, b, D
we denote the target individual, the base individual and the set of individuals
to define the mutation vector; m is the mutant.
Perhaps the most straightforward feasibility-preserving method is resam-
pling. It is based on repeating the selection of b and D, such that the result of
the differential mutation of b will eventually become feasible.
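A capped resampling loop with a projection fallback, in the spirit described above, might be sketched as follows (ours, for the DE/rand/1 mutation):

```python
import numpy as np

def resample_mutation(P, i, l, u, F=0.9, max_trials=100, rng=None):
    """Resampling: re-draw the base and difference vectors until the mutant is
    feasible; after max_trials, fall back to a Lamarckian repair (projection)."""
    rng = rng or np.random.default_rng()
    Np = P.shape[0]
    for _ in range(max_trials):
        r1, r2, r3 = rng.choice([k for k in range(Np) if k != i],
                                size=3, replace=False)
        m = P[r1] + F * (P[r2] - P[r3])
        if np.all((l <= m) & (m <= u)):
            return m
    return np.clip(m, l, u)   # practical fallback: a Lamarckian repair
```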
All other methods in this review are defined in a fashion similar to the
Lamarckian repair. First, an individual is generated with a regular differential
mutation. If it is infeasible then its position will be modified with the mapping
r : Π × X → F, where Π denotes the space of all population states. Similarly
to the Lamarckian repair methods, the mapping r usually modifies only those
coordinate values which violate bounds, with the exception of the projection-to-base
method discussed at the end of this subsection.
Below we overview methods which will be tested in the experimental part of
the article. Methods from the first group alter only the coordinate values which
exceed the bounds; coordinates are numbered with an index j ranging from 1
to n:

• Rand base: a random combination of the base individual and the violated
constraint is applied instead of the infeasible coordinate

r(mj ) = { mj ,            lj ≤ mj ≤ uj
         { U (lj , bj ),   mj < lj          (18)
         { U (bj , uj ),   mj > uj

where U (α, β) denotes the uniform random variate from the range [α, β].

• Midpoint base: the average of the base individual and the violated con-
straint replaces the infeasible coordinate

r(mj ) = { mj ,             lj ≤ mj ≤ uj
         { (lj + bj )/2,    mj < lj         (19)
         { (bj + uj )/2,    mj > uj

• Midpoint target: the average is taken between the target individual and
the violated constraint

r(mj ) = { mj ,             lj ≤ mj ≤ uj
         { (lj + xj )/2,    mj < lj         (20)
         { (xj + uj )/2,    mj > uj
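The three variants (18)-(20) differ only in the reference individual used next to the violated bound; a combined sketch (ours, with illustrative names):

```python
import numpy as np

def repair_with_reference(m, b, x, l, u, method="midpoint_base", rng=None):
    """Coordinate-wise feasibility-preserving repairs, eqs. (18)-(20):
    'rand_base' and 'midpoint_base' use the base individual b,
    'midpoint_target' uses the target individual x."""
    rng = rng or np.random.default_rng()
    m = np.array(m, dtype=float)
    for j in range(m.size):
        if l[j] <= m[j] <= u[j]:
            continue
        bound = l[j] if m[j] < l[j] else u[j]    # the violated bound
        if method == "rand_base":
            lo, hi = sorted((bound, b[j]))
            m[j] = rng.uniform(lo, hi)
        elif method == "midpoint_base":
            m[j] = (bound + b[j]) / 2.0
        elif method == "midpoint_target":
            m[j] = (bound + x[j]) / 2.0
    return m
```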


Methods from the second group apply changes to all coordinates of infeasible
points:

• Resampling: based on repeating a selection of b and D, such that the
result of the differential mutation of b will eventually become feasible.


In many-dimensional spaces this procedure was reported to be inefficient,
since the number of unsuccessful trials tended to increase with the space
dimension [12]. A practical solution to this issue is to apply resampling for
a predefined number of times, and if no feasible individual is generated,
a Lamarckian repair method is applied.
• Conservative: the infeasible mutant is replaced by the base individual
r(m) = { m,   m ∈ F
       { b,   m ∉ F        (21)


• Projection to base: the method is inspired by the “projection to midpoint”
repair. It projects the infeasible individual onto the boundary in the
direction towards the base individual b:

r(m) = (1 − α) · b + α · m (22)

where α ∈ [0, 1] is the largest value for which it holds that, for all j = 1, . . . , n,

lj ≤ r(mj ) ≤ uj (23)
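As for projection to midpoint, the largest admissible α in (22)-(23) follows from per-coordinate ratios; note that forcing α = 0 recovers the conservative rule (21). A sketch (ours), assuming the base individual b is feasible:

```python
import numpy as np

def project_to_base(m, b, l, u):
    """Projection of an infeasible mutant towards the (feasible) base
    individual b, eqs. (22)-(23)."""
    alpha = 1.0
    for j in range(m.size):
        if m[j] > u[j]:
            alpha = min(alpha, (u[j] - b[j]) / (m[j] - b[j]))
        elif m[j] < l[j]:
            alpha = min(alpha, (l[j] - b[j]) / (m[j] - b[j]))
    return (1.0 - alpha) * b + alpha * m
```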

4. Changes of mutant distribution introduced by bound constraint
handling

Every optimization algorithm introduces its own characteristic search pattern
that defines the locations of individuals sampled from the search space in
consecutive iterations. For DE, its search pattern has been characterized in
[39, 40]. According to these findings, when the differential mutation is applied,
the covariance matrix of mutants depends linearly on the covariance matrix of
the parental population. Mutants of a certain base individual are symmetrically
distributed around that individual.
In this section, we analyze Lamarckian repair methods and feasibility-preserving
mutations. Both groups of methods explicitly modify the distribution of mu-
tants. We characterize this modification by illustrating the probability density
functions of repaired mutants and by analyzing changes of their expected value
and variance. For the sake of simplicity, the analysis is performed in one dimension,
for the DE/rand/1 algorithm, assuming no crossover and the scaling
factor F = 0.9. The admissible area is the range [−1, 1]. For the sake of brevity,
we shall use the name CDE (Classical DE) to refer to the DE/rand/1/bin algo-
rithm.
We investigate a population of Np = 10^7 individuals, which is normally
distributed with expectation mP = 0.7 and variance vP = 0.01, i.e., the popula-
tion midpoint is located near the upper bound of the feasible area. According
to [39, 40], the offspring population should be distributed with expectation mP
and variance (1 + 2F²) · vP = 0.026. Fig. 2 presents histograms of offspring
individuals for four repair methods: reinitialization, projection, reflection and
wrapping, assuming that the repaired individual becomes the offspring in place
of the infeasible one. In Fig. 3 we plot similar histograms for 5 feasibility-preserving
differential mutation types: resampling, rand base, midpoint base,
midpoint target and conservative. In one dimension the projection towards the
midpoint of the feasible area or to the base individual yields identical pictures
to the simple projection method. Therefore, we decided to omit them from the
plot.
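The one-dimensional setting described above can be reproduced with a short Monte Carlo sketch (ours; 10^6 samples instead of 10^7 to keep the run fast):

```python
import numpy as np

# Parents ~ N(0.7, 0.01) inside the feasible range [-1, 1]; DE/rand/1, F = 0.9.
rng = np.random.default_rng(1)
Np, F, l, u = 10**6, 0.9, -1.0, 1.0
parents = rng.normal(0.7, 0.1, size=Np)              # standard deviation 0.1
r1, r2, r3 = (rng.integers(Np, size=Np) for _ in range(3))
mutants = parents[r1] + F * (parents[r2] - parents[r3])
# Without constraints the variance should be (1 + 2*F**2) * 0.01 = 0.026.
print("mutant variance:", mutants.var())
projected = np.clip(mutants, l, u)                   # repair by projection
print("mass projected onto the upper bound:", np.mean(mutants > u))
```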
[Figure 2: Histograms of mutants prior to and after repairing. Panels: population, no constraints, reinitialization, projection, reflection, wrapping; horizontal axis x, vertical axis count.]

Observe that repair by projection generates a number of points on the upper
bound, which can be seen as a high peak on the histogram. Application of
wrapping produces values which span a range starting from the lower bound.
When reinitialization is used, the histogram of generated values goes up a little,
since all infeasible values are distributed uniformly randomly in the feasible
range. Repair by reflection raises the histogram values near the upper bound.
Resampling trims the histogram of mutants to the feasible area. Changes introduced by the conservative method are hardly visible: the method slightly increases the histogram of mutants around the midpoint of the parents' population. Rand base, midpoint target and midpoint base increase the histogram in the range between the parents' population midpoint and the upper bound.
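The feasibility-preserving rules discussed here can be sketched as follows. This is our formulation of the commonly used "bounce-back"-style operators, and the paper's exact definitions may differ in detail; the reference point below is the base individual, and the midpoint target variant is obtained by substituting the target individual.

```python
import numpy as np

LO, HI = -1.0, 1.0
rng = np.random.default_rng(0)

def violated_bound(v):
    """Per coordinate: the crossed bound, or NaN where v is feasible."""
    b = np.full_like(v, np.nan, dtype=float)
    b[v > HI] = HI
    b[v < LO] = LO
    return b

def rand_base(v, base):
    """Infeasible coordinate -> uniform point between base and the bound."""
    b, out = violated_bound(v), v.astype(float).copy()
    bad = ~np.isnan(b)
    out[bad] = base[bad] + rng.uniform(size=bad.sum()) * (b[bad] - base[bad])
    return out

def midpoint_base(v, base):
    """Infeasible coordinate -> midpoint between base and the bound.
    (midpoint_target is identical, with the target individual as reference)"""
    b, out = violated_bound(v), v.astype(float).copy()
    bad = ~np.isnan(b)
    out[bad] = 0.5 * (base[bad] + b[bad])
    return out

def conservative(v, base):
    """Replace each infeasible coordinate by the base coordinate."""
    bad = (v < LO) | (v > HI)
    out = v.astype(float).copy()
    out[bad] = base[bad]
    return out

def resample(make_mutant, max_trials=100):
    """Redraw mutants until feasible; fall back to projection (cf. Sec. 6)."""
    for _ in range(max_trials):
        v = make_mutant()
        if np.all((v >= LO) & (v <= HI)):
            return v
    return np.clip(v, LO, HI)
```

Each rule keeps the offspring inside [LO, HI] by construction, which is exactly what distinguishes this group from the penalty and Darwinian-repair approaches.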
Differences between mutant distributions without constraints and with BCHMs, which are illustrated in Fig. 2 and 3, depend on the current position of the parental population in the search space, on the mutation type and on the DE parameters. Plots in Fig. 4 illustrate how the expected value and variance of mutants change after applying a specific feasibility-preserving mutation or repair method. We compute the mean value and the variance for each analyzed method and for the unconstrained case, and report the difference between the mean values of mutants with and without BCHM, and the ratio of the variance values with and without BCHM.

Figure 3: Histograms of individuals generated by feasibility-preserving mutations (panels: resampling, rand base, midpoint base, midpoint target, conservative; counts vs. x).

We perform this procedure assuming that the
population of parents is distributed normally with variance vP = 0.125 and expectation mP, which varies from the lower to the upper bound of the feasible area, namely, the interval [−1, 1]. The scaling factor value was F = 0.9.
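The procedure behind Fig. 4 can be approximated by Monte Carlo sampling; the sketch below (our illustration, for repair by projection only) estimates the mean shift and variance ratio for a given population midpoint m_P.

```python
import numpy as np

LO, HI, F, V_P = -1.0, 1.0, 0.9, 0.125
rng = np.random.default_rng(7)

def moments_after_projection(m_P, n=200_000):
    """Mean shift and variance ratio of projected DE/rand/1 mutants
    drawn from a normal parent population N(m_P, V_P)."""
    x1, x2, x3 = rng.normal(m_P, np.sqrt(V_P), size=(3, n))
    v = x1 + F * (x2 - x3)        # unconstrained mutants
    w = np.clip(v, LO, HI)        # Lamarckian projection
    return w.mean() - v.mean(), w.var() / v.var()

shift_mid, ratio_mid = moments_after_projection(0.0)   # midpoint of range
shift_hi, ratio_hi = moments_after_projection(0.9)     # near upper bound
```

Near the bound, the mean is pulled towards the middle of the range (shift_hi is negative) and the variance ratio drops well below its value at the midpoint, matching the projection panel of Fig. 4.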
We can observe that the expectation and variance of mutants gradually change as the population midpoint approaches a bound. In all cases analyzed, the mutants' expectation is shifted towards the midpoint of the feasible area. In most cases, the mutants' variance is also reduced, with the exception of reinitialization and wrapping, where the variance increases. The degree of change to the mutants' mean and variance grows as the parents' population approaches the lower or upper bound.

The most significant changes to the mean and variance of mutants are introduced by reinitialization and by wrapping. These are also the only methods which increase the mutants' variance. The smallest change in the mutants' mean and variance is observed with the conservative method. All other methods introduce roughly similar changes to the mean and variance of mutants.

Figure 4: Plots of the difference between the mean value of mutants with and without BCHM, and of the ratio of the variance values with and without BCHM, versus the population midpoint position, for various repair methods (reinitialization, projection, reflection, wrapping) and feasibility-preserving operators (resampling, rand base, midpoint base, midpoint target, conservative).

5. Changes of fitness values introduced by constraint handling


Feasibility-preserving mutation and Lamarckian repair introduce changes to the mutants' distribution and in this way change the search pattern of DE. Penalty functions and Darwinian repair affect only the fitness function. They therefore influence the dynamics of the population in a more subtle way, via selection.

This section illustrates the fitness landscapes introduced by penalty functions and Darwinian repair. The influence of these techniques on the dynamics of populations is discussed in the next section.
We assume that the fitness function is defined as

q(x) = −3 + 5 g(x, 0, 1) − 0.5 g(x, −0.5, 0.01) − g(x, 0.4, 0.16)   (24)

where g(x, m, v) = (1/√(2πv)) · exp(−(x − m)^2 / (2v))   (25)


Feasible points are contained in the range [−1, 1]. In the additive quadratic penalty formula (15), we assume α = 1. In the formulas that define the death and the substitution penalty, it is assumed that Q = max_{x ∈ [−1,1]} q(x).
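The landscapes in Fig. 5 follow directly from (24) and (25). The sketch below is our illustration, assuming minimization: the death penalty returns the constant Q outside the feasible range, the additive quadratic penalty adds α times the squared distance to the range (our reading of formula (15)), and the substitution penalty returns Q plus the distance. The paper's exact formulas may differ in detail.

```python
import math

ALPHA = 1.0          # alpha in the additive quadratic penalty (15)
LO, HI = -1.0, 1.0   # feasible range

def g(x, m, v):
    """Gaussian bump, Eq. (25)."""
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def q(x):
    """Fitness function, Eq. (24)."""
    return -3 + 5 * g(x, 0, 1) - 0.5 * g(x, -0.5, 0.01) - g(x, 0.4, 0.16)

def dist(x):
    """Distance from x to the feasible interval [LO, HI]."""
    return max(LO - x, 0.0, x - HI)

# Q = max of q over the feasible range, found on a dense grid
Q = max(q(LO + i * (HI - LO) / 10_000) for i in range(10_001))

def death_penalty(x):
    """Constant worst value outside the feasible range (minimization)."""
    return q(x) if dist(x) == 0.0 else Q

def additive_penalty(x):
    """q plus alpha times the squared constraint violation."""
    return q(x) + ALPHA * dist(x) ** 2

def substitution_penalty(x):
    """Worst feasible value plus the violation, so infeasible points
    are still driven back towards the feasible range."""
    return q(x) if dist(x) == 0.0 else Q + dist(x)
```

Plotting the three expanded functions over, say, [−3, 3] reproduces the qualitative shapes of Fig. 5: flat outside the range for the death penalty, and monotonically worsening with the violation for the other two.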

Figure 5: Original fitness and three types of expanded fitness, defined by the death penalty, additive penalty and substitution penalty.

Figure 6: Fitness functions resulting from repairing constraints in the Darwinian style: projection, reflection, wrapping, reinitialization.
Fig. 5 illustrates the fitness function defined by (24) and the expanded fitness for the death penalty, additive penalty and substitution penalty. Plots of expanded fitness for the same fitness function and constraints when Darwinian repair is applied are provided in Fig. 6. Note that for the reinitialization method the expanded fitness becomes a random variable outside the feasible area.

A comparison of the expanded fitness plots reveals that only the quadratic additive penalty and the substitution penalty efficiently shift infeasible points towards the feasible area. No such reinforcement can be observed for the death penalty, for projection repair, or for reinitialization repair. For wrapping and reflection things may even be worse: in some cases, individuals which are more distant from the feasible area may be assigned better fitness values. They can then survive selection and be used to define difference vectors. Hence, the difference vectors may grow so much that they will always generate infeasible mutants; consecutive populations will then gradually swell, and the action of the DE algorithm will become increasingly chaotic. Moreover, application of the reinitialization approach in the Darwinian fashion changes the expanded fitness function into noise outside the feasible area. For this reason, in the experimental part of the paper we exclude Darwinian repair by reinitialization from the set of BCHMs being considered.

6. Dynamics of populations for various bound constraint handling methods

This section illustrates how the dynamics of populations is influenced by the choice of a particular BCHM. As a base algorithm we take CDE with population size Np = 5000 individuals, scaling factor F = 0.9 and crossover rate CR = 1 (i.e., crossover is switched off). The algorithm is applied to the optimization of a quadratic fitness function in R^2:

q(x) = (x − b)^T (x − b)   (26)

where b = [0.9, 0.9]. Feasible points are contained in the range [−1, 1]^2, so the optimum is located inside the feasible area near the border. The population is initialized uniformly at random in the range [−1, 1]^2. Optimization takes 20 generations. In the case of the resampling method, after 100 consecutive unsuccessful repairs the Lamarckian projection is used.
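The setup above can be sketched as a complete CDE loop. This is our illustration, not the authors' implementation: the population size is reduced from 5000 to 200 to keep the sketch fast, index collisions with the target are ignored for brevity, and projection is used as the example BCHM.

```python
import numpy as np

rng = np.random.default_rng(3)
NP, F, GENS = 200, 0.9, 20      # the paper uses NP = 5000; reduced here
LO, HI = -1.0, 1.0
b = np.array([0.9, 0.9])

def q(x):
    """Quadratic fitness, Eq. (26)."""
    d = np.asarray(x) - b
    return np.sum(d * d, axis=-1)

pop = rng.uniform(LO, HI, size=(NP, 2))         # uniform initialization
for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = rng.choice(NP, size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation, CR = 1
        v = np.clip(v, LO, HI)                  # BCHM: Lamarckian projection
        if q(v) <= q(pop[i]):                   # one-to-one greedy selection
            pop[i] = v

best = pop[np.argmin(q(pop))]
```

Collecting every mutant generated inside the loop (before selection) and scatter-plotting them yields pictures analogous to Fig. 7–10; swapping the repair line changes the BCHM under study.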

In Fig. 7–10 we provide plots of all individuals that were generated in single runs of DE coupled with each BCHM used. The plot in Fig. 7a) presents a reference picture: the set of all individuals generated by CDE when no constraints are applied. In that case, individuals form a cloud whose shape can be explained by the contour-fitting property of the CDE algorithm: individuals concentrate around the point x* = [0.9, 0.9], where the fitness function takes its optimum, and the density of individuals decreases with the distance from x*.
The application of Lamarckian repair methods (Fig. 7) yields various distributions of generated individuals, depending on the repair method type. Repair by projection and by projection to midpoint produces a significant number of individuals on the boundary of the feasible area (visible as an "inner frame" of the plot). With projection, the corners are especially densely sampled, yet those individuals which are not located on the boundary are grouped around the point x* such that their density closely resembles the density observed in the unconstrained case, trimmed to the feasible area. Likewise, with repair by reflection, the distributions of individuals with and without constraints are roughly similar within the feasible area. For wrapping, one can observe "shades" caused by an increased density of individuals near the border of the feasible area, which produces difference vectors that go outside the border and thus yield points on the other side. For projection to midpoint, individuals near the midpoint of the feasible area are slightly more densely sampled than in the unconstrained optimization case.
Among the feasibility-preserving mutations considered in this study (Fig. 8), resampling generates individuals whose distribution seems to most accurately approximate the unconstrained optimization case. The midpoint target and conservative methods also result in a good correspondence of the distributions of individuals. For the rand base and midpoint base operators, individuals seem to be relatively more densely distributed along the feasible area boundary near the location of the optimum.

Figure 7: Individuals generated by the CDE: unbounded optimization (a) and Lamarckian repair: reinitialization (b), projection (c), reflection (d), wrapping (e), projection to midpoint (f).
The additive quadratic and the death penalty methods (Fig. 9) yield distri-
butions of points which only slightly differ from the unconstrained case. For the
death penalty, the distribution is more compact, since no infeasible individual
would survive selection. Therefore, the base individual would never be infeasible
and the maximum length of the difference vector is bounded by the maximum

Figure 8: Individuals generated by the CDE, feasibility-preserving operators: a) resampling, b) conservative, c) rand base, d) projection to base, e) midpoint base, f) midpoint target.
difference between feasible individuals.


Darwinian repair (Fig. 10) definitely gives the most picturesque patterns of
offspring individuals. Since expanded fitness is piecewise constant outside the
feasible area, the projection repair produces very long difference vectors. For
wrapping and reflection, the expanded fitness becomes periodic. Due to the character of DE, which defines the differential mutation such that distances between

Figure 9: Individuals generated by the CDE, penalty function approach: a) additive penalty, b) death penalty.

Figure 10: Individuals generated by the CDE with Darwinian repair: a) projection, b) reflection, c) wrapping.
mutants depend linearly on distances between parents, a positive feedback appears that results in an explosion of distances between members of consecutive populations.


To sum up, among the methods considered, Darwinian repair and Lamarckian repair by projection and wrapping produce populations of individuals whose density significantly differs from the unconstrained optimization case. For all other methods the densities agree much better, and the best agreement of densities is observed for Lamarckian repair by reflection, for the resampling and conservative strategies, and for both the death and the additive quadratic penalty function approaches.

7. Experimental study

In this section, we summarize the results of an extensive experimental study aimed at better understanding the interactions between various types of DE algorithms and different BCHMs. The study includes the basic methods, CDE and DE/local-to-best/1/bin, and a group of advanced methods based on parameter adaptation and/or ensembles of operators that have been successful in black-box benchmarking: SADE [33], JADE [29] and jSO [30]. In addition, we provide results obtained by two techniques presented at the Congress on Evolutionary Computation in 2017: DES [28], which enriches the differential mutation by summing the difference vectors with the normal vectors along the population midpoint shift, and BBDE [41], which excludes crossover and puts more stress on an algorithm's invariance to many transformations of the coordinate system. BBDE has been reported to have a very good exploration property [41].
Each method was coupled with all BCHMs being considered, with the following exceptions. The substitution penalty was applied to the DES algorithm only, since for the other DE algorithms under consideration, initialized with feasible populations, it is equivalent to the death penalty. Neither was the death penalty applied to DES, since DES uses nonelitist selection and the death penalty may disable its proper operation. In BBDE, the base individual and the target individual are identical by definition, so the midpoint base, midpoint target and conservative mutation types always assign the base point to the infeasible one; therefore, for BBDE we report results for the midpoint target method only. The midpoint base mutation was not considered for the jSO algorithm, since it is equivalent to the midpoint target mutation.
Tests were performed to characterize two different types of dynamics. The influence of BCHMs on convergence speed was verified for a quadratic function whose optimum was located at different positions within the feasible area. We shall show that the influence of BCHMs on the results becomes more important as the optimum gets closer to the boundary of the feasible area. We shall compare how sensitive the optimization efficiency of the considered DE algorithms and BCHMs is to the location of the optimum. In this group of experiments, tests were performed in 10-dimensional space.

The influence of BCHMs on the global optimization efficiency of the compared methods was verified by a second group of tests, which were performed for the CEC’2017 benchmark set. For each algorithm considered, we illustrate the effectiveness of BCHMs in 10, 30, 50 and 100 dimensions. We shall discuss the difference between DE algorithms in their sensitivity to the choice of particular BCHMs. We shall also compare the efficiency of the BCHMs examined for each DE algorithm.

7.1. Influence of bound constraint handling methods on convergence speed
Experiments were performed for the fitness function given by the formula

q(x) = (x − b)^T (x − b)   (27)

where the feasible area was the hyperrectangle [−1, 1]^n. The value of b was
changed in the range [0.2, 1]. For each value of b, each DE algorithm and each BCHM, 51 independent runs were performed. Each run was stopped either when the number of fitness evaluations reached 100,000 or when an individual was found whose fitness value was smaller than 10^−8.

For several BCHMs, BBDE was unable to reach the desired fitness value. Therefore, we decided to stop BBDE after reaching the target value of 0.5 or after exceeding 100,000 fitness evaluations.

The sum of the numbers of fitness evaluations over all runs was divided by the number of runs in which the fitness reached the assumed level; we shall call this value the expected run time (ERT). When no successful run was observed, it was assumed that the ERT was twice as large as the ERT for a single successful run. For each DE algorithm, the ERT was computed for unconstrained optimization and we treated this value as a reference. The ERT value of each BCHM was then divided by the reference ERT. Plots of this ratio vs. the value of b are depicted in Fig. 11 for each DE algorithm and each BCHM.
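The ERT aggregation described above can be written compactly (our sketch of the stated procedure; the paper's handling of the all-runs-failed case, doubling a single-run ERT, is noted in the docstring).

```python
def expected_run_time(evals, successes):
    """ERT: total number of fitness evaluations over all runs, divided by
    the number of successful runs (runs are stopped at success or at the
    budget). The paper instead assigns twice the ERT of a single successful
    run when no run succeeds; we return infinity for simplicity."""
    n_succ = sum(successes)
    if n_succ == 0:
        return float("inf")
    return sum(evals) / n_succ

# three runs: two reached the target, one exhausted the 100,000 budget
ert = expected_run_time([1200, 100_000, 3400], [True, False, True])
```

Dividing the ERT of each (algorithm, BCHM) pair by the algorithm's unconstrained ERT gives the ratio plotted in Fig. 11.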
Note that when the efficiency of a particular combination of DE algorithm and BCHM strongly depends on the location of the optimum b, it indicates that the BCHM strongly biases the algorithm. Therefore, the most desirable scenario is insensitivity of the efficiency to the value of b. Two examples of such undesired sensitivity can be observed in Fig. 11 d), where the ratio of ERT values decreases from about 1 down to 0.3 for the Lamarckian repair and increases from 1 up to a value greater than 3 for the Darwinian wrapping. An increase of the ERT ratio indicates that application of the BCHM slows down convergence. An example of the desired dynamics of the ERT ratio can be observed in Fig. 11 b) for the additive penalty approach, where the ratio remains approximately constant and equal to one over the whole range of considered values of b.

The results in Fig. 11 indicate that the DE algorithms considered can be divided into two groups which differ in the way the ERT is changed by BCHMs in comparison to unconstrained optimization. For CDE and BBDE, introduction of BCHMs usually reduces the ERT in comparison to the unconstrained case, with some exceptions mentioned below. The other methods were less sensitive to BCHMs in terms of the ERT.

As a rule, Darwinian-style repair usually results in a significant increase of the ERT, whereas penalty-based BCHMs usually do not affect the ERT for any of the considered methods.


One exception to these observations is DES, for which the substitution and the additive penalty increase the ERT. The second exception is BBDE, whose ERT is improved by the death and substitution penalties.

In the majority of cases, Lamarckian repair by wrapping and reinitialization increases the ERT of the DE algorithms considered. We believe that this effect is caused by the fact that both methods increase the variance of mutants while approaching the boundary (cf. Fig. 4), which may result in a fast-growing spread of individuals in consecutive populations (cf. Fig. 7).

Lamarckian repair by projection usually reduces the ERT as the value of b increases. This is no surprise, since this method prefers the border of the feasible area. On the other hand, Lamarckian repair by projection to midpoint usually increases the ERT, probably because of the tendency of this operator to avoid the corners of the feasible hyperrectangle.

Application of Lamarckian repair by reflection, as well as of the feasibility-preserving mutations, usually introduces small changes in the ERT. However, there are exceptions: for CDE and BBDE, the aforementioned methods introduce more or less the same reduction of the ERT, although for CDE the conservative BCHM yields a slightly greater reduction.

When looking at the bias introduced by BCHMs to the ERT, the most advantageous methods are those based on the additive and substitution penalties, on feasibility-preserving mutation, and on Lamarckian repair by reflection. Darwinian repair should definitely be avoided, as should Lamarckian repair based on projection, wrapping and reinitialization.

7.2. Influence of bound constraint handling methods on global optimization efficiency
Experiments were performed for the CEC’2017 benchmark set [42]. This set defines 30 optimization problems in dimensions n = 10, 30, 50, 100. Among them, problems 1–3 are unimodal, problems 4–10 are simple multimodal, and problems 11–30 are multimodal with local dynamics that differ significantly between regions of the feasible area. In all problems the feasible area is the hypercube [−100, 100]^n. The results of the experiments are aggregated in agreement with the methodology described in [14].

For each problem and each dimension, we defined a set of 51 fitness levels, evenly distributed on the logarithmic scale from the median of the fitness values achieved after the first generation to the best solution found at the end of evolution. Each run of every optimization method is characterized by a so-called ECDF curve, which gives the number of fitness levels achieved by the best-so-far solution as a function of the number of fitness evaluations. ECDF curves characterize both accuracy and convergence speed in a clear graphical form, and they can be averaged over independent runs and over different combinations of DE algorithms and BCHMs. In Fig. 12 – 18 we characterize each combination of DE algorithm and BCHM by a single ECDF curve, averaged over all problems in the benchmark set. For each DE algorithm, four plots are presented, for n = 10, 30, 50, 100. To facilitate comparative analysis, for each ECDF we compute the Area Under the Curve (AUC); these values are

Table 1: Values of AUC obtained for CEC’2017 benchmark set, n = 10 dimensions

method               CDE   loc2b  JADE  SADE  jSO   DES   BBDE
Resampling           0.51  0.56   0.58  0.54  0.58  0.54  0.66
Substitution p.      —     —      —     —     —     0.57  —
Death penalty        0.51  0.56   0.58  0.54  0.59  —     0.49
Conservatism         0.52  0.54   0.57  0.52  0.58  0.56  —
Midpoint target      0.46  0.56   0.56  0.53  0.57  0.57  0.45
Reflection L.        0.45  0.56   0.57  0.53  0.57  0.57  0.44
Midpoint base        0.45  0.56   0.56  0.53  —     0.56  —
Rand base            0.45  0.55   0.56  0.53  0.58  0.56  0.44
Reinitialization     0.45  0.56   0.57  0.53  0.58  0.55  0.43
Projection to base   0.53  0.55   0.59  0.52  0.57  0.41  0.45
Projection L.        0.44  0.55   0.55  0.52  0.57  0.56  0.40
Projection to mid.   0.43  0.56   0.56  0.47  0.58  0.56  0.39
Wrapping L.          0.43  0.57   0.57  0.53  0.58  0.40  0.41
Additive pen.        0.30  0.55   0.53  0.49  0.56  0.56  0.12
Projection D.        0.26  0.53   0.49  0.41  0.51  0.55  0.24
Wrapping D.          0.23  0.54   0.45  0.33  0.51  0.55  0.30
Reflection D.        0.23  0.51   0.42  0.34  0.50  0.55  0.30
provided in Tab. 1 – 4. The rows in the tables are sorted according to the mean
value of the AUC.
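The ECDF/AUC aggregation can be sketched as follows (our illustration, assuming minimization and uniform weighting over the evaluation axis; the benchmark methodology in [14] may weight the axis differently).

```python
import numpy as np

def ecdf_curve(best_so_far, levels):
    """Fraction of fitness levels reached by the best-so-far value after
    each evaluation (minimization)."""
    best = np.minimum.accumulate(np.asarray(best_so_far, dtype=float))
    levels = np.asarray(levels, dtype=float)
    return (best[:, None] <= levels[None, :]).mean(axis=1)

def auc(curve):
    """Area under the ECDF curve, normalized to [0, 1]."""
    return float(np.mean(curve))

trace = [9.0, 4.0, 4.0, 0.9, 0.9, 0.05]   # best-so-far fitness per evaluation
levels = [5.0, 1.0, 0.1]                  # target fitness levels
curve = ecdf_curve(trace, levels)
```

Averaging such curves over runs, problems and dimensions, and summarizing each averaged curve by its AUC, yields the values reported in Tab. 1–4.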
It should be noted that for each plot the scale is chosen individually for each DE algorithm considered, and depends on the results obtained by that algorithm coupled with all considered BCHMs. Therefore, we cannot use either the ECDF curves or the AUC values to compare efficiency between different DE algorithms.

Analysis of the shapes of the ECDF curves indicates that the sensitivity of global optimization efficiency to the BCHMs differs amongst the DE algorithms considered. The efficiency of local2best, JADE, SADE and jSO is only slightly dependent on the choice of a particular BCHM. In contrast, CDE and BBDE are very sensitive to BCHMs, and DES seems an intermediate case. Differences between the efficiency of BCHMs are greater when the problem dimension increases.
A comparison of the efficiency of algorithms coupled with different BCHMs, which is characterized by the corresponding AUC values, reveals certain general observations. In the majority of cases, the winning technique is resampling. It should be stressed, however, that in high-dimensional search spaces the resampling technique may need a vast number of trials until the first feasible solution is produced. It is therefore essential to impose a maximum limit on the number of trials and to couple resampling with some other BCHM that is applied after exceeding that limit.

Other good BCHMs, especially in higher dimensions, are the feasibility-preserving mutations: midpoint target, midpoint base and rand base. Note that all these techniques refer to the base individual or to the target individual

Table 2: Values of AUC obtained for CEC’2017 benchmark set, n = 30 dimensions

method               CDE   loc2b  JADE  SADE  jSO   DES   BBDE
Resampling           0.68  0.69   0.75  0.75  0.64  0.62  0.74
Midpoint target      0.54  0.68   0.74  0.74  0.62  0.63  0.37
Midpoint base        0.53  0.68   0.74  0.74  —     0.63  —
Rand base            0.53  0.68   0.74  0.74  0.62  0.63  0.36
Projection to base   0.61  0.68   0.76  0.73  0.66  0.35  0.50
Reflection L.        0.54  0.68   0.74  0.74  0.62  0.63  0.33
Reinitialization     0.53  0.69   0.74  0.74  0.63  0.58  0.31
Projection L.        0.50  0.68   0.74  0.74  0.62  0.62  0.29
Projection to mid.   0.53  0.68   0.73  0.69  0.63  0.62  0.29
Substitution p.      —     —      —     —     —     0.63  —
Death penalty        0.67  0.69   0.73  0.75  0.64  —     0.02
Conservatism         0.55  0.65   0.70  0.70  0.67  0.63  —
Wrapping L.          0.52  0.69   0.73  0.74  0.62  0.32  0.28
Additive pen.        0.38  0.67   0.69  0.70  0.59  0.62  0.02
Projection D.        0.32  0.59   0.65  0.64  0.56  0.62  0.18
Reflection D.        0.18  0.55   0.53  0.50  0.52  0.62  0.19
Wrapping D.          0.17  0.60   0.52  0.47  0.51  0.62  0.19

Table 3: Values of AUC obtained for CEC’2017 benchmark set, n = 50 dimensions

method               CDE   loc2b  JADE  SADE  jSO   DES   BBDE
Resampling           0.67  0.67   0.73  0.74  0.65  0.63  0.73
Midpoint target      0.55  0.66   0.73  0.71  0.63  0.66  0.38
Midpoint base        0.54  0.66   0.73  0.71  0.64  0.66  —
Rand base            0.54  0.66   0.73  0.71  0.63  0.66  0.36
Reflection L.        0.54  0.66   0.73  0.71  0.63  0.67  0.31
Projection to mid.   0.55  0.67   0.74  0.69  0.65  0.65  0.27
Projection to base   0.58  0.64   0.73  0.71  0.69  0.32  0.54
Reinitialization     0.54  0.68   0.73  0.72  0.64  0.59  0.29
Projection L.        0.52  0.66   0.73  0.70  0.62  0.66  0.28
Substitution p.      —     —      —     —     —     0.67  —
Death penalty        0.67  0.67   0.72  0.73  0.66  —     0.02
Wrapping L.          0.53  0.68   0.72  0.71  0.62  0.31  0.26
Conservatism         0.50  0.59   0.65  0.63  0.70  0.65  —
Additive pen.        0.39  0.65   0.68  0.68  0.60  0.63  0.03
Projection D.        0.33  0.50   0.66  0.64  0.55  0.62  0.19
Reflection D.        0.16  0.45   0.55  0.50  0.52  0.63  0.19
Wrapping D.          0.16  0.51   0.53  0.45  0.50  0.63  0.19

Table 4: Values of AUC obtained for CEC’2017 benchmark set, n = 100 dimensions

method               CDE   loc2b  JADE  SADE  jSO   DES   BBDE
Resampling           0.71  0.68   0.71  0.73  0.62  0.54  0.72
Midpoint target      0.61  0.69   0.74  0.72  0.61  0.59  0.38
Midpoint base        0.61  0.69   0.74  0.72  0.61  0.59  —
Rand base            0.60  0.69   0.74  0.72  0.61  0.59  0.36
Reflection L.        0.61  0.69   0.74  0.71  0.61  0.59  0.30
Projection L.        0.58  0.69   0.73  0.71  0.60  0.58  0.27
Projection to mid.   0.60  0.67   0.72  0.70  0.63  0.58  0.26
Projection to base   0.54  0.63   0.69  0.70  0.66  0.26  0.62
Reinitialization     0.60  0.70   0.73  0.71  0.61  0.47  0.28
Substitution p.      —     —      —     —     —     0.58  —
Death penalty        0.71  0.69   0.71  0.73  0.62  —     0.02
Wrapping L.          0.59  0.71   0.72  0.71  0.60  0.17  0.25
Additive pen.        0.43  0.68   0.70  0.69  0.57  0.54  0.03
Conservatism         0.48  0.55   0.63  0.63  0.67  0.54  —
Projection D.        0.32  0.40   0.70  0.67  0.54  0.54  0.18
Reflection D.        0.16  0.35   0.59  0.53  0.52  0.54  0.17
Wrapping D.          0.16  0.39   0.55  0.46  0.50  0.54  0.17
when determining the feasible mutant. We hypothesize that some kind of preservation of the base individual or the target individual results in maintaining links between the distributions of consecutive populations, which facilitates the contour-matching property and preserves the ability of populations to explore the feasible area.

Among the Lamarckian repair methods, the leading technique is usually repair by reflection. The results achieved by all Darwinian repair techniques are usually very poor in comparison to other BCHMs. The substitution penalty is one of the best methods for DES, but in general it is in the middle of the ranking. The additive penalty is ineffective in most cases, especially for CDE and BBDE.

8. Conclusions

In this paper we investigated interactions between 17 BCHMs and 7 different types of DE algorithms. We experimentally verified the efficiency of all combinations of DE algorithms and BCHMs. The tests included two scenarios: a quadratic function in 10 dimensions with different positions of the local optimum, and the CEC’2017 benchmark set comprising 30 optimization problems in 10, 30, 50 and 100 dimensions. Thus, we were able to characterize how the analyzed BCHMs influence the efficiency of various DE algorithms in both exploitation and exploration.

The results of the experiments allow us to draw several conclusions. An obvious observation is that the choice of a BCHM does make a difference to efficiency. Yet the influence of BCHMs on the efficiency of DE algorithms differs. Among the DE algorithms tested, the least sensitivity was revealed by JADE, SADE and jSO, whereas CDE was more sensitive and BBDE was extremely sensitive to the choice of BCHM.

In most cases, application of repair methods in the Darwinian fashion was least efficient. We presume that this effect is caused by the fact that populations are neither explicitly nor implicitly forced to stay inside, or at least shift towards, the feasible area. The most efficient methods for the majority of the DE algorithms tested were resampling, feasibility-preserving mutation involving the target or the base point, and Lamarckian repair by reflection. We hypothesize that the advantage of these approaches is related to the relatively low discrepancy between the distributions of individuals generated by DE with and without constraints. We believe that the theoretical analysis of the influence of BCHMs on the distributions of points generated by DE, started by Zaharie and Micota [43], is an issue that needs more investigation.

U
References

[1] A. R. Yıldız, A comparative study of population-based optimization algorithms for turning operations, Information Sciences 210 (2012) 81–88. doi:10.1016/j.ins.2012.03.005.

[2] A. R. Yıldız, Comparison of evolutionary-based optimization algorithms for structural design optimization, Engineering Applications of Artificial Intelligence 26 (1) (2013) 327–333. doi:10.1016/j.engappai.2012.05.014.

[3] S. Karagöz, A. R. Yıldız, A comparison of recent metaheuristic algorithms for crashworthiness optimisation of vehicle thin-walled tubes considering sheet metal forming effects, International Journal of Vehicle Design 73 (1-3) (2017) 179–188. doi:10.1504/IJVD.2017.082593.

[4] B. Yıldız, A comparative investigation of eight recent population-based optimisation algorithms for mechanical and structural design problems, International Journal of Vehicle Design 73 (1) (2017) 208–218.

[5] B. Yıldız, A. Yıldız, Moth-flame optimization algorithm to determine optimal machining parameters in manufacturing processes, Materials Testing 59 (5) (2017) 425–429.

[6] B. Yıldız, A. Yıldız, Comparison of grey wolf, whale, water cycle, ant lion and sine-cosine algorithms for the optimization of a vehicle engine connecting rod, Materials Testing 60 (3) (2018) 311–315.

[7] N. Pholdee, S. Bureerat, A. R. Yıldız, Hybrid real-code population-based incremental learning and differential evolution for many-objective optimisation of an automotive floor-frame, International Journal of Vehicle Design 73 (1-3) (2017) 20–53. doi:10.1504/IJVD.2017.082578.
[8] N. Hansen, A. S. P. Niederberger, L. Guzzella, P. Koumoutsakos, A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion, IEEE Trans. Evol. Comput. 13 (1) (2009) 180–197. doi:10.1109/TEVC.2008.924423.

[9] R. Biedrzycki, J. Arabas, A. Jasik, M. Szymański, P. Wnuk, P. Wasylczyk, A. Wójcik-Jedlińska, Application of evolutionary methods to semiconductor double-chirped mirrors design, in: T. Bartz-Beielstein, J. Branke, B. Filipič, J. Smith (Eds.), Parallel Problem Solving from Nature – PPSN XIII, Vol. 8672 of Lecture Notes in Computer Science, Springer, 2014, pp. 761–770. doi:10.1007/978-3-319-10762-2_75.

[10] R. Biedrzycki, D. Jackiewicz, R. Szewczyk, Reliability and efficiency of differential evolution based method of determination of Jiles-Atherton model parameters for X30Cr13 corrosion resisting martensitic steel, Journal of Automation, Mobile Robotics and Intelligent Systems 8 (4) (2014) 63–68. doi:10.14313/JAMRIS_4-2014/39.

[11] J. Arabas, L. Bartnik, S. Szostak, D. Tomaszewski, Global extraction of MOSFET parameters using the EKV model: Some properties of the underlying optimization task, in: 2009 MIXDES-16th International Conference Mixed Design of Integrated Circuits Systems, 2009, pp. 67–72.

[12] J. Arabas, A. Szczepankiewicz, T. Wroniak, Experimental comparison of methods to handle boundary constraints in differential evolution, in: R. Schaefer, C. Cotta, J. Kolodziej, G. Rudolph (Eds.), Parallel Problem Solving from Nature, PPSN XI, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010, pp. 411–420. doi:10.1007/978-3-642-15871-1_42.

[13] P. N. Suganthan, P. N. Suganthan homepage, http://www.ntu.edu.sg/home/epnsugan/, (Accessed: 17 April 2018).

[14] N. Hansen, A. Auger, D. Brockhoff, D. Tusar, T. Tusar, COCO: performance assessment, CoRR abs/1605.03560 (2016) 1–16. arXiv:1605.03560.

[15] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Tech. rep., TR-95-012, ICSI (1995).

[16] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (4) (1997) 341–359. doi:10.1023/A:1008202821328.

[17] S. Das, S. S. Mullick, P. N. Suganthan, Recent advances in differential evolution – an updated survey, Swarm and Evol. Comput. 27 (2016) 1–30. doi:10.1016/j.swevo.2016.01.004.

[18] F. Neri, V. Tirronen, Recent advances in differential evolution: a survey and experimental analysis, Artificial Intelligence Review 33 (1) (2010) 61–106. doi:10.1007/s10462-009-9137-2.
[19] R. D. Al-Dabbagh, F. Neri, N. Idris, M. S. Baba, Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy, Swarm and Evolutionary Computation, in press, 2018. doi:10.1016/j.swevo.2018.03.008.

[20] N. Hansen, A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation 9 (2) (2001) 159–195. doi:10.1162/106365601750190398.

[21] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evol. Comput. 4 (1) (1996) 1–32. doi:10.1162/evco.1996.4.1.1.

[22] A. E. Eiben, J. E. Smith, Introduction to Evolutionary Computing, Springer Berlin Heidelberg, Berlin, Heidelberg, 2003, Ch. Constraint Handling, pp. 205–220. doi:10.1007/978-3-662-05094-1_12.

[23] C. A. C. Coello, List of references on constraint-handling techniques used with evolutionary algorithms, http://www.cs.cinvestav.mx/~constraint/, (Accessed: 17 April 2018).

[24] J. Lampinen, I. Zelinka, Mechanical engineering design optimization by differential evolution, in: D. Corne, M. Dorigo, F. Glover, D. Dasgupta, P. Moscato, R. Poli, K. V. Price (Eds.), New Ideas in Optimization, McGraw-Hill Ltd., Maidenhead, UK, 1999, pp. 127–146.

[25] R. Storn, System design by constraint adaptation and differential evolution, IEEE Transactions on Evolutionary Computation 3 (1) (1999) 22–34. doi:10.1109/4235.752918.

[26] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2) (2000) 311–338. doi:10.1016/S0045-7825(99)00389-8.

[27] E. Mezura-Montes, C. A. Coello Coello, E. I. Tun-Morales, Simple feasibility rules and differential evolution for constrained optimization, in: R. Monroy, G. Arroyo-Figueroa, L. E. Sucar, H. Sossa (Eds.), MICAI 2004: Advances in Artificial Intelligence, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, pp. 707–716. doi:10.1007/978-3-540-24694-7_73.

[28] D. Jagodziński, J. Arabas, A differential evolution strategy, in: IEEE Congr. Evol. Comput., 2017, pp. 1872–1876. doi:10.1109/CEC.2017.7969529.

[29] J. Zhang, A. C. Sanderson, JADE: Adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 13 (5) (2009) 945–958. doi:10.1109/TEVC.2009.2014613.
[30] J. Brest, M. S. Maučec, B. Bošković, Single objective real-parameter optimization: Algorithm jSO, in: IEEE Congr. Evol. Comput., 2017, pp. 1311–1318. doi:10.1109/CEC.2017.7969456.

[31] V. Kreischer, T. T. Magalhaes, H. J. Barbosa, E. Krempser, Evaluation of bound constraints handling methods in differential evolution using the CEC2017 benchmark, in: XIII Brazilian Congress on Computational Intelligence, Rio de Janeiro, Brazil, 2017.

[32] N. Padhye, P. Mittal, K. Deb, Feasibility preserving constraint-handling strategies for real parameter evolutionary optimization, Comput. Optim. Appl. 62 (3) (2015) 851–890. doi:10.1007/s10589-015-9752-6.

[33] A. K. Qin, V. L. Huang, P. N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2) (2009) 398–417. doi:10.1109/TEVC.2008.927706.

[34] J. Brest, S. Greiner, B. Bošković, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 646–657. doi:10.1109/TEVC.2006.872133.

[35] R. Mallipeddi, P. N. Suganthan, Differential evolution with ensemble of constraint handling techniques for solving CEC 2010 benchmark problems, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1–8. doi:10.1109/CEC.2010.5586330.

[36] X. Zhang, X. Zhang, Improving differential evolution by differential vector archive and hybrid repair method for global optimization, Soft Computing 21 (23) (2017) 7107–7116. doi:10.1007/s00500-016-2253-4.

[37] I. Poikolainen, F. Neri, F. Caraffini, Cluster-based population initialization for differential evolution frameworks, Inf. Sci. 297 (C) (2015) 216–235. doi:10.1016/j.ins.2014.11.026.

[38] S. Koziel, Z. Michalewicz, Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization, Evolutionary Computation 7 (1) (1999) 19–44. doi:10.1162/evco.1999.7.1.19.

[39] D. Zaharie, Statistical properties of Differential Evolution and related random search algorithms, in: P. Brito (Ed.), COMPSTAT: Proc. Comput. Stat., Physica-Verlag HD, Heidelberg, 2008, pp. 473–485. doi:10.1007/978-3-7908-2084-3_39.

[40] K. Opara, J. Arabas, Differential mutation based on population covariance matrix, in: R. Schaefer, C. Cotta, J. Kolodziej, G. Rudolph (Eds.), Proc. 11th Parallel Problem Solving from Nature, Springer, 2010, pp. 114–123. doi:10.1007/978-3-642-15844-5_12.
[41] K. V. Price, How symmetry constrains evolutionary optimizers, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017, pp. 1712–1719. doi:10.1109/CEC.2017.7969508.

[42] N. H. Awad, M. Ali, J. Liang, B. Qu, P. N. Suganthan, Problem definitions and evaluation criteria for the CEC 2017 special session and competition on real-parameter optimization, Tech. rep., Nanyang Technol. Univ., Singapore and Jordan Univ. Sci. Technol. and Zhengzhou Univ., China (2016).

[43] D. Zaharie, F. Micota, Revisiting the analysis of population variance in differential evolution algorithms, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017, pp. 1811–1818. doi:10.1109/CEC.2017.7969521.
[Figure 11: eight panels plotting ERT/reference ERT (y-axis, 0.0–3.0) against b ∈ {0.2, 0.8, 0.95, 0.994, 0.999} (x-axis); panel a) is the legend listing the BCHMs: Reinitialization, Lamarckian projection, Darwinian projection, Lamarckian reflection, Darwinian reflection, Lamarckian wrapping, Darwinian wrapping, Projection to midpoint, Death penalty, Additive penalty, Substitution penalty, Resampling, Rand base, Midpoint base, Midpoint target, Projection to base, Conservatism.]

Figure 11: Proportion of ERT values vs. value of b for different BCHMs when the base algorithm was: b) CDE, c) DE/local2best/1/bin, d) JADE, e) SADE, f) jSO, g) DES, h) BBDE
[Figure 12: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 12: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was CDE
[Figure 13: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 13: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was DE/local2best/1/bin
[Figure 14: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 14: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was JADE
[Figure 15: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 15: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was SADE
[Figure 16: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 16: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was BBDE
[Figure 17: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 17: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was jSO
[Figure 18: four ECDF panels, one per problem dimension; y-axis: proportion of function + target pairs, x-axis: f-evals / dimension; one curve per BCHM, legend as in Figure 11.]

Figure 18: ECDF curves obtained for CEC’2017 benchmark set for 10 (a), 30 (b), 50 (c) and 100 (d) dimensions for different BCHMs when the base algorithm was DES