
Int. J. Bio-Inspired Computation, Vol. 10, No. 1, 2017, pp.24–32

A nature inspired hybrid optimisation algorithm for dynamic environment with real parameter encoding

Ashish Tripathi*
CSED, SPMIT,
Allahabad, India
Email: [email protected]
*Corresponding author

Nitin Saxena and K.K. Mishra


CSED, MNNIT,
Allahabad, India
Email: [email protected]
Email: [email protected]

A.K. Misra
CSED, SPMIT,
Allahabad, India
Email: [email protected]

Abstract: In recent years, many nature inspired algorithms have been proposed that are widely applicable to different optimisation problems. Real-world optimisation problems have become more complex and dynamic in nature, and a single optimisation algorithm is often not good enough to solve such problems on its own. Hybridisation of two or more algorithms can therefore be a fruitful way of overcoming the limitations of the individual algorithms. In this paper, a hybrid optimisation algorithm is presented that combines the features of the environmental adaption method for dynamic (EAMD) environment and particle swarm optimisation (PSO). The algorithm is designed to optimise both unimodal and multimodal problems, and its performance is evaluated over a group of 24 benchmark functions provided by black box optimisation benchmarking (BBOB-2013). The results show the superiority of this hybrid algorithm over other well-established state-of-the-art algorithms.

Keywords: adaptive learning; environmental adaption method for dynamic; EAMD; hybrid
algorithm; environmental adaption method; EAM; optimisation; PSO.

Reference to this paper should be made as follows: Tripathi, A., Saxena, N., Mishra, K.K. and
Misra, A.K. (2017) ‘A nature inspired hybrid optimisation algorithm for dynamic environment
with real parameter encoding’, Int. J. Bio-Inspired Computation, Vol. 10, No. 1, pp.24–32.

Biographical notes: Ashish Tripathi is an Assistant Professor at CSED, SPMIT, Allahabad,
India. He received his MTech and PhD in CSE from MNNIT Allahabad, India in 2012 and 2015,
respectively. His current research interests include nature inspired optimisation algorithm
(i.e. EAMD, GA, PSO, DE, etc.), software cost estimation, test case generation, optimisation of
RNA secondary structure and protein folding.

Nitin Saxena received his MTech in Information Security from MNNIT Allahabad, India in
2009. His current research interests include evolutionary algorithms, particle swarm optimisation,
and digital image watermarking.

K.K. Mishra received his MTech in Computer Science and Engineering from UP Technical
University and PhD in Computer Science and Engineering from MNNIT Allahabad, India in
2013. He is an Assistant Professor at Computer Science and Engineering Department of MNNIT
Allahabad. His research interests include nature inspired optimisation algorithms and their
applications in software cost estimation, test case generation.

A.K. Misra received his BTech and MTech in Electrical Engineering from University of
Roorkee, Roorkee, India in 1971 and MLNREC Allahabad, India in 1976, respectively. He
received his PhD in CSE from MNNIT Allahabad, India in 1989. His research interest includes
software engineering, artificial intelligence and nature inspired optimisation algorithms.

Copyright © 2017 Inderscience Enterprises Ltd.


1 Introduction

Inspired by the fact that species which adapt to the continuous changes in the environment caused by the imbalance of disasters and diseases survive, while those that fail to do so decimate, a stochastic nature inspired algorithm, the environmental adaption method for dynamic (EAMD) environment, was introduced by Tripathi et al. (2014, 2015). This algorithm is an improvement of an earlier proposed algorithm, the environmental adaption method (EAM) by Mishra et al. (2011, 2014), which uses a static environment and binary valued parameters. In contrast, EAMD is based on adaptive learning theory and considers the environment to be dynamic, i.e., subject to continuous changes as in nature. Further, EAMD works with real valued parameters.

In EAMD, the dynamic environment is introduced through an 'environmental window'. The size of this window controls the adverseness of the environment: the environment gets tougher as the size reduces. The environment also represents the search space for EAMD. EAMD is very effective in finding good regions of the search space with a high probability of containing the optimal solution. In addition, the exploration and exploitation capabilities of EAMD are impressive; it also provides a good convergence rate and is applicable to both unimodal and multimodal problems. In higher dimensions, however, the stagnation problem (no improvement in the current best fitness of the population) may arise.

EAMD uses two operators for targeting the optimal solution, named adaption and selection. The adaption operator initially explores the whole search space to mark all possible good regions where the probability of finding good solutions is high. As the number of generations increases, this operator performs exploitation around already selected good solutions to find out whether a global optimal solution may exist in these regions or not. The selection operator selects good solutions and helps to identify those regions of the search space where the optimal solutions may exist. These processes are repeated until the global optimal solution is found. However, after careful and deep analysis of the techniques used in EAMD, we have found that with increasing dimension, no improvement in the fitness of individuals occurs; as a result, convergence towards the optimal solution is affected.

In EAMD, each solution improves its phenotypic structure by taking guidance from the current environmental conditions. During adaptation, each solution uses an adaption window. Initially the size of this adaption window is taken to be very large, so that a solution may adapt anywhere in the search space; the adaption window becomes shorter as the solution progresses towards the optimal structure. After some generations, the solution structure becomes stable, as it has attained the best value in its local region. Solutions that are targeting a local optimum will stick in that local optimum and never recover, because the size of their adaption window has become very short and they have no way to increase it. This problem can be removed by providing additional information to a solution once it starts stagnating on a local optimum value. Since such a solution has already attained the best value in its local region, it can check the best values of other regions by establishing communication with other solutions. Swarm-based techniques, such as particle swarm optimisation (PSO), are very helpful in improving the solutions, as particles communicate to improve their fitness.

To improve the performance of EAMD, a hybrid version of EAMD with PSO is proposed, which contributes its searching capability to EAMD. If the best fitness value of an individual (fbest value) in successive generations does not improve within the predefined threshold (allowed maximum number of fitness evaluations) against the previous fbest value of that individual, then PSO is applied to improve the individual's fbest value. Here, PSO provides a new position vector to the individuals and is very effective in exploring the search space when EAMD is unable to produce any change in the fitness of individuals. The individuals with improved fitness values are put back into the environmental window to continue the adaption process from the next generation, just after the generation in which the individuals were shifted to PSO. This process continues from EAMD to PSO and vice versa until the optimum solution is obtained. The proposed algorithm is named EAMDPSO.

The remaining part of this paper is organised into six sections. Section 2 reviews the related work done so far. Section 3 discusses the basis of the proposed algorithm. Section 4 presents the proposed EAMDPSO algorithm in detail, together with the experimental setup and simulation strategies for benchmark testing. Analysis of the results is given in Section 5. Finally, the paper is concluded in Section 6.

2 Related work

Two cellular genetic algorithms (CGAs), a single-population genetic algorithm and a hill-climber (hill), were evaluated over the BBOB test-bed by Holtschulte and Moses (2013). Three distinct population sizes, 16, 49 and 100, were used in benchmarking the ring CGA (ring) and grid CGA of Alba and Dorronsoro (2009).

A multiobjectivisation technique has been proposed by Tran et al. (2013) to redevelop a single-objective problem as a multi-objective problem.

The performance of the pure-random-search algorithm (random) was checked by Auger and Ros (2009) on 24 benchmark functions using the noise-free BBOB 2009 test-bed.

Sawyerr et al. (2011, 2013) evaluated the projection-based real coded genetic algorithm (PRCGA) on the noise-free BBOB 2013 test-bed on 24 benchmark functions. They also performed a comparative study of real-coded genetic algorithms (RCGAs) for unconstrained global optimisation with global and local exploratory search capabilities.
Parallel genetic algorithms with distributed panmictic populations were developed by Cantú-Paz and Goldberg (1999), who checked the functioning of a parallel genetic algorithm on a physically distributed population that behaves like a single panmictic population.

Mirjalili and Hashim (2010) have proposed a new hybrid algorithm, PSOGSA, which is a combination of PSO and the gravitational search algorithm (GSA).

Kao and Zahara (2008) have established a hybrid optimisation algorithm combining the genetic algorithm (GA) and PSO for global optimisation of multimodal functions. Abd-El-Wahed et al. (2011) incorporated the strengths of GA and PSO, applying the evolution of individuals from GA and the self-improvement technique of PSO. Fan et al. (2010) have created a hybrid technique to find the global optimal solution for nonlinear continuous variable functions.

Shi et al. (2003) have proposed a novel PSO- and GA-based hybrid algorithm (PGBHEA). This algorithm executes two systems concurrently, selects a few individuals with larger fitness from both systems separately, and exchanges them after an assigned number of iterations.

Hybrid versions of PSO were proposed by García-Nieto et al. (2009) (DEPSO), El-Abd and Kamel (2009) (EDA-PSO), and Loshchilov et al. (2013) (HCMA). They tested the performance of their proposed algorithms over the BBOB noiseless test-bed on different dimensions, namely 2, 3, 5, 10, 20 and 40. The convergence rate of these algorithms is good and requires a small number of function evaluations.

3 Background details

3.1 EAMD environment

EAMD is a population-based, nature inspired, randomised algorithm which works on real valued parameters. In EAMD, the environment is created by introducing a term, the 'environmental window', that maps the real environment; the environmental window represents an environment in which its inhabitants survive. The nature of the environment depends on the size of the window: if the window size is large, species can easily survive, and as the window size gradually decreases, the environment becomes tough for its species to sustain. Over a few generations the environment becomes tough and, due to environmental constraints, the window size decreases gradually to make the environment dynamic. As far as survival is concerned, individuals modify their phenotypic structure according to environmental changes to gain better fitness over time. If individuals are unable to adapt to the changes, they no longer survive and are eliminated from the environment.

EAMD has two operators, namely adaption and selection. Adaption is applied first on the set of solutions to improve their fitness and form a new intermediate set of improved solutions; adaption improves the fitness of solutions within the range provided by the environmental window (adaption window). Selection is applied after merging the initial and the intermediate improved solutions: the solutions are sorted and the best n solutions are selected for the next generation. In the successive generations, the whole process continues until either the optimal solution is found or the maximum number of fitness evaluations is reached.

3.2 Particle swarm optimisation

PSO is a population-based optimisation technique proposed by Eberhart and Kennedy (1995). Inspired by the social behaviour of bird flocking and fish schooling, PSO uses swarm intelligence to find the optimal solution. PSO uses random initialisation of particles in an n-dimensional search space (Mirjalili and Hashim, 2010). In the beginning, the particles do not know the exact location of the optimal solution, so they start searching and follow the particle which is nearest to the optimal position. For that, each particle adjusts and updates its position according to two best positions: the first is the personal best (pbest) position attained so far by the particle itself, and the second is the global best (gbest) position obtained so far by any particle in the search space. PSO finds the global optimal solution in the entire search space by modifying the particles' positions using the personal best and global best (Hansen et al., 2012).

4 Proposed method

4.1 Proposed work

EAMDPSO is a population-based hybrid optimisation algorithm which operates over real valued parameters in a dynamic environment. In EAMDPSO, the population is represented as POP (X1, X2, X3, ..., Xm), where X1, X2, ..., Xm are the m individuals in the population, each maintaining a position vector characterised by n parameters. Xi (xi1, xi2, ..., xin) is the position vector of the ith individual, and xij represents the jth parameter of the ith individual. Each parameter is allowed to take values from its search space; for example, (xminij, xmaxij) is the search space of the jth parameter of the ith individual. The adaption window (environmental window), abbreviated 'aw', symbolises the environment, and dynamicity is simulated by changing the size of aw. The initial size of aw for each parameter is set equal to the size of the respective parameter's search space, and each parameter of an individual adapts only within the size of aw. The adaptive learning technique proposed by Wund (2012) helps individuals to learn to improve their fitness value in every generation; as a result, the fitness value fbest of the population in EAMD changes over the successive generations. Sometimes individuals do not improve their fitness value due to environmental constraints (i.e., higher dimension), and this finally affects the population's fbest. As the dimension increases, the convergence rate slows down and, as a result, it reduces
the probability of all solutions adapting towards the optimal solution.

To handle this situation, a hybridisation of EAMD with PSO is proposed in this paper. The idea behind this hybrid algorithm is that, if the fbest value of the individuals in EAMD is not improved within the specified number of generations, then the position vectors of the individuals are handed over to PSO. The size of the search space is the same as the size taken at the initial stage in EAMD, i.e., the whole search space taken at the beginning of the search activity for an individual to survive. PSO guides these solutions to update their fitness values in the direction of gbest and their pbest. This direction of movement in PSO is different from EAMD, where each individual searches for new fitness around its previous position. PSO updates these solutions for a predefined number of generations and helps in removing the problem of getting stuck in sub-optimal solutions. The newly updated solutions are then transferred back to EAMD to resume processing. This whole process is repeated until the optimal or a near-optimal solution is obtained. The flowchart of EAMDPSO is shown in Figure 1.

Figure 1 Flowchart for the EAMDPSO algorithm

4.2 EAMDPSO procedure

Step 1 Initialisation: All individuals of the population POP are selected randomly within the n-dimensional search space and represented by POP0. The fitness of every individual is calculated on the 24 BBOB-2013 noiseless fitness functions (benchmark functions) shown in Table 2. Initially, the best fitness value fbest is assigned to infinity. A counter named failureCount is used to count the number of failures in finding an improved value of fbest; initially, failureCount is set to 0. The adaption window represents the dynamic environment; it is initially very gentle, so that all individuals survive, and after some specific number of generations its size shrinks. A term Win_size is used to determine the rate of shrinking of the adaption window. It is estimated by equation (1):

Win_size = round(dimension / 1.6) + 10 (1)

The MATLAB function 'round' gives the nearest integer value. The values 1.6 and 10 are used to tune Win_size.

Step 2 Fitness computation: The fitness value of each individual is computed.

Step 3 Population breeding: Each individual of the old population is updated using the adaption operator to produce the improved individuals of the new population.

Step 4 Population updation: The population is updated using the selection operator. After merging the old and new populations, the best N individuals based on the function's fitness value are selected as the updated population.

Step 5 Measuring improvement in fbest: The fitness value of the best individual from the updated population is compared with the previous generation's fbest. If the updated population is able to improve the function's best fitness, then fbest is updated and failureCount is reset to 0; otherwise, failureCount is increased by 1.

Step 6 Population updation using PSO: If failureCount reaches the predefined threshold, the position vector of each individual of the old population is updated using the PSO computational method to produce a possibly improved new population. PSO is allotted a maximum timespan (number of iterations) equal to the number of generations the individuals have elapsed. The population is updated using the selection operator as defined in Step 4, and the failure count is reset to 0.

Step 7 Shrinking of the adaption window: The size of the adaption window is reduced, as decided by stepCount. The shrinkAW term gives the value by which to shrink the adaption window, representing how harsh the resulting environment becomes.

Step 8 Check termination condition: If the number of function evaluations is larger than the predefined maximum number of evaluations, the algorithm terminates; otherwise, go to Step 3 for the next generation. The pseudo code of the proposed method is shown in Algorithms 1, 2, 3 and 4, respectively.
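Read together, Steps 1 to 8 amount to a single loop. A minimal Python sketch is given below for concreteness (the paper's own experiments were run in MATLAB); the toy fitness function, the clamping-style boundary handling, the stand-in PSO step and the 0.9 shrink factor are all simplifying assumptions made for illustration, not the authors' exact implementation.

```python
import random

# Toy objective, maximised as in the pseudo code's convention
# (BBOB functions are minimised in practice, so signs would flip).
def fitness(x):
    return -sum(v * v for v in x)  # optimum 0 at the origin

def adapt(individual, aw, xmin, xmax):
    # Adaption operator: resample each parameter inside the adaption
    # window around its current value (boundary handling simplified
    # to clamping rather than the interval shifting of Algorithm 2).
    out = []
    for v in individual:
        lo = max(v - aw / 2.0, xmin)
        hi = min(v + aw / 2.0, xmax)
        out.append(random.uniform(lo, hi))
    return out

def pso_like_step(pop):
    # Crude stand-in for the PSO fallback (Step 6): pull every
    # individual a random fraction of the way towards the current best.
    gbest = max(pop, key=fitness)
    return [[v + random.random() * (g - v) for v, g in zip(x, gbest)]
            for x in pop]

def eamdpso(dim=2, pop_size=20, max_gen=200, failure_thresh=10, seed=1):
    random.seed(seed)
    xmin, xmax = -5.0, 5.0
    aw = xmax - xmin                         # Step 1: initial window size
    win_size = round(dim / 1.6) + 10         # equation (1)
    pop = [[random.uniform(xmin, xmax) for _ in range(dim)]
           for _ in range(pop_size)]
    fbest = float("-inf")
    failure_count = step_count = 0
    for _ in range(max_gen):
        apop = [adapt(x, aw, xmin, xmax) for x in pop]   # Step 3
        if failure_count > failure_thresh:               # Step 6
            apop = pso_like_step(pop)
            failure_count = 0
        # Step 4: merge old and new populations, keep the best pop_size
        pop = sorted(pop + apop, key=fitness, reverse=True)[:pop_size]
        new_best = fitness(pop[0])
        if new_best > fbest:                             # Step 5
            fbest, failure_count = new_best, 0
        else:
            failure_count += 1
        if step_count == win_size:                       # Step 7: shrink window
            aw *= 0.9                                    # assumed shrink factor
            step_count = 0
        else:
            step_count += 1
    return fbest
```

Calling `eamdpso()` returns the best (maximised) fitness found; with the toy sphere-style objective, the elitist selection drives it towards the optimum 0.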
Algorithm 1 EAMDPSO

begin
1 POP0 = random(xmin, xmax) // random initial population
2 aw = xmax − xmin // initial size of adaption window
3 itr ← 0 // number of generations
4 failureCount ← 0
5 stepCount ← 0
6 while (itr < maxGen && fbest ≤ ftarget)
7   APOPitr = Adaption(POPitr, aw) // adaption operator of EAMD
8   if failureCount > failureThresh
9     APOPitr = PSO(POPitr) // using PSO
10  end if
11  POPitr+1 ← Selection(POPitr, APOPitr) // selecting best individuals
12  if fbestitrPrv < fbestitrUpd
13    fbestitr ← fbestitrUpd
14    failureCount ← 0
15  else
16    failureCount ← failureCount + 1
17  end if
18  if stepCount = Win_size
19    aw ← shrinkAW
20    stepCount ← 0
21  else
22    stepCount ← stepCount + 1
23  end if
24  itr ← itr + 1
25 end while
end

Algorithm 2 Adaption

Adaption(POP, aw)
begin
1 for i = 1 to m
2   for j = 1 to n
3     if xij + aw/2 ≥ xmaxij
4       adaptxij = random(xij − aw/2, xmaxij)
5     else if xij − aw/2 ≤ xminij
6       adaptxij = random(xminij, xij + aw/2)
7     else
8       adaptxij = random(xij − aw/2, xij + aw/2)
9     end if
10  end for
11 end for
end

Algorithm 3 Selection

Selection(POP, APOP)
begin
1 temp_POP ← merge(POP, APOP)
2 sort_POP ← sort(temp_POP, ascending)
3 POP ← selection of top n individuals from sort_POP
end

Algorithm 4 PSO

begin
1 Fit_val ← Cal_Fit(POP, Fun) // calculate the fitness of each particle
2 while (!TerCond) do
3   for each particle
4     if Fit_val > pBest
5       pBest ← Fit_val
6     end if
7     if pBest > gBest
8       gBest ← pBest
9     end if
10    Fit_val ← Cal_Fit(POP, Fun)
11  end for
12  for each particle
13    calculate particle velocity V
14    update particle position using gBest and velocity V
15  end for
16 end while
end

Table 1 shows the notations used in the algorithms and their descriptions.

4.3 Experimental setup

The black-box optimisation benchmarking (BBOB 2013) noiseless test-bed of the Comparing Continuous Optimisers (COCO) platform (http://coco.gforge.inria.fr/doku.php) is used to conduct this experiment. The proposed optimisation algorithm, EAMDPSO, is tested over 24 benchmark functions with different dimensions: 2D, 3D, 5D, 10D, 20D and 40D. The interval [−5, 5] is taken as the search domain; the lower and upper bounds of the search domain are −5 and +5. The initial population POP0 is randomly taken from the search space [−5, 5]. The size of the adaption window is initially taken as max − min, i.e., 5 − (−5) = 10, and it reduces after a specified number of generations. The size of the window is decided by a window counter value, calculated as round(dimension/2.5) + 15. The size of the population is 30 × dimension, and the maximum function evaluation is calculated by equation (2), i.e.,
Max function evaluation = (500 × dimension) + 2,000. (2)

Table 1 Notations used in the algorithms

Notation       Description
POP0           Initial population
aw             Adaption window
maxGen         Maximum number of generations
ftarget        Function optimal value
xmin           Minimum value of a parameter
xmax           Maximum value of a parameter
APOP           Adapted population
failureCount   Counts the number of failures in finding an improved value of fbest
stepCount      Counts the steps completed towards the window size value
failureThresh  Allowed maximum number of generations to check the improvement in fbest
shrinkAW       Shrinking value of the adaption window
feval          Number of function evaluations
meval          Predefined maximum evaluation number
Feval          Total number of function evaluations
Meval          Maximum function evaluation
X              Individuals
temp_POP       Temporary population
sort_POP       Sorted population
P_temp         Merge of the old and adapted populations
P_sort         Individuals sorted in ascending order
PS             Population size
StrLen         Length of the string
TerCond        Termination condition
Fun            Fitness function
pbest          Personal best position of a particle
gbest          Global best position of a particle
Cal_Fit        Calculates the fitness of a particle
Fit_val        Fitness value
fbest          Best fitness value of an individual

According to equation (2), the function evaluation is estimated as 38,000 for 2D, 90,000 for 3D, 180,000 for 5D, 485,000 for 10D, 1,600,000 for 20D and 6,000,000 for 40D. The total function evaluation is estimated by equation (3):

Total function evaluation = Max function evaluation × Function evaluations in one iteration. (3)

The experiments have been performed on an Intel Core i7 3.4 GHz 64-bit machine with 2 GB RAM, Microsoft Windows 7 and MATLAB 2013a. The CPU time per function evaluation was 1.0, 1.1, 1.1, 1.3, 1.5 and 1.8 × 10−5 seconds for dimensions 2-D, 3-D, 5-D, 10-D, 20-D and 40-D, respectively.

4.3.1 Benchmarking of EAMDPSO

The performance of the proposed algorithm is checked over 24 noise-free single-objective benchmark functions (Hansen et al., 2009, 2012; Finck et al., 2009). The proposed algorithm is compared with other state-of-the-art algorithms on these functions in terms of obtaining the optimal solution and a valid convergence rate. The benchmark functions shown in Table 2 are scalable with dimension (D), and 15 instances are calculated at a time for each benchmark function. The benchmark functions are grouped into the five groups listed in Table 2: separable functions (f1–f5), low or moderate conditioning functions (f6–f9), unimodal functions with high conditioning (f10–f14), multimodal functions with adequate global structure (f15–f19) and multimodal functions with weak global structure (f20–f24). It is observed that most of the benchmark functions have no exact value for their optimal solution; the optimal value of these functions is shifted randomly in f-space (Finck et al., 2010).

Table 2 Benchmark functions used for the experimental work

Group                                      f#   Function name
Separable                                  f1   Sphere
                                           f2   Ellipsoidal
                                           f3   Rastrigin
                                           f4   Büche-Rastrigin
                                           f5   Linear slope
Low or moderate conditioning               f6   Attractive sector
                                           f7   Step ellipsoidal
                                           f8   Rosenbrock, original
                                           f9   Rosenbrock, rotated
Unimodal with high conditioning            f10  Ellipsoidal
                                           f11  Discus
                                           f12  Bent cigar
                                           f13  Sharp ridge
                                           f14  Different powers
Multimodal with adequate global structure  f15  Rastrigin
                                           f16  Weierstrass
                                           f17  Schaffers F7
                                           f18  Schaffers F7, moderately ill-conditioned
                                           f19  Composite Griewank-Rosenbrock F8F2
Multimodal with weak global structure      f20  Schwefel
                                           f21  Gallagher's Gaussian 101-me peaks
                                           f22  Gallagher's Gaussian 21-hi peaks
                                           f23  Katsuura
                                           f24  Lunacek bi-Rastrigin
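As a point of reference, two of the separable functions in Table 2 have familiar closed forms, sketched below in Python. These are the raw, unshifted definitions; as noted above, the BBOB suite randomly shifts the optima in f-space, so the versions here are illustrative only.

```python
import math

def sphere(x):
    # f1: sum of squares; global minimum 0 at the origin (unshifted form)
    return sum(v * v for v in x)

def rastrigin(x):
    # f3: highly multimodal; global minimum 0 at the origin (unshifted form)
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v)
                             for v in x)
```

Both definitions are scalable with the length of the input vector, which is what allows the same function to be benchmarked at every dimension from 2D to 40D.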
5 Experimental results and analysis

The performance of EAMDPSO is checked against twelve distinct state-of-the-art algorithms over twenty-four noise-free benchmark functions, as shown in Figure 2 and Table 3. Two different kinds of benchmark functions have been used in the experiment, unimodal and multimodal, as shown in Table 2. The unimodal functions are categorised into three parts: separable functions (separable fcts), moderate functions (moderate fcts) and ill-conditioned functions (ill-conditioned fcts), each part containing four or five functions. The multimodal functions are categorised into two sections, multimodal fcts and weakly structured multimodal fcts, each of which contains five functions. The 'all functions' category, which appears for both unimodal and multimodal functions, is used to verify the overall performance of EAMDPSO.

For dimension 2, EAMDPSO performs better than most of the algorithms on the COCO platform on the BBOB noiseless test-bed; it consistently gives good results for the function ranges f1–f5, f6–f9, f10–f14, f15–f19 and f20–f24, and its overall performance for the function range f1–f24 is very impressive compared with the other state-of-the-art algorithms, except best 2009. For dimension 3, the proposed algorithm shows good results on functions f6–f9, f10–f14 and f15–f19, securing 3rd position in the graph, and it secures 2nd position for the function ranges f1–f5 and f1–f24 (all functions). EAMDPSO is thus very close in performance to best 2009 for dimensions 2 and 3. The proposed algorithm obtains first position and dominates the best 2009 value for dimension 5 on the function ranges f20–f24 and f1–f24, while on the other functions it gives satisfactory results. For dimension 10, EAMDPSO secures 3rd position on functions f6–f9, f10–f14, f15–f19 and f1–f24, while it takes 4th place on f1–f5 and 5th place on f20–f24. For 20D, it gives less favourable results for some function ranges, such as f1–f5 and f20–f24; for the other function ranges, f6–f9, f10–f14, f15–f19 and f1–f24, it is good enough to show its superiority over most of the algorithms.

Table 3 Dimension-wise position of EAMDPSO among 11 state-of-the-art algorithms over BBOB benchmark functions

Function range      2D  3D  5D  10D  20D  40D
f1–f5 (Separ)        4   2   4    5    9    2
f6–f9 (Lcond)        5   3   3    3    3    2
f10–f14 (Hcond)      3   2   3    3    4    2
f15–f19 (Multi)      4   3   3    4    4    2
f20–f24 (Multi2)     2   2   1    5    8    5
f1–f24 (All)         2   2   1    3    2    2

Notes: The dimension-wise performance of EAMDPSO is shown in Table 3. The performance of EAMDPSO is found to be comparably better than that of the other algorithms; in dimension 5 it obtains first position, compared with best-2009 and the HCMA algorithm on the COCO framework, in the function ranges f20–f24 and f1–f24.

Figure 2 Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension for 50 targets in 10^[−8…2] for all functions in 2-D, 3-D, 5-D, 10-D, 20-D and 40-D (see online version for colours)

Note: The best 2009 line corresponds to the best ERT observed during BBOB 2009 for each single target.
Figure 2 Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension for 50 targets in 10^[−8…2] for all functions in 2-D, 3-D, 5-D, 10-D, 20-D and 40-D (continued) (see online version for colours)

Note: The best 2009 line corresponds to the best ERT observed during BBOB 2009 for each single target.

6 Conclusions

The experimental results show that EAMDPSO has proved its strength, with a valid convergence rate and a smaller number of function evaluations. It overcomes the problem with EAMD: the searching technique of EAMD has been improved by applying PSO whenever EAMD finds itself unable to produce new good solutions and unable to maintain diversity. The proposed EAMDPSO is capable of maintaining the balance between exploration and exploitation of the search space, which helps to improve the performance in higher dimensions. Table 3 shows that EAMDPSO dominates all recent algorithms in the function ranges f20–f24 and f1–f24 on dimension 5. Its performance is very good on dimension 40 for all 24 benchmark functions. In separable functions (f1–f5), it shows good results in dimensions 2, 3, 5 and 40. In low or moderate conditioning functions (f6–f9), its performance does not change in dimensions 3, 5, 10 and 20, where it holds third position; in 40D it secures second position and in 2D, fifth position. In unimodal functions with high conditioning (f10–f14), it gives comparatively better results than the other algorithms in all dimensions. For multimodal functions with adequate global structure (f15–f19), multimodal functions with weak global structure (f20–f24) and all functions (f1–f24), its performance is very good except in 20D for f20–f24. The overall result obtained by EAMDPSO is very impressive and demonstrates its significance in solving unimodal and multimodal problems, improving the performance of the algorithm in higher dimensions.

References

Abd-El-Wahed, W.F., Mousa, A.A. and El-Shorbagy, M.A. (2011) 'Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems', Journal of Computational and Applied Mathematics, Vol. 235, No. 5, pp.1446–1453.

Alba, E. and Dorronsoro, B. (2009) Cellular Genetic Algorithms, Vol. 42, Springer.

Auger, A. and Ros, R. (2009) 'BBO-benchmarking of pure random search for noiseless function testbed', BBOB Workshop Paper.

Cantú-Paz, E. and Goldberg, D.E. (1999) 'Parallel genetic algorithms with distributed panmictic populations'.

Comparing Continuous Optimisers [online] http://coco.gforge.inria.fr/doku.php.

Eberhart, R.C. and Kennedy, J. (1995) 'A new optimizer using particle swarm theory', Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Vol. 1.

El-Abd, M. and Kamel, M.S. (2009) 'Black-box optimization benchmarking for noiseless function testbed using an EDA and PSO hybrid', Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, ACM.

Fan, S-K.S., Liang, Y-C. and Zahara, E. (2010) 'A genetic algorithm and a particle swarm optimizer hybridized with Nelder-Mead simplex search', Computers & Industrial Engineering, Vol. 50, No. 4, pp.401–425.

Finck, S., Hansen, N., Ros, R. and Auger, A. (2009) Real-parameter Black-box Optimization Benchmarking 2009: Presentation of the Noiseless Functions, Technical Report 2009/20, Research Center PPE, updated February 2010.

García-Nieto, J., Alba, E. and Apolloni, J. (2009) 'Noiseless functions black-box optimization: evaluation of a hybrid particle swarm with differential operators', Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, ACM.

Hansen, N. et al. (2009) Real-parameter Black-box Optimization Benchmarking: Noiseless Functions Definitions, Technical Report RR-6829, INRIA, updated February 2010.

Hansen, N., Auger, A., Finck, S. and Ros, R. (2012) Real-parameter Black-box Optimization Benchmarking 2012: Experimental Setup, Technical report, INRIA.

Mishra, K.K., Tiwari, S. and Misra, A.K. (2014) 'Improved environmental adaption method and its application in test case generation', Journal of Intelligent and Fuzzy Systems, Vol. 27, No. 5, pp.2305–2317.

Sawyerr, B.A., Adewumi, A.O. and Ali, M.M.M. (2013) 'Benchmarking projection-based real coded genetic algorithm on BBOB-2013 noiseless function testbed', Proceedings of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation Conference, ACM, pp.1193–1200.

Sawyerr, B.A., Ali, M.M. and Adewumi, A.O. (2011) 'A comparative study of some real-coded genetic algorithms
Holtschulte, N.J. and Moses, M. (2013) ‘Benchmarking cellular for unconstrained global optimization’, Optimization Methods
genetic algorithms on the BBOB noiseless testbed’, and Software, Vol. 26, No. 2, pp.945–970.
in Proceeding of the Fifteenth Annual Conference Companion Shi, X.H., Wan, L.M., Lee, H.P., Yang, X.W., Wang, L.M. and
on Genetic and Evolutionary Computation Conference Liang, Y.C. (2003) ‘An improved genetic algorithm with
Companion, ACM, pp.1201–1208. variable population-size and a PSO-GA based hybrid
Kao, Y-T. and Zahara, E. (2008) A hybrid genetic algorithm and evolutionary algorithm’, in Machine Learning and
particle swarm optimization for multimodal functions’, Cybernetics, International Conference on, IEEE, Vol. 3,
Applied Soft Computing, Vol. 8, No. 2, pp.849–857. pp.1735–1740.
Loshchilov, I., Schoenauer, M. and Sebag, M. (2013) Tran, T-D., Brockhoff, D. and Derbel, B. (2013)
‘Bi-population CMA-ES algorithms with surrogate models ‘Multiobjectivization with NSGA-II on the noiseless BBOB
and line searches’, Proceedings of the 15th Annual testbed’, in Proceeding of the Fifteenth Annual Conference
Conference Companion on Genetic and Evolutionary Companion on Genetic and Evolutionary Computation
Computation, ACM. Conference Companion, ACM, pp.1217–1224.
Mirjalili, S. and Hashim, S.Z.M. (2010) ‘A new hybrid Tripathi, A., Saxena, N., Mishra, K.K. and Misra, A.K. (2015)
PSOGSA algorithm for function optimization’, Computer and ‘An environmental adaption method with real parameter
Information Application (ICCIA), International Conference. encoding for dynamic environment’, Journal of Intelligent &
IEEE, pp.374–377. Fuzzy Systems, 27 August, Vol. 29, No. 5, pp.2003–2015.
Mishra, K.K., Tiwari, S. and Misra, A.K. (2003) ‘A bio inspired Tripathi, A. et al. (2014) ‘Environmental adaption method for
algorithm for solving optimization problems’, Computer and dynamic environment’, Systems, Man and Cybernetics
Communication Technology (ICCCT), 2nd International (SMC), IEEE International Conference.
Conference on, pp.653–659. Wund, M.A. (2012) ‘Assessing the impacts of phenotypic
plasticity on evolution’, Integrative and Comparative
Biology, Vol. 52, No. 1, pp.5–15.

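The comparisons above rank algorithms by ERT. As a point of reference, the BBOB/COCO convention defines the expected running time for a given target precision as the total number of function evaluations summed over all trials (unsuccessful trials contribute their full budget) divided by the number of trials that reached the target. A minimal sketch of this computation, using made-up run data rather than any results from this paper:

```python
def expected_running_time(evals, successes):
    """ERT (BBOB convention): total function evaluations across all
    trials divided by the number of trials that hit the target."""
    if not any(successes):
        return float("inf")  # target never reached: ERT is undefined/infinite
    return sum(evals) / sum(successes)

# Hypothetical data: 5 trials; unsuccessful trials count their full budget.
evals = [1200, 950, 5000, 1430, 5000]         # evaluations used per trial
successes = [True, True, False, True, False]  # did the trial hit the target?
print(expected_running_time(evals, successes))  # → 4526.666666666667
```

With three of the five hypothetical trials successful, the 13,580 evaluations spent overall yield an ERT of roughly 4,527 evaluations per success; lower ERT at a given target means a cheaper, more reliable optimiser.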