A Nature Inspired Hybrid Optimisation Algorithm For Dynamic Environment With Real Parameter Encoding
Ashish Tripathi*
CSED, SPMIT,
Allahabad, India
Email: [email protected]
*Corresponding author
A.K. Misra
CSED, SPMIT,
Allahabad, India
Email: [email protected]
Abstract: In recent years, many nature inspired algorithms have been proposed that are widely applicable to different optimisation problems. Real-world optimisation problems have become more complex and dynamic in nature, and a single optimisation algorithm is often not good enough to solve such problems on its own. Hybridisation of two or more algorithms can therefore be a fruitful way of handling the limitations of the individual algorithms. In this paper, a hybrid optimisation algorithm is established that combines the features of the environmental adaption method for dynamic (EAMD) environments and particle swarm optimisation (PSO). The algorithm is designed to optimise both unimodal and multimodal problems, and its performance is checked over a group of 24 benchmark functions provided by black box optimisation benchmarking (BBOB-2013). The results show the superiority of this hybrid algorithm over other well-established state-of-the-art algorithms.
Keywords: adaptive learning; environmental adaption method for dynamic; EAMD; hybrid algorithm; environmental adaption method; EAM; optimisation; PSO.
Reference to this paper should be made as follows: Tripathi, A., Saxena, N., Mishra, K.K. and
Misra, A.K. (2017) ‘A nature inspired hybrid optimisation algorithm for dynamic environment
with real parameter encoding’, Int. J. Bio-Inspired Computation, Vol. 10, No. 1, pp.24–32.
Nitin Saxena received his MTech in Information Security from MNNIT Allahabad, India in
2009. His current research interests include evolutionary algorithms, particle swarm optimisation,
and digital image watermarking.
K.K. Mishra received his MTech in Computer Science and Engineering from UP Technical
University and PhD in Computer Science and Engineering from MNNIT Allahabad, India in
2013. He is an Assistant Professor at Computer Science and Engineering Department of MNNIT
Allahabad. His research interests include nature inspired optimisation algorithms and their
applications in software cost estimation and test case generation.
A.K. Misra received his BTech and MTech in Electrical Engineering from University of
Roorkee, Roorkee, India in 1971 and MLNREC Allahabad, India in 1976, respectively. He
received his PhD in CSE from MNNIT Allahabad, India in 1989. His research interests include
software engineering, artificial intelligence and nature inspired optimisation algorithms.
A parallel genetic algorithm with distributed panmictic populations was developed by Cant'u-Paz and Goldberg (1999). They checked the functioning of the parallel genetic algorithm on a physically distributed population which behaves like a single panmictic population.

Mirjalili and Hashim (2010) have proposed a new hybrid algorithm, PSOGSA, which is a combination of PSO and the gravitational search algorithm (GSA).

Kao and Zahara (2008) have established a hybrid optimisation algorithm combining the genetic algorithm (GA) and PSO for global optimisation of multimodal functions. Abd-El-Wahed et al. (2011) incorporated the strengths of GA and PSO, applying the technique of evolving individuals by GA and the self-improvement technique of PSO. Fan et al. (2010) have created a hybrid technique to find the global optimal solution for nonlinear continuous variable functions.

Shi et al. (2003) have proposed a novel PSO- and GA-based hybrid algorithm (PGBHEA). This algorithm executes two systems concurrently, selects a few individuals on the basis of having larger fitness from both systems separately, and exchanges them after an assigned number of iterations.

Hybrid versions of PSO were proposed by Garca-Nieto et al. (2009) (DEPSO), El-Abd and Kamel (2009) (EDA-PSO), and Loshchilov et al. (2013) (HCMA). They have tested the performance of their proposed algorithms over the BBOB noiseless test-bed on different dimensions such as 2, 3, 5, 10, 20, and 40. The convergence rate of these algorithms is good and requires only a small number of function evaluations.

3 Background details

3.1 EAMD environment

EAMD is a population-based, nature inspired randomised algorithm which works on real valued parameters. In EAMD, the environment is created by introducing a term 'environmental window' that actually maps the real environment. The environmental window is used to represent an environment for its inhabitants to survive in. The nature of the environment depends on the size of the window: if the window size is large, species can easily survive, and as the window size gradually decreases, the environment becomes tough for its species to sustain. Over a few generations the environment becomes tough and, due to environmental constraints, the window size decreases gradually to make the environment dynamic. So, as far as survival is concerned, individuals modify their phenotypic structure as per environmental changes to gain better fitness over time. If the individuals are unable to adapt to the changes, they no longer survive and are eliminated from the environment.

EAMD has two operators, namely adaption and selection. Adaption is applied first on the set of solutions to improve their fitness and form a new intermediate set of improved solutions. Adaption improves the fitness of solutions in the range provided by the environmental window (adaption window). Selection is applied after merging the initial and intermediate improved solutions. The solutions are then sorted and the best n solutions are selected for the next generation. In the successive generations, the whole process continues till either the optimal solution is found or the maximum number of fitness evaluations has been reached.

3.2 Particle swarm optimisation

PSO is a population-based optimisation technique proposed by Eberhart and Kennedy (1995). Inspired by the social behaviour of bird flocking and fish schooling, PSO uses swarm intelligence to find the optimal solution. Further, PSO uses random initialisation of particles in an n-dimensional search space (Mirjalili and Hashim, 2010). In the beginning, particles do not know the exact location where the optimal solution can be obtained, so they start searching and follow the particle which is nearest to the optimal position. For that, each particle adjusts and updates its position according to the two best positions. The first is the personal best (pbest) position, which has been attained so far by the particle itself. The second is the global best (gbest) position obtained so far by any particle in the search space. PSO finds the global optimal solution in the entire search space by modifying the particles' positions using the personal best and global best (Hansen et al., 2012).

4 Proposed method

4.1 Proposed work

EAMDPSO is a population-based hybrid optimisation algorithm which operates over real valued parameters in a dynamic environment. In EAMDPSO, the population is represented as POP (X1, X2, X3, ..., Xm), where X1, X2, ..., Xm are m individuals in the population, each maintaining a position vector characterised by n parameters. Xi (xi1, xi2, ..., xin) is the position vector of the ith individual and xij represents the jth parameter of the ith individual. Each parameter is allowed to take values from the search space; for example, (xminij, xmaxij) is the search space for the jth parameter of the ith individual. The adaption window (environmental window), abbreviated as 'aw', symbolises the environment, and dynamicity is simulated by changing the size of aw. The initial size of aw for each parameter is set equal to the respective parameter's search space size. Each parameter of an individual adapts only within the size of aw. The adaptive learning technique proposed by Wund (2012) helps individuals learn to improve their fitness value in every generation. As a result, the best fitness value fbest of the population in EAMD changes in the successive generations. Sometimes individuals do not improve their fitness value due to environmental constraints (e.g., higher dimension), and this finally affects the population's fbest. As the dimension increases, the convergence rate slows down and, as a result, the probability of all solutions adapting towards the optimal solution is reduced.
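The pbest/gbest update described in Section 3.2, which the proposed hybrid later falls back on, can be sketched as follows. This is a minimal sketch of standard PSO rather than the paper's exact implementation; the inertia weight w and acceleration coefficients c1, c2 are illustrative values not taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, fitness,
             w=0.7, c1=1.5, c2=1.5):
    """One PSO generation: each particle is pulled towards its personal
    best (pbest) and the swarm's global best (gbest), then the two
    'best' memories are refreshed (minimisation is assumed)."""
    for i, x in enumerate(positions):
        for j in range(len(x)):
            r1, r2 = random.random(), random.random()
            velocities[i][j] = (w * velocities[i][j]
                                + c1 * r1 * (pbest[i][j] - x[j])
                                + c2 * r2 * (gbest[j] - x[j]))
            x[j] += velocities[i][j]
        if fitness(x) < fitness(pbest[i]):
            pbest[i] = list(x)      # update personal best memory
        if fitness(pbest[i]) < fitness(gbest):
            gbest[:] = pbest[i]     # update global best in place
    return positions, velocities, pbest, gbest
```

On a simple sphere function with a fixed seed, repeatedly calling pso_step drives gbest steadily towards the optimum, since both best memories are updated only when fitness improves.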
To handle this situation, a hybridisation of EAMD with PSO is proposed in this paper. The idea behind this hybrid algorithm is that if the fbest value of the individuals in EAMD does not improve for a specified number of generations, the position vectors of the individuals are handed over to PSO. The size of the search space is the same as the size taken at the initial stage in EAMD, i.e., the whole search space taken at the beginning of the search activity for an individual to survive. PSO guides these solutions to update their fitness values in the direction of gbest and their pbest. This direction of assignment in PSO is different from EAMD, where each individual searches for new fitness around its previous position. PSO updates these solutions for a predefined number of generations and helps in avoiding sub-optimal solutions. These newly updated solutions are then transferred back to EAMD to resume processing. This whole process is repeated until the optimal or a near optimal solution is obtained. The flowchart of EAMDPSO is shown in Figure 1.

Figure 1 Flowchart for EAMDPSO algorithm

4.2 EAMDPSO procedure

Step 1 Initialisation: All individuals of population POP are selected randomly within the n-dimensional search space and represented by POP0. The fitness of every individual is calculated on the 24 BBOB-2013 noiseless fitness functions (benchmark functions) shown in Table 2. Initially the best fitness value fbest is assigned to infinity. A counter named failureCount is used to count the number of failures in finding an improved value of fbest; initially failureCount is set to 0. The adaption window represents the dynamic environment and is initially very gentle for all individuals to survive in. After some specific number of generations, the size of the adaption window shrinks. A term Win_size is used to determine the rate of shrinking of the adaption window. It is estimated by equation (1).

Win_size = round(dimension / 1.6) + 10    (1)

The MATLAB function 'round' gives the nearest integer value. The values 1.6 and 10 are used to tune Win_size.

Step 2 Fitness computation: The fitness value of each individual is computed.

Step 3 Population breeding: Each individual of the old population is updated using the Adaption operator to produce improved individuals of the new population.

Step 4 Population updation: The population is updated using the Selection operator. After merging the old and new populations, the best N individuals based on the function's fitness value are selected as the updated population.

Step 5 Measuring improvement in fbest: The fitness value of the best individual from the updated population is compared with the previous generation's fbest. If the updated population is able to improve the function's best fitness, then fbest is updated and failureCount is reset to 0; otherwise failureCount is increased by 1.

Step 6 Population updation using PSO: If failureCount reaches a predefined threshold, the position vector of each individual from the old population is updated using the PSO computational method to produce a possibly improved new population. PSO is allotted a maximum timespan (number of iterations) equal to the number of generations the individuals have elapsed. The population is updated using the Selection operator as defined in Step 4, and failureCount is reset to 0.

Step 7 Shrinking of the adaption window: The size of the adaption window is reduced, as decided by stepCount. The shrinkAw gives the value by which to shrink the adaption window, representing how much harsher the resultant environment becomes.

Step 8 Check termination condition: If the number of function evaluations is larger than the predefined maximum number of evaluations, the algorithm terminates. Otherwise, go to Step 3 for the next generation. The pseudo code of the proposed method is shown in Algorithms 1, 2, 3 and 4, respectively.
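Steps 1 to 8 above can be condensed into the following skeleton. This is a minimal sketch under stated assumptions, not the paper's Algorithms 1 to 4: the adaption operator here is a simple uniform perturbation scaled by the adaption window, the selection operator is the merge-and-truncate rule from Step 4, the PSO fall-back is a plain pbest/gbest update, and the failure threshold, the 0.1 perturbation factor and the 10% shrink factor are illustrative choices.

```python
import math
import random

def win_size(dimension):
    # Equation (1). MATLAB's round() rounds halves away from zero,
    # so floor(x + 0.5) mimics it for positive x.
    return math.floor(dimension / 1.6 + 0.5) + 10

def pso_improve(pop, fitness, iters, w=0.7, c1=1.5, c2=1.5):
    """Step 6 fall-back: refine the population with plain PSO."""
    vel = [[0.0] * len(p) for p in pop]
    pbest = [list(p) for p in pop]
    gbest = list(min(pbest, key=fitness))
    for _ in range(iters):
        for i, x in enumerate(pop):
            for j in range(len(x)):
                r1, r2 = random.random(), random.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - x[j])
                             + c2 * r2 * (gbest[j] - x[j]))
                x[j] += vel[i][j]
            if fitness(x) < fitness(pbest[i]):
                pbest[i] = list(x)
        gbest = list(min(pbest, key=fitness))
    return pbest

def eamdpso(fitness, bounds, pop_size=20, max_evals=4000,
            failure_threshold=10, shrink=0.9):
    """Skeleton of Steps 1-8; the adaption operator is a stand-in."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]                      # Step 1
    aw = [hi - lo for lo, hi in bounds]   # aw starts at search-space size
    fbest, failure_count, evals, gen = math.inf, 0, 0, 0

    def adaption(ind):                                    # Step 3 stand-in
        return [x + random.uniform(-a, a) * 0.1 for x, a in zip(ind, aw)]

    while evals < max_evals:                              # Step 8
        gen += 1
        new_pop = [adaption(ind) for ind in pop]          # Step 3
        merged = sorted(pop + new_pop, key=fitness)       # Step 4
        evals += len(merged)
        pop = merged[:pop_size]
        best = fitness(pop[0])
        if best < fbest:                                  # Step 5
            fbest, failure_count = best, 0
        else:
            failure_count += 1
        if failure_count >= failure_threshold:            # Step 6
            pop = sorted(pso_improve(pop, fitness, gen), key=fitness)
            evals += pop_size * gen
            failure_count = 0
        if gen % win_size(dim) == 0:                      # Step 7: shrink aw
            aw = [a * shrink for a in aw]
    return pop[0], fbest
```

Because selection always merges the old population back in, the best individual never worsens between generations, and the PSO fall-back only replaces the population with its (at least as good) personal bests.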
[Algorithms 1–4: pseudo code listings of the EAMDPSO method]
Table 3 Dimension-wise position of EAMDPSO among state-of-the-art algorithms over BBOB benchmark functions

Function range       2D   3D   5D   10D   20D   40D
f1–f5 (Separ)         4    2    4    5     9     2
f6–f9 (Lcond)         5    3    3    3     3     2
f10–f14 (Hcond)       3    2    3    3     4     2
f15–f19 (Multi)       4    3    3    4     4     2
f20–f24 (Multi2)      2    2    1    5     8     5
f1–f24 (All)          2    2    1    3     2     2

Notes: The dimension-wise performance of EAMDPSO is shown in Table 3. It is found that the performance of EAMDPSO is comparably better than that of other algorithms. In dimension 5, it has obtained first position as compared to the best-2009 and HCMA algorithms on the COCO framework in the function ranges f20–f24 and f1–f24.

… 4th place in f1–f5 functions and 5th place in f20–f24 functions. For 20D, it gives less feasible results for some functions like f1–f5 and f20–f24. For the other function ranges, like f6–f9, f10–f14, f15–f19 and f1–f24, it is good enough to deliver its superiority over most of the algorithms.

Figure 2 Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension for 50 targets in 10[−8…2] for all functions in 2-D, 3-D, 5-D, 10-D, 20-D and 40-D (see online version for colours)
Note: The best 2009 line corresponds to the best ERT observed during BBOB 2009 for each single target.

6 Conclusions

The experimental results show that EAMDPSO has proved its strength, with a valid convergence rate and a smaller number of function evaluations. It overcomes the problems with EAMD. The searching technique of EAMD has been improved by applying PSO when EAMD finds itself unable to produce new good solutions and unable to maintain diversity. The proposed EAMDPSO is capable enough to maintain the balance between exploration and exploitation of the search space, which helps to improve performance in higher dimensions. Table 3 shows that EAMDPSO dominates all recent algorithms in the function ranges f20–f24 and f1–f24 on dimension 5. Its performance is very good on dimension 40 for all 24 benchmark functions. In separable functions (f1–f5), it shows good results in dimensions 2, 3, 5 and 40. In low or moderate conditioning functions (f6–f9), its performance is unchanged in dimensions 3, 5, 10 and 20, where it holds third position; in 40D it has secured second position and in 2D, fifth position. In unimodal functions with high conditioning (f10–f14), it gives comparatively better results than the other algorithms in all dimensions. For multimodal functions with adequate global structure (f15–f19), multimodal functions with weak global structure (f20–f24) and all functions (f1–f24), its performance is very good except in 20D of f20–f24. The overall result obtained by EAMDPSO is very impressive, expresses its significance in solving unimodal and multimodal problems, and increases the performance of the algorithm in higher dimensions.

References

Abd-El-Wahed, W.F., Mousa, A.A. and El-Shorbagy, M.A. (2011) 'Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems', Journal of Computational and Applied Mathematics, Vol. 235, No. 5, pp.1446–1453.

Alba, E. and Dorronsoro, B. (2009) Cellular Genetic Algorithms, Springer, Vol. 42.

Auger, A. and Ros, R. (2009) 'BBO-benchmarking of pure random search for noiseless function testbed', BBOB Workshop Paper.

Cant'u-Paz, E. and Goldberg, D.E. (1999) 'Parallel genetic algorithms with distributed panmictic populations'.

Comparing Continuous Optimisers [online] http://coco.gforge.inria.fr/doku.php.

Eberhart, R.C. and Kennedy, J. (1995) 'A new optimizer using particle swarm theory', Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Vol. 1.

El-Abd, M. and Kamel, M.S. (2009) 'Black-box optimization benchmarking for noiseless function testbed using an EDA and PSO hybrid', Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, ACM.

Fan, S-K.S., Liang, Y-C. and Zahara, E. (2010) 'A genetic algorithm and a particle swarm optimizer hybridized with Nelder-Mead simplex search', Computers & Industrial Engineering, Vol. 50, No. 4, pp.401–425.

Finck, S., Hansen, N., Ros, R. and Auger, A. (2009) Real-parameter Black-box Optimization Benchmarking 2009, Presentation of the Noiseless Functions, Technical Report 2009/20, Research Center PPE, updated February 2010.
Garca-Nieto, J., Alba, E. and Apolloni, J. (2009) 'Noiseless functions black-box optimization: evaluation of a hybrid particle swarm with differential operators', Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, ACM.

Hansen, N. et al. (2009) Real-parameter Black-box Optimization Benchmarking, Noiseless Functions Definitions, Technical Report RR-6829, INRIA, updated February 2010.

Hansen, N., Auger, A., Finck, S. and Ros, R. (2012) Real-parameter Black-box Optimization Benchmarking 2012 Experimental Setup, Technical Report, INRIA.

Holtschulte, N.J. and Moses, M. (2013) 'Benchmarking cellular genetic algorithms on the BBOB noiseless testbed', Proceedings of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation, ACM, pp.1201–1208.

Kao, Y-T. and Zahara, E. (2008) 'A hybrid genetic algorithm and particle swarm optimization for multimodal functions', Applied Soft Computing, Vol. 8, No. 2, pp.849–857.

Loshchilov, I., Schoenauer, M. and Sebag, M. (2013) 'Bi-population CMA-ES algorithms with surrogate models and line searches', Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, ACM.

Mirjalili, S. and Hashim, S.Z.M. (2010) 'A new hybrid PSOGSA algorithm for function optimization', Computer and Information Application (ICCIA), International Conference, IEEE, pp.374–377.

Mishra, K.K., Tiwari, S. and Misra, A.K. (2003) 'A bio inspired algorithm for solving optimization problems', Computer and Communication Technology (ICCCT), 2nd International Conference on, pp.653–659.

Mishra, K.K., Tiwari, S. and Misra, A.K. (2014) 'Improved environmental adaption method and its application in test case generation', Journal of Intelligent and Fuzzy Systems, Vol. 27, No. 5, pp.2305–2317.

Sawyerr, B.A., Adewumi, A.O. and Ali, M.M.M. (2013) 'Benchmarking projection-based real coded genetic algorithm on BBOB-2013 noiseless function testbed', Proceedings of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation, ACM, pp.1193–1200.

Sawyerr, B.A., Ali, M.M. and Adewumi, A.O. (2011) 'A comparative study of some real-coded genetic algorithms for unconstrained global optimization', Optimization Methods and Software, Vol. 26, No. 2, pp.945–970.

Shi, X.H., Wan, L.M., Lee, H.P., Yang, X.W., Wang, L.M. and Liang, Y.C. (2003) 'An improved genetic algorithm with variable population-size and a PSO-GA based hybrid evolutionary algorithm', Machine Learning and Cybernetics, International Conference on, IEEE, Vol. 3, pp.1735–1740.

Tran, T-D., Brockhoff, D. and Derbel, B. (2013) 'Multiobjectivization with NSGA-II on the noiseless BBOB testbed', Proceedings of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation, ACM, pp.1217–1224.

Tripathi, A., Saxena, N., Mishra, K.K. and Misra, A.K. (2015) 'An environmental adaption method with real parameter encoding for dynamic environment', Journal of Intelligent & Fuzzy Systems, Vol. 29, No. 5, pp.2003–2015.

Tripathi, A. et al. (2014) 'Environmental adaption method for dynamic environment', Systems, Man and Cybernetics (SMC), IEEE International Conference.

Wund, M.A. (2012) 'Assessing the impacts of phenotypic plasticity on evolution', Integrative and Comparative Biology, Vol. 52, No. 1, pp.5–15.