
2016 5th International Conference on Informatics, Electronics and Vision (ICIEV) 803

A Comparative Study on Prominent Nature Inspired Algorithms for Function Optimization

Md. Julfikar Islam, Md. Siddiqur Rahman Tanveer and M. A. H. Akhand
Dept. of Computer Science and Engineering
Khulna University of Engineering & Technology
Khulna-9203, Bangladesh
Email: [email protected], [email protected], [email protected]

Abstract— Optimization includes finding the best available values of some objective function over a defined domain. Function optimization (FO) is a well-studied continuous optimization task whose aim is to find the best suited parameter values to obtain the optimal value of a function. A number of techniques have been investigated over the last few decades to solve FO, and recently Nature Inspired Algorithms (NIAs) have become popular for it. The objective of this study is to draw a fair comparison among prominent NIAs in solving benchmark test functions. The algorithms we selected are Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) optimization, Firefly Algorithm (FFA), Cuckoo Search Optimization (CSO), Group Search Optimization (GSO) and Grey Wolf Optimizer (GWO). Among the methods, GA is the pioneer method for optimization, PSO is the most popular in recent times and GWO is the most recently developed method. Experimental results revealed that GWO is the overall best method among the NIAs and that PSO is still promising for solving benchmark functions.

Keywords—Function Optimization, Fitness Function, Nature Inspired Algorithms.

I. INTRODUCTION

Optimization includes finding the best available values of some objective function over a defined domain (or a set of constraints), covering a variety of types of objective functions and domains [1]. In mathematics and computer science, optimization problems can be divided according to whether their variables are discrete or continuous [2]. Discrete optimization, also known as combinatorial optimization, seeks the best combination of the given variables from a finite (or possibly countably infinite) set. Continuous optimization, on the other hand, searches for the best suited parameter values within a given boundary.

Function optimization (FO) is the well-studied continuous optimization task whose aim is to find the best suited parameter values to obtain the optimal value of a function. FO can be of either minimization or maximization type: a minimization task searches for the minimum function value, whereas a maximization task searches for the maximum. Mathematically, a minimization task is defined as:

  Given F : R^n -> R,
  find x* in R^n such that F(x*) <= F(x) for all x in R^n.

And a maximization task is defined as:

  Given F : R^n -> R,
  find x* in R^n such that F(x*) >= F(x) for all x in R^n.

The domain R^n of F is referred to as the search space [1]. Each element of R^n is called a candidate solution in the search space, with x* being the optimal solution. The value n denotes the number of dimensions of the search space, and thus the number of parameters involved in the optimization problem. The function F is called the objective function [1]; it maps the search space to the function space. The function space is then mapped to the one-dimensional fitness space, providing a single fitness value for each set of parameters. A fitness function quantifies the optimality of a solution so that a particular solution may be ranked against all the other solutions. A fitness value is assigned to each solution depending on how close it is to the target or optimal value.

A number of techniques have been investigated in the last few decades to solve FO, and recently Bio-Inspired or Nature Inspired Algorithms (NIAs) have become popular for it. An NIA is a category of algorithm that imitates the way nature performs. These algorithms have been gaining much popularity in recent years because many real-world optimization problems have become increasingly large, complex and dynamic. Different NIAs have been developed at different times and tested on different test functions, so it is important to compare the algorithms on a common test bench to identify their capability. In this regard, researchers have performed comparative studies on FO covering several algorithms, including Particle Swarm Optimization (PSO) [3, 4, 5], evolutionary algorithms such as the Genetic Algorithm (GA) [3, 4], the Firefly Algorithm (FFA) [3], Artificial Bee Colony (ABC) optimization [5] and Cuckoo Search Optimization (CSO) [5].

The objective of this study is to draw a fair comparison among prominent NIAs in solving benchmark test functions. The algorithms we selected are GA, PSO, ABC, FFA, CSO, Group Search Optimization (GSO) and Grey Wolf Optimizer (GWO). Among the methods, GA is the pioneer method for optimization, PSO is the most popular in recent times and GWO is the most recently developed method. Well-known benchmark functions such as Sphere, Schwefel, Ackley and Rastrigin are selected as the test bench.

The outline of the paper is as follows: Section II briefly explains the NIAs considered in this study. Section III compares the proficiency of the methods in solving benchmark functions and hence identifies the most prominent method; the section also includes the experimental setup and analysis.

978-1-5090-1269-5/16/$31.00 ©2016 IEEE



II. PROMINENT NATURE INSPIRED ALGORITHMS

A large number of NIAs have been developed in recent years. We have considered the prominent ones to draw a comparison of their performance in function optimization. The following subsections give a brief description of the NIAs to make the paper self-contained.

A. Genetic Algorithm (GA)

GA is a search and optimization technique developed on the principle of natural selection due to Darwin [6]. GA belongs to the larger class of evolutionary algorithms (EA); it generates solutions to optimization problems using techniques inspired by natural evolution, namely inheritance, mutation, selection, and crossover [7]. The common features of the algorithm are: a population of chromosomes (i.e., solutions), selection according to fitness, crossover to produce new offspring, and random mutation of new offspring.

In GA, solutions are encoded as genes, and several GA operations [7] are performed to evolve the solutions toward better ones. Among the various encoding techniques, value encoding is commonly used for FO. The evolution process normally starts from a population of randomly generated individuals, after which the GA operations are repeated [7] through an iterative process (each pass is called a generation). In each generation, the fitness of every individual is evaluated; the fitness is usually the value derived from the objective function of the optimization problem to be solved. A selection process occurs in each generation to emphasize fitter individuals in the population, in the expectation that their offspring will have better fitness. There are different types of selection, such as tournament selection and roulette wheel selection. After selection, crossover occurs on the selected population; its main objective is to breed new solutions (i.e., offspring) from existing ones. A mutation [6, 9] process is also applied to obtain new solutions by randomly changing existing ones. Since the best chromosome might be lost while creating a new population by crossover or mutation, Elitism [6, 8] is used to keep the best solutions: it copies the best chromosome to the new offspring before crossover and mutation.

B. Particle Swarm Optimization (PSO)

PSO was proposed by Eberhart and Kennedy [10]. The algorithm follows the idea of swarm intelligence, based on the swarming habits of certain kinds of animals (such as birds and fish). The basic operations of PSO are performed while simultaneously maintaining several candidate solutions in the search space. During each iteration of the algorithm, the fitness of each candidate solution is determined by the objective function being optimized. In PSO, each particle represents a potential solution, and at every iteration each particle moves to a new position (i.e., searches a new point) based on its calculated velocity.

In PSO, each particle updates its position based on a velocity calculated by comparing the best solution of the population and its own best solution. The population of particles is distributed uniformly over the multi-dimensional search space of the optimization problem. Equation (1) calculates the velocity of a particle and (2) updates its position:

  v_i(t+1) = w * v_i(t) + c1 * r1 * (pbest_i - x_i(t)) + c2 * r2 * (gbest - x_i(t))   (1)

  x_i(t+1) = x_i(t) + v_i(t+1)   (2)

In (1), the global best location is denoted by gbest and pbest_i represents the best location ever encountered by particle i. The inertia weight w is included in (1) to avoid the swarm being trapped in a local minimum. Both c1 and c2 are learning parameters, and r1, r2 are random parameters in the range [0, 1].

C. Artificial Bee Colony (ABC)

The ABC algorithm [12] was developed by Dervis Karaboga in 2005. In the ABC algorithm, three groups of bees are considered: employed bees, onlookers and scouts. An employed bee visits its food source, while an onlooker bee waits in the dance area to decide which food source to choose. In this algorithm, the number of food sources around the hive equals the number of employed bees. An employed bee becomes a scout when its food source has been exhausted by the employed and onlooker bees; a scout bee carries out a random search.

In ABC, the position of a food source represents a possible solution to the problem and the amount of nectar of the source represents its quality (fitness). In the population, the number of employed bees (or onlooker bees) equals the number of solutions. Initially, solutions are generated randomly. An onlooker bee chooses a food source i according to the probability p_i associated with that food source, calculated by the expression in (3):

  p_i = fit_i / sum_{n=1..N} fit_n   (3)

where fit_i represents the fitness value of the ith solution, proportional to the nectar amount of the food source at position x_i, and N is the number of food sources. To produce a candidate food position from the old one, ABC uses the expression shown in (4):

  v_{i,j} = x_{i,j} + ψ_{i,j} * (x_{i,j} - x_{k,j})   (4)

In (4), k and j are chosen randomly, with j in {1, 2, ..., M} denoting the dimension; k must differ from i. ψ_{i,j} is a random number in [-1, 1]; it controls the production of neighbour food sources around x_i and represents the comparison of two food positions. A food source is abandoned if its position cannot be improved further within a predetermined number of cycles; this predetermined number is a major control parameter called the "limit". If the abandoned food source is x_i and j is in {1, 2, ..., M}, then the scout discovers a new food source to replace it, as expressed in (5):

  x_i^j = x_min^j + Rand[0, 1] * (x_max^j - x_min^j)   (5)
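The ABC steps in (3)-(5) can be sketched as follows; function and variable names are illustrative, not from the paper:

```python
import random

rng = random.Random(7)

def selection_prob(fitness):
    # Eq. (3): onlooker-bee selection probability p_i = fit_i / sum_n fit_n.
    total = sum(fitness)
    return [f / total for f in fitness]

def candidate(food, i):
    # Eq. (4): perturb one randomly chosen dimension j of source i toward or
    # away from a random partner k != i, with psi drawn uniformly from [-1, 1].
    n_dims = len(food[i])
    k = rng.choice([s for s in range(len(food)) if s != i])
    j = rng.randrange(n_dims)
    psi = rng.uniform(-1.0, 1.0)
    v = list(food[i])
    v[j] = food[i][j] + psi * (food[i][j] - food[k][j])
    return v

def scout(lo, hi, n_dims):
    # Eq. (5): replace an abandoned source with a uniformly random position.
    return [lo + rng.random() * (hi - lo) for _ in range(n_dims)]
```

A full ABC run would alternate the employed, onlooker and scout phases, keeping the greedily better of each source and its candidate.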

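Similarly, the velocity and position updates of (1) and (2) can be wrapped into a minimal PSO loop. The sketch below uses illustrative parameter values (w = 0.7, c1 = c2 = 1.5) and a sphere objective; it is not the paper's exact configuration:

```python
import random

rng = random.Random(1)

def sphere(x):
    # Benchmark objective with global minimum 0 at the origin.
    return sum(v * v for v in x)

def pso_minimize(f, n, lo, hi, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal synchronous PSO loop following eqs. (1)-(2)."""
    xs = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(particles)]
    vs = [[0.0] * n for _ in range(particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(particles):
            for j in range(n):
                r1, r2 = rng.random(), rng.random()
                # Eq. (1): inertia + cognitive pull + social pull.
                vs[i][j] = (w * vs[i][j]
                            + c1 * r1 * (pbest[i][j] - xs[i][j])
                            + c2 * r2 * (gbest[j] - xs[i][j]))
                # Eq. (2): move the particle.
                xs[i][j] += vs[i][j]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
    return gbest, gbest_f
```

With these settings the swarm contracts steadily toward the origin on the sphere function.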



D. Firefly Algorithm (FFA)

FFA was developed in 2009 by Xin-She Yang [13]. Fireflies produce short and rhythmic flashes, and for a particular species the pattern of flashes is often unique. The rhythmic flash, the rate of flashing and the amount of time between flashes form part of the signal system that brings male and female fireflies together; within a species, females respond to a male's unique pattern of flashing.

There are three idealized rules in FFA: i) all fireflies are unisex, so one firefly will be attracted to other fireflies regardless of their sex; ii) attractiveness is proportional to brightness, so the less bright one moves towards the brighter one; and iii) the brightness of a firefly is affected or determined by the landscape of the objective function. The brightness I of a firefly at a particular location X can be chosen as I(X) ∝ F(X). The attractiveness β is relative, since it is judged by the other fireflies, and therefore varies with the distance r_ij between firefly i and firefly j. A firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies and is defined by β = β0 * e^(-γ r^2). The distance between any two fireflies i and j at x_i and x_j in Cartesian form is

  r_ij = ||x_i - x_j|| = sqrt( sum_k (x_{i,k} - x_{j,k})^2 )   (6)

Here x_{i,k} is the k-th component of the spatial coordinate of the i-th firefly; in the 2-D case, r_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2). The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by (7):

  x_i = (1 - β) * x_i + β * x_j + α * (rand - 1/2)   (7)

The parameter α determines the maximum radius of the random step in (7) and β determines the step size towards the better solution.

E. Cuckoo Search Optimization (CSO)

CSO was invented by Xin-She Yang and Suash Deb in 2009 [14]. The algorithm is inspired by the brood parasitism behavior seen in some species of cuckoo, which lay their eggs in other birds' nests. When the host bird discovers this, it will either throw away the cuckoo's egg or simply abandon the nest and build a new one. Naturally, cuckoo eggs hatch slightly earlier than the host's eggs, and when the first cuckoo chick hatches, its first instinct is to evict the host eggs: it blindly propels them out of the nest.

The basic rules of CSO [14] are: firstly, each cuckoo lays one egg at a time and dumps it in a randomly chosen nest; secondly, the best nests with the best quality of eggs are carried over to the next generation; and thirdly, the number of host nests is fixed and the probability that a host bird discovers a cuckoo's egg is p_a in (0, 1). The host bird can throw away the egg or build a new nest. An egg in a nest represents a solution to the problem, and at each time step (8) is used to generate new solutions (cuckoo eggs):

  x_i(t+1) = x_i(t) + α ⊕ Levy(λ)   (8)

The Levy flight essentially provides a random walk whose random step length is drawn from a Levy distribution with a heavy tail. In fact, Levy flights have been observed in the foraging patterns of albatrosses, fruit flies and spider monkeys [14].

F. Group Search Optimization (GSO)

GSO is an optimization technique [15] developed by S. He et al. in 2009, inspired by animal searching behavior. Animal searching behavior may be described as an active movement through which an animal finds resources such as food, mates and nesting sites. One major consequence of living together is the group searching strategy, which allows group members to increase patch finding rates [16]. This has led to the adoption of two foraging strategies within groups: 1) producing, e.g., searching for food; and 2) scrounging, e.g., joining resources uncovered by others. The second strategy is also referred to as conspecific attraction or kleptoparasitism [17].

In the GSO algorithm the population is called a group and each individual animal is called a member. In an n-dimensional search space, the ith member at the kth searching bout has a current position X_i^k in R^n and a head angle φ_i^k = (φ_{i,1}^k, ..., φ_{i,n-1}^k). The search direction of the ith member is a unit vector D_i^k(φ_i^k) = (d_{i,1}^k, ..., d_{i,n}^k), obtained from the head angle via a polar-to-Cartesian coordinate transformation [18]:

  d_{i,1}^k = prod_{q=1..n-1} cos(φ_{i,q}^k)
  d_{i,j}^k = sin(φ_{i,j-1}^k) * prod_{q=j..n-1} cos(φ_{i,q}^k)   (j = 2, ..., n-1)
  d_{i,n}^k = sin(φ_{i,n-1}^k)   (9)

Practically, in a 3-D search space, if at the k-th searching bout the ith member's head angle is φ_i^k = (π/3, π/4), then the search direction unit vector is D_i^k = (1/2, √6/4, √2/2). In each iteration, the group member located in the most promising area, i.e., the one conferring the best fitness value, is chosen as the producer. It then stops and scans the environment to seek resources, which are in fact the optima. At the kth iteration the producer X_p scans at zero degrees using (10) and then scans laterally by randomly sampling one point to the right according to (11) and one to the left according to (12) [20]:

  X_z = X_p^k + r_1 * l_max * D_p^k(φ^k)   (10)

  X_r = X_p^k + r_1 * l_max * D_p^k(φ^k + r_2 * θ_max / 2)   (11)

  X_l = X_p^k + r_1 * l_max * D_p^k(φ^k - r_2 * θ_max / 2)   (12)

Here r_1 is a normally distributed random number with mean 0 and standard deviation 1, and r_2 is a random number sequence in the range (0, 1). If the resource at the best of the three points is better than the producer's current position, the producer flies to that point; otherwise it stays in its present position and turns its head to a new, randomly generated angle, φ^{k+1} = φ^k + r_2 * α_max.
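The polar-to-Cartesian transformation of (9) can be sketched directly. This follows the convention of He et al. [15]; names are chosen for illustration:

```python
import math

def search_direction(phi):
    """Map an (n-1)-vector of head angles to an n-dimensional unit search
    direction per eq. (9):
      d_1 = prod cos(phi_q),
      d_j = sin(phi_{j-1}) * prod_{q=j..n-1} cos(phi_q) for j = 2..n-1,
      d_n = sin(phi_{n-1})."""
    n = len(phi) + 1
    d = [math.prod(math.cos(p) for p in phi)]          # d_1
    for j in range(2, n):                              # d_2 .. d_{n-1}
        d.append(math.sin(phi[j - 2]) *
                 math.prod(math.cos(phi[q]) for q in range(j - 1, n - 1)))
    d.append(math.sin(phi[-1]))                        # d_n
    return d
```

The result always has unit length, so multiplying it by a scan distance as in (10)-(12) moves the producer exactly that far.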

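The Levy-flight step driving (8) in CSO can also be sketched. The paper does not specify how the heavy-tailed step length is sampled; Mantegna's algorithm, a common choice, is assumed here:

```python
import math
import random

rng = random.Random(11)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-stable step length
    # (an assumed realisation; the paper does not name a sampler).
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, alpha=0.01):
    # Eq. (8): x(t+1) = x(t) + alpha (+) Levy(lambda), applied per dimension.
    return [xi + alpha * levy_step() for xi in x]
```

Most steps are small, but the heavy tail occasionally produces long jumps, which is what lets CSO escape local minima.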



If the producer fails to find a better area after a iterations, it turns its head back to zero degrees, φ^{k+a} = φ^k. In GSO, some group members are selected as scroungers, and they try to seize opportunities from the producer. The random walk of the ith scrounger toward the producer at the kth iteration is modeled in (13):

  X_i^{k+1} = X_i^k + r_3 ∘ (X_p^k - X_i^k)   (13)

where r_3 is a uniform random sequence in the range (0, 1) and "∘" is the Hadamard product, which calculates the entry-wise product of two vectors. A few of the worst members in GSO are treated as dispersed (ranger) members who perform random walks. At the k-th iteration a ranger generates a random head angle φ_i, chooses a random distance l_i = a * r_1 * l_max, and moves to the new point using (14):

  X_i^{k+1} = X_i^k + l_i * D_i^k(φ^{k+1})   (14)

G. Grey Wolf Optimizer (GWO)

GWO, the most recently developed of the methods, was proposed by S. Mirjalili et al. [21] based on the hunting behavior of the grey wolf (Canis lupus), which belongs to the Canidae family. In nature, grey wolves prefer to live in a pack. The main phases of their hunting are: i) tracking, chasing, and approaching the prey; ii) pursuing, encircling, and harassing the prey until it stops moving; and iii) attacking the prey. In the pack, wolves are categorized into several types, and each type performs a specific task.

In GWO, the wolf with the fittest solution is marked as the alpha (α). The second and third best are named beta (β) and delta (δ), respectively, and the rest of the candidate solutions are assumed to be omega (ω). In GWO the hunting (optimization) is guided by α, β and δ, and the ω wolves follow these three. During hunting, grey wolves encircle the prey; the mathematical model of the encircling behavior is given by (15) and (16):

  D = | C · Y_p(t) - Y(t) |   (15)

  Y(t+1) = Y_p(t) - A · D   (16)

Here t denotes the current iteration, A and C are coefficient vectors, Y_p is the position vector of the prey, and Y is the position vector of a grey wolf. The vectors A and C are calculated through (17) and (18):

  A = 2a · r_1 - a   (17)

  C = 2 · r_2   (18)

In (17) and (18), the component a is decreased from 2 to 0 linearly over the course of iterations, and r_1, r_2 are random vectors in [0, 1]. The mathematical simulation of the hunting behavior assumes that the alpha, beta and delta have fair knowledge of the potential location of prey. Accordingly, the first three best solutions obtained so far are saved and oblige the other search agents (including the omegas) to update their positions using the following equations:

  D_α = |C_1 · Y_α - Y|,  D_β = |C_2 · Y_β - Y|,  D_δ = |C_3 · Y_δ - Y|   (19)

  Y_1 = Y_α - A_1 · D_α,  Y_2 = Y_β - A_2 · D_β,  Y_3 = Y_δ - A_3 · D_δ   (20)

  Y(t+1) = (Y_1 + Y_2 + Y_3) / 3   (21)

III. EXPERIMENTAL STUDIES

This section first gives a short description of the benchmark functions and the parameter setting of each individual algorithm. After that, experimental results comparing the performance of GA, PSO, ABC, FFA, CSO, GSO and GWO are presented. Finally, an experimental analysis is given on selected functions.

A. Benchmark Functions and Experimental Setup

Benchmark data are commonly used to evaluate any system. Table 1 shows the characteristics of the 22 benchmark functions considered in this study. The significance of the selected functions is that they differ in range, dimension and type. The last column of the table indicates the type of each function: unimodal, multimodal and fixed-dimension multimodal functions are marked as U, M and FDM, respectively. These characteristics indicate that the selected functions differ to some extent. It is worth mentioning that these functions have been widely used in many studies.

Experiments were conducted in this study with fair settings. The population size and total iterations were set to 100 and 1000 to train a function with an algorithm. In GA we set the crossover rate to 0.8 and the mutation rate to 0.1. In PSO the acceleration coefficients were 2.0 and the inertia weight was in the range [0.2, 0.9]. In ABC we set the limit as limit = 0.6 × population size × dimension. In CSO the discovery rate was 0.25. For FFA we set α to 0.5, the minimum β0 to 0.2 and γ to 1; reducing the value of α over time is optional, to reduce randomness. In GSO the initial head angle of each individual is set to (π/4, ..., π/4). The constant a is given by round(√(n + 1)), where n is the dimension of the search space. The maximum pursuit angle θ_max is π/a². The maximum turning angle α_max is set to θ_max/2. The maximum pursuit distance l_max is calculated from the following equation:

  l_max = ||U - L|| = sqrt( sum_{i=1..n} (U_i - L_i)^2 )   (22)

where L_i and U_i are the lower and upper bounds of the ith dimension. The experiments were conducted on a PC (Intel Core [email protected] CPU, 4GB RAM, Windows 8.1 OS, MATLAB 2015).

B. Experimental Results

This section compares the proficiency of GA, PSO, ABC, FFA, CSO, GSO and GWO in solving the selected benchmark functions. Table 2 shows the fitness value achieved by each method for every function; the value under the fmin heading is the optimal value of the function. For a particular function, the best value among the methods is shown in bold face. A summary of the total number of functions for which a method showed the best value is placed at the bottom of the table as the best count among NIAs; the optimal count indicates for how many functions a method achieved the optimal value. From the table it is observed that GA is the worst method among them: it failed to achieve the optimal value for any function and showed the best value for only two cases.
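One full GWO position update per (17)-(21) can be sketched as follows; minimisation is assumed and all names are illustrative:

```python
import random

rng = random.Random(3)

def gwo_step(wolves, fitness_fn, a):
    """One GWO update: rank wolves by fitness, take alpha/beta/delta as the
    three best, and move every wolf to the mean of three leader-guided
    points per eqs. (19)-(21)."""
    n = len(wolves[0])
    leaders = sorted(wolves, key=fitness_fn)[:3]   # alpha, beta, delta
    new_pack = []
    for y in wolves:
        candidates = []
        for leader in leaders:
            point = []
            for j in range(n):
                r1, r2 = rng.random(), rng.random()
                A = 2 * a * r1 - a                 # eq. (17)
                C = 2 * r2                         # eq. (18)
                D = abs(C * leader[j] - y[j])      # eq. (19)
                point.append(leader[j] - A * D)    # eq. (20)
            candidates.append(point)
        # Eq. (21): new position is the mean of the three candidate points.
        new_pack.append([sum(c[j] for c in candidates) / 3 for j in range(n)])
    return new_pack
```

As a is decayed toward 0 over iterations, |A| shrinks and the whole pack collapses onto the three leaders, mirroring the exploitation phase described above.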

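The maximum pursuit distance of (22) used in the GSO setup is simply the diagonal of the search hyper-box; a one-line sketch:

```python
import math

def max_pursuit_distance(lower, upper):
    # Eq. (22): l_max = ||U - L|| over the per-dimension bounds.
    return math.sqrt(sum((u - l) ** 2 for l, u in zip(lower, upper)))
```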



TABLE 1: CHARACTERISTICS OF BENCHMARK FUNCTIONS.

No.  Function Name        Range           Dimension  fmin      Type
1    Sphere               [-100, 100]     30         0         U
2    Schwefel 2.22        [-10, 10]       30         0         U
3    Schwefel 1.2         [-100, 100]     30         0         U
4    Rosenbrock           [-30, 30]       30         0         U
5    Step                 [-100, 100]     5          0         U
6    Quartic              [-1.28, 1.28]   30         0         U
7    SumSquares           [-10, 10]       30         0         U
8    Schwefel             [-500, 500]     30         -12569.5  M
9    Rastrigin            [-5.12, 5.12]   30         0         M
10   Ackley               [-32, 32]       30         0         M
11   Griewank             [-600, 600]     30         0         M
12   Penalized            [-50, 50]       30         0         M
13   Penalized2           [-50, 50]       30         0         M
14   Foxholes             [-65, 65]       2          1         FDM
15   Six hump camel back  [-5, 5]         2          -1.031    FDM
16   Branin               [-5, 5]         2          0.398     FDM
17   GoldStein-Price      [-2, 2]         2          3         FDM
18   Hartman3             [1, 3]          3          -3.86     FDM
19   Hartman6             [0, 1]          6          -3.32     FDM
20   Shekel5              [0, 10]         4          -10.1532  FDM
21   Shekel7              [0, 10]         4          -10.4028  FDM
22   Shekel10             [0, 10]         4          -10.5363  FDM

Among the methods, GWO is shown to be the best: it achieved the best fitness value among the methods for 17 of the 22 functions, and in 11 of those cases it achieved the optimal value. GWO is the most recent method, and its proficiency stems from the collaborative actions of the different wolves. GSO is shown to be inferior to GWO but better than the other methods: it is found best for 10 cases and achieved the optimal value for eight cases. On the other hand, PSO, the pioneering as well as the most popular SI method, is also found effective for some functions: PSO is found best for 13 cases and achieved the optimal value for six cases.

The pairwise win/draw/loss results derived from Table 2 are shown in Table 3, which helps to compare the performance of one method with another. According to Table 3, every NIA is found better than GA: PSO, ABC, FFA, CSO, GSO and GWO are found better than GA for 20, 16, 19, 17, 22 and 20 of the 22 cases, respectively. On the other hand, GWO is also found better than ABC, FFA and CSO and competitive with PSO and GSO. Finally, GWO is the overall best method for FO, and PSO is still promising for solving benchmark functions.

C. Experimental Analysis

This section presents the effect of population size and iteration count on the NIAs in solving benchmark functions. The experiment was conducted on one unimodal function (Schwefel 1.2) and one multimodal function (Rastrigin). The population size was varied from 10 to 500 and the iteration count from 50 to 1000. Figs. 1 and 2 show the effect of population and iteration on the Schwefel 1.2 and Rastrigin functions. For both functions, the fitness value is worse for a very small population size (i.e., 10). Among the methods, ABC and PSO perform very badly for small populations and improve with the size of the population.

TABLE 3: PAIRWISE WIN/DRAW/LOSS COMPARISON OF THE RESULTS PRESENTED IN TABLE 2.

      GA    PSO     ABC      FFA     CSO     GSO     GWO
GA    -     20/0/2  16/0/6   19/0/3  17/0/5  22/0/0  20/0/2
PSO         -       0/10/12  1/9/12  1/9/12  2/9/11  7/8/7
ABC                 -        8/9/5   7/9/6   12/9/1  12/7/3
FFA                          -       5/8/9   13/8/1  12/6/4
CSO                                  -       13/8/1  12/6/4
GSO                                          -       9/6/7
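A tally like Table 3 can be derived directly from the per-function fitness values of Table 2. A minimal sketch, assuming minimisation and an optional tolerance for counting draws (neither stated explicitly in the paper):

```python
def win_draw_loss(a, b, tol=0.0):
    """Count per-function wins/draws/losses of method A against method B,
    assuming minimisation (smaller fitness wins); values within `tol`
    of each other count as draws."""
    w = d = l = 0
    for fa, fb in zip(a, b):
        if abs(fa - fb) <= tol:
            d += 1
        elif fa < fb:
            w += 1
        else:
            l += 1
    return w, d, l
```

Running this over each pair of method columns in Table 2 reproduces the shape of Table 3.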

TABLE 2: COMPARISON AMONG THE NIAS ON THE BASIS OF FITNESS ACHIEVED FOR THE BENCHMARK FUNCTIONS.
F fmin GA PSO ABC FFA CSO GSO GWO
F1 0 15.7926 4.24E-17 0.000133253 0.00284646 0.0525012 1.05E-08 0
F2 0 1.28491 1.31E-09 5.06E-05 0.217751 3.98673 3.68E-05 3.07E-243
F3 0 4609.02 0.894981 18695.8 968.129 612.874 30.6306 3.42E-216
F4 0 447.935 17.4573 226.237 27.5278 58.1186 22.4814 25.0925
F5 0 18 0 0 0 1 0 0
F6 0 0.0335914 0.0149869 0.0372523 0.0224415 0.0211475 0.0163996 4.49E-05
F7 0 2.63916 1.69E-16 8.83E-06 0.0483066 0.00545648 2.60E-09 0
F8 -12569.5 -12532.7 -8304.92 -6037.71 -7772.46 -9219.58 -12569.5 -8164.71
F9 0 7.54775 32.8336 167.403 24.543 77.0304 2.01486 0
F10 0 2.05604 2.93E-09 0.00439897 0.0218787 5.39665 7.38E-05 7.99E-15
F11 0 1.19676 0 0.00511547 0.00234851 0.146045 4.38E-08 0
F12 0 0.0808211 7.15E-20 6.67666 0.000144563 0.99701 6.34E-11 0.0130216
F13 0 1.06799 3.82E-18 10.5903 0.00129743 0.673448 9.46E-10 1.01E-06
F14 1 0.99801 0.998004 0.998004 1.99203 0.998004 0.998004 0.998004
F15 -1.031 -1.0315 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163
F16 0.398 0.398013 0.397887 0.397887 0.397887 0.397887 0.397887 0.397887
F17 3 3.00366 3 3 3 3 3 3
F18 -3.86 -0.289072 -0.300479 -0.300479 -0.300479 -0.300479 -0.300033 -0.300479
F19 -3.32 -3.32183 -3.322 -3.322 -3.322 -3.322 -3.322 -3.32199
F20 -10.1532 -10.0993 -10.1532 -10.1532 -10.1532 -10.1532 -10.1532 -10.1532
F21 -10.4028 -9.70687 -10.4029 -10.4029 -10.4029 -10.4029 -10.4029 -10.4029
F22 -10.5363 -10.5246 -10.5364 -10.5364 -10.5364 -10.5364 -10.5364 -10.5363
Optimal Count 0 6 7 6 6 8 11
Best Count among NIAs  2 13 9 8 8 10 17




Figure 1. Effect of Population and Iteration on Schwefel 1.2: (a) fitness varying population (iteration = 1000); (b) fitness varying iteration (population = 100). Curves are shown for PSO, ABC, GWO, FFA, GSO, CSO and GA.

Figure 2. Effect of Population and Iteration on Rastrigin: (a) fitness varying population (iteration = 1000); (b) fitness varying iteration (population = 100). Curves are shown for PSO, ABC, GWO, FFA, GSO, CSO and GA.

On the other hand, GWO, GSO and CSO are shown to perform well with relatively small populations compared to the others. For the iteration variation, ABC and CSO perform very badly for small iteration counts with respect to the other NIAs, whereas GWO and GSO perform well with fewer iterations. Finally, GWO is able to solve a function effectively with a small population as well as few iterations.

REFERENCES
[1] Mathematical optimization, Available: https://en.wikipedia.org/wiki/Mathematical_optimization. [Accessed: 10-Apr-2016].
[2] Optimization problem, Available: https://en.wikipedia.org/wiki/Optimization_problem. [Accessed: 10-Apr-2016].
[3] Saibal K. Pal, C. S. Rai and Amrit Pal Singh, "Comparative Study of Firefly Algorithm and Particle Swarm Optimization for Noisy Non-Linear Optimization Problems", International Journal on Intelligent Systems and Applications, 2012, 10, pp. 50-57, DOI: 10.5815/ijisa.2012.10.06.
[4] J. Vesterstrøm and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems", Proc. Congress on Evolutionary Computation (CEC2004), vol. 2, IEEE, 2004.
[5] M. N. Ab Wahab, S. Nefti-Meziani and A. Atyabi, "A Comprehensive Review of Swarm Optimization Algorithms", PLoS ONE 10(5): e0122827, 2015, doi:10.1371/journal.pone.0122827.
[6] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, New York: Addison-Wesley, 1989.
[7] Genetic algorithm, Available: https://en.wikipedia.org/wiki/Genetic_algorithm. [Accessed: 12-Apr-2016].
[8] John H. Holland, "Genetic Algorithms", Scientific American, July 1992.
[9] Melanie Mitchell, An Introduction to Genetic Algorithms, ISBN 0-262-13316-4 (HB), 0-262-63185-7 (PB), 1996.
[10] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", in Proc. 1995 IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, 1995.
[11] James Blondin, Particle Swarm Optimization: A Tutorial, Sep. 4, 2009.
[12] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[13] X.-S. Yang, "Firefly algorithms for multimodal optimization", in Stochastic Algorithms: Foundations and Applications, SAGA 2009, Lecture Notes in Computer Science, vol. 5792, pp. 169-178, 2009.
[14] X.-S. Yang and S. Deb, "Cuckoo search via Levy flights", Proc. World Congress on Nature & Biologically Inspired Computing, pp. 210-214, 2009.
[15] S. He, Q. H. Wu and J. R. Saunders, "Group Search Optimizer: An Optimization Algorithm Inspired by Animal Searching Behavior", IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, October 2009.
[16] H. R. Pulliam and G. E. Millikan, "Social organization in the non-reproductive season", Animal Behavior, vol. 6, pp. 169-197, 1983.
[17] J. Brockmann and C. J. Barnard, "Kleptoparasitism in birds", Animal Behavior, vol. 27, pp. 546-555, 1979.
[18] D. Mustard, "Numerical integration over the n-dimensional spherical shell", Math. Comput., vol. 18, no. 88, pp. 578-589, Oct. 1964.
[19] J. W. Bell, Searching Behavior: The Behavioral Ecology of Finding Resources (Chapman and Hall Animal Behavior Series), London, U.K.: Chapman and Hall, 1990.
[20] W. J. O'Brien, B. I. Evans and G. L. Howick, "A new view of the predation cycle of a planktivorous fish, white crappie (Pomoxis annularis)", Canadian J. Fisheries Aquatic Sci., vol. 43, pp. 1894-1899, 1986.
[21] S. Mirjalili, S. M. Mirjalili and A. Lewis, "Grey Wolf Optimizer", Advances in Engineering Software 69 (2014) 46-61.
C. Muro, R. Escobedo, L. Spector and R. Coppinger, "Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations", Behav. Process. 2011; 88: 192-197.

