
Appl Intell (2018) 48:805–820

DOI 10.1007/s10489-017-1019-8

Grasshopper optimization algorithm for multi-objective optimization problems

Seyedeh Zahra Mirjalili1 · Seyedali Mirjalili2 · Shahrzad Saremi2 · Hossam Faris3 · Ibrahim Aljarah3

Published online: 4 August 2017


© Springer Science+Business Media, LLC 2017

Abstract  This work proposes a new multi-objective algorithm inspired by the navigation of grasshopper swarms in nature. A mathematical model is first employed to model the interaction of individuals in the swarm, including the attraction force, the repulsion force, and the comfort zone. A mechanism is then proposed to use the model for approximating the global optimum in a single-objective search space. Afterwards, an archive and a target selection technique are integrated into the algorithm to estimate the Pareto optimal front for multi-objective problems. To benchmark the performance of the proposed algorithm, a set of diverse standard multi-objective test problems is utilized. The results are compared with the most well-regarded and recent algorithms in the literature of evolutionary multi-objective optimization, using three performance indicators quantitatively and graphs qualitatively. The results show that the proposed algorithm is able to provide very competitive results in terms of the accuracy of the obtained Pareto optimal solutions and their distribution.

Keywords  Optimization

Corresponding author: Seyedali Mirjalili, [email protected]

1 School of Electrical Engineering and Computing, University of Newcastle, Callaghan, NSW 2308, Australia
2 Institute for Integrated and Intelligent Systems, Griffith University, Nathan, QLD 4111, Australia
3 Business Information Technology Department, King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan

1 Introduction

Before the existence of humans on this planet, nature has been continuously solving challenging problems using evolution. Therefore, it is reasonable to be inspired by nature when solving different challenging problems. In the field of optimization, a revolutionary idea was proposed in 1977 by Holland, when evolutionary concepts in nature were simulated in computers for solving optimization problems [1]. It was at that moment that the most well-regarded heuristic algorithm, the Genetic Algorithm (GA) [2], came into existence and opened a new way of tackling challenging and complex problems in different fields of study.

The general idea of the GA is very simple: it mimics the selection, recombination, and mutation of genes in nature. In fact, Darwin's theory of evolution was the main inspiration of this algorithm. In GA, the optimization process starts by creating a set of random solutions as candidate solutions (individuals) for a given optimization problem. Each variable of the problem is considered a gene and the set of variables is analogous to a chromosome. Similarly to nature, a cost function defines the fitness of each chromosome, and the whole set of solutions is considered a population. When the fitness of the chromosomes has been calculated, the best chromosomes are randomly selected for creating the next population: the fittest individuals have a higher probability of being selected and participating in creating the next population, in a similar way to what happens in nature. The next step is the combination of the selected individuals, in which the genes of pairs of individuals are randomly merged to produce new individuals. Eventually, some of the individuals' genes in the population are changed randomly to mimic mutation.

The GA gradually became a dominant optimization technique compared to exact (deterministic) approaches [3]
806 S.Z. Mirjalili et al.

mainly due to its higher probability of avoiding local solutions [4]. However, the main drawback of GA was the stochastic nature of this algorithm, which resulted in finding different solutions in every run. This problem seemed easy to handle with a sufficient number of runs, yet the large number of function evaluations required for every run was still an issue. Advances in computer hardware are nowadays reducing the computational cost of GA significantly. Therefore, GA can be considered a reliable problem-solving technique compared to exact methods.

The success of GA in solving a wide range of problems in science and industry paved the way for proposing new heuristic algorithms. The proposals of well-regarded algorithms such as Ant Colony Optimization (ACO) [5], Particle Swarm Optimization (PSO) [6], Evolution Strategy (ES) [7], and Differential Evolution (DE) [8–10] were the results of the success of GA. This field of study is still one of the most popular in computational intelligence. In addition to the aforementioned advantage of heuristics, local optima avoidance, there are several other advantages that have contributed to their popularity.

Heuristic algorithms do not need to calculate the derivative of a problem to be able to solve it, because they are not designed based on gradient descent towards the global optimum. They consider a problem as a black box with a set of inputs and outputs: the inputs are the variables of the problem and the outputs are the objectives. A heuristic search starts with creating a set of random inputs as candidate solutions for the problem. The search is continued by evaluating each solution, observing the objective values, and changing/combining/evolving the solutions based on their outputs. These steps are repeated until a termination criterion is met, which can be exceeding a maximum number of iterations or a maximum number of function evaluations.

Although heuristics are very effective in solving real challenging problems, there are many difficulties when solving optimization problems [11]. In addition, optimization problems are not all similar and have diverse characteristics. Some of these difficulties/features are: dynamicity [12–14], uncertainty [15], constraints [16], multiple objectives [17], and many objectives [18]. Each of these difficulties has created a subfield in the field of heuristics and attracted many researchers.

In dynamic problems, the position of the global optimum changes over time. Therefore, a heuristic should be equipped with suitable operators to track the changes and not lose the global optimum [19]. In real problems, a variety of uncertainties applies to each component; in order to address this, a heuristic should be able to find robust solutions that are fault tolerant. Constraints are another difficulty of real problems, as they restrict the search space by dividing solutions into feasible and infeasible ones. This means that heuristics should be equipped with suitable operators to discard infeasible solutions during the optimization and eventually find the best feasible solution. There are many outstanding constraint handling techniques in the literature [20, 21]. Since the proposed technique of this work will be applied only to unconstrained problems, we do not review the literature of such techniques further and refer interested readers to [20, 21].

Some problems have computationally expensive objective functions, which make the whole heuristic optimization process very long due to the need to evaluate solutions iteratively. In order to solve such problems, researchers try to reduce the number of function evaluations or utilize surrogate models, which are computationally much cheaper.

One of the main difficulties mentioned above is the existence of multiple objectives. A heuristic algorithm cannot directly compare solutions when there is more than one objective to be considered. In order to solve such problems, researchers compare solutions using the Pareto dominance operator [22]. In addition, due to the nature of such problems, there is more than one best (non-dominated) solution for a multi-objective problem, and a heuristic should be able to find all of them. Multi-objective optimization using heuristics has recently attracted much attention and is the main focus of this work [23].

The majority of heuristics in the literature have been equipped with proper operators to solve multi-objective problems. The mechanism of most multi-objective heuristics is almost identical. The essential component is an archive or repository that stores the non-dominated solutions during the optimization process; multi-objective heuristics iteratively update this archive to improve the quality and quantity of the non-dominated solutions in it. Another duty of a multi-objective heuristic is to find "different" non-dominated solutions. This means that the non-dominated solutions should be spread uniformly across all objectives to show all the best trade-offs between the multiple objectives. This is a key feature in a posteriori multi-objective algorithms [24], where decision making [25] occurs after the optimization.

There are many algorithms in the literature for solving multi-objective problems. For the GA, the most well-regarded multi-objective counterpart is the Non-dominated Sorting Genetic Algorithm (NSGA) [26]. Other popular algorithms are Multi-Objective Particle Swarm Optimization (MOPSO) [27–29], Multi-Objective Ant Colony Optimization [30], Multi-Objective Differential Evolution [31], and Multi-Objective Evolution Strategy [32]. All these algorithms have proved to be effective in finding non-dominated solutions for multi-objective problems. However, the question is whether we still need more algorithms. According to the No Free Lunch theorem for optimization [33], there is no algorithm capable of solving optimization problems of

all kinds. This theorem logically proves this and allows the proposal of new algorithms or the improvement of current ones. Therefore, there is still room for new or improved algorithms to better solve the current problems and to solve problems that are difficult for the current techniques.

The Grasshopper Optimization Algorithm (GOA) [40] has been proven to benefit from high exploration while showing very fast convergence speed. The special adaptive mechanism in this algorithm smoothly balances exploration and exploitation. These characteristics make the GOA algorithm potentially able to cope with the difficulties of a multi-objective search space and outperform other techniques. In addition, its computational complexity is better than those of many optimization techniques in the literature. These powerful features motivated our attempt to propose a multi-objective optimizer inspired by the social behaviour of grasshoppers in nature. The rest of this paper is organized as follows:

Section 2 provides the preliminaries, essential definitions, and a literature review of multi-objective optimization using heuristics. The GOA and its proposed multi-objective version are described in detail in Section 3. The results are presented, discussed, and analysed in Section 4; the latter section also includes the experimental set-up, test functions, and performance metrics. Finally, Section 5 concludes the work and suggests several future research directions.

2 Multi-objective optimization

2.1 Preliminaries and definitions

As its name implies, multi-objective optimization deals with optimizing multiple objectives. In the literature, the term multi-objective refers to problems with up to four objectives. Due to the complexity of problems with more than four objectives, there is a specialized field called many-objective optimization that solves problems with many objectives. Since solving such problems is out of the scope of this work, interested readers are referred to the survey in [34].

In order to formulate a multi-objective problem, we can use a generic problem definition. Without loss of generality, the following equations show multi-objective optimization as a minimization problem [35]:

Minimize: F(x) = {f1(x), f2(x), ..., fo(x)}   (2.1)
Subject to: gi(x) ≥ 0, i = 1, 2, ..., m   (2.2)
hi(x) = 0, i = 1, 2, ..., p   (2.3)
Li ≤ xi ≤ Ui, i = 1, 2, ..., n   (2.4)

where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, gi is the i-th inequality constraint, hi indicates the i-th equality constraint, and [Li, Ui] are the boundaries of the i-th variable.

One of the main difficulties in a multi-objective search space is that objectives can be in conflict and require special considerations. In a multi-objective search space, a solution cannot be compared with another using relational operators. This is due to the existence of more than one criterion for comparison [36]. Therefore, we need other operators to measure and find out how much one solution is better than another. The most widely used operator is called Pareto dominance and is defined as follows:

∀i ∈ {1, 2, ..., o}: fi(x) ≤ fi(y)  ∧  ∃i ∈ {1, 2, ..., o}: fi(x) < fi(y)   (2.5)

where x = (x1, x2, ..., xk) and y = (y1, y2, ..., yk).

This condition shows that a solution (vector x) is better than another (vector y) if it is no worse on every objective and strictly better on at least one. In this case, x is said to dominate y, denoted as x ≺ y. An example is presented in Fig. 1, which shows that the circles are better than the squares because they provide a lower value in both objectives.

Despite the fact that the circles dominate the squares in this figure, the circles are non-dominated with respect to each other: each is better in one objective and worse in the other. Pareto optimality can be mathematically defined as follows:

{ x ∈ X | ∄ y ∈ X: y ≺ x }   (2.6)

Such a solution is referred to as a Pareto optimal solution because it cannot be dominated by any other solution in X.

For every problem, there is a set of such best non-dominated solutions. This set is considered the solution of a multi-objective optimization problem. Consequently, the projection of the Pareto optimal solutions in the objective space is stored in a set called the Pareto optimal front.

Fig. 1 Pareto dominance (objective space; both f1 and f2 are minimized)
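The dominance test in (2.5) maps directly onto a few lines of code. The following Python sketch is purely illustrative (it is not from the paper); `dominates` and `non_dominated` are hypothetical helper names, and minimization of all objectives is assumed.

```python
def dominates(x_obj, y_obj):
    """True if objective vector x_obj Pareto-dominates y_obj under
    minimization: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(x_obj, y_obj))
            and any(x < y for x, y in zip(x_obj, y_obj)))

def non_dominated(points):
    """Filter a set of objective vectors down to its non-dominated subset,
    i.e. an approximation of the Pareto optimal front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For instance, in the spirit of Fig. 1, the points (1, 3) and (3, 1) do not dominate each other, while both dominate (3, 3).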

2.2 Multi-objective optimization using metaheuristics

In the literature, there are three main approaches for solving multi-objective problems using metaheuristics: a priori [37], a posteriori [38], and interactive [39]. In the first approach, multiple objectives are aggregated into one objective. This means that the multi-objective problem is converted to a single-objective one as follows:

Minimize: F(x) = w1 f1(x) + w2 f2(x) + ... + wo fo(x)   (2.7)
Subject to: gi(x) ≥ 0, i = 1, 2, ..., m   (2.8)
hi(x) = 0, i = 1, 2, ..., p   (2.9)
Li ≤ xi ≤ Ui, i = 1, 2, ..., n   (2.10)

where w1, w2, ..., wo are the weights of the objectives, n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, gi is the i-th inequality constraint, hi indicates the i-th equality constraint, and [Li, Ui] are the boundaries of the i-th variable.

Aggregation of objectives allows single-objective optimizers to find Pareto optimal solutions. However, the main drawbacks of this approach are the need to run the algorithm multiple times to find multiple Pareto optimal solutions, dealing with all the challenges of the problem in every run, the lack of information exchange between Pareto optimal solutions during optimization, the need to consult an expert to find the best weights, and the failure to find concave regions of the Pareto optimal front due to the addition of objectives. Such methods are called a priori because the decision making is done before the optimization, when determining the weights. Obviously, the disadvantages of a priori approaches outweigh their benefits. The main duty of a designer when using such techniques is to run the algorithm multiple times while changing the weights to find the Pareto optimal front.

The second class of multi-objective metaheuristics is the a posteriori class. As the name implies, decision making is done after the optimization. There is no aggregation, and such methods maintain the multi-objective formulation of the problem. The main advantages of this class are the ability to find the Pareto optimal solution set in just one run, the exchange of information between Pareto optimal solutions during optimization, and the ability to determine a Pareto optimal front of any shape. However, a posteriori methods require special mechanisms to address multiple, and often conflicting, objectives. In addition, the computational cost of such methods is normally higher than that of aggregation techniques.

The last approach mentioned above is called interactive multi-objective optimization. The name indicates that decision making is done during the optimization. Interactive optimization is also called human-in-the-loop optimization, in which an expert's preference is continuously fetched and involved during the optimization process to find the desired Pareto optimal solutions.

The literature shows that a posteriori methods are the dominant methods of multi-objective optimization. Most of the well-regarded algorithms in the field of single-objective optimization have been modified to perform a posteriori multi-objective optimization. Needless to say, they all compare solutions based on Pareto dominance and employ an archive to store the best Pareto optimal solutions obtained so far. The general framework of all a posteriori methods is identical: they initiate the optimization process with a set of random solutions, and after finding Pareto optimal solutions and storing them in an archive, they try to improve them to find better Pareto optimal solutions. The process of improving the Pareto optimal solutions is stopped when a condition is met.

The main objective of an a posteriori multi-objective algorithm is to find a very accurate approximation of the actual (true) Pareto optimal solutions for a given multi-objective problem. Since decision making occurs after the optimization process, the solutions should also be spread along all objectives as uniformly as possible. One of the main challenges here is that finding accurate Pareto optimal solutions (convergence) is in conflict with the distribution of solutions (coverage). A multi-objective optimizer should be able to effectively balance these two factors to solve a multi-objective problem.

In order to improve the coverage, different mechanisms are utilized. In MOPSO, for instance, Pareto optimal solutions in the less populated segments of the archive have a higher probability of being chosen as leaders for the other particles. In NSGA-II, non-dominated sorting ranks the Pareto optimal solutions and assigns each a number, which gives better Pareto optimal solutions a higher chance of participating in creating the new generation.

In spite of the recent advances in the field of multi-objective optimization, many researchers try to improve the current techniques or propose new ones to solve current multi-objective optimization problems better. This motivated our attempt to propose and investigate the effectiveness of a new algorithm, the Grasshopper Optimization Algorithm (GOA), in this field. In the next section, the GOA is introduced first and then its multi-objective version is proposed.

3 Multi-objective grasshopper optimization algorithm (MOGOA)

This section first introduces the GOA algorithm. The multi-objective version of this algorithm is then proposed.

3.1 Grasshopper optimization algorithm

The GOA algorithm simulates the swarming behaviour of grasshoppers in nature. The mathematical equations and formulas proposed for this algorithm are given as follows [40, 41]. In GOA, the position of a grasshopper in the swarm represents a possible solution of a given optimization problem. The position of the i-th grasshopper is denoted as Xi and represented as given in (3.1):

Xi = Si + Gi + Ai   (3.1)

where Si is the social interaction, Gi is the gravity force on the i-th grasshopper, and Ai shows the wind advection.

Equation (3.1) includes three main components to simulate social interaction, the impact of the gravitational force, and wind advection. These components fully simulate the movement of grasshoppers, yet the main component originating from the grasshoppers themselves is the social interaction, defined as follows:

Si = Σ_{j=1, j≠i}^{N} s(dij) d̂ij   (3.2)

where dij is the distance between the i-th and j-th grasshoppers, calculated as dij = |xj − xi|, s is a function defining the strength of the social forces as shown in (3.3), and d̂ij = (xj − xi)/dij is a unit vector from the i-th grasshopper to the j-th grasshopper.

The s function, which defines the social forces, is calculated as follows:

s(r) = f e^(−r/l) − e^(−r)   (3.3)

where f indicates the intensity of attraction and l is the attractive length scale.

The function s is illustrated in Fig. 2 to show how it impacts the social interaction (attraction and repulsion) of grasshoppers.

Fig. 2 Function s when l = 1.5 and f = 0.5 (s plotted against the distance d over [0, 15]; the curve crosses zero at (X, Y) = (2.079, 0))

Inspecting Fig. 2, it may be seen that repulsion occurs in the interval [0, 2.079]. If the distance becomes equal to 2.079, there is neither attraction nor repulsion; this point is called the comfort zone. The attraction force increases from 2.079 units of distance to nearly 4 and then gradually decreases. Changing the parameters l and f in (3.3) results in different social behaviours in artificial grasshoppers, as may be seen in Fig. 3. To show the interaction between grasshoppers with respect to the comfort zone, Fig. 4 gives a conceptual schematic.

Despite the merits of the function s, it is not able to apply strong forces between grasshoppers with large distances between them. To resolve this issue, the distance between grasshoppers should be mapped (normalized) to [1, 4].

The G component in (3.1) is calculated as follows:

Gi = −g êg   (3.4)

where g is the gravitational constant and êg is a unit vector towards the centre of the earth.

The A component in (3.1) is calculated as follows:

Ai = u êw   (3.5)

where u is a constant drift and êw is a unit vector in the direction of the wind.

Equation (3.1) can now be written with all components expanded:

Xi = Σ_{j=1, j≠i}^{N} s(|xj − xi|) (xj − xi)/dij − g êg + u êw   (3.6)

where s(r) = f e^(−r/l) − e^(−r) and N is the number of grasshoppers.

To solve optimization problems, a stochastic algorithm must perform exploration and exploitation effectively to determine an accurate approximation of the global optimum. The mathematical model presented above should therefore be equipped with special parameters to show exploration and exploitation in different stages of the optimization.

Fig. 3 Behaviour of the function s when varying l or f (left panel: l = 1.5 with f varied over 0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0; right panel: f = 0.5 with l varied over 1.0, 1.2, 1.4, 1.5, 1.6, 1.8, 2.0; both panels plot s over d in [0, 15])
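To make the shape of s concrete, the sketch below re-creates the curve of Figs. 2 and 3 in Python and locates the comfort distance numerically. It is an illustrative fragment, not the authors' code; the bisection helper `comfort_distance` and its bracket [1, 4] are assumptions based on the sign change visible in Fig. 2.

```python
import math

def s(r, f=0.5, l=1.5):
    """Social force of (3.3): negative values repel, positive attract."""
    return f * math.exp(-r / l) - math.exp(-r)

def comfort_distance(lo=1.0, hi=4.0, tol=1e-10):
    """Bisection for the root of s on [lo, hi], i.e. the distance at which
    attraction and repulsion cancel (the comfort zone)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if s(lo) * s(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With the default f = 0.5 and l = 1.5, s is negative (repulsion) below the comfort distance, zero there, and weakly positive (attraction) up to about 4; analytically the root is 3·ln 2 ≈ 2.079, matching the (2.079, 0) crossing marked in Fig. 2.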

The proposed mathematical model is as follows:

Xi^d = c ( Σ_{j=1, j≠i}^{N} c ((ubd − lbd)/2) s(|xj^d − xi^d|) (xj − xi)/dij ) + T̂d   (3.7)

where ubd is the upper bound in the d-th dimension, lbd is the lower bound in the d-th dimension, s(r) = f e^(−r/l) − e^(−r), T̂d is the value of the d-th dimension in the target (the best solution found so far), and c is a decreasing coefficient to shrink the comfort, repulsion, and attraction areas. Note that S here is almost similar to the S component in (3.1). However, we do not consider the gravity (no G component) and assume that the wind direction (A component) is always towards the target (T̂d).

Fig. 4 Primitive corrective patterns between individuals in a swarm of grasshoppers (comfort zone, attraction force, and repulsion force)
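Read as pseudocode, (3.7) and the decreasing coefficient of (3.8) amount to the update sketched below. This Python fragment is an illustrative reading rather than the authors' implementation: the `normalize` mapping into [1, 4] is only one possible choice (the paper states the target interval but not the mapping), and the clamping of positions to the bounds is a practical addition.

```python
import math

def s(r, f=0.5, l=1.5):
    # Social force of (3.3).
    return f * math.exp(-r / l) - math.exp(-r)

def coefficient(it, max_it, c_max=1.0, c_min=0.00001):
    """Decreasing coefficient c of (3.8)."""
    return c_max - it * (c_max - c_min) / max_it

def normalize(dist):
    """Map a raw distance into [1, 4], where s() stays responsive.
    This monotone squashing is only one possible mapping (assumption)."""
    return 1.0 + 3.0 * dist / (dist + 1.0)

def update_positions(X, target, lb, ub, c):
    """One iteration of (3.7): each grasshopper moves under the c-scaled
    social forces of all others plus attraction towards the target."""
    dim = len(target)
    new_X = []
    for i, xi in enumerate(X):
        pos = []
        for d in range(dim):
            social = 0.0
            for j, xj in enumerate(X):
                if j == i:
                    continue
                d_ij = math.dist(xi, xj) or 1e-12      # Euclidean distance
                social += (c * (ub[d] - lb[d]) / 2.0
                           * s(normalize(abs(xj[d] - xi[d])))
                           * (xj[d] - xi[d]) / d_ij)
            # The outer c shrinks the search region around the target over
            # time; positions are clamped to the bounds (practical addition).
            pos.append(min(ub[d], max(lb[d], c * social + target[d])))
        new_X.append(pos)
    return new_X
```

Near the last iteration c ≈ c_min, so the social term nearly vanishes and every grasshopper collapses onto the target, which is the intended exploitation behaviour.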

It should be noted that the inner c contributes to the reduction of the repulsion/attraction forces between grasshoppers proportionally to the number of iterations, while the outer c reduces the search coverage around the target as the iteration counter increases.

The parameter c is updated with the following equation to reduce exploration and increase exploitation proportionally to the number of iterations:

c = cmax − l (cmax − cmin)/L   (3.8)

where cmax is the maximum value, cmin is the minimum value, l indicates the current iteration, and L is the maximum number of iterations. In this work, we use the values 1 and 0.00001 for cmax and cmin, respectively.

3.2 Multi-objective grasshopper optimization algorithm (MOGOA)

A multi-objective algorithm seeks two goals when solving multi-objective problems. For one, a very accurate approximation of the true Pareto optimal solutions should be found. For another, the solutions should be well distributed across all objectives. This is essential in a posteriori methods, since decision making is done after the optimization process. In the following paragraphs, the main mechanisms to achieve these two essential goals are discussed.

As discussed in the preceding section, two solutions cannot be compared with the regular relational operators, and there is more than one solution for a multi-objective problem. In order to compare the solutions in MOGOA, Pareto dominance is used. The best Pareto optimal solutions are also stored in an archive. The main challenge in designing MOGOA based on GOA is choosing the target. The target is the main component that leads the search agents towards promising regions of the search space. The same equations of the preceding section are used in MOGOA; the main difference is the process of updating the target.

The target can be chosen easily in a single-objective search space by taking the best solution obtained so far. In MOGOA, however, the target should be chosen from a set of Pareto optimal solutions. Obviously, the Pareto optimal solutions are added to the archive and the target must be one of the members of the archive. The challenge here is to find a target that improves the distribution of solutions in the archive. In order to achieve this, the number of solutions in the neighbourhood of every solution in the archive is first calculated considering a fixed distance. This approach is similar to that in MOPSO. Afterwards, the number of neighbouring solutions is taken as the quantitative metric to measure the crowdedness of regions of the Pareto optimal front. The following equation defines the probability of choosing the target from the archive:

Pi = 1/Ni   (3.9)

where Ni is the number of solutions in the vicinity of the i-th solution.

With this probability, a roulette wheel is utilized to select the target from the archive. This improves the distribution in the less populated regions of the search space. Another advantage is that, in case of premature convergence, it is still possible for solutions with a crowded neighbourhood to be selected as the target and resolve this issue.

The archive that is employed has a limitation: the number of solutions in the archive must be bounded to keep the computational cost of MOGOA low. This leads to the issue of a full archive. We deliberately remove solutions with crowded neighbourhoods to decrease the number of solutions in the crowded regions, which allows the accommodation of new solutions in the less populated regions. In order to do so, the inverse of Pi in (3.9) and a roulette wheel are used.

The archive should also be updated regularly. There are different cases when comparing a solution outside the archive with the solutions inside the archive, and MOGOA should be able to handle all of them to improve the archive. The easiest case is where the external solution is dominated by at least one of the archive members; in this case it is discarded immediately. Another case is when the solution is non-dominated with respect to all solutions in the archive; since the archive stores the non-dominated solutions obtained so far, such a solution is added to the archive. If the solution dominates a solution in the archive, however, the dominated archive member is replaced with it.

The computational complexity of MOGOA is O(MN^2), where M is the number of objectives and N is the number of solutions. This complexity is equal to that of other well-known algorithms in this field: NSGA-II [42], MOPSO, SPEA2 [43], and PAES [7]. The computational complexity is better than that of algorithms such as NSGA [44] and SPEA [45], which are of O(MN^3).

Note that the MOGOA algorithm was designed considering the 'Unified Framework' proposed by Padhye et al. [46, 47], in which the main steps are initialization, selection, generation, and replacement. Improving the performance of MOGOA by integrating evolutionary operators (e.g. crossover and mutation) is out of the scope of this work, but it would be a valuable contribution in the future.

With the above-mentioned mechanisms and rules, MOGOA is able to find the Pareto optimal solutions, store them in the archive, and improve their distribution. In the following section, a set of test functions is employed to test the performance of the proposed MOGOA algorithm.1

1 The source codes of MOGOA can be found at http://www.alimirjalili.com/Projects.html.
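The target-selection and archive rules described above can be sketched as follows. This is an illustrative Python fragment, not the MOGOA source: the neighbourhood radius, the use of `random.choices` as the roulette wheel, and the `+ 1` guard that keeps Pi = 1/Ni finite when a solution has no neighbours are all assumptions.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def neighbour_counts(archive, radius):
    """Ni: archive members within a fixed distance of the i-th member
    (crowdedness of that region of the front)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [sum(1 for b in archive if b != a and dist(a, b) <= radius)
            for a in archive]

def select_target(archive, radius=0.1, rng=random):
    """Roulette wheel with weights Pi = 1/Ni per (3.9): sparse regions are
    more likely to supply the target, improving coverage."""
    weights = [1.0 / (n + 1) for n in neighbour_counts(archive, radius)]
    return rng.choices(archive, weights=weights, k=1)[0]

def update_archive(archive, candidate, max_size, radius=0.1, rng=random):
    """Archive maintenance: reject a dominated candidate, otherwise insert
    it and evict members it dominates; when full, evict from crowded
    regions via a roulette wheel on the inverse of Pi."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # dominated by a member: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)               # non-dominated: keep it
    while len(archive) > max_size:          # full: drop from crowded regions
        counts = neighbour_counts(archive, radius)
        idx = rng.choices(range(len(archive)),
                          weights=[n + 1 for n in counts], k=1)[0]
        archive.pop(idx)
    return archive
```

A typical iteration would call `update_archive` once per evaluated grasshopper and then `select_target` to obtain T̂ for the next application of (3.7).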
4 Results on test functions

4.1 Experimental set up

Similarly to benchmarking single-objective optimization algorithms, several test functions with diverse characteristics should be employed, because different test functions challenge an algorithm from different perspectives. There are many standard test functions in the literature: ZDT [48], DTLZ [49], and CEC2009 [50]. The details of the test functions employed in this work are presented in the Appendix. It can be seen that the test functions have different Pareto optimal fronts: concave, convex, linear, and separated. The search spaces of most of them are multimodal, in which several local fronts hinder the solutions from moving easily towards the true Pareto optimal front.

In order to verify the results of the proposed MOGOA algorithm, its results are compared with well-regarded and popular multi-objective algorithms in the literature: NSGA-II, MOPSO, MODA [51], and MOALO [52]. The results are collected and compared qualitatively and quantitatively. For the qualitative results, the best Pareto optimal fronts over 10 runs are chosen and drawn in the following subsection. Such figures allow us to see which algorithm performs better; however, they cannot show accurately how much better an algorithm is. Therefore, we have employed a set of performance indicators to quantify the convergence and coverage of the algorithms. The first performance metric employed is the inverse generational distance (IGD), defined as follows:

IGD = sqrt( Σ_{i=1}^{nt} d_i² ) / nt    (4.1)

where nt is the number of true Pareto optimal solutions and d_i indicates the Euclidean distance between the i-th true Pareto optimal solution and the closest obtained Pareto optimal solution in the reference set.

The IGD metric quantifies the convergence of algorithms, so we can measure how close the obtained Pareto optimal solutions are to the true Pareto optimal solutions. However, the coverage of solutions across all objectives is also important. To measure coverage and quantitatively compare the algorithms, the spacing (SP) and maximum spread (MS) measures are employed. SP and MS are given in (4.2) and (4.3), respectively:

SP = sqrt( (1/(n−1)) Σ_{i=1}^{n} (d̄ − d_i)² )    (4.2)

where d̄ is the average of all d_i, n is the number of Pareto optimal solutions obtained, and d_i = min_j ( |f1(x_i) − f1(x_j)| + |f2(x_i) − f2(x_j)| ) for all i, j = 1, 2, 3, ..., n.

MS = sqrt( Σ_{i=1}^{o} max( d(a_i, b_i) ) )    (4.3)

where d is a function to calculate the Euclidean distance, a_i is the maximum value in the i-th objective, b_i is the minimum value in the i-th objective, and o is the number of objectives.

Note that for IGD and SP, lower values indicate better results. By contrast, MS is higher for a better algorithm and indicates higher coverage. In order to qualitatively compare the results and observe the distribution of solutions, the best Pareto optimal front obtained by each of the algorithms is also illustrated in the following subsection.

4.2 Results on ZDT test suite

The results of the algorithms on the test functions are presented in Table 1 and Fig. 5. Table 1 includes the average, standard deviation, median, best, and worst IGD values obtained by the algorithms over 10 independent runs. Figure 5 illustrates the best Pareto optimal front obtained by each algorithm.

Inspecting the results in Table 1, it is evident that the MOGOA algorithm outperforms MOPSO and NSGA-II on three of the test functions: ZDT3, ZDT1 with a linear front, and ZDT1 with 3 objectives. On the rest of the test functions, MOGOA shows very competitive results. It is interesting that MOGOA managed to show significantly better results than NSGA-II on all of these functions. The IGD performance measure is a good indicator of the accuracy of algorithms in approximating Pareto optimal solutions. Therefore, these results show that MOGOA benefits from good convergence towards the Pareto optimal solutions.

The best Pareto optimal fronts illustrated in Fig. 5 show that the distribution of the Pareto optimal solutions obtained by MOGOA tends to be better than that of the other two algorithms. It may be seen that MOGOA outperforms NSGA-II on all the functions and is occasionally better than MOPSO. The Pareto optimal fronts for ZDT3 indicate that MOGOA finds four separated regions, while MOPSO finds only three of them. On the most challenging test function, ZDT1 with 3 objectives, it can be observed that the Pareto optimal solutions found by MOGOA are well distributed.

4.3 Results on CEC2009 test suite

The preceding section investigated the performance of the proposed MOGOA algorithm on the ZDT test set. Most of
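For reference, the three performance metrics defined in (4.1)–(4.3) can be sketched in a few lines of Python. This is a minimal illustration for bi- or tri-objective fronts stored as tuples; the function names are illustrative, not from the paper.

```python
import math

def igd(true_front, obtained_front):
    # Eq. (4.1): distance from each true Pareto point to the closest
    # obtained point, aggregated as sqrt(sum of d_i^2) / nt
    d = [min(math.dist(t, o) for o in obtained_front) for t in true_front]
    return math.sqrt(sum(di * di for di in d)) / len(true_front)

def spacing(front):
    # Eq. (4.2) for a bi-objective front; d_i is the Manhattan distance
    # to the nearest other point, as in the definition above
    n = len(front)
    d = [min(abs(p[0] - q[0]) + abs(p[1] - q[1])
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))

def maximum_spread(front):
    # Eq. (4.3): extent covered in each objective, combined across
    # all o objectives
    spans = [max(p[k] for p in front) - min(p[k] for p in front)
             for k in range(len(front[0]))]
    return math.sqrt(sum(s * s for s in spans))
```

A perfectly matched front gives an IGD of zero, evenly spaced points give an SP of zero, and MS grows with the extent covered in each objective.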
Table 1 Results of the multi-objective algorithms on ZDT1, ZDT2, ZDT3, ZDT1 with linear front, and ZDT1 with 3 objectives

IGD
Algorithm   Ave       Std.      Median    Best      Worst

ZDT1
MOGOA       0.0121    0.0247    0.0046    0.0028    0.0822
MOPSO       0.0042    0.0031    0.0037    0.0015    0.0101
NSGA-II     0.0599    0.0054    0.0574    0.0546    0.0702
MODA        0.0061    0.0028    0.0072    0.0024    0.0096
MOALO       0.0152    0.00502   0.0166    0.0061    0.0209

ZDT2
MOGOA       0.007     0.0090    0.0049    0.0016    0.0273
MOPSO       0.003     0.0002    0.0017    0.0013    0.0017
NSGA-II     0.140     0.0263    0.1258    0.1148    0.1834
MODA        0.017     0.0109    0.0165    0.0050    0.0377
MOALO       0.017     0.0109    0.0165    0.0050    0.0377

ZDT3
MOGOA       0.0306    0.0034    0.0313    0.0224    0.0345
MOPSO       0.0378    0.0063    0.0362    0.0308    0.0497
NSGA-II     0.0417    0.0081    0.0403    0.0315    0.0557
MODA        0.0279    0.0040    0.0302    0.02      0.0304
MOALO       0.0303    0.0009    0.0323    0.0303    0.0330

ZDT1 with a linear front
MOGOA       0.0091    0.0148    0.0023    0.0017    0.0498
MOPSO       0.0092    0.0055    0.0098    0.0012    0.0165
NSGA-II     0.0827    0.0054    0.0804    0.0773    0.0924
MODA        0.0061    0.0051    0.0038    0.0022    0.0163
MOALO       0.0198    0.0075    0.0196    0.0106    0.0330

ZDT1 with 3 objectives
MOGOA       0.0114    0.0079    0.0081    0.0068    0.0289
MOPSO       0.0203    0.0013    0.0203    0.0189    0.0225
NSGA-II     0.0626    0.0179    0.0584    0.0371    0.0847
MODA        0.00916   0.0053    0.0063    0.0048    0.0191
MOALO       0.01982   0.0075    0.0196    0.0106    0.0330
the test functions in this test suite are not multi-modal. To benchmark the performance of the proposed algorithm on more challenging test beds, this subsection employs the CEC2009 test functions. These functions are among the most difficult test functions in the literature of multi-objective optimization and are able to confirm whether the superiority of MOGOA is significant or not. The mathematical formulations of these test functions are given in the Appendix. The results of MOGOA on the CEC2009 test functions are presented in Tables 2, 3, and 4 and compared to those of MOPSO and MOEA/D.

Table 2 shows that the MOGOA algorithm provides better results on six of the CEC2009 test functions: UF3, UF5, UF7, UF8, UF9, and UF10. IGD quantifies the convergence of algorithms, so these results show that the proposed algorithm is able to find very accurate approximations of the true Pareto optimal solutions for the given multi-objective problems. High convergence, however, might result in poor coverage. Inspecting the results of Tables 3 and 4, it may be seen that the coverage of the MOGOA algorithm tends to be better than those of MOPSO and MOEA/D on the majority of the CEC2009 test functions. This shows that the proposed algorithm benefits from high coverage as well.

The results of the preceding tables were collected over 30 independent runs. The average, median, standard deviation, maximum, and minimum statistical metrics show how well the proposed algorithm performs on average. To see how significant the superiority of the proposed algorithm is in each run, and to prove that the
Fig. 5 Best Pareto optimal front obtained by the multi-objective algorithms on test functions

results were not obtained by chance, the Wilcoxon rank-sum test is conducted in this subsection as well. P-values that are less than 0.05 can be considered strong evidence against the null hypothesis.

For this statistical test, the best algorithm on each test function is chosen and compared with the other algorithms independently. For example, if the best algorithm is MOGOA, the pairwise comparisons are done between MOGOA/MOPSO and MOGOA/MOEA/D. The results are presented in Tables 5 and 6. It is evident in these tables that the superiority of the proposed algorithm is statistically significant on the majority of test cases. MOGOA tends
Table 2 Statistical results for IGD on UF1 to UF10

           UF1 (bi-objective)           UF2 (bi-objective)           UF3 (bi-objective)
IGD        MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.1811  0.1370  0.1871      0.0959  0.0604  0.1223      0.2380  0.3140  0.2886
Median     0.1892  0.1317  0.1829      0.0894  0.0484  0.1201      0.2166  0.3080  0.2893
Std. Dev.  0.0250  0.0441  0.0507      0.0386  0.0276  0.0107      0.0662  0.0447  0.0159
Worst      0.2100  0.2279  0.2464      0.1681  0.1305  0.1437      0.3690  0.3777  0.3129
Best       0.1430  0.0899  0.1265      0.0488  0.0370  0.1049      0.1682  0.2565  0.2634

           UF4 (bi-objective)           UF5 (bi-objective)           UF6 (bi-objective)
IGD        MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.0702  0.1360  0.0681      1.1559  2.2024  1.2915      0.7771  0.6475  0.6881
Median     0.0696  0.1343  0.0685      1.1470  2.1257  1.3376      0.7345  0.5507  0.6984
Std. Dev.  0.0048  0.0074  0.0021      0.1661  0.5530  0.1349      0.2769  0.2661  0.0553
Worst      0.0788  0.1519  0.0704      1.4174  3.0384  1.4675      1.3288  1.2428  0.7401
Best       0.0639  0.1273  0.0647      0.8978  1.4648  1.1231      0.4939  0.3793  0.5524

           UF7 (bi-objective)           UF8 (tri-objective)   UF9 (tri-objective)   UF10 (tri-objective)
IGD        MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO         MOGOA   MOPSO         MOGOA   MOPSO
Average    0.1726  0.3540  0.4552      0.2805  0.5367        0.4427  0.4885        0.9043  1.6372
Median     0.1567  0.3873  0.4377      0.2497  0.5364        0.4330  0.4145        0.8928  1.5916
Std. Dev.  0.0633  0.2044  0.1898      0.0749  0.1826        0.0609  0.1445        0.1848  0.2988
Worst      0.3320  0.6151  0.6770      0.4532  0.7964        0.5662  0.7221        1.2285  2.1622
Best       0.1150  0.0540  0.0290      0.2154  0.2453        0.3742  0.3336        0.6432  1.2201
to provide p-values greater than 0.05, which shows that this algorithm is highly competitive on the test cases where it is not the best.

To sum up, the results show that MOGOA is very promising and competitive compared to the current well-regarded algorithms. The reasons for this can be summarized in two key features: high convergence and coverage. Superior convergence is due to the target selection, in which one of the best non-dominated solutions always updates the position of the others. Another advantage is the high coverage of
Table 3 Statistical results for SP on UF1 to UF10

           UF1 (bi-objective)           UF2 (bi-objective)           UF3 (bi-objective)
SP         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.0012  0.0090  0.0038      0.0007  0.0083  0.0088      0.0019  0.0070  0.0268
Median     0.0012  0.0086  0.0038      0.0001  0.0081  0.0086      0.0006  0.0068  0.0251
Std. Dev.  0.0011  0.0025  0.0015      0.0011  0.0017  0.0008      0.0024  0.0017  0.0206
Worst      0.0031  0.0146  0.0067      0.0031  0.0125  0.0104      0.0055  0.0101  0.0626
Best       0.0000  0.0067  0.0021      0.0000  0.0062  0.0080      0.0000  0.0048  0.0008

           UF4 (bi-objective)           UF5 (bi-objective)           UF6 (bi-objective)
SP         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.0001  0.0067  0.0073      0.0007  0.0048  0.0028      0.0003  0.0208  0.0063
Median     0.0000  0.0066  0.0073      0.0007  0.0049  0.0001      0.0002  0.0124  0.0000
Std. Dev.  0.0002  0.0009  0.0006      0.0005  0.0041  0.0055      0.0004  0.0326  0.0127
Worst      0.0006  0.0081  0.0084      0.0014  0.0121  0.0162      0.0011  0.1114  0.0303
Best       0.0000  0.0055  0.0061      0.0001  0.0001  0.0000      0.0000  0.0022  0.0000

           UF7 (bi-objective)           UF8 (tri-objective)   UF9 (tri-objective)   UF10 (tri-objective)
SP         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO         MOGOA   MOPSO         MOGOA   MOPSO
Average    0.0001  0.0067  0.0054      0.0175  0.0268        0.0139  0.0234        0.0067  0.0199
Median     0.0000  0.0066  0.0044      0.0177  0.0264        0.0123  0.0235        0.0067  0.0207
Std. Dev.  0.0001  0.0029  0.0030      0.0085  0.0083        0.0101  0.0041        0.0041  0.0035
Worst      0.0002  0.0124  0.0117      0.0320  0.0447        0.0320  0.0309        0.0123  0.0267
Best       0.0000  0.0033  0.0008      0.0069  0.0153        0.0000  0.0172        0.0000  0.0154
Table 4 Statistical results for MS on UF1 to UF10

           UF1 (bi-objective)           UF2 (bi-objective)           UF3 (bi-objective)
MS         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.7270  0.6454  0.5177      0.8845  0.9121  0.8720      0.6051  0.6103  0.2399
Median     0.7607  0.6632  0.5954      0.8824  0.9164  0.8744      0.6513  0.6161  0.2294
Std. Dev.  0.1507  0.1929  0.1661      0.0353  0.0256  0.0056      0.1100  0.1058  0.1213
Worst      0.4899  0.2659  0.3149      0.8150  0.8665  0.8599      0.4026  0.3817  0.0898
Best       0.9120  0.9523  0.7413      0.9360  0.9530  0.8779      0.7060  0.7715  0.4786

           UF4 (bi-objective)           UF5 (bi-objective)           UF6 (bi-objective)
MS         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
Average    0.9050  0.8128  0.8832      0.2379  0.2793  0.2922      0.2525  0.2744  0.0968
Median     0.9060  0.8132  0.8813      0.2213  0.2865  0.2917      0.2109  0.2292  0.0001
Std. Dev.  0.0139  0.0137  0.0181      0.1131  0.0958  0.0347      0.1294  0.1129  0.2072
Worst      0.8834  0.7944  0.8532      0.1150  0.1557  0.2383      0.0695  0.1544  0.0000
Best       0.9310  0.8345  0.9139      0.4894  0.4383  0.3438      0.4600  0.5252  0.5948

           UF7 (bi-objective)           UF8 (tri-objective)   UF9 (tri-objective)   UF10 (tri-objective)
MS         MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO         MOGOA   MOPSO         MOGOA   MOPSO
Average    0.8460  0.4293  0.5632      0.4417  0.5081        0.1938  0.1982        0.3233  0.1302
Median     0.8400  0.2952  0.6327      0.4980  0.5060        0.1700  0.1657        0.2738  0.1091
Std. Dev.  0.0792  0.2755  0.2421      0.1586  0.1614        0.0730  0.1635        0.1237  0.0626
Worst      0.7029  0.1446  0.1496      0.1661  0.2272        0.1175  0.0677        0.1704  0.0649
Best       0.9570  0.8771  0.9915      0.6342  0.7148        0.2940  0.6424        0.5290  0.2540
the MOGOA algorithm, which is because of both the archive maintenance mechanism and the selection of the target. Since solutions are always discarded from the most populated segments and targets are chosen from the least populated segments of the archive, MOGOA improves the diversity and coverage of solutions across all objectives. Despite these benefits, MOGOA is supposed to be applied to problems with three and at most four objectives. As a Pareto dominance-based algorithm, MOGOA becomes less effective as the number of objectives grows. This is due to the fact that in problems with more than four objectives, a large number of solutions are non-dominated, so the archive becomes full quickly. Therefore, the MOGOA algorithm is suitable for solving problems with fewer than four objectives. In addition, this algorithm is suitable only for problems with continuous variables and requires modifications to be used in problems with discrete variables.

The results proved that MOGOA can be very effective for solving optimization problems with multiple objectives. The MOGOA algorithm showed high convergence and coverage. The superior convergence of MOGOA is due to updating solutions around the best non-dominated solutions obtained so far; the solutions tend towards the best solutions. Also, the high convergence originates from the adaptive mechanism which accelerates the movements of grasshoppers toward the best non-dominated solutions obtained so far in the repository. The high coverage of MOGOA is because of the repository maintenance and
Table 5 P-values obtained from the rank-sum test on UF1 to UF7

                       IGD                              SP                               MS
Test function  MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D      MOGOA   MOPSO   MOEA/D
UF1            0.0091  N/A     0.0140      N/A     0.0002  0.0001      N/A     0.3447  0.0001
UF2            0.0113  N/A     0.0017      N/A     0.0002  0.0001      0.0757  N/A     0.0001
UF3            N/A     0.0173  0.0539      N/A     0.0006  0.1153      0.6776  N/A     0.0001
UF4            0.4727  0.0002  N/A         N/A     0.0002  0.0001      N/A     0.0002  0.0001
UF5            N/A     0.0002  0.0640      N/A     0.0376  0.0001      0.2730  N/A     0.1153
UF6            0.2413  N/A     0.2123      N/A     0.0002  0.0001      0.5708  N/A     0.4429
UF7            N/A     0.1212  0.0028      N/A     0.0002  0.0001      N/A     0.0036  0.0001
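The kind of pairwise comparison reported in Tables 5 and 6 can be sketched with a small Wilcoxon rank-sum implementation. The version below uses the large-sample normal approximation without tie handling, so it is a simplified stand-in for the exact test, not the implementation used in the paper.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum test with the large-sample normal
    approximation (ties not handled) -- a minimal sketch of the test
    used to compare per-run metric values of two algorithms."""
    pooled = sorted(a + b)
    w = sum(pooled.index(v) + 1 for v in a)  # rank-sum of sample a
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
```

Two clearly separated samples yield a p-value below 0.05, while two interleaved samples from the same distribution do not.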
Table 6 P-values obtained from the rank-sum test on UF8 to UF10

                       IGD                 SP                  MS
Test function  MOGOA   MOPSO      MOGOA   MOPSO      MOGOA   MOPSO
UF8            N/A     0.0022     N/A     0.0376     0.4727  N/A
UF9            N/A     0.9698     N/A     0.0257     0.6232  N/A
UF10           N/A     0.0002     N/A     0.0002     N/A     0.0010
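The crowding-based archive maintenance discussed in this section can be sketched as follows. The neighbourhood radius and the greedy one-at-a-time removal are assumptions made for illustration, not parameters taken from the paper.

```python
import math

def prune_archive(front, max_size, radius=0.1):
    """Shrink an over-full repository by repeatedly discarding the
    member from the most crowded region, i.e. the one with the most
    objective-space neighbours within `radius`. A simplified sketch of
    the maintenance policy described in the text; `radius` is an
    assumed parameter."""
    front = list(front)
    while len(front) > max_size:
        counts = [sum(1 for j, q in enumerate(front)
                      if j != i and math.dist(p, q) < radius)
                  for i, p in enumerate(front)]
        front.pop(counts.index(max(counts)))  # drop from densest segment
    return front
```

Given three tightly clustered points and one isolated point, pruning to two members removes from the cluster and keeps the isolated point, which preserves the spread of the front.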
target selection mechanisms. When the repository becomes full, non-dominated solutions in populated regions are discarded by MOGOA, which results in improving the distribution of solutions along the entire front.

The procedure of selecting the target also emphasizes coverage because it selects solutions from the least populated regions to be explored and exploited by the swarm. It is worth mentioning here that since the updating mechanism of the target in MOGOA is identical to that of GOA, MOGOA inherits high exploration, avoidance of local solutions, exploitation, and a fast convergence rate from this algorithm. Also, the repulsion and comfort zone of this algorithm cause high exploration and consequently the discovery of new paths towards the undiscovered regions of the true Pareto optimal front. Therefore, the MOGOA algorithm can avoid local fronts and converge towards the true Pareto optimal front.

5 Conclusion

This work proposed a nature-inspired multi-objective algorithm mimicking the interaction of individuals in a swarm of grasshoppers. At first, a mathematical model was employed to simulate the behaviour of the grasshopper swarm and to propose a single-objective optimization algorithm. An archive and target selection mechanism was then integrated into this algorithm to solve multi-objective problems. A set of test functions was used to test the performance of the proposed MOGOA algorithm. The results were compared with those of MOPSO and NSGA-II as the best algorithms in the literature. It was observed that the MOGOA algorithm is very efficient and competitive in finding an accurate estimation of the Pareto optimal front with high distribution across all objectives. In addition, it was discussed that the accurate estimated solutions are due to the high convergence of MOGOA, and the good distribution is because of the high exploration. Also, the target selection mechanism and archive maintenance promote exploration and distribution of solutions. For future works, it is recommended to investigate the effectiveness of different constraint handling techniques to solve multi-objective problems with constraints using MOGOA.

Appendix: Multi-objective test problems utilised in this work

Table 7 ZDT test suite

ZDT1:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
where: g(x) = 1 + 9/(N−1) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − sqrt(f1(x)/g(x))
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT2:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
where: g(x) = 1 + 9/(N−1) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − (f1(x)/g(x))²
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT3:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
where: g(x) = 1 + 9/29 Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − sqrt(f1(x)/g(x)) − (f1(x)/g(x)) sin(10πf1(x))
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT1 with linear PF:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
where: g(x) = 1 + 9/(N−1) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − f1(x)/g(x)
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT2 with three objectives:
Minimise: f1(x) = x1
Minimise: f2(x) = x2
Minimise: f3(x) = g(x) × h(f1(x), g(x)) × h(f2(x), g(x))
where: g(x) = 1 + 9/(N−1) Σ_{i=3}^{N} x_i
h(f_k(x), g(x)) = 1 − (f_k(x)/g(x))², k = 1, 2
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30
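The benchmark definitions in Table 7 translate directly into code. As a hedged example, ZDT1 can be evaluated as below; on the true Pareto front (x_2 = ... = x_N = 0) this gives g = 1 and f2 = 1 − sqrt(f1).

```python
import math

def zdt1(x):
    """Evaluate ZDT1 from Table 7 for a decision vector with
    0 <= x_i <= 1 (N = len(x) variables)."""
    f1 = x[0]
    g = 1.0 + 9.0 / (len(x) - 1) * sum(x[1:])   # g(x)
    h = 1.0 - math.sqrt(f1 / g)                 # h(f1, g)
    return f1, g * h                            # (f1, f2)
```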
Table 8 Bi-objective test problems (CEC2009)

UF1:
f1 = x1 + (2/|J1|) Σ_{j∈J1} [x_j − sin(6πx1 + jπ/n)]²
f2 = 1 − √x1 + (2/|J2|) Σ_{j∈J2} [x_j − sin(6πx1 + jπ/n)]²
J1 = {j | j is odd and 2 ≤ j ≤ n}, J2 = {j | j is even and 2 ≤ j ≤ n}

UF2:
f1 = x1 + (2/|J1|) Σ_{j∈J1} y_j², f2 = 1 − √x1 + (2/|J2|) Σ_{j∈J2} y_j²
J1 and J2 are the same as those of UF1
y_j = x_j − [0.3x1² cos(24πx1 + 4jπ/n) + 0.6x1] cos(6πx1 + jπ/n) if j ∈ J1
y_j = x_j − [0.3x1² cos(24πx1 + 4jπ/n) + 0.6x1] sin(6πx1 + jπ/n) if j ∈ J2

UF3:
f1 = x1 + (2/|J1|) [4 Σ_{j∈J1} y_j² − 2 Π_{j∈J1} cos(20y_jπ/√j) + 2]
f2 = 1 − √x1 + (2/|J2|) [4 Σ_{j∈J2} y_j² − 2 Π_{j∈J2} cos(20y_jπ/√j) + 2]
J1 and J2 are the same as those of UF1, y_j = x_j − x1^{0.5(1.0 + 3(j−2)/(n−2))}, j = 2, 3, ..., n

UF4:
f1 = x1 + (2/|J1|) Σ_{j∈J1} h(y_j), f2 = 1 − x1² + (2/|J2|) Σ_{j∈J2} h(y_j)
J1 and J2 are the same as those of UF1, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n, h(t) = |t|/(1 + e^{2|t|})

UF5:
f1 = x1 + (1/(2N) + ε)|sin(2Nπx1)| + (2/|J1|) Σ_{j∈J1} h(y_j)
f2 = 1 − x1 + (1/(2N) + ε)|sin(2Nπx1)| + (2/|J2|) Σ_{j∈J2} h(y_j)
J1 and J2 are identical to those of UF1, ε > 0, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n
h(t) = 2t² − cos(4πt) + 1

UF6:
f1 = x1 + max{0, 2(1/(2N) + ε) sin(2Nπx1)} + (2/|J1|) [4 Σ_{j∈J1} y_j² − 2 Π_{j∈J1} cos(20y_jπ/√j) + 1]
f2 = 1 − x1 + max{0, 2(1/(2N) + ε) sin(2Nπx1)} + (2/|J2|) [4 Σ_{j∈J2} y_j² − 2 Π_{j∈J2} cos(20y_jπ/√j) + 1]
J1 and J2 are identical to those of UF1, ε > 0, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n

UF7:
f1 = x1^{1/5} + (2/|J1|) Σ_{j∈J1} y_j², f2 = 1 − x1^{1/5} + (2/|J2|) Σ_{j∈J2} y_j²
J1 and J2 are identical to those of UF1, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n
Table 9 Tri-objective test problems (CEC2009)

UF8:
f1 = cos(0.5x1π) cos(0.5x2π) + (2/|J1|) Σ_{j∈J1} [x_j − 2x2 sin(2πx1 + jπ/n)]²
f2 = cos(0.5x1π) sin(0.5x2π) + (2/|J2|) Σ_{j∈J2} [x_j − 2x2 sin(2πx1 + jπ/n)]²
f3 = sin(0.5x1π) + (2/|J3|) Σ_{j∈J3} [x_j − 2x2 sin(2πx1 + jπ/n)]²
J1 = {j | 3 ≤ j ≤ n, and j − 1 is a multiple of 3}, J2 = {j | 3 ≤ j ≤ n, and j − 2 is a multiple of 3},
J3 = {j | 3 ≤ j ≤ n, and j is a multiple of 3}

UF9:
f1 = 0.5 [max{0, (1 + ε)(1 − 4(2x1 − 1)²)} + 2x1] x2 + (2/|J1|) Σ_{j∈J1} [x_j − 2x2 sin(2πx1 + jπ/n)]²
f2 = 0.5 [max{0, (1 + ε)(1 − 4(2x1 − 1)²)} − 2x1 + 2] x2 + (2/|J2|) Σ_{j∈J2} [x_j − 2x2 sin(2πx1 + jπ/n)]²
f3 = 1 − x2 + (2/|J3|) Σ_{j∈J3} [x_j − 2x2 sin(2πx1 + jπ/n)]²
J1, J2, and J3 are the same as those of UF8, ε = 0.1

UF10:
f1 = cos(0.5x1π) cos(0.5x2π) + (2/|J1|) Σ_{j∈J1} [4y_j² − cos(8πy_j) + 1]
f2 = cos(0.5x1π) sin(0.5x2π) + (2/|J2|) Σ_{j∈J2} [4y_j² − cos(8πy_j) + 1]
f3 = sin(0.5x1π) + (2/|J3|) Σ_{j∈J3} [4y_j² − cos(8πy_j) + 1]
J1, J2, and J3 are the same as those of UF8, y_j = x_j − 2x2 sin(2πx1 + jπ/n), j = 3, 4, ..., n
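The CEC2009 definitions in Tables 8 and 9 can likewise be evaluated directly. Below is a hedged sketch for UF1 from Table 8; on its Pareto set, where x_j = sin(6πx1 + jπ/n) for j ≥ 2, all summation terms vanish and (f1, f2) = (x1, 1 − √x1).

```python
import math

def uf1(x):
    """Evaluate CEC2009 UF1 from Table 8; x[0] lies in [0, 1] and the
    remaining variables in [-1, 1]."""
    n = len(x)
    j1 = range(3, n + 1, 2)  # odd j with 2 <= j <= n
    j2 = range(2, n + 1, 2)  # even j with 2 <= j <= n
    def term(j):
        return (x[j - 1] - math.sin(6.0 * math.pi * x[0] + j * math.pi / n)) ** 2
    f1 = x[0] + 2.0 / len(j1) * sum(term(j) for j in j1)
    f2 = 1.0 - math.sqrt(x[0]) + 2.0 / len(j2) * sum(term(j) for j in j2)
    return f1, f2
```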
References

1. Holland JH, Reitman JS (1977) Cognitive systems based on adaptive algorithms. ACM SIGART Bulletin, pp 49–49
2. Tsai C-F, Eberle W, Chu C-Y (2013) Genetic algorithms in feature and instance selection. Knowl-Based Syst 39:240–247
3. Lin M-H, Tsai J-F, Yu C-S (2012) A review of deterministic optimization methods in engineering and management. Mathematical Problems in Engineering, vol 2012
4. Shmoys DB, Swamy C (2004) Stochastic optimization is (almost) as easy as deterministic optimization. In: Proceedings of the 45th annual IEEE symposium on foundations of computer science, 2004, pp 228–237
5. Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1:28–39
6. Kennedy J (2011) Particle swarm optimization. In: Encyclopedia of machine learning. Springer, pp 760–766
7. Knowles J, Corne D (1999) The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation. In: Proceedings of the 1999 congress on evolutionary computation, CEC 99
8. Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
9. Padhye N, Mittal P, Deb K (2013) Differential evolution: performances and analyses. In: 2013 IEEE congress on evolutionary computation (CEC), pp 1960–1967
10. Padhye N, Bhardawaj P, Deb K (2010) Improving differential evolution by altering steps in EC. In: Asia-Pacific conference on simulated evolution and learning, pp 146–155
11. Boussaïd I, Lepagnot J, Siarry P (2013) A survey on optimization metaheuristics. Inf Sci 237:82–117
12. Helbig M, Engelbrecht AP (2013) Performance measures for dynamic multi-objective optimisation algorithms. Inf Sci 250:61–81
13. Padhye N, Zuo L, Mohan CK, Varshney PK (2009) Dynamic and evolutionary multi-objective optimization for sensor selection in sensor networks for target tracking. In: Proceedings of the international joint conference on computational intelligence - volume 1: ICEC (IJCCI 2009). SciTePress, INSTICC, pp 160–167. doi:10.5220/0002324901600167, ISBN: 978-989-674-014-6
14. Padhye N, Zuo L, Mohan CK, Varshney P (2009) Dynamic and evolutionary multi-objective optimization for sensor selection in sensor networks for target tracking. In: IJCCI, pp 160–167
15. Beyer H-G, Sendhoff B (2007) Robust optimization – a comprehensive survey. Comput Methods Appl Mech Eng 196:3190–3218
16. Coello CAC (1999) A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl Inf Syst 1:269–308
17. Deb K (2001) Multi-objective optimization using evolutionary algorithms, vol 16. Wiley
18. von Lücken C, Barán B, Brizuela C (2014) A survey on multi-objective evolutionary algorithms for many-objective problems. Comput Optim Appl 58:707–756
19. Nguyen TT, Yang S, Branke J (2012) Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol Comput 6:1–24
20. Padhye N, Mittal P, Deb K (2015) Feasibility preserving constraint-handling strategies for real parameter evolutionary optimization. Comput Optim Appl 62:851–890
21. Padhye N, Deb K, Mittal P (2013) An efficient and exclusively-feasible constrained handling strategy for evolutionary algorithms. Technical Report
22. Asrari A, Lotfifard S, Payam MS (2016) Pareto dominance-based multiobjective optimization method for distribution network reconfiguration. IEEE Trans Smart Grid 7:1401–1410
23. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(3):32–49
24. Deb K (2014) Multi-objective optimization. In: Search methodologies. Springer, pp 403–449
25. Padhye N, Deb K (2010) Evolutionary multi-objective optimization and decision making for selective laser sintering. In: Proceedings of the 12th annual conference on genetic and evolutionary computation, pp 1259–1266
26. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
27. Coello CC, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 congress on evolutionary computation, CEC'02, pp 1051–1056
28. Padhye N (2008) Topology optimization of compliant mechanism using multi-objective particle swarm optimization. In: Proceedings of the 10th annual conference companion on genetic and evolutionary computation, pp 1831–1834
29. Padhye N (2009) Comparison of archiving methods in multi-objective particle swarm optimization (MOPSO): empirical study. In: Proceedings of the 11th annual conference on genetic and evolutionary computation, pp 1755–1756
30. Alaya I, Solnon C, Ghedira K (2007) Ant colony optimization for multi-objective optimization problems. In: ICTAI (1), pp 450–457
31. Xue F, Sanderson AC, Graves RJ (2003) Pareto-based multi-objective differential evolution. In: The 2003 congress on evolutionary computation, CEC'03, pp 862–869
32. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8:149–172
33. Wolpert D (1997) No free lunch theorem for optimization. In: IEEE transactions on evolutionary computation, pp 467–482
34. Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review. In: IEEE congress on evolutionary computation, pp 2419–2426
35. Marler RT, Arora JS (2004) Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim 26:369–395
36. Deb K, Padhye N, Neema G (2007) Multiobjective evolutionary optimization – interplanetary trajectory optimization with swing-bys using evolutionary multi-objective optimization. Lect Notes Comput Sci 4683:26–35
37. Jin Y, Olhofer M, Sendhoff B (2001) Dynamic weighted aggregation for evolutionary multi-objective optimization: why does it work and how?
38. Branke J, Deb K, Dierolf H, Osswald M (2004) Finding knees in multi-objective optimization. In: International conference on parallel problem solving from nature, pp 722–731
39. Kollat JB, Reed P (2007) A framework for visually interactive decision-making and design using evolutionary multi-objective optimization (VIDEO). Environ Model Softw 22:1691–1704
40. Topaz CM, Bernoff AJ, Logan S, Toolson W (2008) A model for rolling swarms of locusts. Eur Phys J Special Topics 157:93–109
41. Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
42. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
43. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK)
44. Srinivas N, Deb K (1994) Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2:221–248
45. Zitzler E, Thiele L (1998) Multiobjective optimization using evolutionary algorithms – a comparative case study. In: Parallel problem solving from nature – PPSN V, pp 292–301
46. Deb K, Padhye N (2014) Enhancing performance of particle swarm optimization through an algorithmic link with genetic algorithms. Comput Optim Appl 57:761–794
47. Padhye N, Bhardawaj P, Deb K (2013) Improving differential evolution through a unified approach. J Glob Optim 55:771
48. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8:173–195
49. Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multi-objective optimization test problems. In: Proceedings of the 2002 congress on evolutionary computation, CEC'02, pp 825–830
50. Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S (2008) Multiobjective optimization test instances for the CEC 2009 special session and competition. University of Essex, Colchester, UK and Nanyang Technological University, Singapore, Special Session on Performance Assessment of Multi-Objective Optimization Algorithms, Technical Report
51. Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Applic 27:1053–1073
52. Mirjalili S, Jangir P, Saremi S (2016) Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Applied Intelligence, pp 1–17