Grasshopper Optimization Algorithm For Multi-Objective Optimization Problems
DOI 10.1007/s10489-017-1019-8
mainly due to the higher probability of local solutions avoidance [4]. However, the main drawback of GA was the stochastic nature of this algorithm, which resulted in finding different solutions in every run. This problem seemed easy to handle with a sufficient number of runs, yet a large number of function evaluations for every run was still an issue. Advances in computer hardware, nowadays, are reducing the computational cost of GA significantly. Therefore, GA can be considered a reliable problem-solving technique compared to exact methods.

The success of GA in solving a wide range of problems in science and industry paved the way for proposing new heuristic algorithms. The proposal of well-regarded algorithms such as Ant Colony Optimization (ACO) [5], Particle Swarm Optimization (PSO) [6], Evolution Strategy (ES) [7], and Differential Evolution (DE) [8–10] were the results of the success of GA. This field of study is still one of the most popular in the computational intelligence field. In addition to the aforementioned advantage of heuristics, local optima avoidance, there are several other advantages that have contributed to their popularity.

Heuristic algorithms do not need to calculate the derivative of a problem to be able to solve it. This is because they are not designed based on gradient descent to find the global optimum. They consider a problem as a black box with a set of inputs and outputs. Their inputs are the variables of the problem and the outputs are the objectives. A heuristic search starts with creating a set of random inputs as the candidate solutions for the problem. The search is continued by evaluating each solution, observing the objective values, and changing/combining/evolving the solutions based on their outputs. These steps are repeated until a termination criterion is met, which can be either exceeding a maximum number of iterations or a maximum number of function evaluations.

Although heuristics are very effective in solving real challenging problems, there are many difficulties when solving optimization problems [11]. In addition, optimization problems are not all similar and have diverse characteristics. Some of these difficulties/features are: dynamicity [12–14], uncertainty [15], constraints [16], multiple objectives [17], and many objectives [18]. Each of these difficulties has created a subfield in the field of heuristics and attracted many researchers.

In dynamic problems, the position of the global optimum changes over time. Therefore, a heuristic should be equipped with suitable operators to track the changes and not lose the global optimum [19]. In real problems, there is a variety of uncertainty applied to each component. In order to address this, a heuristic should be able to find robust solutions that are fault tolerant. Constraints are another difficulty of a real problem, which restrict the search space. They divide solutions into feasible and infeasible. This means that heuristics should be equipped with suitable operators to discard infeasible solutions during optimization and eventually find the best feasible solution. There are many outstanding constraint handling techniques in the literature [20, 21]. Since the proposed technique of this work will be applied to only unconstrained problems, we do not review the literature of such techniques further and refer interested readers to [20, 21].

Some problems have computationally expensive objective functions, which make the whole heuristic optimization process very long due to the need to evaluate solutions iteratively. In order to solve such a problem, researchers try to reduce the number of function evaluations or utilize surrogate models, which are computationally much cheaper.

One of the main difficulties mentioned above is the existence of multiple objectives. A heuristic algorithm cannot compare solutions when there is more than one objective to be considered. In order to solve such problems, researchers compare solutions using the Pareto dominance operator [22]. Due to the nature of such problems, in addition, there is more than one best (non-dominated) solution for a multi-objective problem. A heuristic should be able to find all the best solutions. Multi-objective optimization using heuristics has recently attracted much attention and is the main focus of this work [23].

The majority of heuristics have been equipped with proper operators to solve multi-objective problems in the literature. The mechanism of most multi-objective heuristics is almost identical. The essential component is an archive or repository to store the non-dominated solutions during the optimization process. Multi-objective heuristics iteratively update this archive to improve the quality and quantity of the non-dominated solutions in the archive. Another duty of a multi-objective heuristic is to find "different" non-dominated solutions. This means that the non-dominated solutions should be spread uniformly across all objectives to show all the best trade-offs between the multiple objectives. This is a key feature in a posteriori multi-objective algorithms [24], where decision making [25] occurs after the optimization.

There are many algorithms in the literature for solving multi-objective problems. For the GA, the most well-regarded multi-objective counterpart is the Non-dominated Sorting Genetic Algorithm (NSGA) [26]. Other popular algorithms are: Multi-Objective Particle Swarm Optimization (MOPSO) [27–29], Multi-objective Ant Colony Optimization [30], Multi-objective Differential Evolution [31], and Multi-objective Evolution Strategy [32]. All these algorithms have been proved to be effective in finding non-dominated solutions for multi-objective problems. However, the question is whether we still need more algorithms. According to the No Free Lunch theorem for optimization [33], there is no algorithm capable of solving optimization problems of all kinds. This theorem proved this logically and allows the proposal of new algorithms or the improvement of current ones. Therefore, there is still room for new or improved algorithms to better solve the current problems and to solve problems that are difficult to solve with the current techniques.

The Grasshopper Optimization Algorithm (GOA) [40] has been proven to benefit from high exploration while showing very fast convergence speed. The special adaptive mechanism in this algorithm smoothly balances exploration and exploitation. These characteristics make the GOA algorithm potentially able to cope with the difficulties of a multi-objective search space and outperform other techniques. In addition, its computational complexity is better than those of many optimization techniques in the literature. These powerful features motivated our attempts to propose a multi-objective optimizer inspired from the social behaviour of grasshoppers in nature. The rest of this paper is organized as follows:

Section 2 provides the preliminaries, essential definitions, and a literature review of multi-objective optimization using heuristics. The GOA and its proposed multi-objective version are described in detail in Section 3. The results are presented, discussed, and analysed in Section 4. The latter section also includes the experimental set-up, test functions, and performance metrics. Finally, Section 5 concludes the work and suggests several future research directions.

2 Multi-objective optimization

2.1 Preliminaries and definitions

As its name implies, multi-objective optimization deals with optimizing multiple objectives. In the literature, the term multi-objective refers to problems with up to four objectives. Due to the complexity of problems with more than four objectives, there is a specialized field called many-objective optimization to solve problems with many objectives. Since solving such problems is out of the scope of this work, interested readers are referred to a survey in [34].

In order to formulate a multi-objective problem, we can use a problem definition. Without loss of generality, the following equations show multi-objective optimization as a minimization problem [35]:

Minimize: F(x) = {f1(x), f2(x), ..., fo(x)}   (2.1)

Subject to: gi(x) ≥ 0, i = 1, 2, ..., m   (2.2)

hi(x) = 0, i = 1, 2, ..., p   (2.3)

Li ≤ xi ≤ Ui, i = 1, 2, ..., n   (2.4)

where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, gi is the i-th inequality constraint, hi indicates the i-th equality constraint, and [Li, Ui] are the boundaries of the i-th variable.

One of the main difficulties in a multi-objective search space is that objectives can be in conflict and require special considerations. In a multi-objective search space, a solution cannot be compared with another with relational operators. This is due to the existence of more than one criterion for comparison [36]. Therefore, we need other operators to measure and find out how much a solution is better than another. The most widely used operator is called Pareto dominance and is defined as follows:

∀i ∈ {1, 2, ..., o}: fi(x) ≤ fi(y)  ∧  ∃i ∈ {1, 2, ..., o}: fi(x) < fi(y)   (2.5)

where x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn).

This equation shows that a solution (vector x) is better than another (vector y) if it is no worse on all objectives and strictly better on at least one objective. In this case, x is said to dominate y, which is denoted as x ≺ y. An example is presented in Fig. 1. This figure shows that the circles are better than the squares because they provide lower values on both objectives.

Fig. 1 Pareto dominance (axes: minimize f1 and minimize f2)

Despite the fact that the circles dominate the squares in the above figure, the circles are non-dominated with respect to each other. This means that each is better in one objective and worse in the other. Pareto optimality can be mathematically defined as follows:

∄ y ∈ X : y ≺ x   (2.6)

A solution x satisfying this condition is referred to as a Pareto optimal solution because it cannot be dominated by any other solution in X.

For every problem, there is a set of best non-dominated solutions. This set is considered as the solution of a multi-objective optimization problem. Consequently, the projection of the Pareto optimal solutions in the objective space is stored in a set called the Pareto optimal front.
2.2 Multi-objective optimization using metaheuristics

In the literature, there are three main approaches for solving multi-objective problems using metaheuristics: a priori [37], a posteriori [38], and interactive [39]. In the first approach, multiple objectives are aggregated into one objective. This means that the multi-objective problem is converted to a single-objective one as follows:

Minimize: F(x) = w1 f1(x) + w2 f2(x) + ... + wo fo(x)   (2.7)

Subject to: gi(x) ≥ 0, i = 1, 2, ..., m   (2.8)

hi(x) = 0, i = 1, 2, ..., p   (2.9)

Li ≤ xi ≤ Ui, i = 1, 2, ..., n   (2.10)

where w1, w2, ..., wo are the weights of the objectives, n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, gi is the i-th inequality constraint, hi indicates the i-th equality constraint, and [Li, Ui] are the boundaries of the i-th variable.

Aggregation of objectives allows single-objective optimizers to effectively find Pareto optimal solutions. However, the main drawbacks of this approach are the need to run an algorithm multiple times to find multiple Pareto optimal solutions, dealing with all the challenges in every run, the lack of information exchange between Pareto optimal solutions during optimization, the need to consult with an expert to find the best weights, and the failure to find concave regions of the Pareto optimal front due to the addition of objectives. Such methods are called a priori because decision making is done before the optimization, in determining the weights. Obviously, the disadvantages of a priori approaches outweigh their benefits. The main duty of a designer when using such techniques is to run the algorithm multiple times while changing the weights to find the Pareto optimal front.

The second class of multi-objective metaheuristics is considered a posteriori. As the name implies, decision making is done after the optimization. There is no aggregation anymore, and such methods maintain the multi-objective formulation of a multi-objective problem. The main advantages of this class are the ability to find the Pareto optimal solution set in just one run, the exchange of information between Pareto optimal solutions during optimization, and the ability to determine a Pareto optimal front of any type. However, a posteriori methods require special mechanisms to address multiple and often conflicting objectives. In addition, the computational cost of such methods is normally higher than that of aggregation techniques.

The last method mentioned above is called interactive multi-objective optimization. The name indicates that decision making is done during optimization. Interactive optimization is also called human-in-the-loop optimization, in which an expert's preference is continuously fetched and involved during the optimization process to find the desired Pareto optimal solutions.

The literature shows that a posteriori methods are the dominant methods of multi-objective optimization. Most of the well-regarded algorithms in the field of single-objective optimization have been modified to perform a posteriori multi-objective optimization. Needless to say, they all compare solutions based on Pareto dominance and employ an archive to store the best Pareto optimal solutions obtained so far. The general framework of all a posteriori methods is identical. They initiate the optimization process with a set of random solutions. After finding the Pareto optimal solutions and storing them in an archive, they try to improve the solutions to find better Pareto optimal solutions. The process of improving Pareto optimal solutions is stopped when a condition is met.

The main objective of an a posteriori multi-objective algorithm is to find a very accurate approximation of the actual (true) Pareto optimal solutions for a given multi-objective problem. Due to the occurrence of decision making after the optimization process, the solutions should also be spread along all objectives as uniformly as possible. One of the main challenges here is that finding accurate Pareto optimal solutions (convergence) is in conflict with the distribution of solutions (coverage). A multi-objective optimization algorithm should be able to effectively balance these two factors to solve a multi-objective problem.

In order to improve the coverage, different mechanisms are utilized. In MOPSO, for instance, Pareto optimal solutions in the less populated segments of the archive have a higher probability of being chosen as the leaders for other particles. In NSGA-II, non-dominated sorting ranks Pareto optimal solutions and assigns them a number. This gives better Pareto optimal solutions a higher chance of participation in creating the new generation.

In spite of the recent advances in the field of multi-objective optimization, many researchers try to improve the current techniques or propose new ones to solve the current multi-objective optimization problems better. This motivated our attempts to propose and investigate the effectiveness of a new algorithm called the Grasshopper Optimization Algorithm (GOA) in this field. In the next section, the GOA is introduced first and then the multi-objective version of this algorithm is proposed.

3 Multi-objective grasshopper optimization algorithm (MOGOA)

This section first introduces the GOA algorithm. The multi-objective version of this algorithm is then proposed.
3.1 Grasshopper optimization algorithm (GOA)

The position of the i-th grasshopper, Xi, is modelled as:

Xi = Si + Gi + Ai   (3.1)

where Si is the social interaction, Gi is the gravity force on the i-th grasshopper, and Ai shows the wind advection.

Equation (3.1) includes three main components to simulate social interaction, the impact of gravitational force, and wind advection. These components fully simulate the movement of grasshoppers, yet the main component originating from the grasshoppers themselves is the social interaction, discussed as follows:

Si = Σ_{j=1, j≠i}^{N} s(dij) d̂ij   (3.2)

where dij is the distance between the i-th and j-th grasshoppers, calculated as dij = |xj − xi|, s is a function that defines the strength of the social forces as shown in (3.3), and d̂ij = (xj − xi)/dij is a unit vector from the i-th grasshopper to the j-th grasshopper.

The s function, which defines the social forces, is calculated as follows:

s(r) = f e^(−r/l) − e^(−r)   (3.3)

where f indicates the intensity of attraction and l is the attractive length scale.

The function s is illustrated in Fig. 2 to show how it impacts the social interaction (attraction and repulsion) of grasshoppers.

Fig. 2 The function s when l = 1.5 and f = 0.5 (horizontal axis: distance d from 0 to 15)

Inspecting Fig. 2, it may be seen that repulsion forces occur in the interval [0, 2.079]. If the distance becomes equal to 2.079, there is neither attraction nor repulsion. This area is called the comfort area. The attraction force increases from 2.079 units of distance to nearly 4 and then gradually decreases. Changing the parameters l and f in (3.3) results in different social behaviours in artificial grasshoppers, as may be seen in Fig. 3.

To show the interaction between grasshoppers with respect to the comfort area, Fig. 4 presents a conceptual schematic.

Despite the merits of the function s, it is not able to apply strong forces between grasshoppers with large distances between them. To resolve this issue, the distance between grasshoppers should be mapped or normalized to [1, 4].

The G component in (3.1) is calculated as follows:

Gi = −g êg   (3.4)

where g is the gravitational constant and êg shows a unit vector towards the centre of the earth.

The A component in (3.1) is calculated as follows:

Ai = u êw   (3.5)

where u is a constant drift and êw is a unit vector in the direction of the wind.

Equation (3.1) can be written with all components expanded as follows:

Xi = Σ_{j=1, j≠i}^{N} s(|xj − xi|) (xj − xi)/dij − g êg + u êw   (3.6)

where s(r) = f e^(−r/l) − e^(−r) and N is the number of grasshoppers.

To solve optimization problems, a stochastic algorithm must perform exploration and exploitation effectively to determine an accurate approximation of the global optimum. The mathematical model presented above should be equipped with special parameters to show exploration and exploitation in different stages of optimization. The proposed mathematical model is as follows:

Xi^d = c ( Σ_{j=1, j≠i}^{N} c ((ub_d − lb_d)/2) s(|xj^d − xi^d|) (xj − xi)/dij ) + T̂d   (3.7)
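The behaviour of the social-force function s in (3.3), which also appears inside (3.7), can be checked numerically. The following sketch is illustrative Python (the function name and the sampled distances are our own), using f = 0.5 and l = 1.5 as in Fig. 2:

```python
import math

def s(r, f=0.5, l=1.5):
    """Social force of (3.3): attraction term f*exp(-r/l) minus repulsion exp(-r)."""
    return f * math.exp(-r / l) - math.exp(-r)

# Below the comfort distance the net force is repulsive (s < 0);
# above it, attractive (s > 0). The sign change lies near d = 2.079.
for d in (1.0, 2.079, 3.0, 10.0):
    print(d, s(d))
```

This also shows why the distances are normalized to [1, 4]: for large d both exponentials vanish, so s can no longer apply meaningful forces.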
Fig. 3 The function s for varying f (f = 0.0 to 1.0, with l = 1.5) and for varying l (l = 1.0 to 2.0, with f = 0.5), plotted over distances d from 0 to 15
where ub_d is the upper bound in the d-th dimension, lb_d is the lower bound in the d-th dimension, s(r) = f e^(−r/l) − e^(−r), T̂d is the value of the d-th dimension in the target (the best solution found so far), and c is a decreasing coefficient to shrink the comfort area, repulsion area, and attraction area. Note that S is almost similar to the S component in (3.1). However, we do not consider the gravity (no G component) and assume that the wind direction (A component) is always towards the target (T̂d).

It should be noted that the inner c contributes to the reduction of the repulsion/attraction forces between grasshoppers proportional to the number of iterations, while the outer c reduces the search coverage around the target as the iteration counter increases.

Fig. 4 A conceptual model of the comfort zone, attraction force, and repulsion force between grasshoppers

The parameter c is updated with the following equation to reduce exploration and increase exploitation proportional to the number of iterations:

c = cmax − l (cmax − cmin)/L   (3.8)

where cmax is the maximum value, cmin is the minimum value, l indicates the current iteration, and L is the maximum number of iterations. In this work, we use the values 1 and 0.00001 for cmax and cmin, respectively.

3.2 Multi-objective grasshopper optimization algorithm (MOGOA)

A multi-objective algorithm seeks two goals when solving multi-objective problems. For one, very accurate approximations of the true Pareto optimal solutions should be found. For another, the solutions should be well-distributed across all objectives. This is essential in a posteriori methods since the decision making is done after the optimization process. In the following paragraphs, the main mechanisms to achieve these two essential goals are discussed.

As discussed in the preceding section, two solutions cannot be compared with the regular relational operators. Also, there is more than one solution for a multi-objective problem. In order to compare the solutions in MOGOA, Pareto dominance is used. The best Pareto optimal solutions are also stored in an archive. The main challenge in designing MOGOA based on GOA is to choose the target. The target is the main component that leads the search agents towards promising regions of the search space. The same equations of the preceding section are used in MOGOA, and the main difference is the process of updating the target.

The target can be chosen easily in a single-objective search space by choosing the best solution obtained so far. In MOGOA, however, the target should be chosen from a set of Pareto optimal solutions. Obviously, the Pareto optimal solutions are added to the archive, and the target must be one of them in the archive. The challenge here is to find a target that improves the distribution of solutions in the archive. In order to achieve this, the number of neighbouring solutions in the neighbourhood of every solution in the archive is first calculated considering a fixed distance. This approach is similar to that in MOPSO. Afterwards, the number of neighbouring solutions is taken as the quantitative metric to measure the crowdedness of regions in the Pareto optimal front. The following equation defines the probability of choosing the target from the archive:

Pi = 1/Ni   (3.9)

where Ni is the number of solutions in the vicinity of the i-th solution.

With this probability, a roulette wheel is utilized to select the target from the archive. This improves the distribution of the less populated regions of the search space. Another advantage is that, in case of premature convergence, it is still possible for solutions with a crowded neighbourhood to be selected as the target and resolve this issue.

The archive that is employed has a limitation. There should be a limited number of solutions in the archive to be able to decrease the computational cost of MOGOA. This leads to the issue of a full archive. We deliberately remove solutions with crowded neighbourhoods to decrease the number of solutions in the crowded regions. This allows accommodation of new solutions in the less populated regions. In order to do so, the inverse of Pi in (3.9) and a roulette wheel are used.

The archive should also be updated regularly. However, there are different cases when comparing a solution outside the archive with the solutions inside the archive. MOGOA should be able to handle these cases to improve the archive. The easiest case is where the external solution is dominated by at least one of the archive members; in this case it should be discarded immediately. Another case is when the solution is non-dominated with respect to all solutions in the archive. Since the archive stores the non-dominated solutions obtained so far, such a solution should be added to the archive. Finally, if the solution dominates a solution in the archive, the dominated archive member should be replaced by the new solution.

The computational complexity of MOGOA is O(MN²), where M is the number of objectives and N is the number of solutions. This complexity is equal to those of other well-known algorithms in this field: NSGA-II [42], MOPSO, SPEA2 [43], and PAES [7]. The computational complexity is better than those of other algorithms such as NSGA [44] and SPEA [45], which are of O(MN³).

Note that the MOGOA algorithm was designed considering the 'Unified Framework' proposed by Padhye et al. [46, 47], in which the main steps are initialization, selection, generation, and replacement. Improving the performance of MOGOA by integrating evolutionary operators (e.g. crossover and mutation) is out of the scope of this work, but it would be a valuable contribution in future.

With the above-mentioned mechanisms and rules, MOGOA is able to find the Pareto optimal solutions, store them in the archive, and improve their distribution. In the following section, a set of test functions is employed to test the performance of the proposed MOGOA algorithm.¹

¹ The source code of MOGOA can be found at http://www.alimirjalili.com/Projects.html.
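The target-selection rule of (3.9) can be sketched as follows. This is illustrative Python, not the paper's implementation: the neighbourhood radius, the Chebyshev distance, and the +1 guard against empty neighbourhoods (where Pi = 1/Ni would divide by zero) are our own assumptions:

```python
import random

def count_neighbours(archive_objs, i, radius):
    """Number of archive members within a fixed distance of member i (crowdedness)."""
    xi = archive_objs[i]
    return sum(
        1 for j, xj in enumerate(archive_objs)
        if j != i and max(abs(a - b) for a, b in zip(xi, xj)) <= radius
    )

def select_target(archive_objs, radius=0.1, rng=random):
    """Roulette-wheel selection with weight proportional to 1/Ni (cf. Eq. 3.9),
    favouring targets from the least crowded regions of the archive."""
    weights = [1.0 / (1 + count_neighbours(archive_objs, i, radius))
               for i in range(len(archive_objs))]
    return rng.choices(range(len(archive_objs)), weights=weights, k=1)[0]
```

Archive truncation works analogously with the inverted weights: members with many neighbours get a high probability of being removed when the archive is full.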
the test functions in this test suite are not multi-modal. To benchmark the performance of the proposed algorithm on more challenging test beds, this subsection employs the CEC2009 test functions. These functions are among the most difficult test functions in the literature of multi-objective optimization and are able to confirm whether the superiority of MOGOA is significant or not. The mathematical formulations of these test functions are given in the Appendix. The results of MOGOA on the CEC2009 test functions are presented in Tables 2, 3, and 4 and compared to MOPSO and MOEA/D.

Table 2 shows that the MOGOA algorithm provides better results on six of the CEC2009 test functions: UF3, UF5, UF7, UF8, UF9, and UF10. IGD quantifies the convergence of algorithms, so these results show that the proposed algorithm is able to find very accurate approximations of the true Pareto optimal solutions for the given multi-objective problems. High convergence, however, might result in poor coverage. Inspecting the results of Tables 3 and 4, it may be seen that the coverage of the MOGOA algorithm tends to be better than those of MOPSO and MOEA/D on the majority of the CEC2009 test functions. This shows that the proposed algorithm benefits from high coverage as well.

The results of the preceding tables were collected over 30 independent runs. The average, median, standard deviation, maximum, and minimum statistical metrics show how well the proposed algorithm performs on average. To see how significant the superiority of the proposed algorithm is considering each run, and to prove that the results were not obtained by chance, the Wilcoxon rank-sum test is conducted in this subsection as well. p-values less than 0.05 can be considered strong evidence against the null hypothesis.

For this statistical test, the best algorithm on each test function is chosen and compared with the other algorithms independently. For example, if the best algorithm is MOGOA, the pairwise comparison is done between MOGOA/MOPSO and MOGOA/MOEA/D. The results are presented in Tables 5 and 6. It is evident in these tables that the superiority of the proposed algorithm is statistically significant on the majority of test cases. MOGOA tends to provide p-values greater than 0.05 on the test cases where it is not the best, which shows that this algorithm is highly competitive there.

To sum up, the results show that MOGOA is very promising and competitive compared to the current well-regarded algorithms. The reasons for this can be summarized in two key features: high convergence and coverage. Superior convergence is due to the target selection, in which one of the best non-dominated solutions always updates the position of the others. Another advantage is the high coverage of the MOGOA algorithm, which is because of both the archive maintenance mechanism and the selection of the target. Since solutions are always discarded from the most populated segments and targets are chosen from the least populated segments of the archive, MOGOA improves the diversity and coverage of solutions across all objectives. Despite these benefits, MOGOA is supposed to be applied to problems with three and at most four objectives. As a Pareto dominance-based algorithm, MOGOA becomes less effective as the number of objectives grows. This is due to the fact that in problems with more than four objectives a large number of solutions are non-dominated, so the archive becomes full quickly. In addition, this algorithm is suitable only for problems with continuous variables and requires modifications to be used in problems with discrete variables.

The results proved that MOGOA can be very effective for solving optimization problems with multiple objectives. The MOGOA algorithm showed high convergence and coverage. The superior convergence of MOGOA is due to the updating of solutions around the best non-dominated solutions obtained so far; the solutions tend towards the best solutions. Also, the high convergence originates from the adaptive mechanism which accelerates the movements of the grasshoppers toward the best non-dominated solutions obtained so far in the repository. The high coverage of MOGOA is because of the repository maintenance and target selection mechanisms.

Table 5 p-values of the Wilcoxon rank-sum test on the CEC2009 test functions (N/A marks the algorithm chosen as best on that function)

IGD    MOGOA   MOPSO   MOEA/D  MOGOA   MOPSO   MOEA/D  MOGOA   MOPSO   MOEA/D
UF1    0.0091  N/A     0.0140  N/A     0.0002  0.0001  N/A     0.3447  0.0001
UF2    0.0113  N/A     0.0017  N/A     0.0002  0.0001  0.0757  N/A     0.0001
UF3    N/A     0.0173  0.0539  N/A     0.0006  0.1153  0.6776  N/A     0.0001
UF4    0.4727  0.0002  N/A     N/A     0.0002  0.0001  N/A     0.0002  0.0001
UF5    N/A     0.0002  0.0640  N/A     0.0376  0.0001  0.2730  N/A     0.1153
UF6    0.2413  N/A     0.2123  N/A     0.0002  0.0001  0.5708  N/A     0.4429
UF7    N/A     0.1212  0.0028  N/A     0.0002  0.0001  N/A     0.0036  0.0001
Grasshopper optimization algorithm for multi-objective optimization problems 817
target selection mechanisms. When the repository becomes Appendix: Multi-objective test problems utilised
full, non-dominated solutions in populated regions are dis- in this work
carded by MOGOA, which results in improving the distri-
bution of solutions along the entire front.
The procedure of selecting the target also emphasizes coverage because it selects solutions from the least populated regions to be explored and exploited by the swarm. It is worth mentioning here that, since the updating mechanism of the target in MOGOA is identical to that of GOA, MOGOA inherits high exploration, local solutions avoidance, exploitation, and a fast convergence rate from that algorithm. Also, the repulsion and comfort zone of this algorithm cause high exploration and, consequently, the discovery of new paths towards unexplored regions of the true Pareto optimal front. Therefore, the MOGOA algorithm can avoid local fronts and converge towards the true Pareto optimal front.

5 Conclusion

This work proposed a nature-inspired multi-objective algorithm mimicking the interaction of individuals in a swarm of grasshoppers. At first, a mathematical model was employed to simulate the behaviour of a grasshopper swarm and propose a single-objective optimization algorithm. An archive and a target selection mechanism were then integrated into this algorithm to solve multi-objective problems. A set of test functions was used to assess the performance of the proposed MOGOA algorithm. The results were compared with those of MOPSO and NSGA-II as the best algorithms in the literature. It was observed that the MOGOA algorithm is very efficient and competitive in finding an accurate estimation of the Pareto optimal front with high distribution across all objectives. In addition, it was discussed that the accuracy of the estimated solutions is due to the high convergence of MOGOA, and the good distribution is because of the high exploration. Also, the target selection mechanism and archive maintenance promote exploration and distribution of solutions. For future work, it is recommended to investigate the effectiveness of different constraint handling techniques.

Appendix: Multi-objective test problems utilised in this work

Table 7 ZDT test suite

ZDT1:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
Where: g(x) = 1 + (9/(N − 1)) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − √(f1(x)/g(x))
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT2:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
Where: g(x) = 1 + (9/(N − 1)) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − (f1(x)/g(x))²
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT3:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
Where: g(x) = 1 + (9/29) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − √(f1(x)/g(x)) − (f1(x)/g(x)) sin(10π f1(x))
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT1 with linear PF:
Minimise: f1(x) = x1
Minimise: f2(x) = g(x) × h(f1(x), g(x))
Where: g(x) = 1 + (9/(N − 1)) Σ_{i=2}^{N} x_i
h(f1(x), g(x)) = 1 − f1(x)/g(x)
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30

ZDT2 with three objectives:
Minimise: f1(x) = x1
Minimise: f2(x) = x2
Minimise: f3(x) = g(x) × h(f1(x), g(x)) × h(f2(x), g(x))
Where: g(x) = 1 + (9/(N − 1)) Σ_{i=3}^{N} x_i
h(f_i(x), g(x)) = 1 − (f_i(x)/g(x))², i = 1, 2
0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30
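As an illustration of how Table 7 translates into code, the following is an unofficial Python transcription of ZDT1 (the function name is ours); on the Pareto optimal set, where x2 = ... = xN = 0 and hence g(x) = 1, it yields f2 = 1 − √f1.

```python
import math

def zdt1(x):
    """ZDT1 from Table 7: N decision variables in [0, 1],
    two objectives to minimise; the Pareto front lies at g(x) = 1."""
    n = len(x)
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (n - 1)
    h = 1 - math.sqrt(f1 / g)
    f2 = g * h
    return f1, f2
```

For example, zdt1([0.25] + [0.0] * 29) returns (0.25, 0.5), a point on the front since 1 − √0.25 = 0.5.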
UF5: f1 = x1 + (1/(2N) + ε)|sin(2Nπx1)| + (2/|J1|) Σ_{j∈J1} h(y_j),
     f2 = 1 − x1 + (1/(2N) + ε)|sin(2Nπx1)| + (2/|J2|) Σ_{j∈J2} h(y_j)
     J1 and J2 are identical to those of UF1, ε > 0, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n,
     h(t) = 2t² − cos(4πt) + 1
UF6: f1 = x1 + max{0, 2(1/(2N) + ε) sin(2Nπx1)} + (2/|J1|)(4 Σ_{j∈J1} y_j² − 2 Π_{j∈J1} cos(20y_jπ/√j) + 1),
     f2 = 1 − x1 + max{0, 2(1/(2N) + ε) sin(2Nπx1)} + (2/|J2|)(4 Σ_{j∈J2} y_j² − 2 Π_{j∈J2} cos(20y_jπ/√j) + 1)
     J1 and J2 are identical to those of UF1, ε > 0, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n
UF7: f1 = ⁵√x1 + (2/|J1|) Σ_{j∈J1} y_j²,
     f2 = 1 − ⁵√x1 + (2/|J2|) Σ_{j∈J2} y_j²
     J1 and J2 are identical to those of UF1, y_j = x_j − sin(6πx1 + jπ/n), j = 2, 3, ..., n
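To make the notation concrete, here is an unofficial Python sketch of UF7, using the odd/even J1/J2 index split defined for UF1 in the CEC 2009 report (function and variable names are ours). On the Pareto optimal set, x_j = sin(6πx1 + jπ/n) makes every y_j vanish, so f1 + f2 = 1.

```python
import math

def uf7(x):
    """UF7 from the table above. J1 holds the odd indices in {2..n},
    J2 the even ones (as in UF1); y_j measures the distance of x_j
    from the Pareto optimal set."""
    n = len(x)
    j1 = [j for j in range(2, n + 1) if j % 2 == 1]  # odd indices
    j2 = [j for j in range(2, n + 1) if j % 2 == 0]  # even indices
    y = {j: x[j - 1] - math.sin(6 * math.pi * x[0] + j * math.pi / n)
         for j in range(2, n + 1)}
    root = x[0] ** 0.2                               # fifth root of x1
    f1 = root + (2 / len(j1)) * sum(y[j] ** 2 for j in j1)
    f2 = 1 - root + (2 / len(j2)) * sum(y[j] ** 2 for j in j2)
    return f1, f2
```

Evaluating it at a Pareto optimal point (x1 = 0.5 with the remaining variables on the sine curve) gives objective values summing to exactly one, matching the linear front of UF7.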
References

1. Holland JH, Reitman JS (1977) Cognitive systems based on adaptive algorithms. ACM SIGART Bulletin, pp 49–49
2. Tsai C-F, Eberle W, Chu C-Y (2013) Genetic algorithms in feature and instance selection. Knowl-Based Syst 39:240–247
3. Lin M-H, Tsai J-F, Yu C-S (2012) A review of deterministic optimization methods in engineering and management. Mathematical Problems in Engineering, vol 2012
4. Shmoys DB, Swamy C (2004) Stochastic optimization is (almost) as easy as deterministic optimization. In: Proceedings of the 45th annual IEEE symposium on foundations of computer science, 2004, pp 228–237
5. Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1:28–39
6. Kennedy J (2011) Particle swarm optimization. In: Encyclopedia of machine learning. Springer, pp 760–766
7. Knowles J, Corne D (1999) The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation. In: Proceedings of the 1999 congress on evolutionary computation, CEC 99
8. Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
9. Padhye N, Mittal P, Deb K (2013) Differential evolution: performances and analyses. In: 2013 IEEE congress on evolutionary computation (CEC), pp 1960–1967
10. Padhye N, Bhardawaj P, Deb K (2010) Improving differential evolution by altering steps in EC. In: Asia-Pacific conference on simulated evolution and learning, pp 146–155
11. Boussaïd I, Lepagnot J, Siarry P (2013) A survey on optimization metaheuristics. Inf Sci 237:82–117
12. Helbig M, Engelbrecht AP (2013) Performance measures for dynamic multi-objective optimisation algorithms. Inf Sci 250:61–81
13. Padhye N, Zuo L, Mohan CK, Varshney PK (2009) Dynamic and evolutionary multi-objective optimization for sensor selection in sensor networks for target tracking. In: Proceedings of the international joint conference on computational intelligence – volume 1: ICEC (IJCCI 2009). SciTePress, INSTICC, pp 160–167. doi:10.5220/0002324901600167, ISBN: 978-989-674-014-6
14. Padhye N, Zuo L, Mohan CK, Varshney P (2009) Dynamic and evolutionary multi-objective optimization for sensor selection in sensor networks for target tracking. In: IJCCI, pp 160–167
15. Beyer H-G, Sendhoff B (2007) Robust optimization – a comprehensive survey. Comput Methods Appl Mech Eng 196:3190–3218
16. Coello CAC (1999) A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl Inf Syst 1:269–308
17. Deb K (2001) Multi-objective optimization using evolutionary algorithms, vol 16. Wiley
18. von Lücken C, Barán B, Brizuela C (2014) A survey on multi-objective evolutionary algorithms for many-objective problems. Comput Optim Appl 58:707–756
19. Nguyen TT, Yang S, Branke J (2012) Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol Comput 6:1–24
20. Padhye N, Mittal P, Deb K (2015) Feasibility preserving constraint-handling strategies for real parameter evolutionary optimization. Comput Optim Appl 62:851–890
21. Padhye N, Deb K, Mittal P (2013) An efficient and exclusively-feasible constrained handling strategy for evolutionary algorithms. Technical Report
22. Asrari A, Lotfifard S, Payam MS (2016) Pareto dominance-based multiobjective optimization method for distribution network reconfiguration. IEEE Trans Smart Grid 7:1401–1410
23. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49
24. Deb K (2014) Multi-objective optimization. In: Search methodologies. Springer, pp 403–449
25. Padhye N, Deb K (2010) Evolutionary multi-objective optimization and decision making for selective laser sintering. In: Proceedings of the 12th annual conference on genetic and evolutionary computation, pp 1259–1266
26. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
27. Coello CAC, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 congress on evolutionary computation, CEC'02, pp 1051–1056
28. Padhye N (2008) Topology optimization of compliant mechanism using multi-objective particle swarm optimization. In: Proceedings of the 10th annual conference companion on genetic and evolutionary computation, pp 1831–1834
29. Padhye N (2009) Comparison of archiving methods in multi-objective particle swarm optimization (MOPSO): empirical study. In: Proceedings of the 11th annual conference on genetic and evolutionary computation, pp 1755–1756
30. Alaya I, Solnon C, Ghedira K (2007) Ant colony optimization for multi-objective optimization problems. In: ICTAI (1), pp 450–457
31. Xue F, Sanderson AC, Graves RJ (2003) Pareto-based multi-objective differential evolution. In: The 2003 congress on evolutionary computation, CEC'03, pp 862–869
32. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8:149–172
33. Wolpert D (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput, pp 467–482
34. Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review. In: IEEE congress on evolutionary computation, pp 2419–2426
35. Marler RT, Arora JS (2004) Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim 26:369–395
36. Deb K, Padhye N, Neema G (2007) Interplanetary trajectory optimization with swing-bys using evolutionary multi-objective optimization. Lect Notes Comput Sci 4683:26–35
37. Jin Y, Olhofer M, Sendhoff B (2001) Dynamic weighted aggregation for evolutionary multi-objective optimization: why does it work and how?
38. Branke J, Deb K, Dierolf H, Osswald M (2004) Finding knees in multi-objective optimization. In: International conference on parallel problem solving from nature, pp 722–731
39. Kollat JB, Reed P (2007) A framework for visually interactive decision-making and design using evolutionary multi-objective optimization (VIDEO). Environ Model Softw 22:1691–1704
40. Topaz CM, Bernoff AJ, Logan S, Toolson W (2008) A model for rolling swarms of locusts. Eur Phys J Special Topics 157:93–109
41. Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
42. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
43. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK)
44. Srinivas N, Deb K (1994) Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2:221–248
45. Zitzler E, Thiele L (1998) Multiobjective optimization using evolutionary algorithms – a comparative case study. In: Parallel problem solving from nature – PPSN V, pp 292–301
46. Deb K, Padhye N (2014) Enhancing performance of particle swarm optimization through an algorithmic link with genetic algorithms. Comput Optim Appl 57:761–794
47. Padhye N, Bhardawaj P, Deb K (2013) Improving differential evolution through a unified approach. J Glob Optim 55:771
48. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8:173–195
49. Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multi-objective optimization test problems. In: Proceedings of the 2002 congress on evolutionary computation, CEC'02, pp 825–830
50. Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S (2008) Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical report, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, special session on performance assessment of multi-objective optimization algorithms
51. Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Applic 27:1053–1073
52. Mirjalili S, Jangir P, Saremi S (2016) Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Applied Intelligence, pp 1–17

S.Z. Mirjalili et al.