
Received 17 November 2022, accepted 29 November 2022, date of publication 8 December 2022, date of current version 14 December 2022.


Digital Object Identifier 10.1109/ACCESS.2022.3227510

Golden Jackal Optimization With Joint Opposite Selection: An Enhanced Nature-Inspired Optimization Algorithm for Solving Optimization Problems

FLORENTINA YUNI ARINI 1,2, KHAMRON SUNAT 2, AND CHITSUTHA SOOMLEK 2
1 Department of Computer Science, Universitas Negeri Semarang, Semarang 50229, Indonesia
2 College of Computing, Khon Kaen University, Khon Kaen 40002, Thailand

Corresponding author: Khamron Sunat ([email protected])


This work was supported in part by Khon Kaen University Grant for ASEAN and GMS Countries Year of 2019; and in part by the
Advanced Smart Computing Laboratory, College of Computing, Khon Kaen University.

ABSTRACT This paper presents the logical relationships of Aristotle's square of opposition on the four basic categorical propositions (i.e., contrary, contradictory, subcontrary, and subaltern) of Joint Opposite Selection (JOS). JOS brings a mutual reinforcement by joining two opposition strategies, Dynamic Opposite (DO) and Selective Leading Opposition (SLO). DO and SLO improve the balance of exploration and exploitation, respectively, in a given search space. We also propose an enhancement of Golden Jackal Optimization (GJO) with Joint Opposite Selection, named GJO-JOS. In the optimization process, JOS assists GJO in assaulting the prey swiftly using SLO, while DO assists GJO in finding better chances to locate the fittest prey. With JOS, GJO succeeds in elevating its performance. We evaluated the performance of GJO-JOS on the CEC 2017 benchmark functions, which include unimodal, multimodal, hybrid, and composition functions. The evaluation results of GJO-JOS were better than those of GJO using each of the seven single opposition-based learning strategies (OBLs). We also compared GJO-JOS to eight nature-inspired algorithms, including the original version of GJO. GJO-JOS produced promising results among the seven single OBLs, the eight nature-inspired algorithms, and GJO. The experimental results confirmed that GJO-JOS effectively generated equilibrium in the balance mechanism.

INDEX TERMS Joint opposite selection, nature-inspired optimization algorithm, opposition-based learning, unconstrained optimization problem.

The associate editor coordinating the review of this manuscript and approving it for publication was Cheng Chin.

I. INTRODUCTION
Nature-inspired optimization algorithms imitate natural behavior and phenomena to produce effective solutions [1], e.g., the Ebola optimization search algorithm [2], the African vultures optimization algorithm [3], the Pelican optimization algorithm [4], and the Reptile search algorithm [5]. The more notable abilities of the enhanced nature-inspired optimization algorithms are their abilities to solve essential issues or enhance existing solutions. Examples of these abilities include the economics of combined heat and power emissions, which can be solved by applying multi-objective optimization to decision-making [6]; effective feature selection on cancer datasets, which is achieved by utilizing Spark Distributed PSO [7]; and the balance of convergence and diversity in many-objective PSO, which is accomplished by employing a hybrid leader selection strategy [8]. Moreover, other techniques can be used to improve nature-inspired optimization algorithms, e.g., gradient-based [9], chaotic [10], quantum [11], and opposition-based learning (OBL) [12] techniques.
Many researchers recommend OBL as a learning technique that can improve the performance of an optimization algorithm in a competition [13]. Tizhoosh [14] proposed the opposition-based learning (OBL) technique on the basis of Aristotle's theory of opposition.

The philosophy of Aristotle's square of opposition, introduced in the fourth century, has attracted the interest of many scientists, and some of them have reviewed this philosophy in depth. Parsons reviewed the historical aspects of the logical relationships of the Square of Opposition [15], [16]. Béziau et al. [17] described the square of opposition as ''a cornerstone of thought''. Bernhard [18] exhibited deep insight into the relationships within the logical diagram of the square of opposition. Other scientists have explored how the principles of the square of opposition can be found in nature as the basic theory of the philosophy of science. In mathematics, Smessaert et al. [19] introduced new logical geometries based on the Aristotelian logic diagram. In physics, Arenhart et al. [20] utilized the square of opposition to describe the potential state of quantum superposition.
Moreover, the utilization of OBL embedded in the slime mould algorithm combined with k-nearest neighbor (kNN) effectively elevates its exploration ability for solving feature selection in medical classification [21]; the Jaya algorithm is enhanced when using adaptive OBL, which integrates more than one opposition [22]; the moth flame optimization is improved with quasi-opposition-based learning for solving the path planning of a mobile robot [23]; and the tunicate swarm algorithm performance is increased to optimize solar cell power systems [24].
Rahnamayan et al. [25] highlighted that opposite numbers produce a higher probability of obtaining a better fit compared to pure random numbers. Supporting evidence from scientific reviews also confirmed that the opposition strategy produces promising results [26], [27], [28]. It is for these reasons that many scientists have tried to improve, extend, or merge the opposition strategy. Examples are as follows. Rahnamayan et al. [29] proposed Quasi-Opposition-Based Learning (QOBL), which produces higher chances of being close to the solution by utilizing a jumping rate and calculating the middle of the opposite points. Ergezer et al. [30] launched quasi-reflection, which increases the success rate of BBO with less fitness computation. Rahnamayan et al. [31] initialized a random opposite point between the center and the boundary, named centroid opposition. Hu et al. [32] estimated partial opposite populations simultaneously in an effort to produce a better solution. Dhargupta et al. [33] applied selective opposition by selecting the far-away dimensions, which produces a fast convergence rate and improves the exploitation ability. Xu et al. [34] merged quasi-opposition and quasi-reflection to enrich the diversity with an asymmetric search behavior and enhance the exploration ability, named dynamic opposite (DO). These examples are improvements of OBL and are still recognized as single opposition strategies. A single opposition strategy means the opposition strategy is performed only once in every generation. As a result, an optimization algorithm improved with a single opposition strategy can only enrich either the exploration ability or the exploitation ability.
Those single opposition ideas also generate promising results for solving real-world problems. The approach of Quasi-Opposition Differential Evolution (QODE) is able to reduce grid congestion in reactive power dispatch by minimizing the loss of active power, accelerating the voltage profile, and improving the stability of the voltage [35]. An improved firefly optimization algorithm with quasi-reflection can tackle the scheduling of workflows in a cloud-edge environment and satisfy real-time requirements [36]. Generalized opposition on the quantum salp swarm algorithm effectively approximates the accuracy of the quantile function on Nakagami-m [37]. Centroid opposition integrated with multiple strategies embedded in the salp swarm algorithm can reduce the probability of failure in the design system of reliability optimization [38]. An improved grey wolf optimizer with selective opposition shows efficiency in estimating the model parameters of proton exchange membrane fuel cells of a 250 W stack [39]. The dynamic opposite generates mutual learning, which is integrated with the mutation strategy for solving multi-task optimization problems [40].
Gonzales [41] affirmed that maintaining the equilibrium of exploration and exploitation in the search space is essential to the main optimization process. There is no exact formula or calculation to define the balance of exploration and exploitation in the search space of nature-mimicking, nature-inspired algorithms [42], [43], [44]. Moreover, Wolpert et al. [45] emphasized that no algorithm can solve all optimization problems. Then, Wang et al. [46] questioned whether two oppositions are better than one.
As mentioned earlier, Aristotle claimed that there is a logical relation between the contrary abilities, which defines the square of opposition. In a given search space of optimization, exploration is contrary to exploitation. As mentioned in the literature, among the variations of single opposition ideas, dynamic opposite (DO) conquers the exploration phase [34] and selective opposition (SO) enriches the exploitation phase [33]. However, SO, which employs the far-away dimensions, still experiences premature convergence [33], which leads to a trap in the local optima. Therefore, Arini et al. [47] proposed an improved SO, named Selective Leading Opposition (SLO), together with DO. SLO selects the close-distance dimensions to improve the enrichment of exploitation. Meanwhile, DO [34] supports diversity in exploration and helps the search process escape from being trapped in the local optima by moving between the center and opposite positions and between the center and current positions.
Based on Aristotle's doctrine, we can establish that DO is a sub-part of exploration, SLO is a sub-part of exploitation, and DO is subcontrary to SLO. The opposed actions of exploration and exploitation, according to Gonzales [41], strengthen each other. Therefore, the joint of DO and SLO produces an equilibrium of mutual reinforcement and is named Joint Opposite Selection (JOS) [47].
In this paper, we employ JOS to enhance the Golden Jackal Optimization (GJO). GJO mimics the golden jackal's collaborative hunting behavior, which consists of three phases: prey searching, enclosing, and pouncing [48], and also shows efficacy in applications [49], [50].

JOS assists the GJO by attacking the prey expeditiously using SLO. The SLO utilizes the existing linear decrement operator from the original version of GJO to apply its strategy. DO contributes to enriching the diversification of GJO in finding other potential prey locations.
In the process of optimization, GJO requires exploration and exploitation. Based on the experimental results of [51] on 23 benchmark functions, GJO offers very decent results [48]. Nevertheless, there is no supporting evidence that GJO can conquer other benchmark problems such as CEC 2017. We conducted an experiment on CEC 2017 and found that GJO did not perform sufficiently well when compared to the other nature-inspired optimization algorithms. Therefore, we utilized the strength of JOS in balancing exploration and exploitation to improve the capability of GJO in both phases.
The performance of JOS embedded in GJO (GJO-JOS) was compared with other opposition strategies embedded in GJO, such as Dynamic Opposite (DO), Reflection (R), Quasi-opposition (QO), Generalized Opposition (GO), Selective Opposition (SO), and Selective Leading Opposition (SLO). It should be noted that we ran the experiment on GJO with those OBLs to confirm the performance of GJO-JOS among the opposition-based strategies. The performance of GJO-JOS is also compared to eight nature-inspired algorithms, i.e., Wild Horse Optimization (WHO), Aquila Optimization (AO), Artificial Bee Colony (ABC), Harris Hawk Optimization (HHO), Atomic Orbital Search (AOS), Archimedes Optimization Algorithm (AOA), Reinforcement Learning Neural Network Algorithm (RLNNA), and the original version of Golden Jackal Optimization (GJO). The comparisons of GJO-JOS versus GJO with the OBLs, and of GJO-JOS versus the nature-inspired algorithms, are conducted as a competition over the 29 benchmark functions of CEC 2017. The main contributions of our research are highlighted as follows:
• The philosophy of Aristotle's square of opposition exhibits the mutual reinforcement of the opposed actions Dynamic Opposite (DO) and Selective Leading Opposition (SLO) of Joint Opposite Selection (JOS) in exploration and exploitation, respectively.
• The jumping rate adjustment of DO accelerates the diversity of GJO-JOS in the exploration phase.
• The existing GJO linear decrement operator is used by SLO as the threshold influencing the scheduling behavior of GJO-JOS to accelerate its performance in the exploitation phase.
• GJO-JOS is proposed to boost the optimization process that mimics the collaborative hunting performance of the golden jackal (GJO).
• The effectiveness of GJO-JOS is demonstrated in a competition of 29 benchmark functions of CEC 2017 and is evaluated using statistical analysis: the Wilcoxon signed-rank test, a scoring metric, and convergence curves. GJO-JOS is also compared to seven single opposition learning techniques embedded in GJO and eight nature-inspired optimization algorithms.
This paper is structured as follows. Section II presents the philosophy of Aristotle on Joint Opposite Selection and a review of the Golden Jackal Optimization. Section III discusses the proposed GJO-JOS. Section IV discusses the setup of experiments and the analysis of experimental results. Section V provides conclusions and future work.

II. RELATED WORK
In the first sub-section, we elaborate on the philosophy of Joint Opposite Selection (JOS), which is followed by a brief discussion of the Golden Jackal Optimization (GJO).

A. THE PHILOSOPHY OF JOINT OPPOSITE SELECTION (JOS)
The philosophy of JOS adopts the theory of the square of opposition [15], defined by Aristotle in the fourth century BC [15]. The representation of the square of opposition is illustrated in Figure 1(a) with four corner propositions. These four corners are A, E, I, and O, with each letter corresponding to a universal affirmative (every S is P), universal negative (no S is P), particular affirmative (some S is P), and particular negative (some S is not P), respectively.

FIGURE 1. (a) The square of opposition in Aristotle's philosophy and (b) the philosophy of JOS based on Aristotle's square of opposition.

An affirmative statement and its negation produce a contradiction condition. For example, A is contrary to E, A is contradictory to O, and E is contradictory to I. I is subaltern to A, O is subaltern to E, and I is subcontrary to O. The contrary indicates that both statements cannot be true, but both can be false.

FIGURE 2. The emergence balancing mechanism of JOS.

The contradictory presents that both statements cannot both be true and also cannot both be false. The subcontrary shows that both statements can be true, but both cannot be false. The subaltern exhibits the condition that if the global statement is true, the specific statement must be true; the subaltern is a particular statement of the global statement.
Based on the contradictory concept of the square of opposition, the philosophy of JOS can be presented as shown in Figure 1(b). In a given search space, Figure 1(b) shows that exploration is contrary to exploitation, exploration is contradictory to SLO, exploitation is contradictory to DO, DO is subcontrary to SLO, DO is subaltern to exploration, and SLO is subaltern to exploitation.
As mentioned earlier, Gonzales [41] affirms that the opposed actions of exploration and exploitation strengthen each other, which produces a balancing mechanism. The emergence of the balanced mechanism of JOS is described in Figure 2. It starts with the basic theory of Opposition Based Learning (OBL), which contains the opposition function X̃ = LB + UB − X. This basic opposition function has been improved in various ways. OBL that merges the center position with the opposite position, and the center position with the current position, is named Dynamic Opposite (DO). Meanwhile, OBL that utilizes the linear decrement operator, selects and counts the close-distance dimensions, and then analyzes their association using Spearman's rank coefficient, is named Selective Leading Opposition (SLO). These two improved OBLs (DO and SLO) are single opposition functions. In the given search space, as exhibited in Figure 2, DO supports the exploration phase and SLO supports the exploitation phase. When DO and SLO are joined, the balanced mechanism emerges; therefore, the joint of DO and SLO is named Joint Opposite Selection (JOS).
JOS in the workflow of GJO is shown in Figure 3, which shows that DO in the workflow of GJO occurs in two parts: initialization and generation. Meanwhile, SLO in JOS occurs in each generation of the GJO workflow by setting its boundary and the linear decrement energy of the prey, Eld = 1.5 × (1 − t/T). A detailed explanation of the occurrence of JOS (the joint of DO and SLO) is given in Algorithm 1.
For further details, the step process of SLO is described in Algorithm 2. SLO is applied by setting the population size NP, dimension D, iteration t, and maximum iteration T as inputs, and setting the linear decrement as the threshold. SLO checks the position of each member of the population. If the current position of an individual Xk is not equal to the best position Xkbest, then SLO measures the difference distance ddm in each dimension, based on the best position in that dimension, Xkbest,m, and the current position, Xk,m. If ddm is less than the threshold, the dimension is identified as a close-distance dimension Dc and counted. However, if ddm is greater than the threshold, it is identified as a far-away-distance dimension Df and counted. Then, the associativity of the current position and the best position is measured with the Spearman correlation coefficient (src).
If src is less than zero and the number of close dimensions (Dc) is greater than the number of far-away dimensions, then the opposition strategy of SLO occurs. The computational complexity of SLO [47] is O(NP × T × Dc), where NP is the number of search agents, T is the maximum number of iterations, and Dc is the number of close dimensions.
Meanwhile, the detailed steps of DO are exhibited in Algorithm 3, in Stage 1 and Stage 2, respectively. In the population initialization stage, DO occurs after the initial population (see lines 2-5). Line 2 sets the opposition-based learning (OBL) strategy XgOP by utilizing the initial position X within the range of the lower boundary LB and upper boundary UB. In line 3, the OBL moves with a random number that produces the reflection opposition position XgOR.

Algorithm 1 GJO-JOS
1: Generate initial random population of Xjackal
2: Produce initial random population of XgDO based on Xjackal //Algorithm 3, Stage 1
3: Xjackal ← XgDO //Assign XgDO to Xjackal
4: nFE = 0, t = 0, T = max_iteration
5: while nFE < maxFE do
6: Check boundary of Xjackal
7: Evaluate fitness values of Xjackal
8: Update nFE
9: Update position of Xjackal
10: Set selective boundary for SLO
11: Set Eld = 1.5 × (1 − t/T) as threshold for SLO //SLO threshold
12: Perform SLO //Algorithm 2
13: for each pair of jackals do
14: E0 = 2 × rand − 1 //Initial prey energy
15: Ev = Eld × E0 //Prey evading energy
16: if |Ev| > 1 then //Exploration
17: Update the positions XM and XF //Eq. (2) and Eq. (3)
18: else //|Ev| ≤ 1, Exploitation
19: Update the positions XM and XF //Eq. (5) and Eq. (6)
20: end if
21: end for
22: if rand < Jr then
23: Perform DO position (XgDO) //Algorithm 3, Stage 2
24: Xjackal ← XgDO //Assign XgDO to Xjackal
25: end if
26: t = t + 1
27: end while
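To make lines 16-19 concrete, the following is a minimal Python sketch of the pair-update step, under the assumption (taken from the original GJO formulation [48]) that the prey is the current candidate solution, the male and female jackals are the two best agents, and the two partial updates are averaged. The function and variable names are ours, not the authors' MATLAB code; the update rules correspond to Eqs. (2)-(6) introduced in Section II-B below.

import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, rng, beta=1.5):
    # Mantegna's algorithm for a Levy-stable step vector, cf. Eq. (4)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # u_D ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)     # v_D ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)

def update_pair(x_male, x_female, x_prey, ev, rng):
    # rl = 0.05 * LF_D(beta) scales the prey/leader positions
    rl = 0.05 * levy_flight(x_prey.size, rng)
    if abs(ev) > 1:  # exploration, Eqs. (2)-(3): the prey still evades
        x_m = x_male - ev * np.abs(x_male - rl * x_prey)
        x_f = x_female - ev * np.abs(x_female - rl * x_prey)
    else:            # exploitation, Eqs. (5)-(6): enclosing and pouncing
        x_m = x_male - ev * np.abs(rl * x_male - x_prey)
        x_f = x_female - ev * np.abs(rl * x_female - x_prey)
    return 0.5 * (x_m + x_f)  # averaged pair position, as in the original GJO

Here ev plays the role of the evading energy Ev computed in lines 14-15, and the returned position would then pass through the boundary check of line 6.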

FIGURE 3. The workflow of GJO-JOS.

With the influence of the random number, the initial position X approaches the reflection opposition position XgOR and, at the same time, moves away from the initial position X. This move is named the dynamic opposite XgDO. Before the generation process starts, the dynamic opposite position XgDO is set as the initial position, as stated in Stage 1, line 5. In each generation at Stage 2, DO proceeds with the same process as in Stage 1, on the condition that a random number is less than the jumping rate Jr. The suitable Jr for DO in JOS is 0.25 [47]. The computational complexity of DO is O(NP × Jr × T × D), where NP is the number of search agents, Jr is the jumping rate, T is the maximum number of iterations, and D is the number of dimensions.
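As a companion to Algorithms 2 and 3 below, the following is a minimal NumPy sketch of the two JOS operators as described above. It is our illustration, not the authors' MATLAB implementation; in particular, SciPy's spearmanr stands in for the hand-computed rank correlation src, lb and ub are assumed to be per-dimension bound vectors, and the final clip mirrors the boundary check of the GJO workflow.

import numpy as np
from scipy.stats import spearmanr

def dynamic_opposite(X, lb, ub, rng):
    # DO step (Stage 1 or Stage 2): X_OP = LB + UB - X, X_OR = rand * X_OP,
    # X_DO = X + rand * (X_OR - X), applied element-wise to the population X
    x_op = lb + ub - X
    x_or = rng.random(X.shape) * x_op
    x_do = X + rng.random(X.shape) * (x_or - X)
    return np.clip(x_do, lb, ub)

def selective_leading_opposition(X, x_best, lb, ub, threshold):
    # SLO: oppose only the close-distance dimensions D_c of an agent when its
    # Spearman correlation with the leader is non-positive and |D_c| > |D_f|
    X = X.copy()
    for k in range(len(X)):
        if np.array_equal(X[k], x_best):
            continue  # skip the leader itself
        dd = np.abs(x_best - X[k])      # per-dimension distance dd_m
        close = dd <= threshold         # D_c: close-distance dimensions
        src = spearmanr(X[k], x_best).correlation
        if src <= 0 and close.sum() > (~close).sum():
            X[k, close] = lb[close] + ub[close] - X[k, close]
    return X

With threshold = Eld = 1.5 × (1 − t/T), the set of dimensions eligible for opposition shrinks as the run progresses, which is exactly the exploitation-focusing behavior described above.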

B. GOLDEN JACKAL OPTIMIZATION
GJO was proposed by Chopra et al. [48]. GJO is inspired by the pair (male and female) bond-hunting behavior of golden jackals in nature. The bond of a pair of golden jackals is shown by their choral howling, which is considered a kind of engagement [52]. With their choral howl, golden jackals inform others of their position and communicate with those others to locate their prey [53]. For foraging, golden jackals utilize cooperative foraging, which allows them to search an available territory for larger prey [54], [55]. They will move around the prey to ensure and prepare for their assault, then they encircle the prey until it cannot escape. Finally, if escape seems hopeless, they will attack the prey.

Algorithm 2 Selective Leading Opposition (SLO)
1: Input: NP, D, t, T
2: Output: XDc: new opposition population based on SLO
3: Set the linear decrement as threshold
4: for k = 1:NP do
5: if Xk is not equal to Xkbest then
6: for m = 1:D do
7: ddm = |Xkbest,m − Xk,m|
8: if ddm ≤ threshold then
9: identify Dc (close-distance dimensions)
10: Dc = Dc + 1
11: else
12: identify Df (faraway-distance dimensions)
13: Df = Df + 1
14: end if
15: end for
16: sum all ddm (difference distances)
17: src = 1 − (6 × Σm (ddm)²) / (dd × (dd² − 1))
18: if src ≤ 0 and Dc > Df then
19: XDc = LBDc + UBDc − XDc
20: end if
21: end if
22: end for

Algorithm 3 Dynamic Opposite (DO)
Stage 1: Population Initialization
1: Initialize search agents' position X
2: XgOP = LB + UB − X
3: XgOR = rand × XgOP
4: XgDO = X + rand × (XgOR − X)
5: X ← XgDO
Stage 2: Population Generation (utilizes Jr)
6: while nFE < maxFE do
7: if rand < Jr then
8: XgOP = LB + UB − X
9: XgOR = rand × XgOP
10: XgDO = X + rand × (XgOR − X)
11: X ← XgDO
12: end if
13: end while

This foraging and hunting behavior of golden jackals is then formulated mathematically. First, the population of golden jackals in the search space is defined in Eq. (1):

X0 = LB + rand × (UB − LB), (1)

where UB is the upper boundary and LB is the lower boundary of the search space, and rand is a random number in the range [0, 1]. The mimicry of the hunting behavior of golden jackals in the optimization is exhibited in the phases of exploration, exploitation, and the transition between exploration and exploitation.
In the phase of exploration, golden jackals in nature seek and track their prey. However, the prey cannot always be spotted consistently in a certain place and can easily be lost. The strength of the prey's energy is presented as the Evading Energy Ev. When |Ev| is greater than 1, the prey still has enough energy to escape; in this state, the exploration phase occurs. In this phase, the hunting action of the golden jackals prefers the male (Xmale) as the leader and the female (Xfemale) as adherent, as denoted in Eq. (2) and Eq. (3), respectively:

XM = Xmale − Ev × |Xmale − 0.05 × LFD(β) ⊗ Xprey|, (2)
XF = Xfemale − Ev × |Xfemale − 0.05 × LFD(β) ⊗ Xprey|, (3)

where Xprey is the prey vector position, which, under the influence of the constant value 0.05 and the Lévy flights LFD(β) formulated in Eq. (4), approaches Xmale and Xfemale as denoted in Eq. (2) and Eq. (3), respectively. Note that ⊗ is element-wise multiplication. The moves of Xmale and Xfemale are controlled by the prey Evading Energy Ev = Eld × E0. Eld is the prey's decrement energy: Eld = 1.5 × (1 − t/T) decreases linearly from 1.5 to zero over the generations. Meanwhile, E0 is the prey's initial energy: E0 = 2 × rand − 1, with rand within [0, 1]. Therefore, XM and XF indicate the male's and female's updated positions toward the prey in the exploration phase.
Mantegna [56] affirms that the Lévy flights denoted in Eq. (4) contain random numbers uD and vD drawn from normal distributions, with standard deviation σ for uD and 1 for vD. The parameter β of the Lévy flights is 1.5, and D is the dimension of the Lévy flight vector:

LFD(β) = (uD × σ) / |vD|^(1/β), with σ = [(Γ(1 + β) × sin(πβ/2)) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β). (4)

In the phase of exploitation, the pair of golden jackals enclose the prey and then chase it. Undoubtedly, the Evading Energy Ev of the prey becomes weak; this state means |Ev| is less than 1, and exploitation occurs. When the golden jackal mates succeed in surrounding the prey, they assault it until it looks lifeless. This pair-hunting behavior of the golden jackals (male Xmale and female Xfemale) is formulated in Eq. (5) and Eq. (6), respectively:

XM = Xmale − Ev × |0.05 × LFD(β) ⊗ Xmale − Xprey|, (5)
XF = Xfemale − Ev × |0.05 × LFD(β) ⊗ Xfemale − Xprey|, (6)

where Xprey is the prey vector position approaching Xmale and Xfemale as denoted in Eq. (2) and Eq. (3), respectively. The positions of the male Xmale and female Xfemale are influenced by the constant value 0.05 and the Lévy flights LFD(β), where D represents the dimension as formulated in Eq. (4). This is the main difference between the movements of the pair of golden jackals in the exploitation phase and in the exploration phase. The constant value 0.05 and the Lévy flights LFD(β) are utilized to avoid sluggishness and being trapped in the local optima. Nevertheless, in the exploitation phase, the prey Evading Energy operator Ev performs in the same manner as in exploration, accounting for the rapid move of the golden jackals approaching the prey. Note that detailed variables and parameters are given in the Appendix.
III. THE PROPOSED GOLDEN JACKAL OPTIMIZATION (GJO) WITH JOINT OPPOSITE SELECTION (JOS)
In this paper, a joint of two single oppositions, Dynamic Opposite (DO) [34] and Selective Leading Opposition (SLO) [47], namely Joint Opposite Selection (JOS) [47], is utilized to improve the performance of the Golden Jackal Optimization [48]. We named the proposed algorithm Golden Jackal Optimization with Joint Opposite Selection (GJO-JOS).
In the optimization flowchart shown in Figure 3, at the very initialization, the initial jackal position Xj is defined randomly. This generates the initial DO position XgDO, which is then assigned to Xj. The following step generates the new jackal position based on SLO and DO. The jackal's upper and lower boundaries are checked first, and then the jackal's fitness is assessed. SLO is applied by utilizing the linear decrement operator Eld = 1.5 × (1 − t/T) as a threshold. The new position based on SLO influences the main optimization. Following the main optimization process, DO occurs and operates under a proper Jumping Rate (Jr). This process terminates when it reaches the maximum number of function evaluations (maxFE).
The detailed optimization process of GJO-JOS is exhibited in Algorithm 1. In the printed listing, the original GJO is described in black font and the red font shows the JOS steps occurring in GJO, as stated in Algorithm 1 (lines 2, 3, 10-12, and 22-25). As shown in line 12, SLO is executed; SLO utilizes the linear decrement operator Eld = 1.5 × (1 − t/T) as the threshold set in line 11. The action of SLO is described in detail in Algorithm 2 in Section II-A. The main optimization process, shown in Figure 3, is detailed in lines 13-21. The position of each pair of jackals is updated based on the Evading Energy of the prey, Ev. If |Ev| > 1, then the XM and XF positions are updated based on Eq. (2) and Eq. (3). This condition is the exploration phase: the jackals attempt to trap their prey, but the prey can still escape from the jackals' trap because it has enough energy. If |Ev| is less than 1, then the XM and XF positions are updated based on Eq. (5) and Eq. (6). This phase is defined as exploitation: the evading energy Ev of the prey has already decreased and the jackals have managed to lead the prey into their trap. If all these strategies still do not achieve the optimum, the DO strategy occurs, as defined in lines 22-25. With DO, the jackal positions are scattered in an effort to find a better location for appropriate prey. The detailed description of DO is given in Algorithm 3 in Section II-A. Figure 4(a) illustrates the linear decrement operator that is used in GJO and is utilized by the SLO of JOS. Figure 4(b) delineates the magnitude scheduling behavior of the prey evading energy Ev.
Both figures show the evolution of a value that decreases from 1.5 to zero over 1000 iterations. The evading energy of the prey, Ev, restrains the updates to the golden jackal pair's position in the phases of exploration and exploitation. Therefore, the linear decrement operator influences the scheduling behavior of the Evading Energy Ev of the prey.
The efficiency of an algorithm can be measured in terms of computational cost, i.e., the time complexity of its computational complexity [57]. For GJO, Chopra et al. [48] affirmed that the computational complexity consists of two main parts: initialization and the updating mechanism. The initialization computational complexity of GJO is O(NP), where NP is the number of jackals. The updating mechanism computational complexity of GJO is O(NP × T) + O(NP × T × D), where NP is the number of jackals, T is the maximum number of iterations, and D is the dimension of the given problem. The run-time computational complexity of GJO is O(NP × (T + (T × D) + 1)), which means that the time complexity of GJO grows with NP × T × D.
Arini et al. [47] confirmed that the computational complexity of JOS (SLO and DO) is as follows:

O(SLO) = O(NP × T × Dc),

where NP is the number of jackals, T is the maximum number of iterations, and Dc is the number of close-distance dimensions, and

O(DO) = O(NP × Jr × T × D),

where Jr is the jumping rate and D is the number of dimensions. Therefore, the computational complexity of GJO-JOS is:

O(GJO-JOS) = O(NP × (T + (T × D) + 1)) + O(NP × T × Dc + NP × Jr × T × D) = O(NP × T × (2 + Dc + D(1 + Jr))).

Hence, the computational complexity of GJO-JOS is of the same order as that of GJO. For the proposed algorithm, the memory requirement is influenced by the size of the variables and the parameters of the algorithm, which are shown in the Appendix. Appendix A shows that the memory size of GJO, with the updated vector positions XM and XF, equals 2 × (NP × D). Meanwhile, Appendix B gives the memory size of DO (XgDO) as NP × D, and Appendix C gives the memory size of SLO (XDc) as NP × D. Therefore, the memory size of GJO-JOS is k × (NP × D).

FIGURE 4. (a) Linear decrement operator and (b) Prey evading energy.

IV. EXPERIMENTAL RESULTS AND DISCUSSION
The experiments were carried out with an Intel Core i9-7980XE CPU @ 2.60 GHz and 64 GB RAM, running Microsoft Windows 10 Pro. The optimization algorithms were written in MATLAB.
The experiments include solving the single-objective real-parameter numerical optimization problems of the Congress on Evolutionary Computation (CEC) 2017 [58]. The suite comprises 29 benchmark functions (f1, f3 − f30). These benchmark functions are used to evaluate the performance of GJO-JOS compared to its competitors and are classified into four categories, as shown in Table 1: unimodal, simple multimodal, hybrid, and composition functions. The unimodal functions (f1, f3) are the shifted and rotated Bent Cigar and Zakharov functions, respectively; the simple multimodal functions (f4 − f10) each represent one function that has been shifted and rotated; the hybrid functions (f11 − f20) represent two or more functions that blend into one function; and the composition functions (f21 − f30) are composed of at least three functions. Note that due to unjustified results, the benchmark function f2 was omitted from this experiment. According to the CEC evaluation criteria, the search space in all categories of benchmark functions is [−100, 100]^D; the lower bound (LB) is −100 and the upper bound (UB) is 100.
The performance of the proposed GJO-JOS and its comparison algorithms on CEC 2017 was assessed by a scoring metric [58]. The scoring metric is summarized by the sum of the error (SE) and the sum of the rank (SR). The highest total score is 100; therefore, the maximum score of each item is 50.
The value of SE is given in Eq. (7). SE is formulated as the weighted summation of the best objective values (ef) over all CEC 2017 benchmark functions (f1,3−30) on 10, 30, 50, and 100 dimensions (D) [58]. The higher the dimension, the higher its weight: the weight for 10D is 0.1, for 30D it is 0.2, for 50D it is 0.3, and for 100D it is 0.4. The value of SR is influenced by the same weights; however, its summation is based on the rank of the algorithm in the comparison, as denoted in Eq. (9):

SE = 0.1 × Σ(f1,3−30) ef,10D + 0.2 × Σ(f1,3−30) ef,30D + 0.3 × Σ(f1,3−30) ef,50D + 0.4 × Σ(f1,3−30) ef,100D, (7)

ScoreSE = 50 × (1 − (SE − SEmin)/SE), (8)

where SEmin is the minimum value of SE among the compared algorithms. The value of SR and the score of SR are given in Eq. (9) and Eq. (10), respectively:

SR = 0.1 × Σ(f1,3−30) rf,10D + 0.2 × Σ(f1,3−30) rf,30D + 0.3 × Σ(f1,3−30) rf,50D + 0.4 × Σ(f1,3−30) rf,100D, (9)

where rf,D denotes the rank of the algorithm on function f at dimension D, and

ScoreSR = 50 × (1 − (SR − SRmin)/SR), (10)

where SRmin is the minimal value of SR. Thus, the total score is summarized in Eq. (11):

Total Score = ScoreSE + ScoreSR. (11)
TABLE 1. The description of benchmark functions CEC 2017.
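Before turning to the results, the scoring metric of Eqs. (7)-(11) can be summarized in a short sketch (the helper names are ours; per-function errors and ranks are assumed to have been collected per dimension beforehand):

import numpy as np

WEIGHTS = {10: 0.1, 30: 0.2, 50: 0.3, 100: 0.4}  # dimension -> weight

def weighted_sum(values_by_dim):
    # Shared weighted summation over f1, f3-f30 used by both SE and SR
    return sum(w * np.sum(values_by_dim[d]) for d, w in WEIGHTS.items())

def total_score(errors_by_dim, ranks_by_dim, se_min, sr_min):
    """errors_by_dim / ranks_by_dim map a dimension to the 29 per-function
    best errors e_f / ranks r_f of one algorithm; se_min and sr_min are the
    smallest SE and SR over all compared algorithms."""
    se = weighted_sum(errors_by_dim)            # Eq. (7)
    sr = weighted_sum(ranks_by_dim)             # Eq. (9)
    score_se = 50 * (1 - (se - se_min) / se)    # Eq. (8)
    score_sr = 50 * (1 - (sr - sr_min) / sr)    # Eq. (10)
    return score_se + score_sr                  # Eq. (11), maximum 100

# Budget used in the experiments below: NP = 30, Jr = 0.25, 51 runs,
# maxFE = 1e4 * D, and T = maxFE / NP for D in {10, 30, 50, 100}.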

Therefore, the parameters required to run each algorithm on the CEC suite are as follows: (a) the population size (NP) is fixed at 30, and (b) the Jumping Rate (Jr) is set to 0.25 [47]. Each algorithm (a) is repeated for 51 runs, with each run testing the 29 benchmark functions of CEC 2017 in 10, 30, 50, and 100 dimensions (D); (b) the maximum number of function evaluations (maxFE) is defined as 1 × 10^4 multiplied by the number of dimensions (D); and (c) the maximum number of iterations (T) is defined by dividing maxFE by NP.
The obtained experimental results are further discussed in three sub-sections. Sub-section A describes the performance of GJO-JOS and GJO on the hybrid and composition functions of CEC 2017. In sub-section B, GJO-JOS is compared to six single variants of oppositions (i.e., DO, SLO, SO, Quasi, Generalized Opposition, and Reflection Opposition) and the original version of GJO. Sub-section C discusses the comparison of GJO-JOS to seven nature-inspired algorithms and the original version of GJO.

FIGURE 5. The comparison of GJO-JOS Hybrid Functions in 10, 30, 50, and 100 dimensions based on the obtained experiment results in Tables 4, 5, 6, and 7.

A. GJO-JOS VS. GJO ON HYBRID AND COMPOSITION FUNCTIONS CEC 2017
GJO-JOS versus GJO was tested on the four categories of benchmark functions (unimodal, simple multimodal, hybrid, and composition) of CEC 2017. In all categories, GJO-JOS showed promising best fitness values compared to the original version of GJO. It was noticed that the hybrid and composition functions of CEC 2017 pose higher-complexity problems than the other two groups (unimodal and simple multimodal) in CEC 2017. Both the hybrid and composition categories consist of 10 benchmark functions, (f11 − f20) and (f21 − f30), respectively. Each of the functions is a representation of a hybrid or is composed of several functions.

FIGURE 6. The comparison of GJO Hybrid Functions in 10, 30, 50, and 100 dimensions based on the obtained experiment results in Tables 4, 5, 6, and 7.

FIGURE 7. The comparison of GJO-JOS Composition Functions in 10, 30, 50, and 100 dimensions based on the obtained experiment results in Tables 4, 5, 6, and 7.

FIGURE 8. The comparison of GJO Composition Functions in 10, 30, 50, and 100 dimensions based on the obtained experiment results in Tables 4, 5, 6, and 7.

Figure 5 and Figure 6 show the GJO-JOS and GJO performance, respectively, on the hybrid functions in 10, 30, 50, and 100 dimensions of CEC 2017. Figure 7 and Figure 8 show the GJO-JOS and GJO performance, respectively, on the composition functions in 10, 30, 50, and 100 dimensions of CEC 2017. The left and right sides of the four figures present the same best-fitness scale for the GJO-JOS and GJO hybrid functions. The red arrows on the hybrid and composition figures mark the better mean best fitness of the algorithms; the lower the mean best fitness of an algorithm, the higher its performance.
Figure 6 shows the mean best fitness of GJO on the hybrid functions. In 100D, we found that four benchmark functions of GJO, f12, f13, f15, and f19, did not perform as well as the others, with values of 4.70E+10, 8.10E+09, 2.80E+09, and 2.70E+09, respectively. Of those four, the mean best fitness of f12 is the worst case, compared to the best case reached by f20 with a value of 5.50E+03. In 30D and 50D, only f11, f16, f17, and f20 produced mean best fitness of around +03, with the others producing above that order. However, in the lower dimension of 10D, GJO on the hybrid functions generates relatively considerable mean best fitness of around +03 and +04. The +03 values were produced on benchmark functions f11, f14, f15, f16, f17, and f20, with mean best fitness of 1.20E+03, 1.60E+03, 3.30E+03, 1.80E+03, 1.80E+03, and 2.10E+03, respectively. Only f12 produced 1.20E+06 on 10D.
In Figure 5 and Figure 7, we zoom in on the rapid line of the hybrid and composition functions of GJO-JOS on 30D and 50D to see the trend of the mean best fitness. The mean best fitness on the hybrid functions of GJO-JOS (Figure 5) in 30D and 50D showed promising results on benchmark functions f11, f16, f17, and f20. The results produced by benchmark function f11 on 30D and 50D were 1.20E+03 and 1.40E+03, respectively; f16 on 30D and 50D produced 2.50E+03 and 2.90E+03, respectively; f17 on 30D and 50D obtained 2.00E+03 and 2.80E+03, respectively; and f20 on 30D and 50D generated 2.30E+03 and 2.80E+03, respectively. Moreover, the mean best fitness of GJO-JOS on the hybrid functions did not perform as badly as the original GJO; the worst case of the mean best fitness of GJO-JOS was only 2.70E+08.
Overall, the mean best fitness of the GJO-JOS composition functions (Figure 7) produced a better result than the original GJO composition functions (Figure 8). For the most part, the mean best fitness values of GJO-JOS and the original GJO were around +03 and +04, except for benchmark function f30. On benchmark function f30, GJO on 10D, 30D, 50D, and 100D produced mean best fitness values of 2.04E+05, 2.30E+07, 4.20E+08, and 6.50E+09, respectively, while GJO-JOS on 10D, 30D, 50D, and 100D produced mean best fitness values of 5.10E+04, 2.00E+06, 2.00E+07, and 2.10E+07, respectively.

TABLE 2. The SE score of GJO-JOS with six single variant oppositions and GJO.

TABLE 3. The SR score of GJO-JOS with six single variant oppositions and GJO.

These mean best fitness results show that GJO-JOS achieved a considerable improvement; however, it did not reach a mean best fitness of around +03 in all dimensions.
Moreover, an interesting trend occurs in 100D: the mean best fitness of the benchmark functions of GJO-JOS compared to GJO also showed promising improvement on f25, f26, f28, and f29. The mean best fitness results of GJO on f25, f26, f28, and f29 were 1.10E+04, 2.70E+04, 1.60E+04, and 1.40E+04, respectively, while the mean best fitness results of GJO-JOS on f25, f26, f28, and f29 were 3.50E+03, 6.20E+03, 3.50E+03, and 6.90E+03, respectively.

B. THE COMPARISON OF GJO-JOS WITH SIX SINGLE VARIANTS OF OPPOSITIONS AND GJO
This sub-section discusses the experimental results of GJO-JOS, GJO with the variant oppositions (DO, SLO, SO, Quasi, Generalized, and Reflection), and the original GJO, using the scoring metric (as explained previously); statistical analysis based on the mean and standard deviation of each algorithm (GJO-JOS, GJO-DO, GJO-SLO, GJO-SO, GJO-Quasi, GJO-Generalized, GJO-Reflection, and GJO); and the convergence curves of those algorithms.

1) SCORING METRIC OF GJO-JOS COMPARED TO SIX SINGLE VARIANTS OF OPPOSITIONS AND GJO
Table 2 and Table 3 show the experimental results of the scoring metric on SE and SR, respectively. The SE values are represented in Table 2 by type of problem size and are calculated from Eq. (7). The total SR is calculated from Eq. (9).
In both tables, the result values (SE and SR) are shown for problem sizes of 10, 30, 50, and 100 dimensions. The total score is presented as the conclusion for each algorithm. With regard to the problem size, Table 2 presents a summation of the mean best fitness over all benchmark functions of CEC 2017 for each algorithm, each of which is then multiplied by its weight, producing the total score according to the aforementioned scoring metric formulas (Eqs. (7)-(11)). The results in Table 2 clearly show that SE for GJO-JOS in 10D, 30D, 50D, and 100D has the lowest values of 7.20E+05, 1.54E+07, 7.10E+07, and 2.96E+08, respectively. Moreover, we can see three trends based on the total scores of SE: high, middle, and low values. The low scores of GJO-SLO, GJO-SO, GJO-Generalized, and the original GJO are 0.0904, 0.0756, 0.9202, and 0.0747, respectively. The middle scores of GJO-DO, GJO-Quasi, and GJO-Reflection are 17.5759, 15.3323, and 16.5037, respectively. GJO-JOS reaches a high trend with a total score of 50.

Therefore, GJO-JOS on SE produces a promising score compared to GJO with the other variant oppositions and the original GJO.
Furthermore, the problem sizes of SR in Table 3 show the rank summation among the competitors. The best rank score of an algorithm among its competitors is 1, and the maximum rank is based on the total number of compared algorithms. Among the comparison, GJO-JOS produced the lowest rank values in 10D, 30D, 50D, and 100D, with values of 45, 31, 30, and 30, respectively. Consequently, GJO-JOS also achieved the highest SR total score (50). The next highest score was achieved by GJO-DO, with a total score of 19.1986. Next was GJO-Reflection, with an SR total score of 15.4031, just slightly above GJO-Quasi with a score of 15.0281. GJO-SLO achieved a score of 10.9407, slightly above GJO-Generalized with a total score of 10.8373. Although the original GJO has the lowest total score in SR with 7.5035, GJO-SO is only slightly higher with a score of 7.6942.

2) STATISTICAL ANALYSIS OF GJO-JOS COMPARED TO SIX SINGLE VARIANTS OF OPPOSITIONS AND GJO
This section exhibits the effectiveness of GJO-JOS among six existing single variants of oppositions embedded in GJO and the original version of GJO [48], in 10, 30, 50, and 100 dimensions (D), on the 29 benchmark functions of CEC 2017. The opposition variants were Dynamic Opposite (DO) [34], Selective Leading Opposition (SLO) [47], Selective Opposition (SO) [33], Quasi Opposition (Q) [29], Generalized Opposition (G) [59], and Reflection Opposition (R) [30]. The 29 benchmark functions were classified into four categories: unimodal, simple multimodal, hybrid, and composition. The effectiveness of GJO-JOS and its competitors was measured by statistical analysis (mean and standard deviation) based on the best fitness, as seen in Tables 4, 5, 6, and 7. Boldface indicates the best result based on the mean best fitness. Italics present tied results among the competitors; however, the standard deviations (std) reported along with the mean values vary.
In all dimensions and in all categories of the benchmark functions, GJO-JOS obtained promising experimental results among the six existing single variants of oppositions embedded in GJO and the original version of GJO. For the unimodal functions (f1, f3) in all dimensions, GJO-JOS produced better fitness values than the other competitors. For the simple multimodal functions (f4 − f10) in 10 dimensions, GJO-JOS only achieved better results on f7 and f9. For the other simple multimodal benchmark functions, GJO-JOS tied with GJO-DO (3 benchmark functions: f7, f8, f10), GJO-SLO (3 benchmark functions: f7, f8, f10), and GJO-R (3 benchmark functions: f4, f8, f10); thus, in simple multimodal 10D, ties occurred mostly on f8 and f10. Nevertheless, GJO-JOS did not experience any losses against all of the other competitors in simple multimodal 10D. In simple multimodal with higher dimensions (30, 50, and 100), GJO-JOS only tied with GJO-DO on 50D in the benchmark function f10 (8.2E+03) and only lost to GJO-SLO on 100D in f9 (4.4E+04). For the hybrid functions on 10D, GJO-JOS mostly experienced wins and ties compared with its competitors. The ties of GJO-JOS on 10D occurred on f11 with GJO-DO, GJO-Q, and GJO-R with a score of 1.2E+03; f13 with GJO-Q and GJO-R with a score of 1.1E+04; f15 with GJO-SLO with a score of 1.5E+03; and f20 with GJO-DO, GJO-SLO, GJO-SO, GJO-Q, GJO-R, and the original GJO with a score of 2.1E+03. However, GJO-JOS experienced a slight loss to GJO-SLO on f16 (1.7E+03) and to GJO-G on f18 (2.9E+04). We also discovered losses in 30D for the hybrid functions of GJO-JOS to GJO-DO in f13 (8.6E+04) and f15 (2.7E+04); on the other benchmark functions, GJO-JOS won. In the higher dimensions (50 and 100) of the hybrid functions, GJO-JOS showed superiority among its competitors without any ties or losses. In the higher dimensions (50 and 100) of the composition functions, GJO-JOS exhibited dominance; however, GJO-JOS tied with GJO-DO in only one function, f24 (3.1E+03). In the 10D and 30D composition functions, GJO-JOS achieved wins and ties with the other competitors.

3) CONVERGENCE CURVE OF GJO-JOS COMPARED TO SIX SINGLE VARIANTS OF OPPOSITIONS AND GJO
The sufficiency of GJO-JOS with the six variants of oppositions and GJO is also measured by the convergence curve. The convergence curve is the visualization of the mean best fitness of an algorithm over the generations [17]. In this case, the progress of the convergence curve is analyzed on the benchmark functions of CEC 2017. The convergence of GJO-JOS and its competitors should reach the global optimum and should avoid premature convergence, which would leave it trapped in the local optima.
The convergence curves of GJO-JOS, GJO with the six variants of oppositions, and GJO are plotted with the mean best fitness on the y-axis and the number of function evaluations on the x-axis, as seen in Tables 8 and 9. The lower the mean best fitness, the better the algorithm. GJO-JOS is presented as a bold diamond line at the very bottom of each graph, and the original GJO is indicated with a bold arrow. The convergence curves are displayed for the four categories of the benchmark functions of CEC 2017 in 10, 30, 50, and 100 dimensions. We selected one function for each category: unimodal (f1), simple multimodal (f6), hybrid (f16), and composition (f24).
For the unimodal benchmark function f1 in 10D, GJO-JOS is in close competition with the other algorithms. To observe this more closely, we zoomed in on the tight lines at 9 × 10^4 to get a better view of the competitiveness of GJO-JOS. As further proof, the harder the problem (30, 50, and 100 dimensions), the better the convergence curve of GJO-JOS among its competitors.
In the case of the simple multimodal benchmark function f6 in 10, 30, and 50 dimensions, GJO-JOS showed dominance among its competitors. However, in 100D, fierce competition occurred among all algorithms, so we zoomed in.

TABLE 4. GJO-JOS compared to six variant oppositions and GJO (tested on 10 dimensions, CEC 2017).

Based on the zoomed-in lines at (9 − 10) × 10^4, GJO-JOS is able to compete with the original GJO; however, GJO-JOS did not reach the lowest score. At 100D, only GJO-SLO came in slightly lower than GJO-JOS.
For the hybrid benchmark function f16 in 10D, the GJO-JOS line was located just below that of GJO. Underneath GJO-JOS, there are three algorithms (GJO-R in purple, GJO-G in light brown, and the lowest, GJO-SLO, in light blue). However, the line representations of all algorithms in 30, 50, and 100 dimensions seem to converge in one place.
On 30D, when we zoom in on those lines, the mean best fitness of GJO-JOS is able to reach lower than the original GJO; however, the GJO-Q position is just slightly under the mean best fitness of GJO-JOS. Zooming in to 9 × 10^4 on 50D, GJO-Q, together with two other algorithms, GJO-G and GJO-SO, also has a lower mean best fitness than GJO-JOS. However, in the zoom at 9 × 10^4 on 100D, GJO-JOS managed to achieve the lowest mean best fitness among its competitors.
For the composition benchmark function f24, it is very clear that GJO-JOS produced the lowest mean best fitness among its competitors in all dimensions (10, 30, 50, and 100).

C. GJO-JOS COMPARED WITH SEVEN VARIANT NATURE-INSPIRED ALGORITHMS AND GJO
The experimental evaluations of GJO-JOS with seven variant nature-inspired algorithms (Wild Horse Optimization (WHO) [60], Aquila Optimization (AO) [61], Artificial Bee Colony (ABC) [62], Harris Hawk Optimization (HHO) [63], Atomic Orbital Search (AOS) [64], Archimedes Optimization Algorithm (AOA) [65], and Neural Network Algorithm with Reinforcement Learning (RLNNA) [66]) and the original GJO are explained in detail as follows.
The first sub-section describes the performance of GJO-JOS with the seven variant nature-inspired algorithms and GJO using the scoring metric. The second sub-section elucidates the statistical analysis of GJO-JOS with the seven variant nature-inspired algorithms and GJO using the mean and standard deviation (std). The third sub-section discusses the convergence ability of GJO-JOS with the seven variant nature-inspired algorithms and GJO based on the mean best fitness and the number of function evaluations.

1) SCORING METRIC GJO-JOS WITH SEVEN VARIANT ALGORITHMS AND ORIGINAL GJO
Figure 9 and Figure 10 show the total scores of the scoring metric of GJO-JOS with the seven variant algorithms and the original GJO, based on the sum of error (SE) and the sum of rank (SR), respectively, as defined in Eq. (7) and Eq. (9).

TABLE 5. GJO-JOS compared to six variant oppositions and GJO (tested on 30 dimensions, CEC 2017).

FIGURE 9. Scoring Metric Result Sum of Error (SE) GJO-JOS and its competitors.

The patterned bricks on the very left and right sides in both figures show the proposed GJO-JOS and the original GJO. The solid bricks shown between GJO-JOS and GJO are the competitor algorithms used to validate GJO-JOS performance.
The arrow pointing up represents the score values of SE and SR; the higher the score, the better the performance. The scores presented at the top of the bricks are the scores achieved by each algorithm in SE and SR. Both figures confirm that JOS on GJO does enhance the performance of GJO.
In Figure 9, the SE total score of GJO-JOS (47.69) shows a huge improvement compared with the original GJO (0.07). We determined, from the SE of GJO-JOS and its competitors, that there were three trends: low, mid, and high. The lower scores are on AOA, RLNNA, and GJO, with values of 0.54, 0.08, and 0.07, respectively. The middle scores of 27.90 and 18.48 are achieved by HHO and AOS. The higher scores are obtained by GJO-JOS (47.69), WHO (44.04), AO (46.96), and ABC (50). This demonstrates that these algorithms can produce considerable mean best fitness values.
Figure 10 shows the SR scores, with GJO-JOS achieving the highest score of 50. The original GJO, however, reached a score of 15.43, slightly under RLNNA (16.14). Therefore, the gap between GJO-JOS and the original GJO is not as significant as in SE (Figure 9). Interestingly, the SR score values of AO, ABC, HHO, AOS, AOA, RLNNA, and GJO are all quite close, with values of 26.45, 18.03, 18.97, 19.10, 23.53, 16.14, and 15.43, respectively.
We did, however, notice discrepancies between the algorithms' SE and SR scores. On the SE scores we saw three trends, but only two trends on the SR: low and high. SR is the rank summation of the comparison scoring of each algorithm.

TABLE 6. GJO-JOS compared to six variant oppositions and GJO (tested on 50 dimensions, CEC 2017).

FIGURE 10. Scoring metric result, sum of rank (SR), of GJO-JOS and its competitors.

The low SR scores were acquired by AO, ABC, HHO, AOS, AOA, RLNNA, and GJO, with scores of 26.45, 18.03, 18.97, 19.10, 23.53, 16.14, and 15.43, respectively. The high scores were achieved by GJO-JOS (50) and WHO (46.50). Although GJO-JOS did not reach the highest SE score of 50 among its competitors, it still gained a promising total score (97.69) for SE and SR, as denoted in Eq. (11).

2) STATISTICAL ANALYSIS OF GJO-JOS WITH SEVEN VARIANT ALGORITHMS AND THE ORIGINAL GJO
This section presents the statistical analysis of GJO-JOS with the seven variant algorithms (WHO, AO, ABC, HHO, AOS, AOA, and RLNNA) and the original GJO. The statistical analysis is presented in the same way as in the previous sub-section, which evaluated the algorithms based on the mean and standard deviation (std) over the 29 benchmark functions of CEC 2017 in 10, 30, 50, and 100 dimensions. The discussion is organized around the four classifications of the 29 benchmark functions, namely unimodal, simple multimodal, hybrid, and composition, as shown in Tables 10, 11, 12, and 13. The best-obtained results are shown in boldface in those tables and ties are presented in italics.

In all dimensions (10, 30, 50, and 100) and in all categories of the benchmark functions (unimodal, simple multimodal, hybrid, and composition), GJO-JOS showed dominant result values against all seven variant algorithms (WHO, AO, ABC, HHO, AOS, AOA, and RLNNA) and the original GJO. WHO suffered a slight loss on f1 in 10, 30, and 50 dimensions, with values of 3.6E+02, 8.5E+03, and 8.6E+03, respectively. However, GJO-JOS generated better best fitness for all unimodal functions (f1, f3) in all dimensions compared to AO, ABC, HHO, AOS, AOA, RLNNA, and GJO.

As shown in Table 10, for the unimodal function f3 on 10 dimensions, we discovered several ties in the mean best fitness of GJO-JOS with WHO, AO, HHO, AOS, and AOA, at a value of 3.0E+02.
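The mean/std summaries and tie marks in these tables can be produced with a few lines of code. The sketch below is a minimal version, assuming the raw data is a set of best-fitness values from independent runs per algorithm on one function/dimension pair; the run count of 30 and the tolerance-based tie test are placeholders, since the paper's tables compare rounded values.

```python
import numpy as np

def summarize_runs(runs):
    """runs: dict mapping algorithm name -> best-fitness values from
    independent runs on one benchmark function and dimension.
    Returns per-algorithm (mean, std) and the winner list, mirroring
    the boldface/italic convention of Tables 10-13."""
    stats = {name: (float(np.mean(v)), float(np.std(v))) for name, v in runs.items()}
    best_mean = min(m for m, _ in stats.values())
    # More than one winner indicates a tie (shown in italics in the paper).
    winners = [n for n, (m, _) in stats.items() if np.isclose(m, best_mean)]
    return stats, winners

# Hypothetical data for one function/dimension pair (30 runs each).
rng = np.random.default_rng(0)
runs = {
    "GJO-JOS": 300 + rng.random(30),   # e.g., f3 plateauing near 3.0E+02
    "WHO":     300 + rng.random(30),
    "GJO":     350 + 10 * rng.random(30),
}
stats, winners = summarize_runs(runs)
print(stats, winners)
```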
TABLE 7. GJO-JOS compared to six variant oppositions and GJO (tested on 100 dimensions, CEC 2017).

TABLE 8. Convergence curve of GJO-JOS vs. six variant oppositions and GJO: unimodal function on f1 and simple multimodal function on f6 .

For the simple multimodal functions (f4–f10), GJO-JOS showed promising results in all dimensions, which is confirmed by the fact that GJO-JOS did not experience any losses in 30, 50, and 100 dimensions, as shown in Tables 11, 12, and 13.
TABLE 9. Convergence curve of GJO-JOS vs. six variant oppositions and GJO: hybrid function on f16 and composition function on f24 .

TABLE 10. GJO-JOS compared to seven nature-inspired algorithms and GJO (tested on 10 dimensions, CEC 2017).

However, in 10D, for the simple multimodal functions, GJO-JOS only lost out to WHO (f4, f8, f10) and ABC (f4). On 10D for the simple multimodal functions, GJO-JOS tied with WHO (f5, f6), AO (f5), AOA (f5), and RLNNA (f5, f6).
TABLE 11. GJO-JOS compared to seven nature-inspired algorithms and GJO (tested on 30 dimensions, CEC 2017).

Table 8 shows that for the hybrid functions on 10D, GJO-JOS experienced fragility compared to RLNNA. However, against the other competitors, GJO-JOS only experienced ties, with WHO (f11, f19), AO (f11), ABC (f17), AOA (f11, f19, f20), and RLNNA (f11, f17, f19, f20). In the higher dimensions (30, 50, and 100 dimensions), GJO-JOS dominated its competitors, as shown in Tables 9-11.

For the composition functions, GJO-JOS experienced more ties than for the unimodal, simple multimodal, and hybrid functions. However, based on Tables 10-13, it can be seen that as the dimension size increases, the number of ties is reduced. This is confirmed by the 100D results, where GJO-JOS produced a better mean best fitness value than its competitors.

3) CONVERGENCE CURVE OF GJO-JOS COMPARED TO SEVEN ALGORITHMS AND THE ORIGINAL GJO
The convergence curves of GJO-JOS compared to the seven variant algorithms and GJO, shown in Tables 14 and 15, are similar to the convergence curves of GJO-JOS compared to the six variants of oppositions and GJO shown in Tables 8 and 9. We describe the convergence of the algorithms by selecting one benchmark function of CEC 2017 from each classified group, i.e., unimodal (f1), simple multimodal (f6), hybrid (f16), and composition (f24). These selected benchmark functions are then evaluated on 10, 30, 50, and 100 dimensions. The line representing each algorithm in the convergence curve is detailed and explained in the legend. The lines of GJO-JOS and GJO are drawn differently from those of the other algorithms; the aim of this is to highlight the strength of GJO-JOS compared to the original GJO and the other nature-inspired algorithms. The line of GJO-JOS is presented with bold diamond markers and the line of GJO with bold arrow markers. A lower line means that the algorithm's mean best fitness is better.

For the unimodal benchmark function f1 in 10D, GJO-JOS and the variant nature-inspired algorithms all demonstrated rapidly improving solutions. Therefore, we elucidate and zoom in on the rapid lines around the point of 9 × 10^4 function evaluations. In this case, GJO-JOS (the diamond line) showed a lower line than the original GJO (the arrowed line); that is, GJO-JOS produces better results than the original GJO.

RLNNA, however, was displayed between GJO-JOS and GJO, and the others followed the same line as GJO-JOS. This means that this problem size is convenient for the other algorithms to solve. Confirming this, the competition with GJO-JOS becomes increasingly tough as the dimensions become higher. Nevertheless, GJO-JOS still manages to reach a lower mean best fitness than the original GJO.
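A convergence plot of this kind only needs the history of the mean best fitness versus the number of function evaluations for each algorithm. The sketch below, assuming such histories are already recorded, reproduces the main conventions described above (diamond markers for GJO-JOS, arrow-like markers for GJO, a log-scaled fitness axis, and a dashed guide at the 9 × 10^4 evaluation mark that the text zooms in on); the synthetic curves are placeholders, not the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, fevals, zoom_at=9e4):
    """histories: dict mapping algorithm name -> mean best-fitness curve
    sampled at the evaluation counts in `fevals` (lower is better)."""
    fig, ax = plt.subplots()
    markers = {"GJO-JOS": "D", "GJO": ">"}  # diamond vs. arrow-like marker
    for name, curve in histories.items():
        ax.plot(fevals, curve, marker=markers.get(name), markevery=10, label=name)
    ax.set_xlabel("Function evaluations")
    ax.set_ylabel("Mean best fitness")
    ax.set_yscale("log")              # keeps the early rapid drop readable
    ax.axvline(zoom_at, linestyle="--", linewidth=0.8)  # zoom location
    ax.legend()
    return fig

# Placeholder curves: exponential decay toward different fitness floors.
fevals = np.linspace(0, 1e5, 101)
histories = {
    "GJO-JOS": 1e4 * np.exp(-fevals / 1.5e4) + 3.0e2,
    "GJO":     1e4 * np.exp(-fevals / 3.0e4) + 3.3e2,
}
plot_convergence(histories, fevals)
plt.show()
```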
TABLE 12. GJO-JOS compared to seven nature-inspired algorithms and GJO (tested on 50 dimensions, CEC 2017).

TABLE 13. GJO-JOS compared to seven nature-inspired algorithms and GJO (tested on 100 dimensions, CEC 2017).

For the simple multimodal benchmark function f6 in 10D, GJO-JOS fought fiercely with AOA, WHO, and RLNNA. On 30D, AOA gradually gained a lower mean best fitness and competed with GJO-JOS. In 30, 50, and 100 dimensions, WHO and RLNNA produced mean best fitness values significantly below GJO-JOS.
TABLE 14. Convergence curve of GJO-JOS vs. seven variant algorithms and GJO: unimodal function on f1 and simple multimodal function on f6 .

TABLE 15. Convergence curve of GJO-JOS vs. seven variant algorithms and GJO: hybrid function on f16 and composition function on f24 .

Nevertheless, GJO-JOS did not lose to the original GJO, the well-known ABC, or the newer promising algorithms such as AO, AOS, HHO, and AOA. The position of AOS on 50 and 100 dimensions is just under GJO-JOS.

The benchmark function f16 of the hybrid group shows, in all dimensions, that GJO-JOS performs adequately among its competitors. Finally, the benchmark function f24 of the composition group shows, in all dimensions, that WHO came just slightly below GJO-JOS. However, GJO-JOS still shows promising mean best fitness in all dimensions.

V. CONCLUSION AND FUTURE WORK
The philosophy of the "square of opposition" by Aristotle exhibits the logical relationships of four basic categorical propositions, namely contrary, contradictory, subcontrary, and subaltern.
These categories successfully portray the correlation of the logical relationships between the "square of opposition" and Joint Opposite Selection (JOS). Based on the "square of opposition", the representation of JOS can define the relationship between Dynamic Opposite (DO) and Selective Leading Opposition (SLO) in the exploration and exploitation phases, respectively. The relational characteristics of DO and SLO within JOS show their capability to strengthen each other in a given search space. From this, we can conclude that JOS produces mutual reinforcement in the balance mechanism.
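To make this pairing concrete, the sketch below gives a simplified reading of the two operators and one greedy way to combine them. The DO move follows the dynamic-opposite form X + w·r1·(r2·(a + b − X) − X) reported in the dynamic opposite learning literature [34]; the SLO move here opposes only the dimensions selected by a leader-distance test, where both the selection rule and the threshold are assumptions, since the exact criteria are defined in [33] and [47].

```python
import numpy as np

rng = np.random.default_rng(42)

def dynamic_opposite(X, a, b, w=3.0):
    """DO candidate: X + w*r1*(r2*(a + b - X) - X), following [34].
    The random weights spread candidates over an asymmetric region,
    which supports exploration."""
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    return np.clip(X + w * r1 * (r2 * (a + b - X) - X), a, b)

def selective_leading_opposition(X, leader, a, b, threshold=0.2):
    """Simplified SLO candidate: oppose only the dimensions that drifted
    far from the leader. The far/close test and threshold are assumed."""
    far = np.abs(leader - X) > threshold * (b - a)
    X_new = X.copy()
    X_new[far] = a + b - X_new[far]     # opposite point on selected dims
    return X_new

def jos_step(pop, fitness, a, b, f):
    """One greedy JOS-style pass for minimization: try SLO around the
    current leader first, fall back to DO, and keep only improvements."""
    leader = pop[np.argmin(fitness)]
    for i in range(len(pop)):
        cand = selective_leading_opposition(pop[i], leader, a, b)
        fc = f(cand)
        if fc >= fitness[i]:            # SLO did not help -> try DO
            cand = dynamic_opposite(pop[i], a, b)
            fc = f(cand)
        if fc < fitness[i]:             # greedy selection
            pop[i], fitness[i] = cand, fc
    return pop, fitness
```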
In the optimization process, JOS succeeds in utilizing SLO to assist GJO in striking at its prey swiftly, while DO within JOS increases the probability of GJO finding better chances to locate the fittest prey. We verified and analyzed the performance of GJO-JOS from three different perspectives. First, GJO-JOS was compared to GJO among the OBLs embedded in GJO and nature-inspired algorithms on the hybrid and composition functions of CEC 2017. Second, GJO-JOS was compared to six OBLs embedded in GJO (GJO-DO, GJO-SLO, GJO-SO, GJO-Quasi, GJO-Generalized, and GJO-Reflection). Third, GJO-JOS was compared to seven nature-inspired optimization algorithms (i.e., Wild Horse Optimization (WHO), Aquila Optimization (AO), Artificial Bee Colony (ABC), Harris Hawk Optimization (HHO), Atomic Orbital Search (AOS), Archimedes Optimization Algorithm (AOA), and Neural Network Algorithm with Reinforcement Learning (RLNNA)) and the original version of GJO. These perspectives were covered in the competition on the 29 benchmark functions of CEC 2017, which consist of four classification groups: unimodal, multimodal, hybrid, and composition.
The first perspective shows the improvement gap between GJO-JOS and GJO among the OBLs embedded in GJO and the nature-inspired algorithms on the hybrid and composition functions of CEC 2017. In the second perspective, GJO-JOS shows consistent dominance among the OBLs and GJO, which was determined using a scoring metric, the mean and standard deviation, and convergence rates on 10D, 30D, 50D, and 100D. The same determination was also utilized in the third perspective; based on it, GJO-JOS competed fiercely with the original version of GJO and the seven nature-inspired optimization algorithms. Therefore, based on those three perspectives, GJO-JOS exhibited a strong performance in improving the original version of GJO. The results obtained by GJO-JOS on the benchmark functions, compared to the six single OBLs embedded in GJO, the seven nature-inspired algorithms, and the original version of GJO, were promising, especially in higher dimensions. Therefore, GJO-JOS can be considered a promising metaheuristic optimization algorithm.

Despite the usefulness of an algorithm for solving single-objective problems, scientific reviews [44], [67], [68], [69] have affirmed that metaheuristics have limitations, meaning there is no guarantee of finding the global optimum, or even a final solution, even if sufficient diversity occurs. However, there are situations where GJO-JOS could be employed; for instance, it can be applied in scenarios with millions of different variables. For GJO-JOS to solve such problems in a reasonable time, a learning process for the optimization is immediately required, such as the knowledge transfer technique [70], [71]; using this method, the computational time can be shortened. Additionally, the management of massive data storage with millions of different variables [72] can be handled with distributed data and distributed processing. In the case of clustering, Tripathi et al. [73] partitioned a large dataset into small-scale inputs and then parallelized the fitness computation using the Map-Reduce mapper function.
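Because the fitness evaluations of a population are independent of one another, this distributed-processing idea can be prototyped in a few lines. The sketch below parallelizes the map step over local processes in the spirit of the Map-Reduce mapper of [73]; the sphere objective, pool size, and population shape are all illustrative placeholders, not the cited system.

```python
from multiprocessing import Pool

import numpy as np

def sphere(x):
    """Stand-in objective; any expensive fitness function goes here."""
    return float(np.sum(np.asarray(x) ** 2))

def evaluate_population(pop, workers=4):
    """Map step: score candidates in parallel; the 'reduce' here simply
    collects the scores back in population order."""
    with Pool(processes=workers) as pool:
        return pool.map(sphere, pop)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    population = rng.uniform(-100, 100, size=(64, 50)).tolist()
    print(min(evaluate_population(population)))
```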
For future work, GJO-JOS could be applied to real-world problems. Scientific studies have affirmed that opposition-based learning can solve real-world issues. For example, Swamy et al. [74] utilized an opposition-enhanced genetic algorithm integrated with Cauchy mutation to minimize the cost of wind power plants, Kamau et al. [75] used opposition to improve chaotic elephant herding optimization to accelerate the MLP prediction rate, and Jiang et al. [76] showed the effectiveness of opposition embedded in the seagull optimization algorithm for classification.

APPENDIX OF VARIABLES/PARAMETERS, DESCRIPTIONS, AND SIZES
Appendix A presents the nomenclature of GJO, Appendix B presents the nomenclature of DO, and Appendix C presents the nomenclature of SLO.

APPENDIX A
GJO's VARIABLE/PARAMETER, DESCRIPTION, AND SIZE
APPENDIX B
DO's VARIABLE/PARAMETER, DESCRIPTION, AND SIZE

APPENDIX C
SLO's VARIABLE/PARAMETER, DESCRIPTION, AND SIZE

DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

ACKNOWLEDGMENT
The authors would like to thank researchers for sharing the source code and knowledge.

REFERENCES
[1] X.-S. Yang, "Nature-inspired optimization algorithms: Challenges and open problems," J. Comput. Sci., vol. 46, Oct. 2020, Art. no. 101104, doi: 10.1016/j.jocs.2020.101104.
[2] O. N. Oyelade, A. E.-S. Ezugwu, T. I. A. Mohamed, and L. Abualigah, "Ebola optimization search algorithm: A new nature-inspired metaheuristic optimization algorithm," IEEE Access, vol. 10, pp. 16150–16177, 2022, doi: 10.1109/ACCESS.2022.3147821.
[3] B. Abdollahzadeh, F. S. Gharehchopogh, and S. Mirjalili, "African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems," Comput. Ind. Eng., vol. 158, Aug. 2021, Art. no. 107408, doi: 10.1016/j.cie.2021.107408.
[4] P. Trojovský and M. Dehghani, "Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications," Sensors, vol. 22, no. 3, p. 855, Jan. 2022, doi: 10.3390/s22030855.
[5] L. Abualigah, M. A. Elaziz, P. Sumari, Z. W. Geem, and A. H. Gandomi, "Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer," Expert Syst. Appl., vol. 191, Apr. 2022, Art. no. 116158, doi: 10.1016/j.eswa.2021.116158.
[6] Y. Li, J. Wang, D. Zhao, G. Li, and C. Chen, "A two-stage approach for combined heat and power economic emission dispatch: Combining multi-objective optimization with integrated decision making," Energy, vol. 162, pp. 237–254, Nov. 2018, doi: 10.1016/j.energy.2018.07.200.
[7] K. Tadist, F. Mrabti, N. S. Nikolov, A. Zahi, and S. Najah, "SDPSO: Spark distributed PSO-based approach for feature selection and cancer disease prognosis," J. Big Data, vol. 8, no. 1, pp. 1–22, Dec. 2021, doi: 10.1186/s40537-021-00409-x.
[8] M.-F. Leung, C. A. C. Coello, C.-C. Cheung, S.-C. Ng, and A. K.-F. Lui, "A hybrid leader selection strategy for many-objective particle swarm optimization," IEEE Access, vol. 8, pp. 189527–189545, 2020, doi: 10.1109/ACCESS.2020.3031002.
[9] R. Azizipanah-Abarghooee, T. Niknam, M. Gharibzadeh, and F. Golestaneh, "Robust, fast and optimal solution of practical economic dispatch by a new enhanced gradient-based simplified swarm optimisation algorithm," IET Gener., Transmiss. Distrib., vol. 7, no. 6, pp. 620–635, Jun. 2013, doi: 10.1049/iet-gtd.2012.0616.
[10] M. Mitić, N. Vuković, M. Petrović, and Z. Miljković, "Chaotic fruit fly optimization algorithm," Knowl.-Based Syst., vol. 89, pp. 446–458, Nov. 2015, doi: 10.1016/j.knosys.2015.08.010.
[11] L. Wang, L. Liu, J. Qi, and W. Peng, "Improved quantum particle swarm optimization algorithm for offline path planning in AUVs," IEEE Access, vol. 8, pp. 143397–143441, 2020, doi: 10.1109/ACCESS.2020.3013953.
[12] B. S. Yildiz, N. Pholdee, S. Bureerat, A. R. Yildiz, and S. M. Sait, "Enhanced grasshopper optimization algorithm using elite opposition-based learning for solving real-world engineering problems," Eng. Comput., vol. 2021, pp. 1–13, Jan. 2021, doi: 10.1007/s00366-021-01368-w.
[13] Q. Xu, L. Wang, N. Wang, X. Hei, and L. Zhao, "A review of opposition-based learning from 2005 to 2012," Eng. Appl. Artif. Intell., vol. 29, pp. 1–12, Mar. 2014, doi: 10.1016/j.engappai.2013.12.004.
[14] H. R. Tizhoosh, "Opposition-based learning: A new scheme for machine intelligence," in Proc. Int. Conf. Comput. Intell. Modeling, Control Autom. Int. Conf. Intell. Agents, Web Technol. Internet Commerce, 2005, pp. 695–701, doi: 10.1109/cimca.2005.1631345.
[15] T. Parsons. (1997). The Traditional Square of Opposition. Stanford Encyclopedia of Philosophy. [Online]. Available: https://plato.stanford.edu/entries/square/
[16] T. Parsons, "Things that are right with the traditional square of opposition," Logica Universalis, vol. 2, no. 1, pp. 3–11, Mar. 2008, doi: 10.1007/s11787-007-0031-x.
[17] J.-Y. Béziau and G. Basti, "The square of opposition: A cornerstone of thought," in The Square of Opposition: A Cornerstone of Thought. Cham, Switzerland, 2017, pp. 3–12.
[18] P. Bernhard, "Visualizations of the square of opposition," Logica Universalis, vol. 2, no. 1, pp. 31–41, Mar. 2008, doi: 10.1007/s11787-007-0023-x.
[19] H. Smessaert and L. Demey, "Logical geometries and information in the square of oppositions," J. Log., Lang. Inf., vol. 23, no. 4, pp. 527–565, Dec. 2014, doi: 10.1007/s10849-014-9207-y.
[20] J. R. B. Arenhart and D. Krause, "Potentiality and contradiction in quantum mechanics," in The Road to Universal Logic. Cham, Switzerland: Springer, 2015, pp. 201–211.
[21] Y. M. Wazery, E. Saber, E. H. Houssein, A. A. Ali, and E. Amer, "An efficient slime mould algorithm combined with K-nearest neighbor for medical classification tasks," IEEE Access, vol. 9, pp. 113666–113682, 2021, doi: 10.1109/ACCESS.2021.3105485.
[22] A. B. Nasser, K. Z. Zamli, F. Hujainah, W. A. H. M. Ghanem, A.-M.-H. Y. Saad, and N. A. M. Alduais, "An adaptive opposition-based learning selection: The case for Jaya algorithm," IEEE Access, vol. 9, pp. 55581–55594, 2021, doi: 10.1109/ACCESS.2021.3055367.
[23] X. Dai and Y. Wei, "Application of improved moth-flame optimization algorithm for robot path planning," IEEE Access, vol. 9, pp. 105914–105925, 2021, doi: 10.1109/ACCESS.2021.3100628.
[24] A. Sharma, A. Sharma, A. Dasgotra, V. Jately, M. Ram, S. Rajput, M. Averbukh, and B. Azzopardi, "Opposition-based tunicate swarm algorithm for parameter optimization of solar cells," IEEE Access, vol. 9, pp. 125590–125602, 2021, doi: 10.1109/ACCESS.2021.3110849.
[25] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, "Opposition versus randomness in soft computing techniques," Appl. Soft Comput., vol. 8, no. 2, pp. 906–918, 2008, doi: 10.1016/j.asoc.2007.07.010.
[26] F. S. Al-Qunaieer, H. R. Tizhoosh, and S. Rahnamayan, "Opposition based computing—A survey," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2010, pp. 1–7, doi: 10.1109/IJCNN.2010.5596906.
[27] S. Mahdavi, S. Rahnamayan, and K. Deb, "Opposition based learning: A literature review," Swarm Evol. Comput., vol. 39, pp. 1–23, Apr. 2018, doi: 10.1016/j.swevo.2017.09.010.
[28] N. Rojas-Morales, M.-C. R. Rojas, and E. M. Ureta, "A survey and classification of opposition-based metaheuristics," Comput. Ind. Eng., vol. 110, pp. 424–435, Aug. 2017, doi: 10.1016/j.cie.2017.06.028.
[29] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, "Quasi-oppositional differential evolution," in Proc. IEEE Congr. Evol. Comput., Sep. 2007, pp. 2229–2236, doi: 10.1109/CEC.2007.4424748.
[30] M. Ergezer, D. Simon, and D. Du, "Oppositional biogeography-based optimization," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2009, pp. 1009–1014, doi: 10.1109/ICSMC.2009.5346043.
[31] S. Rahnamayan, J. Jesuthasan, F. Bourennani, G. F. Naterer, and H. Salehinejad, "Centroid opposition-based differential evolution," Int. J. Appl. Metaheuristic Comput., vol. 5, no. 4, pp. 1–25, 2014, doi: 10.4018/ijamc.2014100101.
[32] Z. Hu, Y. Bao, and T. Xiong, "Partial opposition-based adaptive differential evolution algorithms: Evaluation on the CEC 2014 benchmark set for real-parameter optimization," in Proc. IEEE Congr. Evol. Comput. (CEC), Jul. 2014, pp. 2259–2265, doi: 10.1109/CEC.2014.6900489.
[33] S. Dhargupta, M. Ghosh, S. Mirjalili, and R. Sarkar, "Selective opposition based grey wolf optimization," Expert Syst. Appl., vol. 151, Aug. 2020, Art. no. 113389, doi: 10.1016/j.eswa.2020.113389.
[34] Y. Xu, Z. Yang, X. Li, H. Kang, and X. Yang, "Dynamic opposite learning enhanced teaching–learning-based optimization," Knowl.-Based Syst., vol. 188, Jan. 2020, Art. no. 104966, doi: 10.1016/j.knosys.2019.104966.
[35] M. Basu, "Quasi-oppositional differential evolution for optimal reactive power dispatch," Int. J. Electr. Power Energy Syst., vol. 78, pp. 29–40, Jun. 2016, doi: 10.1016/j.ijepes.2015.11.067.
[36] N. Bacanin, M. Zivkovic, T. Bezdan, K. Venkatachalam, and M. Abouhawwash, "Modified firefly algorithm for workflow scheduling in cloud-edge environment," Neural Comput. Appl., vol. 34, no. 11, pp. 9043–9068, Jun. 2022, doi: 10.1007/s00521-022-06925-y.
[37] H. Gao, Y. Hou, S. Zhang, and M. Diao, "An efficient approximation for Nakagami-m quantile function based on generalized opposition-based quantum salp swarm algorithm," Math. Probl. Eng., vol. 2019, 2019, doi: 10.37247/paam2ed.2.2021.21.
[38] D. Chen, J. Liu, C. Yao, Z. Zhang, and X. Du, "Multi-strategy improved salp swarm algorithm and its application in reliability optimization," Math. Biosci. Eng., vol. 19, no. 5, pp. 5269–5292, Jul. 2022, Art. no. 8291063, doi: 10.3934/mbe.2022247.
[39] A. A. Z. Diab, H. I. Abdul-Ghaffar, A. A. Ahmed, and H. A. Ramadan, "An effective model parameter estimation of PEMFCs using GWO algorithm and its variants," IET Renew. Power Gener., vol. 16, no. 7, pp. 1380–1400, May 2022, doi: 10.1049/rpg2.12359.
[40] N. Li, L. Wang, Q. Jiang, X. Li, B. Wang, and W. He, "An improved genetic transmission and dynamic-opposite learning strategy for multitasking optimization," IEEE Access, vol. 9, pp. 131789–131805, 2021, doi: 10.1109/ACCESS.2021.3114435.
[41] T. F. Gonzalez, Handbook of Approximation Algorithms and Metaheuristics. Boca Raton, FL, USA: CRC Press, 2007.
[42] B. Morales-Castañeda, D. Zaldívar, E. Cuevas, F. Fausto, and A. Rodríguez, "A better balance in metaheuristic algorithms: Does it exist?" Swarm Evol. Comput., vol. 54, May 2020, Art. no. 100671, doi: 10.1016/j.swevo.2020.100671.
[43] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: A survey," ACM Comput. Surv., vol. 45, no. 3, pp. 1–33, Jun. 2013, doi: 10.1145/2480741.2480752.
[44] X.-S. Yang, S. Deb, and S. Fong, "Metaheuristic algorithms: Optimal balance of intensification and diversification," Appl. Math. Inf. Sci., vol. 8, no. 3, p. 977, 2014, doi: 10.12785/amis/080306.
[45] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997, doi: 10.1109/4235.585893.
[46] N. Wang, Q. Xu, R. Fei, L. Wang, and C. Shi, "Are two opposite points better than one?" IEEE Access, vol. 7, pp. 146108–146122, 2019, doi: 10.1109/ACCESS.2019.2946089.
[47] F. Y. Arini, S. Chiewchanwattana, C. Soomlek, and K. Sunat, "Joint opposite selection (JOS): A premiere joint of selective leading opposition and dynamic opposite enhanced Harris' hawks optimization for solving single-objective problems," Expert Syst. Appl., vol. 188, Feb. 2022, Art. no. 116001, doi: 10.1016/j.eswa.2021.116001.
[48] N. Chopra and M. M. Ansari, "Golden jackal optimization: A novel nature-inspired optimizer for engineering applications," Expert Syst. Appl., vol. 198, Jul. 2022, Art. no. 116924, doi: 10.1016/j.eswa.2022.116924.
[49] E. H. Houssein, D. A. Abdelkareem, M. M. Emam, M. A. Hameed, and M. Younan, "An efficient image segmentation method for skin cancer imaging using improved golden jackal optimization algorithm," Comput. Biol. Med., vol. 149, Oct. 2022, Art. no. 106075, doi: 10.1016/j.compbiomed.2022.106075.
[50] M. Rezaie, K. K. Azar, A. K. Sani, E. Akbari, N. Ghadimi, N. Razmjooy, and M. Ghadamyari, "Model parameters estimation of the proton exchange membrane fuel cell by a modified golden jackal optimization," Sustain. Energy Technol. Assessments, vol. 53, Oct. 2022, Art. no. 102657, doi: 10.1016/j.seta.2022.102657.
[51] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Adv. Eng. Softw., vol. 69, pp. 46–61, Mar. 2014, doi: 10.1016/j.advengsoft.2013.12.007.
[52] I. Golani and H. Mendelssohn, "Sequences of precopulatory behavior of the jackal (Canis aureus L.)," Behaviour, vol. 38, nos. 1–2, pp. 169–191, 1971, doi: 10.1163/156853971x00078.
[53] D. W. Macdonald, "The flexible social system of the golden jackal, Canis aureus," Behav. Ecol. Sociobiol., vol. 5, no. 1, pp. 17–38, 1979, doi: 10.1007/BF00302692.
[54] J. Lamprecht, "On diet, foraging behaviour and interspecific food competition of jackals in the Serengeti National Park, East Africa," Zeitschrift Fuer Saeugetierkd, vol. 43, no. 4, pp. 1–15, 1978.
[55] M. W. Hayward, L. Porter, J. Lanszki, J. F. Kamler, J. M. Beck, G. I. H. Kerley, D. W. Macdonald, R. A. Montgomery, D. M. Parker, D. M. Scott, J. O'Brien, and R. W. Yarnell, "Factors affecting the prey preferences of jackals (Canidae)," Mammalian Biol., vol. 85, pp. 70–82, Jul. 2017, doi: 10.1016/j.mambio.2017.02.005.
[56] R. N. Mantegna, "Fast, accurate algorithm for numerical simulation of Lévy stable stochastic processes," Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top., vol. 49, no. 5, pp. 4677–4683, May 1994, doi: 10.1103/PhysRevE.49.4677.
[57] A. Soccio, J. P. Barbosa, and M. S. Reis, "A scalable approach for the efficient segmentation of hyperspectral images," Chemometric Intell. Lab. Syst., vol. 213, Jun. 2021, Art. no. 104314, doi: 10.1016/j.chemolab.2021.104314.
[58] N. H. Awad, M. Z. Ali, P. N. Suganthan, J. J. Liang, and B. Y. Qu, "Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization," Nanyang Technol. Univ., Singapore, Tech. Rep., 2016. [Online]. Available: https://www3.ntu.edu.sg/home/epnsugan/index_files/CEC2017/CEC2017.htm
[59] W. Hui, W. Zhijian, Y. Liu, W. Jing, J. Dazhi, and C. Lili, "Space transformation search: A new evolutionary technique," in Proc. 1st ACM/SIGEVO Summit Genetic Evol. Comput., 2009, pp. 537–544, doi: 10.1145/1543834.1543907.
[60] I. Naruei and F. Keynia, "Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems," Eng. Comput., vol. 38, no. S4, pp. 3025–3056, Oct. 2022, doi: 10.1007/s00366-021-01438-z.
[61] L. Abualigah, D. Yousri, M. A. Elaziz, A. A. Ewees, M. A. A. Al-qaness, and A. H. Gandomi, "Aquila optimizer: A novel meta-heuristic optimization algorithm," Comput. Ind. Eng., vol. 157, Jul. 2021, Art. no. 107250, doi: 10.1016/j.cie.2021.107250.
[62] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Erciyes Univ., Kayseri, Turkey, Tech. Rep. TR06, 2005.
[63] A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, and H. Chen, "Harris hawks optimization: Algorithm and applications," Future Gener. Comput. Syst., vol. 97, pp. 849–872, Aug. 2019, doi: 10.1016/j.future.2019.02.028.
[64] M. Azizi, "Atomic orbital search: A novel metaheuristic algorithm," Appl. Math. Model., vol. 93, pp. 657–683, May 2021, doi: 10.1016/j.apm.2020.12.021.
[65] F. A. Hashim, K. Hussain, E. H. Houssein, M. S. Mabrouk, and W. Al-Atabany, "Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems," Appl. Intell., vol. 51, pp. 1531–1551, Sep. 2020, doi: 10.1007/s10489-020-01893-z.
[66] Y. Zhang, "Neural network algorithm with reinforcement learning for parameters extraction of photovoltaic models," IEEE Trans. Neural Netw. Learn. Syst., early access, Sep. 14, 2021, doi: 10.1109/TNNLS.2021.3109565.
[67] X.-S. Yang, S. Deb, S. Fong, X. He, and Y.-X. Zhao, "From swarm intelligence to metaheuristics: Nature-inspired optimization algorithms," Computer, vol. 49, no. 9, pp. 52–59, Sep. 2016, doi: 10.1109/MC.2016.292.
[68] T. O. Ting, X. S. Yang, S. Cheng, and K. Huang, "Hybrid metaheuristic algorithms: Past, present, and future," in Recent Advances in Swarm Intelligence and Evolutionary Computation, vol. 585. Cham, Switzerland: Springer, 2015, pp. 71–83.
[69] X. S. Yang, "Metaheuristic optimization: Algorithm analysis and open problems," in Experimental Algorithms (Lecture Notes in Computer Science), vol. 6630, 2011, pp. 21–32, doi: 10.1007/978-3-642-20662-7_2.
[70] W. Du, Z. Ren, A. Chen, and H. Liu, "A knowledge transfer-based evolutionary algorithm for multimodal optimization," in Proc. IEEE Congr. Evol. Comput. (CEC), Jun. 2021, pp. 1953–1960, doi: 10.1109/CEC45853.2021.9504774.
[71] J.-Y. Li, Z.-H. Zhan, K. C. Tan, and J. Zhang, "A meta-knowledge transfer-based differential evolution for multitask optimization," IEEE Trans. Evol. Comput., vol. 26, no. 4, pp. 719–734, Aug. 2022, doi: 10.1109/TEVC.2021.3131236.
[72] A. Slob, S. Cornelissenl, G. R. Dohle, L. Gijs, and J. W. T. B. Van Der, "Distributed, collaborative data analysis from heterogeneous sites using a scalable evolutionary technique," Appl. Intell., vol. 16, no. 1, pp. 19–42, 2002, doi: 10.1023/A:1012813326519.
[73] A. K. Tripathi, H. Mittal, P. Saxena, and S. Gupta, "A new recommendation system using map-reduce-based tournament empowered whale optimization algorithm," Complex Intell. Syst., vol. 7, no. 1, pp. 297–309, Feb. 2021, doi: 10.1007/s40747-020-00200-0.
[74] S. M. Swamy, B. R. Rajakumar, and I. R. Valarmathi, "Design of hybrid wind and photovoltaic power system using opposition-based genetic algorithm with Cauchy mutation," in IET Seminar Dig., 2013, p. 8, doi: 10.1049/ic.2013.0361.
[75] N. Kamau, R. Rimiru, and L. Nderu, "A chaotic elephant herding optimization algorithm for multilayer perceptron based on opposition-based learning," in Proc. IEEE AFRICON, Sep. 2021, pp. 1–6, doi: 10.1109/AFRICON51333.2021.9570917.
[76] H. Jiang, Y. Yang, W. Ping, and Y. Dong, "A novel hybrid classification method based on the opposition-based seagull optimization algorithm," IEEE Access, vol. 8, pp. 100778–100790, 2020, doi: 10.1109/ACCESS.2020.2997791.

FLORENTINA YUNI ARINI received the bachelor's and master's degrees in computer science from Universitas Gadjah Mada, Yogyakarta, Indonesia, in 2002 and 2009, respectively. She is currently a Lecturer at the Department of Computer Science, Universitas Negeri Semarang, Indonesia. She is also a Ph.D. Scholar at the College of Computing, Khon Kaen University, Thailand. Her research interests include nature-inspired algorithms, swarm intelligence, and their applications.

KHAMRON SUNAT received the B.S. degree in chemical engineering, the M.S. degree in computational science, and the Ph.D. degree in computer science from Chulalongkorn University, Thailand, in 1990, 1999, and 2003, respectively. He is currently an Assistant Professor at the College of Computing, Khon Kaen University, Thailand. His research interests include neural networks, pattern recognition, computer vision, soft computing, fuzzy systems, evolutionary computing, and machine learning.

CHITSUTHA SOOMLEK received the B.E. degree in computer engineering from the King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand, in 2004, and the M.A.Sc. and Ph.D. degrees in electronic systems engineering from the University of Regina, Canada, in 2006 and 2013, respectively. She is currently an Assistant Professor at the College of Computing, Khon Kaen University, Thailand. Her research interests include intelligent agents, multi-agent systems, computational intelligence for software engineering and secure software development, code smells, and software quality.