
Heliyon xxx (xxxx) e31629

Contents lists available at ScienceDirect

Heliyon
journal homepage: www.elsevier.com/locate/heliyon

Research article

Greater cane rat algorithm (GCRA): A nature-inspired metaheuristic for optimization problems
Jeffrey O. Agushaka a, Absalom E. Ezugwu b,⁎⁎, Apu K. Saha c, Jayanta Pal d, Laith Abualigah e,f,g,h,⁎, Seyedali Mirjalili i

a Department of Computer Science, Federal University of Lafia, Lafia 950101, Nigeria
b Unit for Data Science and Computing, North-West University, 11 Hoffman Street, Potchefstroom 2520, South Africa
c Department of Mathematics, National Institute of Technology Agartala, Agartala, Tripura 799046, India
d Department of IT, Tripura University, Suryamaninagar, Tripura 799022, India
e Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
f Computer Science Department, Al Al-Bayt University, Mafraq 25113, Jordan
g MEU Research Unit, Middle East University, Amman, Jordan
h Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan
i Centre for Artificial Intelligence Research and Optimization, Torrens University, Australia

ARTICLE INFO

Keywords: greater cane rat algorithm, optimization, metaheuristic, population-based, nature-inspired, CEC 2011, CEC 2020, real-world problem

ABSTRACT

This paper introduces a new metaheuristic technique known as the Greater Cane Rat Algorithm (GCRA) for addressing optimization problems. The optimization process of GCRA is inspired by the intelligent foraging behaviors of greater cane rats during and off the mating season. Being highly nocturnal, they are intelligent enough to leave trails as they forage through reeds and grass. Such trails subsequently lead to food and water sources and shelter. The exploration phase is achieved when they leave the different shelters scattered around their territory to forage and leave trails. It is presumed that the alpha male maintains knowledge about these routes, and as a result, other rats modify their location according to this information. Also, the males are aware of the breeding season and separate themselves from the group. The assumption is that once the group is separated during this season, the foraging activities are concentrated within areas of abundant food sources, which aids the exploitation. Hence, the smart foraging paths and behaviors during the mating season are mathematically represented to realize the design of the GCR algorithm and carry out the optimization tasks. The performance of GCRA is tested using twenty-two classical benchmark functions, ten CEC 2020 complex functions, and the CEC 2011 real-world continuous benchmark problems. To further test the performance of the proposed algorithm, six classic problems in the engineering domain were used. Furthermore, a thorough analysis of computational and convergence results is presented to shed light on the efficacy and stability levels of GCRA. The statistical significance of the results is compared with ten state-of-the-art algorithms using Friedman's and Wilcoxon's signed rank tests. These findings show that GCRA produced optimal or nearly optimal solutions and evaded the trap of local minima, distinguishing it from the rival optimization algorithms employed to tackle similar problems.

⁎ Corresponding author.
⁎⁎ Corresponding author.
E-mail addresses: [email protected] (A.E. Ezugwu), [email protected] (L. Abualigah).

https://doi.org/10.1016/j.heliyon.2024.e31629
Received 13 March 2024; Received in revised form 19 May 2024; Accepted 20 May 2024
2405-8440/© 20XX

The GCRA optimizer source code is publicly available at: https://www.mathworks.com/matlabcentral/fileexchange/165241-greater-cane-rat-algorithm-gcra.

1. Introduction

Optimization is a ubiquitous process, integral to every human endeavor. It involves finding the most effective combination of constraints or decision variables within a defined boundary to solve a specific problem. The challenge of finding optimal solutions across various domains necessitates the use of optimization methods [1]. As such, there is a common need to develop optimization algorithms that can handle the complexity of scientific and engineering problems. While there is no dearth of optimization methods in the literature [2,3], each has its strengths and weaknesses, ranging from traditional optimization methods to more recent population-based, nature-inspired metaheuristic algorithms.
Traditional optimization methods include linear and non-linear programming techniques [4]. Although these methods are highly effective in solving optimization problems, they are gradient-dependent and can be significantly influenced by the nature and diversity of the initial population [5–8]. Conversely, newer population-based, nature-inspired metaheuristic algorithms have proven to be effective optimization tools for certain optimization problems. However, they do not guarantee optimal solutions for various optimization problems [9] and are prone to getting trapped in local (suboptimal) solutions [10].

Interestingly, optimization problems in different real-world domains come with varying complexities and challenges. They may be linear or non-linear, convex or non-convex, have complicated and constrained objective functions, and involve many decision variables. Further, these problems have varying local and global optima [1]. Solving them involves starting from a decent point and navigating the search space to arrive at a global solution. Many methods that attempt to solve these problems exist in the literature and are usually classified into exact and approximate methods [2].
Exact methods guarantee that an optimal solution will be found in polynomial time, provided the problem is not NP-hard. Conversely, approximate methods do not guarantee optimal solutions but return near-optimal ones in polynomial time. The performance of these methods is measured by how close the found solutions are to the global optimum [3]. The past decades have witnessed significant interest in approximate methods, which consist of traditional and heuristic or metaheuristic optimization methods.
Traditional optimization methods effectively handle simple engineering problems with a linear search space and low complexity [4]. The performance of these methods depends heavily on knowledge of the problem, and they often find only local solutions (becoming trapped in local optima) for complex problems with many constraints and a non-linear problem space [5]. Furthermore, these solutions must be found within a reasonable time.
The difficulty with these conventional optimization techniques is that multiple optimal solutions often exist for most real-world optimization problems. As such, the traditional methods are ineffective for these problems because they rely on the initial solution (which may be suboptimal) around which the global solution is searched [6]. Metaheuristic optimization offers a way out of these challenges, and researchers have shifted their attention toward this optimization method [7,8].

1.1. Metaheuristic algorithms

In the context of computer science and mathematical optimization, metaheuristic optimization is a higher-level process or heuristic designed to locate an adequately satisfactory or approximate solution to an optimization problem [9]. There are three basic ways of developing such optimization methods: improving existing optimizers, hybridizing two or more existing optimizers, and designing entirely new optimizers. Additionally, numerous classifications of this optimization technique are present in scholarly works, such as population-based versus single-solution, and nature-inspired versus non-nature-inspired, among others [4].
A classification based on the source of inspiration implies that the processes of a natural phenomenon are used to carry out the optimization. The commonly used phenomena can be summarized as mimicking some biological behavior, the way of life of some living things, human activity, or a physical phenomenon [10]. A summary of this classification is given as follows.
1.1.1. Evolutionary-based algorithms (EAs)

EAs are the oldest and among the most common metaheuristic algorithms; their source of inspiration is the theory of evolution, relying on the survival of the fittest. The optimization process begins with a randomly generated initial population that continuously evolves into subsequent generations, eliminating the least effective solutions in the process. The algorithms in this category generally have the advantage of finding optimal solutions or solutions very close to the optimal. Some examples of algorithms in this category can be found in Ref. [4], and algorithms published within the last decade in this category are listed in Table 1.

1.1.2. Swarm intelligence-based (SI) algorithms

This category mimics the dynamic collective intelligence and behaviours of groups, flocks, herds, or communities of creatures in their natural habitat. Community behaviours are observed in flocks of birds, schools of fish, colonies of insects, herds of animals, and many more [21]. The group members are independent but cooperate to achieve their collective goal, such as foraging or migration.


Table 1
Algorithm listing for EAs.
Algorithm | Source of Inspiration | Application area | Year

Evolutionary Mating Algorithm (EMA) [11] | The concept of random mating from the Hardy-Weinberg equilibrium, along with the crossover index, is utilized to generate new offspring. | Optimal Power Flow (OPF) problems | 2022
Methylation evolution-based bi-gram (MethEvo) [12] | Evolutionary-based bi-gram | Post-Translational Modification (PTM) | 2022
Predictor-assisted evolutionary neural architecture search (PRE-NAS) [13] | Strategies for evolutionary search are incorporated, along with the integration of high-fidelity weight inheritance across multiple generations. | NAS-Bench-201 and DARTS search spaces | 2022
Heuristic initialization based modified ACO (HIMACO) [14] | Ant safety features | Multicast routing and its parameter tuning | 2022
An online generation-less genetic algorithm (OLGGA) | Continuous changes in organisms | Knapsack optimization problem | 2022
Synergistic fibroblast optimization (SFO) [15] | Migration and methodical behavior of the fibroblast | Non-linear complicated optimization problems | 2022
Coronavirus optimization [16] | Coronavirus pandemic | Benchmark functions | 2022
Learner Performance-based Behavior (LPB) [17] | Studying behaviors of learners in a university | CEC-C06 2019 test functions and generalized assignment problem (GAP) | 2021
Physarum-inspired computational model [18] | Physarum-inspired model | The traveling salesman problem (TSP) | 2019
Stochastic Fractal Search (SFS) [19] | Natural phenomenon of growth | Engineering design optimization problems | 2015
Backtracking Search Algorithm (BSA) [20] | Evolutionary concepts with memory | Numerical optimization problems | 2013

Cooperation is emulated to address intricate optimization challenges. This category is the most extensive of all, with comprehensive examples available in Ref. [22]. Some recent SI algorithms are presented in Table 2.
1.1.3. Human-based algorithms (HAs)

The algorithms in this category mimic phenomena related to human community or behavior, such as teaching or learning, thinking (mathematics), and other human-related activities [42]. The number of algorithms in this category is not as large as in other categories, but lately it has received its fair share of attention. Several algorithms within this category can be found in Table 3.

1.1.4. Physics-based (PB) algorithms

The algorithms in this classification derive their inspiration from physical phenomena such as the laws of physics and chemistry and other physical processes [51]. The optimization mimics the rules governing these physical processes in nature. Some examples of algorithms in this category can be found in Refs. [51–53], and a list of recent algorithms in this group is given in Table 4.

1.1.5. Hybrid-based algorithms

Hybrid algorithms merge characteristics from multiple optimization algorithms, leading to an enhanced algorithm that optimizes more effectively while simultaneously reducing the computational intricacies of the resulting hybrid [59]. This category has the most extensive collection of articles because more research efforts are directed toward this area. Hybrid algorithms make up a long list of algorithms and their variants.
Significant efforts have also been made in applying these modified algorithms to optimal power flow problems to improve load flows and minimize the objective functions [60]. Additionally, the optimal power flow (OPF) problem of a power system was addressed using a salp swarm algorithm (SSA), which included the integration of the thyristor-controlled series capacitor (TCSC) [61].

1.2. Exploration and exploitation

The commonality between all metaheuristic algorithms is that they all aim to find the best possible solution for any optimization problem using two critical transitional phases: exploration and exploitation. Optimization with population-based metaheuristic algorithms begins by randomly producing an initial set of solutions, or population, within the defined boundary constraints. The search agents move globally, as extensively as possible, looking for better solutions. This global movement is called exploration [62]. The objective of this stage is to pinpoint potential areas within the search space where the global solution might be located, thus preventing the algorithm from becoming stuck in a local or present solution.
After identifying promising regions, the algorithm must search these areas for the global solution and move towards it, thereby improving the quality of the solutions found so far. This local movement is called exploitation. The primary aim of this phase is to improve the solutions obtained during the exploration phase [25]. Exploitation also helps avoid local optima; however, only local solutions very close to the global optimum are avoided. Also, the area covered by the exploitation phase is very small compared to the area covered by exploration [63].


Table 2
Algorithm listing for SIs.
Algorithm | Source of Inspiration | Application area | Year

Slime mould algorithm [23] | Oscillation mode of slime mould | Classical engineering structure problems | 2020
Border collie optimization [24] | Herding style of the Border Collie dog | Benchmark functions | 2020
Sparrow Search Algorithm [25] | Group wisdom, foraging, and anti-predation behaviours of sparrows | Benchmark functions and engineering problems | 2020
Capuchin Search Algorithm [26] | Dynamic behavior of capuchin monkeys | Benchmark functions and engineering problems | 2021
Chameleon Swarm Algorithm | Dynamic behavior of chameleons | Constrained and computationally expensive engineering design problems | 2021
Aquila Optimizer [8] | Aquila's behaviors | CEC2017, CEC2019 test functions, and engineering problems | 2021
Ebola Optimization Search Algorithm [27] | Propagation mechanism of the Ebola virus | Image classification of digital mammography | 2021
Rock Hyraxes Swarm Optimization [28] | Collective behavior of rock hyraxes | Benchmark functions | 2021
Red Colobuses Monkey [29] | Behavior related to red monkeys | Benchmark functions | 2021
African vultures optimization algorithm [30] | African vultures' lifestyle | Engineering problems | 2021
Battle royale optimization algorithm [31] | Digital games known as "battle royale" | The inverse kinematics problem of the 6-DOF PUMA 560 robot arm | 2021
Artificial Jellyfish Search [32] | Jellyfish behavior | Benchmark functions and optimization problems | 2021
Poplar Optimization Algorithm [33] | Sexual and asexual propagation mechanism of poplar | Image segmentation | 2022
Snake Optimizer [34] | Special mating behavior of snakes | Constrained real-world engineering problems | 2022
War Strategy Optimization [35] | Strategic movement of army troops during war | Engineering models | 2022
Honey Badger algorithm [36] | Intelligent foraging behavior of the honey badger | Engineering design problems and CEC'17 test suite | 2022
Gannet optimization algorithm [37] | Unique behaviors of gannets during foraging | Engineering optimization problems | 2022
Trees Social Relations Optimization Algorithm [38] | Hierarchical and collective life of trees in the jungle | Continuous and discrete optimization problems | 2022
Reptile Search Algorithm [39] | Hunting behaviour of reptiles | Classical, CEC2017, CEC2019 test functions and engineering problems | 2022
Prairie dog optimization [7] | Foraging behavior of prairie dogs | Engineering optimization problems | 2022
Gazelle Optimization Algorithm [40] | Survival adaptation of gazelles | Engineering problems | 2022
Dwarf mongoose optimization algorithm [41] | Foraging behavior of the dwarf mongoose | Benchmark and engineering problems | 2022

Table 3
Algorithm listing for HAs.
Algorithm | Source of Inspiration | Application area | Year

Gaining Sharing Knowledge-based Algorithm [43] | The procedure of acquiring and disseminating knowledge throughout human life | CEC2017 benchmark and IEEE-CEC2011 real-world optimization problems | 2020
Coronavirus Herd Immunity Optimizer [44] | Herd immunity concept as a way to tackle the coronavirus pandemic (COVID-19) | Engineering optimization problems extracted from IEEE-CEC 2011 | 2021
The arithmetic optimization algorithm [45] | Arithmetic operators | Engineering application | 2021
Ali Baba and the forty thieves [10] | Story of Ali Baba and the forty thieves | Engineering design problems | 2022
Driving Training-Based Optimization [46] | Human activity of driving training | IEEE CEC2017 test functions | 2022
Human felicity algorithm [47] | Efforts of human society to achieve felicity | CEC 2014, CEC 2019, CEC 2020, and complex engineering problems | 2022
Stock exchange trading optimization algorithm [48] | Behavior of traders and stock price changes in the stock market | Engineering design problems | 2022
Puzzle Optimization Algorithm [49] | Process of solving a puzzle | Benchmark functions | 2022
Child Drawing Development Optimization Algorithm [50] | Child's cognitive development | Benchmark functions | 2022

The problem most metaheuristic algorithms face is knowing in which phase to start and when to transition between the phases [64]. The nature or shape of the problem landscape is unknown, so the timings must be such that the stochastic nature of the algorithms is maintained. As a solution, most algorithms follow a continuous and repeated transition between the phases, trying to maintain a search balance [65]. This solution is implemented in three ways.

• First, an existing algorithm may be modified by parameter tuning or by enhancing the optimization processes. The algorithm parameters can influence the quality of the results, and it is impossible to know the best parameter settings beforehand. Also, the nature of the problem to be solved influences the parameter setting. Therefore, there is no universally optimal parameter setting.


Table 4
Algorithm listing for PBs.
Algorithm | Source of Inspiration | Application area | Year

Henry gas solubility optimization [52] | The behavior of Henry's law | Engineering design problems and CEC'17 test suite problems | 2019
Equilibrium optimizer [53] | Mass balance models | Mathematical and engineering benchmarks | 2020
Archimedes optimization algorithm [54] | Archimedes' Principle (law of physics) | Engineering design problems | 2021
Lichtenberg Algorithm [55] | Lichtenberg figure pattern | Benchmark functions and design problems in a welded beam | 2021
Heat transfer relation-based optimization algorithm [56] | Heat transfer relationships based on the second law of thermodynamics | PID controller and linear regression | 2021
The String Theory Algorithm [57] | String theory | Optimal design of a fuzzy controller | 2021
Crystal Structure Algorithm [58] | Principles underlying the formation of crystal structures | 239 mathematical functions | 2021

• Second, there is a need for further improvement of metaheuristics, called hybridization. Hybrid algorithms benefit from the synergy between the candidate algorithms; therefore, identifying which algorithm can complement the other is the key to successful hybridization.

• Third, the complexity of optimization problems greatly influences the need to design a new metaheuristic algorithm. Generally, a problem that can be solved in polynomial time is called easy or tractable; otherwise, it is difficult or intractable. Most real-world problems are NP-hard, and no universally efficient algorithms are known to solve them in polynomial time, only in exponential time. Therefore, developing a new metaheuristic algorithm may be a valuable way to solve these NP-hard problems [66].

1.3. Motivation

A metaheuristic is commonly judged by its ability and competitiveness in finding solutions to optimization problems. While there is no shortage of algorithms with proven capacity to find optimal solutions to a given optimization problem, there is a wide range of problems yet to be solved, and there is no guarantee that the existing algorithms will be effective in solving those problems. The ''no-free-lunch'' (NFL) theorem [67] clarifies that no single metaheuristic algorithm can solve all optimization problems. The implication is that fine-tuning an algorithm to effectively find optimal solutions to one problem could potentially offset the algorithm's performance on another problem. This theorem could potentially mean fine-tuning or developing new algorithms for every optimization problem. It also keeps the field of metaheuristic optimization open and pushes researchers to endless limits to improve the quality of solutions found to cater to emerging, complex real-world problems.
Beyond the aforementioned points, the following reasons are provided for the creation of the novel GCR metaheuristic algorithm to address the chosen problems examined and showcased in this research.

• The nature and adaptation of the greater cane rats are exciting phenomena that have never been used to model any optimization method. The combination of intelligent activities that results in effective foraging during and off the mating season provides tools that can be modeled to enhance optimization.
• These activities are unique to the greater cane rats and translate effectively to the exploration and exploitation activities discussed in Section 2.
• The exchange of information about trails during foraging is uniquely modeled by attractive movements of the other GCRs towards the position of the dominant rat, instead of directly copying the position of the dominant rat, as seen in Equation (3). This strategy explores regions near the promising region represented by the dominant rat's position.
• The reason for using the coefficient values shown in Equations (7)–(9) is to maintain diversity and enhance the intensification.
• The selection of an individual GCR follows the tournament approach, where the fitter of two feasible solutions (according to the fitness function) is kept. A feasible solution is always better than an infeasible one, and between two infeasible solutions, the one with the smaller sum of constraint violations is preferred (a minimal sketch of this rule follows this list).
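As a minimal illustration of this selection rule, the following MATLAB sketch applies the feasibility-based tournament to two candidate solutions. The function name and the struct fields (fitness, violation) are illustrative assumptions rather than the authors' implementation, and minimization is assumed.

function winner = tournament_select(a, b)
% Feasibility-based tournament between two candidates, each a struct
% with fields: fitness (objective value, minimization assumed) and
% violation (sum of constraint violations, 0 if feasible).
if a.violation == 0 && b.violation == 0
    % Both feasible: the fitter (smaller objective) candidate wins.
    if a.fitness <= b.fitness, winner = a; else, winner = b; end
elseif a.violation == 0
    winner = a;                 % feasible always beats infeasible
elseif b.violation == 0
    winner = b;                 % feasible always beats infeasible
else
    % Both infeasible: the smaller total constraint violation wins.
    if a.violation <= b.violation, winner = a; else, winner = b; end
end
end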
These reasons constitute the primary motivation for this work.

1.4. Contributions

This study introduces a novel optimization algorithm, the Greater Cane Rat Algorithm (GCRA), which draws inspiration from the discerning foraging behavior of the greater cane rat during and outside the mating season. The GCRA has been employed to tackle both constrained and unconstrained CEC 2011 real-world optimization problems. Furthermore, the GCRA provides an improved balance between exploration and exploitation, leading to the anticipation of superior or more competitive results compared to existing solutions. The primary contributions of this research are outlined below.

• A new metaheuristic optimization algorithm, inspired by the GCR's intelligent foraging behavior during and off the mating season, is proposed.


• The performance of the proposed GCRA was assessed on 22 classical benchmark functions, 10 CEC 2020 complex functions, 22 real-world optimization problems found in the CEC 2011 suite, and six (6) classical engineering problems.
• The performance of GCRA was compared with other metaheuristics that have been used to solve these problems.

1.5. Advantages of GCRA

The proposed GCRA offers several advantages as a tool for global optimization, which are listed as follows.

• The GCRA is flexible, as evidenced by having only one parameter to be tuned (representing whether or not it is the mating season). This flexibility makes GCRA adaptable to different optimization problems.
• The proposed mathematical model for GCRA makes it an effective optimization tool for the selected problems in six (6) engineering design problems and the CEC 2011 test suite.
• The GCRA's straightforwardness and resilience enable it to swiftly and precisely identify global solutions with a high rate of convergence.
• Finally, the GCRA is a low-cost and potent tool for challenging real-world optimization.

The rest of this paper is organized as follows: Section 2 presents the proposed algorithm's source of inspiration, and the GCRA's mathematical models are presented in Section 3. The experimental setup, the classical benchmark functions, the CEC 2020 and CEC 2011 problems, the six (6) engineering design problems, and the results and discussion are presented in Section 4. Finally, the conclusions and future trends are given in Section 5.

2. Inspiration

The greater cane rat (Thryonomys swinderianus), also called the grasscutter in Ghana, Nigeria, and other regions of West Africa, belongs to the family of cane rats, the African hystricognath rodents. The other member of the family is the lesser cane rat (T. gregorianus) [68]. They live by reed beds, riverbanks, lakes, swamps, and tall, thick cane-like grasses in Sub-Saharan Africa [69]. The greater cane rats are among the largest rodents in Africa, measuring between 43 and 60 cm head-to-body, with the tail reaching between 16 and 19.5 cm. The males and females differ in weight, with males averaging 4.5 kg and females averaging 3.4–3.8 kg; they sometimes weigh up to approximately 7–9 kg. Their ears are rounded, their hair is coarse and bristly, and they have short noses. The front feet are smaller than the hind feet, each with three toes [70]. Fig. 1 shows the GCR, depicting all the features mentioned earlier.
2.1. Behavioral adaptations

Greater cane rats are excellent swimmers and use the water as a haven from danger; they are also swift and agile on land. They are primarily nocturnal and occasionally active during the day [69]. They are patriarchal and live in small family groups led by a dominant male. Their den is usually a nest in thick vegetation, although they sometimes use underground burrows abandoned by other animals or termites. When danger is sensed, they either grunt or dash for the water [71]. Grass is the major diet of the greater cane rats, though they eat other plants, fruits, and tree bark. As seen in Fig. 2, the GCRs live near a water source, depicted by the shaded portion at the base of the figure; tall cane-like grasses are visibly depicted, and the white spaces and paths represent the trails to previously known food sources through the cane-like features.
Fig. 1. Greater cane rat (image drawn using InkScape version 1.2.1: https://inkscape.org/).


Fig. 2. The natural habitat of the GCR (image drawn using InkScape version 1.2.1: https://inkscape.org/).

2.2. Assumptions

The greater cane rats (GCR) can be territorial, with fighting occurring only between males, which use their noses for duels. They are also social animals, living in groups with a dominant male, many females, and juveniles (which can be more than a generation old). Greater cane rats' foraging comprises cutting canes and grasses using their specially adapted upper incisors. Being highly nocturnal, they are intelligent enough to leave trails as they forage through reeds and grass. These trails subsequently lead to food and water sources and shelter [72].
3. GCRA model

This section presents the mathematical models and formulations of the proposed GCRA optimization procedures.

3.1. Population initialization

The optimization process of GCRA begins with the stochastic generation of the greater cane rat (GCR) population X using Equation (1). This generation utilizes the upper bound (UB) and lower bound (LB) of the search space.

X = [x_{i,j}],  i = 1, 2, …, n;  j = 1, 2, …, d    (1)

where X represents the entire GCR population, and the individual rat x_{i,j} in the ith position and jth dimension is generated randomly using Equation (2). Finally, n and d denote the population size and the problem dimension, respectively.

x_{i,j} = LB_j + r × (UB_j − LB_j)    (2)

where r is a random number between 0 and 1.
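As a minimal sketch, the initialization in Equations (1) and (2) can be written in MATLAB as follows; the function name is an illustrative assumption, and LB and UB may be scalars or 1-by-d vectors.

function X = init_population(n, d, LB, UB)
% Equation (2): each of the n greater cane rats is placed uniformly at
% random between the lower and upper bounds in each of the d dimensions.
r = rand(n, d);               % random numbers in (0, 1)
X = LB + r .* (UB - LB);      % Equation (2), applied element-wise
end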



3.2. The GCRA model

Greater cane rats (GCR) exhibit territorial behavior, often engaging in duels using their noses, primarily among males. These creatures are social, typically forming groups composed of a dominant male, multiple females, and juveniles that may span more than one generation. Their foraging habits involve cutting canes and grasses with their uniquely adapted upper incisors. As predominantly nocturnal animals, GCRs are adept at leaving trails during their foraging activities through reeds and grass, leading to sources of food, water, and shelter. This intelligent behavior of the GCRs is the central focus of this study. The exploration phase is marked by their departure from various shelters within their territory to forage and create trails.
It is hypothesized that the dominant male retains information about these trails, allowing other rats to update their positions accordingly. Males also have the ability to recognize the breeding season and segregate themselves from the group. It is assumed that during this separation, foraging activities are concentrated in areas with plentiful food sources, facilitating exploitation. This study mathematically models the intelligent foraging trails and actions during the mating season to design the GCR algorithm and carry out optimization activities.

The conceptual model of the GCR is shown in Fig. 3. The target food source is assumed to be at position (X′, Y′). The path to this food source is known to the alpha male, who then passes this information to the rest of the family members, who update their positions based on the information passed. An alpha at position (X, Y) is aware of the food source at position (X′, Y′) and of the different neighbouring positions that can be reached depending on the influence of Equations (6) and (7). At another iteration, an alpha at position (X′−X, Y′−Y) undergoes the same process. During the mating season, information about the path leading to the plentiful food source is shared. This prompts the family to divide into groups based on gender, with both male and female groups relocating to the area of the food source to establish their camp.
Within the framework of the GCRA design, the rat that is considered dominant is presumed to be the fittest, i.e., its position vector yields the best value of the objective function. Given that this fittest rat guides the group and possesses knowledge of past routes to food or shelter, the positions of the other rats are adjusted according to the location of the dominant male, as per Equation (3). The dominant rat position is denoted by x_k. The GCRA goes into either the exploration or exploitation phase depending on the value of ρ, a variable that determines whether or not it is the rainy (mating) season. The value of ρ is carefully selected to balance exploration and exploitation, and was tuned to 0.5 after rigorous parametric analysis.

3.

where x_{i,j}^{new} denotes the new GCR position, x_{i,j} denotes the current GCR position, and x_{k,j} is the dominant male's position in the jth dimension.

3.3. Exploration

The GCR build their shelters (nests or shallow burrows) scattered around their territory (marshes, riverbanks, and cultivated crop farms). They leave the different shelters to forage, either following trails to previous food sources or scouring for a new food source and leaving trails. Fig. 4 shows the greater cane rats scouring for food everywhere within their territory; the rats that appear to be walking represent the dominant rat's different positions, and the rats that appear to be eating represent found food sources. It is assumed that the dominant male retains the information about these trails, and other rats adjust their position based on this data. A new position for the remaining rat population in the search space is determined according to the dominant male's position, as depicted in Equation (4). In this phase of the GCR motion simulation, if another rat's objective function value surpasses that of the fittest rat, the fittest rat is updated, and the positions of the other rats are adjusted based on the newly established fittest rat. If not, it deviates from the fittest rat's position. This GCR movement strategy is modeled in Equation (5). The final step of the exploration phase stipulates that the GCR only relocates to this newly calculated position if the objective function value improves at this new location; otherwise, the GCR maintains its previous position.

4.

5.
Fig. 3. 2D possible position vectors (image drawn using InkScape version 1.2.1: https://inkscape.org/).


Fig. 4. Greater cane rats looking for food sources (image drawn using InkScape version 1.2.1: https://inkscape.org/).
where x_{i,j}^{new} signifies the upcoming or new state of the ith GCR in the jth dimension, x_{i,j} indicates the current GCR position, x_{k,j} is the dominant male in the jth dimension, F_k is the value of the dominant male's objective function, F_i is the current value of the objective function, r is a random number defined within the problem space boundaries, simulating the dispersed food sources and shelters, C simulates the effect of an abundant food source, which then prompts more exploitation, and is defined in Equation (6), α is a coefficient that simulates a diminishing food source, which compels the search for new food sources or shelter, and is defined in Equation (7), and β is the coefficient that prompts the GCR to move to other available abundant food sources within the breeding area, and is defined in Equation (8). The parameters are modified from concepts in Ref. [73].

6.

7.

8.

where iter is the current iteration and max_iter is the maximum number of iterations.

3.4. Exploitation

The breeding season varies from habitat to habitat and usually occurs during the wet season. The males are known to separate themselves from the group during the breeding season. The assumption is that once the group is separated, the foraging activities are concentrated within areas with abundant food sources. Fig. 5 shows the separate groups foraging within the promising regions only. The simulation of this phase starts by randomly selecting a female x_m such that m ≠ k (the dominant male). Because breeding occurs around abundant food sources, the intensification occurs around the selected female. The modeling of this process is depicted in Equation (9). If the newly computed position for the GCR enhances the target function's value, as modeled in Equation (5), it supersedes the prior position.

9.

where x_{m,j} represents the position of the randomly selected female in the jth dimension, and q randomly takes values from 1 to 4, simulating the number of young produced by each female GCR per year. These parameters are responsible for better exploration and exploitation throughout the iterations.
Fig. 5. Foraging during mating season (image drawn using InkScape version 1.2.1: https://inkscape.org/).

The pseudocode for the algorithm is given below.

Algorithm 1.

Parameters: ρ (mating-season control parameter)
Input: greater cane rats' population, maximum iteration max_iter
Output: optimal search result
Process: GCRA optimization
  Calculate the fitness of each GCR
  Select the fittest GCR as the dominant male x_k
  Update the global best solution (Gbest)
  Update the remaining GCR based on the position of x_k using Equation (3)
  For iter = 1 : max_iter
    Evaluate the season indicator
    If it is not the mating season (controlled by ρ):
      Exploration:
        Update current search agent position based on Equation (4)
        Check boundary constraints
    Else
      Exploitation:
        Update current search agent position based on Equation (9)
        Check boundary constraints
    End If
    Evaluate the fitness of each GCR based on the new position
    Update search agents based on Equation (5)
    Update Gbest
    Select a new dominant male x_k
  End For
  Return Gbest
End
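For concreteness, the overall loop of Algorithm 1 can be sketched in MATLAB as follows. This is a minimal sketch only: fitness_fn is the user's objective function, and exploration_step and exploitation_step are hypothetical placeholders for the position updates of Equations (4) and (9), whose exact forms are given above; the season test on ρ (rho) is likewise an assumption about how the branch is triggered, not the authors' reference implementation.

function [Gbest, Gbest_fit] = gcra_sketch(fitness_fn, n, d, LB, UB, max_iter)
% Minimal GCRA main-loop sketch (assumed helpers, minimization).
rho = 0.5;                                 % mating-season control parameter
X = LB + rand(n, d) .* (UB - LB);          % Equation (2): initial population
fit = zeros(n, 1);
for i = 1:n, fit(i) = fitness_fn(X(i, :)); end
[Gbest_fit, k] = min(fit);                 % dominant male = fittest rat
Gbest = X(k, :);
for iter = 1:max_iter
    for i = 1:n
        if rand > rho                      % assumed: not the mating season
            Xnew = exploration_step(X(i,:), X(k,:), iter, max_iter);  % Eq. (4)
        else                               % mating season
            Xnew = exploitation_step(X(i,:), X, k, iter, max_iter);   % Eq. (9)
        end
        Xnew = min(max(Xnew, LB), UB);     % check boundary constraints
        fnew = fitness_fn(Xnew);
        if fnew < fit(i)                   % greedy acceptance, Equation (5)
            X(i, :) = Xnew;
            fit(i) = fnew;
        end
    end
    [fbest, k] = min(fit);                 % select the new dominant male
    if fbest < Gbest_fit
        Gbest_fit = fbest;                 % update the global best solution
        Gbest = X(k, :);
    end
end
end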

3.5. Computational complexity

The Big-O notation is used to express the computational complexity of the proposed algorithm. Computational complexity describes the relationship between the input, the input size, the process, and the time it takes to produce the output. This computational complexity is presented in terms of both space and time.

3.5.1. Time complexity

The time complexity is measured in terms of the number of greater cane rats (n), the problem's dimension (d), the maximum number of iterations (max_iter), and the number of function evaluations (fe). Therefore, the time complexity of GCRA is given as follows:

O(GCRA) = O(problem definition) + O(population generation) + O(function evaluation) + O(solution update)    (10)

Equation (10) gives the components of the time complexity of GCRA. The detailed demands of the components are given as follows.

• Problem definition requires O(1) time.
• The GCRA requires O(n × d) time to create the random population.
• The function evaluations require O(fe × n) time.
• The time required for solution update is O(max_iter × n × d).

Substituting the details of each component into Equation (10) leads to the general time complexity given in Equation (11).

O(GCRA) = O(1) + O(n × d) + O(fe × n) + O(max_iter × n × d)    (11)

Since the terms that grow with the iterations and evaluations dominate the constant and initialization terms, Equation (11) can be simplified to Equation (12).

O(GCRA) ≈ O(n × (fe + max_iter × d))    (12)

Equation (12) means that GCRA has a linear time complexity, which implies that GCRA is a computationally efficient algorithm. This attribute of GCRA influenced the selection of comparative algorithms such as AOA, DMO, WOA, and ADMO, which all have linear time complexity. The goal here is to show the superiority of the exploration and exploitation mechanism of the proposed GCRA. The remaining comparative algorithms (LSHADEcnEpSin, LSHADE, LSHADE_SPACMA, and UMOEA) are high-performing methods in CEC competitions.

3.5.2. Space complexity

The amount of memory space required by an algorithm to function effectively is referred to as its space complexity. This complexity is contingent upon the size of the population and the dimension of the optimization problem. Hence, Equation (13) gives the space complexity of GCRA.

O(GCRA_space) = O(n × d)    (13)

3.6. GCRA optimization analysis

As the evaluation of the function advances, the greater cane rats gravitate towards the global solution, which could be food or water sources. The exploration phase is realized when they exit various shelters dispersed across their territory to scavenge and create trails. It is assumed that the alpha male retains the information about these trails, and as a result, other rats adjust their location based on this data. Furthermore, the males are aware of the breeding season and isolate themselves from the group. The assumption is that once the group is separated during this season, the foraging activities are concentrated within areas of abundant food sources, which aids the exploitation. Therefore, the intelligent foraging trails and mating season actions are mathematically modeled to achieve the GCR algorithm design and perform the optimization activities.
As seen in Algorithm 1, the proposed GCRA starts by randomly generating the positions of the greater cane rat population within the problem search boundary. Equations (4), (5), (8) and (9) aim to move the GCRs' initial positions around the search space, looking for a global solution. The position of each GCR is evaluated using the fitness function, and the fittest GCR is the one whose position returns the minimum or maximum value of the fitness function. The optimization processes, except for the initial creation of the population, are repeated many times, as determined by the maximum number of iterations. The flow chart representing the optimization process of the GCRA is shown in Fig. 6.

4. The experimental setup, results, and discussion

This section presents the experimental setup for the 22 classical benchmark functions, the CEC 2011 and CEC 2020 problems, and six (6) classical engineering problems, together with the comparative results of GCRA and other state-of-the-art algorithms. The implications of the obtained results are also discussed and analyzed.

4.1. Experimental setup, classical benchmark functions, CEC 2020, CEC 2011, and engineering problems

The classical test functions have different characteristics, such as the shape of the landscape and the number of local and optimal solutions. They are usually classified into three categories: functions with one global solution are called unimodal

Fig. 6. The proposed GCRA algorithmic flowchart design.

functions (F1–F7). The second category, called multimodal functions (F8–F13), has many local optima and high dimensions. Finally, the third category is multimodal but has a fixed low dimension (F14–F22). The CEC 2020 test suite consists of 10 functions that make finding the global optimum difficult. The detailed descriptions of both the classical and CEC 2020 test functions are presented in Tables A1 and A2 in the appendix, respectively.
There are 22 challenging real-world optimization problems in the CEC 2011 test suite. This set of problems was used to test the performance of GCRA, which was compared with other state-of-the-art algorithms that have been used to solve these problems. The nature of these problems is such that they have many local optima in various regions and different shapes and dimensions. These functions and the six (6) engineering problems offer the best ways to test the ability of GCRA to avoid local optima (exploration) and find optimal or near-optimal solutions (exploitation). Details about the CEC 2011 test functions can be found in Ref. [74].
The algorithms and test functions were executed using MATLAB (R2020b). The experimental setup involved a 64-bit Windows 10 OS, an Intel Core [email protected] CPU, and 16 GB RAM. An assessment was conducted to determine the sensitivity of GCRA to variations in population size and the number of iterations. It showed that setting the number of greater cane rats to 100 and the number of iterations to 1000 yielded the best results. As such, for each problem in the test suite, the population size and the maximum number of iterations are set to 100 and 1000, respectively. Also, the results are collated after 25 independent runs of the algorithms using four performance indicators ('best,' 'worst,' 'mean,' and 'standard deviation'). The parameter settings of the algorithms used in this study are given in Table 5.
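As an illustration of how these indicators can be collated (a sketch, not the authors' script), the following MATLAB snippet reuses the gcra_sketch function sketched in Section 3 with the sphere function as the objective:

runs = 25;
best_fitness = zeros(runs, 1);
for r = 1:runs
    % 100 rats, 30 dimensions, bounds [-100, 100], 1000 iterations
    [~, best_fitness(r)] = gcra_sketch(@(x) sum(x.^2), 100, 30, -100, 100, 1000);
end
% Four performance indicators: best, worst, mean, standard deviation.
indicators = [min(best_fitness), max(best_fitness), mean(best_fitness), std(best_fitness)];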
Fig. 7 shows the graphical representation of the characteristics of some classical benchmark functions used in this study. These benchmark test functions represent some form of optimization modeled by mathematical functions. They come with parameters, dimensions, and constraints that are optimized to obtain the best possible solution. The optimal solution is mixed in with many suboptimal solutions scattered around the problem search space. The problem search space, or landscape, consists of many hills and valleys with different complexities. The optimization algorithm's job is to find the optimal or near-optimal solution quickly. Furthermore, the robustness and efficiency of metaheuristic algorithms are measured based on their ability to achieve the global and local search that leads to convergence toward the best possible solution.
The basic characteristics of the classical benchmark functions are modality, valleys, basins, dimensionality, and separability [9]. The modality defines the number of global and local solutions in the problem search space. There are two types of modality: unimodal and multimodal functions. F7 is unimodal because it has one valley where the only global solution is located. This type of function is believed to be easy to solve, and the exploitation ability of metaheuristic algorithms is evaluated using these functions. F8–F13 are high-dimensional (dynamic) multimodal functions, and F14 is a fixed-dimension multimodal function; they have many solutions, of which only one is the global optimum, and the dimension of the former group is not fixed and can vary. Finding the global solution is a daunting task; hence, the exploratory capability of metaheuristic algorithms is tested using these functions.


Table 5
Parameter settings.
Algorithm | Ref | Name of the parameter | Value of the parameter

AOA | [45] | α | 5
 | | µ | 0.05
LSHADEcnEpSin | [75] | freq_inti | 0.5
 | | pb | 0.4
 | | ps | 0.5
CPSOGSA | [76] | φ1, φ2 | 2.05
LSHADE | [77] | p_best_rate | 0.11
 | | memory_size | 5
 | | arc_rate | 1.4
LSHADE_SPACMA | [78] | L_Rate | 0.8
 | | Pbest, Memory size (H), and Arc_rate | Same as LSHADE
 | | Threshold for SPA activation | max_nfes/2
 | | Probability Variable (FCP) | 0.5
UMOEA | [79] | Par.MinPopSize | 4
 | | Par.prob_ls | 0.1
 | | PS2 | 4+floor(3*log(Par.n))
 | | PS1 | Par.PopSize
WOA | [80] | a | 2-t*((2)/Max_iter)
 | | C | 2*r2
 | | A | 2*a*r1-a
DMO | [41] | Number of babysitters | 3
ADMO | [81] | Predation rate (pr) | 0.5
 | | Birth rate (br) | 0.7
Fig. 7. 2-D representation of the characteristics of some classical benchmark functions.

4.2. Results and discussion

This subsection outlines the findings from the experiments conducted in this study. Initially, we delve into the results and discussions related to GCRA, followed by a comparative analysis of GCRA with other cutting-edge algorithms. As previously mentioned, four performance metrics (best, worst, mean, and standard deviation) are employed to encapsulate and discuss the outcomes after 25 separate executions of the algorithms. These were applied to the 22 classical functions, the 22 real-world benchmark problems, and the 6 classical engineering problems. The population size and other algorithm-specific settings were kept consistent as defined in Section 4.1.


4.2.1. Comparative results of GCRA for classical benchmark functions

Table 6 presents the results of GCRA and ten other state-of-the-art metaheuristic algorithms. The results show that GCRA returned the optimal global solution for all 22 benchmark functions at least once, as shown by the value of the 'Best' indicator, and consistently found the optimal solution for 12 of the 22 functions over 25 runs, as shown by the values of the 'Mean', 'Worst', and 'Std' indicators. The performance of ADMO closely follows that of GCRA, with UMOEA, LSHADE_SPACMA, LSHADEcnEpSin, GSK, and LSHADE all showing similarly competitive performance.
Specifically, GCRA showed consistent performance on the unimodal problems (F1–F7). These functions are excellent for testing the exploitation ability of algorithms, and GCRA's success in finding their global solutions confirms its exploitation ability. Also, GCRA performed exceptionally well on the multimodal functions (F8–F22), which are good for testing the exploration ability of a metaheuristic algorithm. Given the number of these functions for which GCRA successfully found global solutions, it can be concluded that the exploratory ability of GCRA is excellent.
Utilizing Friedman's test for ranking, it was observed that the proposed GCRA secured the top position with the lowest mean rank, closely trailed by ADMO and UMOEA. Wilcoxon's signed rank test was employed to compare the outcomes of all considered algorithms, as depicted in Table 7. The superior performance of GCRA is further substantiated, as it notably surpasses DMO, AOA, WOA, and CPSOGSA across all evaluated functions, as evidenced by the high values of the positive ranks returned. Moreover, the number of ties yielded by the comparisons of GCRA with ADMO, UMOEA, LSHADE_SPACMA, LSHADEcnEpSin, GSK, and LSHADE suggests the competitive nature of their results. The findings suggest that GCRA notably excelled over half of the algorithms, while its superiority over the remaining five was less pronounced. Consequently, it can be inferred that GCRA serves as an efficient instrument for tackling this array of optimization problems, striking a balance between exploration and exploitation.
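For reference, both statistical tests are available in MATLAB's Statistics and Machine Learning Toolbox. The snippet below is a sketch in which the matrix `results` is assumed to hold the per-function mean values for each algorithm (placeholder data shown, not the experimental results):

results = rand(22, 11);                       % placeholder: 22 functions x 11 algorithms
[p_f, ~, stats] = friedman(results, 1, 'off');    % Friedman test across algorithms
mean_ranks = stats.meanranks;                 % mean rank per algorithm (lower is better)
p_w = signrank(results(:, 1), results(:, 2)); % Wilcoxon signed rank, e.g., GCRA vs ADMO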

4.2.2. Results of GCRA for CEC 2020 functions

This subsection showcases GCRA's performance in solving the CEC 2020 complex functions. A comparison of GCRA's performance with ten other algorithms is made, with the outcomes detailed in Table 8. Out of the ten functions in the test suite, GCRA identified the optimal solution for four. It notably surpassed the rival algorithms used in this research, securing the top rank. The AOA and WOA algorithms demonstrated the poorest performance on this set of problems. Judging by the average fitness value and standard deviation, GCRA recorded the lowest figures among all algorithms, underscoring its stability in addressing these problems.
4.2.3. Results of GCRA for CEC 2011 problems

The GCRA was employed to solve the 22 real-world problems outlined in the CEC 2011 test suite, with the findings detailed in Table 9. Unlike the benchmark problems defined in other CEC competitions, this problem set does not provide an optimal value or solution. The table reveals that for F3, F4, F8, and F10, GCRA consistently found the same value across 25 independent algorithm runs. This consistency suggests that the discovered value could be the sought-after global solution for these problems. It is also noted that the 'Best' indicator's value is consistently lower than the values of the 'Worst' and 'Mean' indicators. This pattern indicates that GCRA could discover a variety of solutions, effectively exploring the search space and avoiding local optima. The small 'Std' values signify GCRA's stability around the results. This stability and performance affirm GCRA's effectiveness, robustness, and stability in optimizing these problems. The next step involves comparing GCRA's performance with other cutting-edge algorithms to further assess its superiority and robustness.
Moreover, to demonstrate the convergence behavior of the proposed GCRA metaheuristic optimizer, several evaluation metrics were employed in two-dimensional space to clearly illustrate the unique characteristics of the GCRA performance evolution. These 2D evaluations, depicted in Fig. 8, include the search history, search trajectory, average fitness, and convergence curve behavior. The graphs equally demonstrate the quantitative results of GCRA exploration, exploitation, and self-improvement of the initial population. Also, note that the trajectory and convergence curves shown in the third and fifth columns are often used to evaluate the convergence performance and the capability of the GCRA optimization method to handle complex or robust optimization problems.
The convergence curves of all the functions depicted here that have one global optimum are very smooth and drop rapidly compared to those with rough surfaces, such as F7, F8, and F9. Because of this smoothness, the convergence can reach a stable value after only the first few iterations, which proves the effectiveness of the movement strategy for exploitation in the GCRA [82]. In the case of the remaining functions, whose curves are not smooth, the skill of exploration is more pronounced than the skill of exploitation. However, this does not imply that the convergence curves cannot, in all cases, accurately approximate the global optimum value in the final stage of the algorithm's iterative process.

4.2.4. Comparative results for CEC 2011 problems

Table 10 compares the results of GCRA with some of the state-of-the-art algorithms. In this table, two performance indices ('Mean' and 'Std') were used to compare the results. The choice of algorithms is motivated primarily by their performance in CEC competitions, their performance in solving other optimization problems, and the availability of source code online. For instance, LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA were high performers in different CEC competitions. Selected algorithms from other classifications or categories of metaheuristic algorithms were also considered for comparison. The WOA, DMO, and ADMO are candidate representatives of the swarm-based algorithms; the gaining-sharing knowledge (GSK) algorithm [43] represents the human activity-based algorithms; and AOA and CPSOGSA represent the physics-based algorithms.
The presented results show that GCRA competed with the high performers over all the problems. Specifically, GCRA returned the same values for all the indicators as LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, GSK, and UMOEA for F3, F4, F8, and F10. This scenario confirms that the value obtained is the optimal solution for these four problems. The results obtained by GCRA were very com-
Table 6
Comparative results for classical benchmark functions.
Function Dim Global Value GCRA ADMO LSHADEcnEpSin LSHADE AOA CPSOGSA LSHADE_SPACMA W

F1 30 0 Best 0 0 0 0 0 0 0 2.
Worst 0 0 0 0 0 0 0 8.
Average 0 0 0 0 0 0 0 1.
SD 0 0 0 0 0 0 0 1.
Rank 1 1 1 1 1 1 1 2
F2 30 0 Best 0 0 0 0 0 0 0 1.
Worst 0 0 0 0 0 0 0 0.
Average 0 0 0 0 0 0 0 7.
SD 0 0 0 0 0 0 0 4.
Rank 1 1 1 1 1 1 1 3
F3 30 0 Best 0 0 0 0 0 0 0 0.
Worst 0 0 0 0 0 0 0 0.
Average 0 0 0 0 0 0 0 0.
SD 0 0 0 0 0 0 0 0.
Rank 1 1 1 1 1 1 1 2
F4 30 0 Best 0 0 0 0 0 0 0 0.
Worst 0 0 0 0 0 0 0 0.
Average 0 0 0 0 0 0 0 0.
SD 0 0 0 0 0 0 0 0.
Rank 1 1 1 1 1 1 1 4
F5 30 0 Best 0 2.62E-05 5.5019 0 4.6732 0.20837 0.25725 0.
Worst 0.000035 0.001631 8.7312 8.9902 5.8353 262.2 4.5866 15
Average 0 0.000237 7.1658 7.0876 5.1486 34.932 3.4081 5.
SD 0.00043 0.000306 0.91571 3.6114 0.28308 74.046 1.2584 4.
Rank 1 2 9 9 5 10 4 6
F6 30 0 Best 0 0 0.002008 1.3943 0.0048 0 0 1.
Worst 0 0 0.77724 2.25 0.022207 0 0 1.
Average 0 0 0.22968 1.8804 0.012455 0 0 5.
SD 0 0 0.2326 0.18309 0.004317 0 0 5.
Rank 1 1 5 7 4 1 1 2
F7 30 0 Best 0 1E-06 8.75E-07 4.38E-07 1.14E-07 1.46E-06 1.6E-06 2.
Worst 2.21E-06 2.21E-05 0.000113 5.25E-05 7.89E-05 5.09E-05 6.91E-05 7.
Average 7.01E-06 8.01E-06 2.69E-05 1.49E-05 2.53E-05 1.93E-05 1.96E-05 2.
SD 4.05E-06 4.95E-06 2.92E-05 1.09E-05 2.33E-05 1.45E-05 1.59E-05 2.

Rank 1 2 10 3 9 5 6 8
F8 30 −4189.8 Best −4189.8 −4189.7 −3551.7 −2392.2 −4071.4 −3558.1 −3479.2 −3
Worst −4189.8 −4020.9 −2455.3 −1854.9 −2847.5 −2194.5 −2808 −2

Average −4189.8 −4147.5 −2876.8 −2126 −3530.5 −2886 −3111.8 −3
SD 2.78E-12 57.212 279.42 120.6 272.09 348.4 194.69 28
Rank 1 2 6 3 11 7 9 10

F9 30 0 Best 0 0 0 0 0 6.9647 2.9857 2.
Worst 0 0 15.395 0 0 48.753 16.914 13

Average 0 0 1.913 0 0 20.43 10.215 6.
SD 0 0 3.9178 0 0 10.122 3.7495 2.
Rank 1 1 2 1 1 8 6 4
F10 30 0 Best 0 0 0 0 0 0 0 0.


Worst 0 0 3.197 0 0 0 0 0.
Average 0 0 0.10662 0 0 0 0 0.
SD 0 0 0.58369 0 0 0 0 0.
Rank 1 1 3 1 1 1 1 2
F11 30 0 Best 0 0 0 0 0 0.044258 0.012316 0.
Worst 0 0.014773 0.23598 0 0 0.74097 0.15251 0.
Average 0 0.000492 0.01984 0 0 0.26065 0.071743 0.
SD 0 0.002697 0.044677 0 0 0.18155 0.033792 0.
Rank 1 2 4 1 1 9 7 6
F12 30 0 Best 0 0 0.000395 0.061063 0.000217 0 0 0
Worst 0 0 0.16786 2.4347 0.63623 1.8684 0 2.
Average 0 0 0.03538 0.46583 0.023788 0.30086 0 2.
SD 0 0 0.038961 0.44594 0.11573 0.53298 0 6.
Rank 1 1 4 8 5 0 1 2
F13 30 0 Best 0 0 0.001955 0 0.46975 0 0 0
Worst 0 1.31E-08 0.63082 0.9 0.99431 0 0 1.
Average 0 8.03E-10 0.29821 0.09 0.82108 0 0 3.
SD 0 3.07E-09 0.14465 0.27462 0.14167 0 0 3.
Rank 1 2 7 6 9 1 1 3
F14 2 1 Best 1 0.998 0.998 0.998 2.9821 0.998 0.998 0.
Worst 0.998 0.998 4.9505 1.0057 12.671 20.153 0.998 15
Average 0.998 0.998 1.9895 0.99858 10.561 5.3308 0.998 4.
SD 0 0 1.1342 0.001822 3.1698 4.7139 0 3.
Rank 1 1 5 3 9 8 1 7
F15 4 0.0003 Best 0.0003 0.000307 0.000308 0.000694 0.000327 0.000355 0.000307 0.
Worst 0.000307 0.000307 0.001606 0.001512 0.10929 0.020363 0.020363 0.
Average 0.000307 0.000307 0.000522 0.000965 0.010701 0.005947 0.002087 0.
SD 1.94E-13 1.94E-13 0.000324 0.000341 0.021268 0.008843 0.004994 0.
Rank 1 1 2 5 11 10 7 8
F16 2 −1.0316 Best −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1
Worst −1.0316 −1.0316 −1.0316 −1.0315 −1.0316 −1.0316 −1.0316 −1
Average −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1.0316 −1
SD 6.32E-16 6.39E-16 2.55E-07 3.12E-05 6.67E-08 6.32E-16 6.78E-16 1.
Rank 1 2 8 10 7 1 3 4
F17 2 3 Best 3 3 3 3 3 3 3 3
Worst 3 3 3.001 3.0001 30 3 3 30

Average 3 3 3.0002 3 11.1 3 3 3.
SD 6.23E-16 8.33E-16 0.000259 2.69E-05 12.584 1.26E-15 6.23E-16 4.

Rank 1 2 9 8 11 3 1 10
F18 3 −3.86 Best −3.86 −3.8628 −3.8628 −3.8617 −3.8608 −3.8628 −3.8628 −3
Worst −3.8628 −3.8628 −3.8547 −3.8544 −3.8498 −3.8628 −3.8628 −3

Average −3.8628 −3.8628 −3.8608 −3.8555 −3.854 −3.8628 −3.8628 −3
SD 2.71E-15 2.71E-15 0.003022 0.002082 0.002684 2.54E-15 2.71E-15 2.

Rank 1 1 6 3 4 1 1 1
F19 6 −3.32 Best −3.322 −3.322 −3.3208 −3.1652 −3.2102 −3.322 −3.322 −3
Worst −3.322 −3.322 −2.8376 −1.9177 −3.0065 −3.2031 −3.2031 −3
Average −3.322 −3.322 −3.2445 −3.0569 −3.1086 −3.219 −3.2625 −3


SD 2.28E-15 2.28E-11 0.12036 0.24041 0.055956 0.041107 0.060463 0.


Rank 1 1 7 10 9 2 5 4
F20 4 −10.1532 Best −10.153 −10.153 −9.9702 −8.7714 −7.5309 −10.153 −10.153 −1
Worst −10.153 −10.153 −2.5645 −0.88199 −2.2566 −2.6305 −2.6305 −2
Average −10.153 −10.153 −5.5877 −5.5321 −4.2697 −6.4731 −7.4761 −5
SD 5.41E-16 5.41E-15 2.849 1.4164 1.0722 3.3954 3.4128 3.
Rank 1 1 8 9 10 4 5 3
F21 4 −10.4028 Best −10.403 −10.403 −10.376 −8.4361 −7.2375 −10.403 −10.403 −1
Worst −10.403 −10.403 −2.6892 −0.90808 −1.7517 −1.8376 −2.7519 −2
Average −10.403 −10.403 −5.8629 −5.9236 −4.1539 −7.2929 −8.6769 −7
SD 3.61E-15 3.63E-15 2.9202 1.4845 1.2648 3.6856 2.9752 3.
Rank 1 1 7 9 10 6 4 5
F22 4 −10.5363 Best −10.536 −10.536 −10.393 −10.118 −7.8174 −10.536 −10.536 −1
Worst −10.536 −10.536 −2.4055 −5.1285 −2.1909 −2.4217 −2.4217 −1
Average −10.536 −10.536 −5.7166 −6.371 −4.4501 −7.0219 −8.8234 −7
SD 2.6E-15 3.76E-14 2.5849 1.3998 1.7161 3.8783 3.2147 3.
Rank 1 1 7 8 10 6 4 5
Mean rank 4.5 4.48 6.34 6.95 7.18 5.68 5.48 6.
Rank 1 2 8 6 9 6 4 9


Table 7
Comparative results of Wilcoxon's test for classical benchmark functions.
Algorithm N Mean Rank Sum of Ranks Z Asymp. Sig. (2-tailed)

ADMO - GCRA Negative Ranks 2 4.50 9.00 −0.314b 0.753


Positive Ranks 4 3.00 12.00
Ties 16
Total 22
LSHADEcnEpSin - GCRA Negative Ranks 2 7.00 14.00 −2.201b 0.028

Positive Ranks 11 7.00 77.00
Ties 9
Total 22

LSHADE - GCRA Negative Ranks 2 3.50 7.00 −2.312b 0.021
Positive Ranks 9 6.56 59.00
Ties 11
Total 22
DMO - GCRA Negative Ranks 1 4.00 4.00 −2.900b 0.004
Positive Ranks 12 7.25 87.00
Ties 9
Total 22

LSHADE_SPACMA - GCRA Negative Ranks 2 3.50 7.00 −1.540b 0.123
Positive Ranks 6 4.83 29.00
Ties 14
Total 22
UMOEA - GCRA Negative Ranks 2 3.50 7.00 −1.540b 0.123
Positive Ranks 6 4.83 29.00
Ties 14
Total 22
WOA - GCRA Negative Ranks 2 9.00 18.00 −2.166b 0.03
Positive Ranks 12 7.25 87.00
Ties 8
Total 22
AOA - GCRA Negative Ranks 2 4.50 9.00 −0.845b 0.398
Positive Ranks 5 3.80 19.00
Ties 15
Total 22

CPSOGSA - GCRA Negative Ranks 2 6.50 13.00 −1.778b 0.075


Positive Ranks 9 5.89 53.00
Ties 11
Total 22
GSK - GCRA Negative Ranks 2 3.50 7.00 −2.691b 0.007
Positive Ranks 11 7.64 84.00
Ties 9

Total 22
b. Based on negative ranks.

The results obtained by GCRA were very competitive for the rest of the functions. However, below-par performance was recorded by the AOA, DMO, and WOA for this set of problems, which were competitive only on F4.
The convergence behavior of the algorithms as they solve this set of problems is shown in Fig. 9. The GCRA and the high performers showed fast and steady convergence during the first few iterations and remained steady around the best solution afterward. This behavior implies effective exploration early in the iterations and exploitation later on; in all cases, a balance between exploration and exploitation is maintained. Specifically, the GCRA shows that quality solutions are found within a few iterations, while the exploration and exploitation phases continue throughout the optimization process.

4.2.5. Results for classic engineering optimization problems


The role of optimization algorithms is crucial in the engineering domain, as they help balance the operational characteristics of designs to reduce costs. In this context, the optimization outcomes for six classical examples are presented, comparing the GCRA method with other metaheuristic strategies and, where available, with standard results obtained from analytical methods. This comparison is intended to underscore the efficiency and effectiveness of the GCRA method in tackling intricate engineering problems. The findings could offer valuable perspectives for the future use of optimization algorithms in the field of engineering.

4.2.5.1. Three-bar truss problem. The objective of optimizing the 3-bar truss engineering problem, illustrated in Fig. 10, is to minimize the weight of the 3-bar structure while it supports a total load P exerted vertically downward. The statically loaded 3-bar truss is constrained by the stress on each bar. The 3-Bar Truss Design (3-BTD) has two design variables, namely the cross-sectional areas x1 and x2, as indicated in Fig. 10. Equation (14) presents the mathematical model for the 3-BTD [83].

Table 8
Comparative results for CEC 2020 test functions.
Function Global Value GCRA AOA LSHADEcnEpSin LSHADE ADMO CPSOGSA LSHADE_SPACMA WOA

F1 100 Best 100 1.02E+07 104.2 1.81E+09 100.69 100.47 100.96 142.
Worst 101.39 4.12E+09 11441 1.29E+10 12694 5621.7 5494.9 7359
Average 100.4 7.19E+08 2853.6 6.54E+09 4722.1 1692.5 876.9 2795
SD 0.3515 1.20E+09 2966.3 2.57E+09 4289.2 1682.6 1302.9 2097
Rank 1 9 5 11 3 2 4 7
F2 1100 Best 1130.2 1451.2 1126.9 1359.4 1594.2 1118.5 1115.2 1160
Worst 1375.1 2666.7 2567.9 2707.7 2832 1823.1 2263.4 1724
Average 1241.4 2194.6 1709.7 2021.1 2034.2 1453.2 1590.8 1552
SD 63.833 288.25 305.26 288.74 298.97 171.22 281.95 131.
Rank 1 10 6 8 9 2 5 4
F3 700 Best 714.01 747.21 720.96 769.12 720.77 711.97 714.7 717.
Worst 724.97 835.07 799.36 834.54 883.67 729.72 730.06 726.
Average 719.69 778.54 745.26 797.74 761.4 720.38 720.67 722.
SD 2.963 19.441 22.3 15.613 31.878 3.7772 4.1885 2.14
Rank 1 10 7 11 8 2 3 4
F4 1900 Best 1900.2 1903.1 1900.8 2571.8 1900.1 1900.5 1900.5 1901
Worst 1901.8 27556 1916 2.00E+05 1906.5 1901.5 1905.1 1901
Average 1901.2 3264.3 1905.3 52046 1901.8 1901 1901.5 1901
SD 0.3165 5087.8 2.9321 57296 1.2935 0.28017 0.93587 0.22
Rank 1 8 6 9 4 1 2 3
F5 1700 Best 1710.1 3674.5 1868.4 3679.4 2268.2 1957.2 2682.9 3655
Worst 1797.9 5.71E+05 5.67E+05 3.10E+05 1.22E+05 18436 2.18E+05 7984
Average 1751.3 69217 84962 1.55E+05 19498 6326.5 39375 2425
SD 16.644 1.63E+05 1.69E+05 70051 30567 4401.8 57716 1959
Rank 1 9 10 11 5 2 8 7
F6 1600 Best 1600.2 1600.9 1600.5 1601.1 1600.5 1600.5 1600.5 1600
Worst 1600.5 1659.5 1619.5 1628.8 1727.1 1659.5 1618 1600
Average 1600.5 1606.2 1603.7 1615.1 1626.1 1617.2 1603.2 1600
SD 0.001129 12.062 5.708 8.062 34.97 20.581 5.8731 0.07
Rank 1 8 7 9 11 10 6 2
F7 2100 Best 2101.1 4674 2102.1 3334.3 2891.9 2101.8 2121.4 2208
Worst 2122.5 27880 26801 10767 23780 5640.2 24664 5785
Average 2108 11226 9730.3 6361.5 9699.7 2741.6 9666.2 3097
SD 4.5969 6225.9 7716.8 2263.6 7186.9 746.11 7064.8 1025

Rank 1 11 10 4 9 2 8 3
F8 2200 Best 2200 2238.5 2238.3 2456.4 2301.1 2301.1 2222.3 2236
Worst 2304.2 2544.7 2321.4 3388.6 4219.3 2305.1 2304.2 2301

Average 2278.3 2362.6 2302.8 2770.9 2428.8 2302.4 2299.4 2295
SD 43.878 71.069 21.088 237.13 424.81 1.0982 14.607 15.2
Rank 1 7 5 9 10 4 3 2

F9 2400 Best 2500 2521.9 2500 2679.8 2500 2500 2500 2599
Worst 2739.7 2830.3 2830.6 2975.7 2804.9 2753.9 2773.4 2753

Average 2559.5 2733.4 2781.8 2823.7 2743.4 2717.7 2722.1 2732
SD 83.741 97.457 56.986 66.405 83.431 74.071 75.875 36.4
Rank 1 5 10 11 7 2 3 4
F10 2500 Best 2602.3 2901 2600.1 2908.4 2897.7 2898.1 2897.7 2900

Worst 2897.8 3055.1 3024.3 3433.7 3024.2 2946 2948.9 2946


Average 2887.9 2955.5 2938.1 3121.4 2931.6 2924.7 2930.5 2918
SD 53.863 38.61 72.682 113.74 30.553 23.104 23.126 16.1
Rank 1 9 8 11 7 4 5 2
Mean rank 1.10 9.00 7.70 9.80 7.80 3.30 4.75 3.75
Rank 1 10 7 11 8 2 5 3


Table 9
Results of GCRA for CEC 2011 problems.
Function Best Worst Mean Std

F1 0.00E+00 8.25E-02 4.06E-03 1.51E-02


F2 −2.83E+01 −2.71E+01 −2.84E+01 2.01E-01
F3 1.15E-05 1.15E-05 1.15E-05 6.08E-11
F4 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F5 −3.78E+01 −3.59E+01 −3.68E+01 7.33E-01

F6 −2.99E+01 −1.82E+01 −2.70E+01 4.04E+00
F7 8.70E-01 1.30E+00 1.14E+00 1.02E-01
F8 2.19E+02 2.19E+02 2.19E+02 0.00E+00

F9 3.00E+05 3.01E+05 3.00E+05 1.83E+00
F10 −6.80E+00 −6.80E+00 −6.80E+00 0.00E+00
F11 5.09E+04 5.30E+04 5.22E+04 6.27E+02
F12 1.06E+06 1.07E+06 1.06E+06 1.41E+03
F13 1.54E+04 1.54E+04 1.54E+04 2.61E-06
F14 1.79E+04 1.83E+04 1.80E+04 3.09E+01
F15 3.27E+04 3.27E+04 3.27E+04 5.87E-01
F16 1.23E+05 1.27E+05 1.24E+05 9.07E+02

F17 1.85E+06 1.90E+06 1.89E+06 1.27E+04
F18 9.35E+05 9.44E+05 9.41E+05 3.00E+03
F19 9.42E+05 9.59E+05 9.47E+05 4.58E+03
F20 9.36E+05 9.49E+05 9.40E+05 3.29E+03
F21 1.26E+01 1.79E+01 1.51E+01 1.27E+00
F22 1.19E+01 1.31E+01 1.27E+01 1.49E+00

In Equation (14), the objective is the weight of the truss, subject to a stress constraint on each of the three bars and to the variable ranges given in [83].
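Because the typeset equation is not reproduced here, a standard statement of the 3-BTD model (cf. [83,84]), which Equation (14) is assumed to follow, is:

\min_{x_1, x_2} f(x) = (2\sqrt{2}\,x_1 + x_2)\, l

subject to

g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0, \quad
g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0, \quad
g_3(x) = \frac{1}{x_1 + \sqrt{2}\,x_2}\, P - \sigma \le 0,

with 0 \le x_1, x_2 \le 1, l = 100 cm, and P = \sigma = 2 kN/cm^2.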
Table 11 showcases the outcomes of the optimization process performed on this problem using the candidate metaheuristic algorithms.

4.2.5.2. Gear train problem. Optimizing the gear train design problem is an interesting one in the field of mechanical engineering. It is an unconstrained discrete design optimization problem, where the goal is to match a required ratio of the output to the input shaft's angular velocity as closely as possible. The design variables in this problem are the numbers of teeth on the four gears, as represented in the model below.
The mathematical model for the GTD problem is given in Equation (15) [85]; the problem is illustrated in Fig. 11.
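A standard statement of the GTD model (cf. [85]), which Equation (15) is assumed to follow, with x_1, x_2, x_3, x_4 denoting the tooth counts of the four gears, is:

\min f(x) = \left( \frac{1}{6.931} - \frac{x_3\, x_2}{x_1\, x_4} \right)^2, \qquad x_i \in \{12, 13, \ldots, 60\}.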

The obtained results from different candidate metaheuristic algorithms are presented in Table 12.
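Because the discrete search space of this standard model is tiny by modern standards, its global optimum can be verified exhaustively. The following self-contained Python sketch is ours (illustrative only, not the authors' code) and recovers the well-known optimum of about 2.7E-12:

import numpy as np

teeth = np.arange(12, 61, dtype=float)             # admissible numbers of teeth
prods = np.multiply.outer(teeth, teeth).ravel()    # every product of two tooth counts
ratio = np.divide.outer(prods, prods)              # every achievable gear ratio (needs ~100 MB)
err = (1.0 / 6.931 - ratio) ** 2                   # squared deviation from the target ratio
i, j = np.unravel_index(np.argmin(err), err.shape)
n = teeth.size
print("numerator teeth:  ", teeth[i // n], teeth[i % n])   # e.g., 16 and 19
print("denominator teeth:", teeth[j // n], teeth[j % n])   # e.g., 43 and 49
print("objective value: %.6e" % err[i, j])                 # about 2.700857e-12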



Fig. 8. Qualitative results for some of the studied benchmark problems from F1–F13.

4.2.5.3. Welded beam design problem. The Welded Beam Design (WBD) problem, a well-known optimization problem, was employed to assess the effectiveness of GCRA and various other optimization algorithms. As shown in Fig. 12, the WBD problem involves designing a welded beam subject to several constraints, namely the shear stress (τ), the beam bending stress (θ), the bar buckling load (Pc), and the beam end deflection (δ). The objective is to minimize the production cost of the design, as illustrated in Equation (16). The design variables are h, l, t, and b, where h is the weld thickness, l is the length of the welded joint, t is the height of the bar, and b is the thickness of the bar [86].





The intervals for the design variables and the parameters of the WBD are set as in [86].
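Equation (16) and its constraint set are not reproduced here. A standard statement of the WBD model (cf. [86]), which the authors' formulation is assumed to follow, is:

\min f(x) = 1.10471\, h^2 l + 0.04811\, t b\, (14.0 + l)

subject to

g_1 = \tau(x) - \tau_{\max} \le 0, \quad g_2 = \theta(x) - \theta_{\max} \le 0, \quad g_3 = h - b \le 0,
g_4 = \delta(x) - \delta_{\max} \le 0, \quad g_5 = P - P_c(x) \le 0,

with 0.1 \le h, b \le 2 and 0.1 \le l, t \le 10, where \tau is the shear stress in the weld, \theta = 6PL/(b t^2) is the bending stress, \delta = 4PL^3/(E t^3 b) is the end deflection, and P_c is the bar buckling load; the full expressions and the usual parameter values (P = 6000 lb, L = 14 in, E = 30 \times 10^6 psi, G = 12 \times 10^6 psi, \tau_{\max} = 13,600 psi, \theta_{\max} = 30,000 psi, \delta_{\max} = 0.25 in) are given in [86].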

In the same vein, the obtained results are presented in Table 13. These results are also compared with the results obtained by analytical means available in the literature, shown in Table 14.
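A minimal Python sketch of evaluating this standard model with a static penalty, the usual way metaheuristics handle such constraints, is given below; it is ours (the helper name wbd is hypothetical), not the authors' implementation:

import math

def wbd(h, l, t, b):
    # Cost and constraint values of the standard welded-beam model (cf. [86]).
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, theta_max, delta_max = 13600.0, 30000.0, 0.25
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
    tau_p = P / (math.sqrt(2.0) * h * l)                       # primary shear stress
    M = P * (L + l / 2.0)                                      # bending moment at the weld
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J                                         # torsional shear stress
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * l / R + tau_pp**2)
    theta = 6.0 * P * L / (b * t**2)                           # bending stress
    delta = 4.0 * P * L**3 / (E * t**3 * b)                    # end deflection
    Pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36.0) / L**2) * (
        1.0 - t / (2.0 * L) * math.sqrt(E / (4.0 * G)))        # buckling load
    g = (tau - tau_max, theta - theta_max, h - b, delta - delta_max, P - Pc)
    return cost, g

cost, g = wbd(0.2057, 3.4705, 9.0366, 0.2057)           # a well-known near-optimal design
penalized = cost + 1e6 * sum(max(0.0, gi) for gi in g)  # objective a metaheuristic would minimize
print(round(cost, 4), [round(gi, 1) for gi in g])
# cost is about 1.72; g values <= 0 mean feasible (the tiny positives printed here
# come only from rounding the design variables to four decimals).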

4.2.5.4. Pressure vessel design problem. The PVD problem pertains to the design of a pressure vessel, as illustrated in Fig. 13. The optimization procedure for PVD is governed by four design variables: the inner radius (R), the head thickness (Th), the length of the ves-
Table 10
Comparative results for CEC 2011 problems.
Algorithms LSHADE_SPACMA

Function Mean Std Mean Std Mean Std Mean Std Mean Std Mean Std

F1 4.06E-03 1.51E-02 4.56E-03 1.64E-02 1.10E+01 1.61E+00 1.61E+00 3.34E+00 1.76E+01 5.78E+00 1.12E+01 6.6
F2 −2.84E+01 2.01E-01 −2.84E+01 2.17E-01 −1.46E+01 2.33E+00 −2.60E+01 1.72E+00 −2.48E+01 1.84E+00 −2.83E+01 3.9
F3 1.15E-05 6.08E-11 1.15E-05 6.98E-11 1.15E-05 8.90E-12 1.15E-05 8.75E-13 2.03E-01 1.23E-02 1.15E-05 6.7
F4 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.0
F5 −3.68E+01 7.33E-01 −3.58E+01 7.43E-01 1.00E+30 2.89E+14 −3.24E+01 1.22E+00 −2.39E+01 2.18E+00 −3.59E+01 6.6
F6 −2.70E+01 4.04E+00 −2.60E+01 4.44E+00 1.00E+30 2.89E+14 −2.63E+01 1.46E+00 −1.71E+01 2.36E+00 −2.91E+01 2.0
F7 1.14E+00 1.02E-01 1.15E+00 1.08E-01 1.06E+00 7.92E-02 1.13E+00 1.54E-01 1.61E+00 1.73E-01 1.27E+00 8.8
F8 2.19E+02 0.00E+00 2.20E+02 0.00E+00 2.20E+02 0.00E+00 2.20E+02 0.00E+00 2.22E+02 6.93E+00 2.51E+02 1.4
F9 3.00E+05 1.83E+00 3.01E+05 1.88E+00 1.69E+05 1.01E+04 4.72E+05 8.86E+04 1.66E+06 7.50E+04 3.01E+05 3.0
F10 −6.80E+00 0.00E+00 −6.80E+00 0.00E+00 −2.17E+01 1.20E-01 −2.11E+01 2.01E-01 −1.12E+01 7.86E-01 −9.02E+00 5.2
F11 5.22E+04 6.27E+02 5.24E+04 6.47E+02 5.21E+04 5.48E+02 9.23E+05 2.55E+05 3.97E+05 1.19E+05 8.55E+06 2.4
F12 1.06E+06 1.41E+03 1.07E+06 1.44E+03 1.08E+06 9.44E+03 3.32E+06 6.23E+05 4.68E+06 4.60E+05 1.00E+30 1.4
F13 1.54E+04 2.61E-06 1.54E+04 2.63E-06 1.54E+04 1.39E+00 1.54E+04 1.97E-01 1.55E+04 2.79E+01 1.00E+30 1.4
F14 1.80E+04 3.09E+01 1.81E+04 3.99E+01 1.81E+04 3.37E+01 1.85E+04 3.71E+01 1.92E+04 2.58E+02 7.00E+29 4.8
F15 3.27E+04 5.87E-01 3.27E+04 5.91E-01 3.28E+04 1.43E+01 3.28E+04 3.16E+01 3.32E+04 1.28E+02 1.00E+30 1.4
F16 1.24E+05 9.07E+02 1.27E+05 9.39E+02 1.28E+05 8.99E+02 1.30E+05 7.07E+02 1.46E+05 7.97E+03 1.00E+30 1.4
F17 1.89E+06 1.27E+04 1.90E+06 1.57E+04 1.90E+06 1.02E+04 1.92E+06 1.64E+04 1.56E+09 1.47E+09 1.00E+30 1.4
F18 9.41E+05 3.00E+03 9.41E+05 3.01E+03 9.43E+05 3.27E+03 9.47E+05 3.68E+03 3.08E+06 9.94E+05 9.40E+05 1.1
F19 9.47E+05 4.58E+03 9.48E+05 4.98E+03 9.45E+05 2.37E+03 1.22E+06 7.50E+04 4.24E+06 2.17E+06 9.44E+05 1.8
F20 9.40E+05 3.29E+03 9.40E+05 3.27E+03 9.40E+05 2.31E+03 9.52E+05 9.28E+03 3.66E+06 1.75E+06 9.40E+05 2.0
F21 1.51E+01 1.27E+00 1.51E+01 1.25E+00 1.51E+01 5.97E-01 1.43E+01 2.56E+00 1.00E+30 1.48E+14 4.06E+01 4.6

F22 1.27E+01 1.49E+00 1.27E+01 1.48E+00 1.43E+01 2.13E+00 1.66E+01 1.15E+00 3.89E+01 8.73E+00 4.21E+01 3.7



Fig. 9. Convergence analysis.



Fig. 10. The 3-BTD.


Table 11
Results for 3-bar truss problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84]

x1 0.209515 0.21949849 0.214256 0.234256 0.239498 0.244256176 0.48867 0.226256 0.75356


x2 0.178373 0.178372818 0.192457 0.198424 0.198373 0.39245187 0.507569 0.193452 0.55373
Best 106.93 −47727 106.93 1.00E+30 106.93 1.00E+30 106.93 107.56 268.512
Worst 106.96 −38638 106.93 1.00E+30 106.93 1.00E+30 106.93 123.89 NA
Average 106.94 −42419 106.93 1.00E+30 106.93 1.00E+30 106.93 112.07 NA

Std. Dev. 0.006604 3168.9 2.22E-10 2.89E+14 2.11E-14 2.89E+14 5.22E-07 4.0699 NA

Fig. 11. The GTD.

Table 12
Results for gear train problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84] PBO [82]

x1 55 48 49 34 53 43 49 49 52.1581 50.04797
x2 16 19 19 13 26 16 19 19 23.01251 23.32433
x3 16 19 16 20 15 34 19 34 16.1401 14.82582
x4 43 52 43 53 51 43 49 49 47.2105 47.88919
Best 2.70E-12 5.57E-15 2.70E-12 0.73226 2.70E-12 0.73226 2.70E-12 1.17E-10 0.08709 1.37E-15
Worst 2.31E-11 4.33E-10 0.003105 0.73226 3.64E-09 0.73226 3.30E-09 2.73E-08 NA NA
Average 1.29E-11 5.93E-11 0.000155 0.73226 7.64E-10 0.73226 1.27E-09 9.43E-09 NA NA
Std. Dev. 1.05E-11 1.17E-10 0.000694 0 9.62E-10 0 1.03E-09 1.10E-08 NA NA



Fig. 12. The WBD.


Table 13
Results for welded beam design problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84] PBO [82]

x1 0.20523 0.247499 0.20573 0.2057 0.205563 0.205396 0.20523 0.20556 0.21846 0.23258
x2 0.20573 0.250284 0.20573 0.2057 0.236204 0.205806 0.20573 0.20581 3.51024 3.49706
x3 3.26225 2.536813 3.47049 3.4705 3.484293 3.4702 3.47049 3.4705 8.87254 8.91026
x4 9.03704 10 9.03662 9.0366 9.03662 9.036624 9.03704 9.03662 0.22491 0.21194
Best 1.6952 −1.23E+06 1.6967 2.55E+22 1.7121 2.55E+22 1.7457 1.7166 1.86612 1.79863

Worst 1.6952 −54931 2.5211 2.55E+22 2.0009 2.55E+22 3.647 2.2837 NA NA
Average 1.6952 −5.62E+05 1.8894 2.55E+22 1.785 2.55E+22 2.1113 1.8567 NA NA
Std. Dev. 5.68E-07 5.71E+05 0.30372 0 0.064769 0 0.50797 0.13269 NA NA

Table 14
Standard results from classic or analytical approaches in solving welded beam design problems available in the literature [84].
Method x1 x2 x3 x4 F(x)*

[87] Siddall 0.2444 6.2189 8.2915 0.2444 2.38154


[88] Ragsdell 0.2455 6.1960 8.2730 0.2455 2.38594

[88] Random 0.4575 4.7313 5.0853 0.6600 4.11856
[88] Simplex 0.2792 5.6256 7.7512 0.2796 2.53073
[88] David 0.2434 6.2552 8.2915 0.2444 2.38411
[88] Approx 0.2444 6.2189 8.2915 0.2444 2.38154
Fig. 13. The PVD.

sel's cylindrical section (L), and the shell thickness (Ts). The goal is to reduce the expenses related to raw materials and welding in
the construction of the pressure vessel, as depicted in Equation (17).
The design variables are specified within the ranges given in [85].
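Equation (17) is not reproduced here. A standard statement of the PVD model (cf. [85,89]), which the authors' formulation is assumed to follow, with x = (T_s, T_h, R, L), is:

\min f(x) = 0.6224\, T_s R L + 1.7781\, T_h R^2 + 3.1661\, T_s^2 L + 19.84\, T_s^2 R

subject to

g_1 = -T_s + 0.0193\, R \le 0, \quad g_2 = -T_h + 0.00954\, R \le 0,
g_3 = -\pi R^2 L - \frac{4}{3}\pi R^3 + 1{,}296{,}000 \le 0, \quad g_4 = L - 240 \le 0,

where the thicknesses are usually restricted to integer multiples of 0.0625 in. and 10 \le R, L \le 200.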

Table 15 shows the results of applying the candidate metaheuristic algorithms to this problem. These results are also compared with the results obtained by analytical means available in the literature, shown in Table 16.


Table 15
Results for pressure vessel design problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84] PBO [82]

x1 0.446319 0.579194 0.462095 10 10 10 0.69441 0.461506 0.81425 0.81327


x2 0.230237 0.241237 0.240237 10 10 10 0.253932 0.240237 0.44521 0.43702
x3 40.33173 40.37747 40.35339 53.67151 56.07885 53.47238 52.93147 40.32483 42.20231 42.04601
x4 200 200 199.5304 71.64606 56.40497 72.98132 76.66704 200 176.6215 176.756
Best 5367.9 3872.3 4527.3 1.68E+16 4544.1 1.68E+16 4529.4 4530.5 6113.32 6057.547

Worst 15076 5096.1 4527.3 1.68E+16 5579.7 1.68E+16 7539.4 5578.8 NA NA
Average 5567.7 4923 4527.3 1.68E+16 5230.1 1.68E+16 5615 5050.1 NA NA
Std. Dev. 792.5 424.47 2.82E-10 0 355.28 0 1120.5 369.61 NA NA

Table 16
Standard results from classic or analytical approaches in solving pressure vessel optimization problems available in the literature [84].
Method x1 x2 x3 x4 F(x)*

[89] Lagrange multiplier 1.125 0.625 58.291 43.69 7198.0428


[1] Branch-bound 1.125 0.625 47.7 117.701 8129.1036

4.2.5.5. Compression spring design problem. The CSD problem, shown in Fig. 14, focuses on minimizing the weight of a spring. The spring type considered here is a tension/compression spring. It has three design parameters, namely the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Assuming x = (d, D, N), the objective function and constraints are given in Equation (18), with the intervals for the design variables as specified in [90].
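Equation (18) is not reproduced here. A standard statement of the CSD model (cf. [90,91]), which the authors' formulation is assumed to follow, is:

\min f(x) = (N + 2)\, D\, d^2

subject to

g_1 = 1 - \frac{D^3 N}{71785\, d^4} \le 0, \quad
g_2 = \frac{4D^2 - dD}{12566\,(D d^3 - d^4)} + \frac{1}{5108\, d^2} - 1 \le 0,
g_3 = 1 - \frac{140.45\, d}{D^2 N} \le 0, \quad
g_4 = \frac{d + D}{1.5} - 1 \le 0,

with 0.05 \le d \le 2.0, 0.25 \le D \le 1.3, and 2 \le N \le 15.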


The results obtained for GCRA and the other metaheuristic algorithms are shown in Table 17. These results are also compared with the results obtained by analytical means available in the literature, shown in Table 18.

4.2.5.6. Cantilever beam design problem. Cantilever beams are structural elements that extend freely from one end while being rigidly fixed at the other end, as shown in Fig. 15. They find applications in various engineering fields, including civil, mechanical, and aero-

Fig. 14. The CSD.

Table 17
Results for compression spring design problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84] PBO [82]

d 0.13915 0.148316747 0.13631 2 2.00008 2 1 0.13785 0.05189 0.05102


D 1.3 1.3 1.22064 1.2 1.2 1.2 1.2 1.26129 0.36142 0.35756
N 11.8924 15 13.2299 2 2 2 3 12.5361 11.58436 11.6994
Best 3.662 −1.48E+09 3.6619 99.986 3.6619 99.986 3.6619 3.6619 0.01321 0.01275
Worst 3.7239 −1.20E+08 3.6619 99.986 3.7311 99.986 3.7313 3.7256 NA NA
Average 3.6882 −4.33E+08 3.6619 99.986 3.7016 99.986 3.6843 3.6766 NA NA
Std. Dev. 0.02281 4.23E+08 3.95E-16 2.92E-14 0.024346 2.92E-14 0.026358 0.02561 NA NA


Table 18
Standard results from classic or analytical approaches in solving compression spring optimization problems available in the literature [84].
Method x1 x2 x3 F(x)*

[91] Constraint correction 14.25 0.3159 0.05 0.0128334


[92] Mathematical optimization 9.1854 0.39918 0.053396 0.0127303

Fig. 15. The CBD.

space engineering. The goal is to minimize the weight of the cantilever beam, where the decision variables are related to the dimensions of the beam. Equation (19) shows the objective function, subject to the constraint given in [78].
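Equation (19) and its constraint are not reproduced here. A standard statement of the CBD model (cf. [93]), which the authors' formulation is assumed to follow, is:

\min f(x) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5)

subject to

g(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0, \qquad 0.01 \le x_i \le 100.

Under this scaling the known optimum is about 1.34; the values of roughly 13.34 reported for RFO in Table 19 and for the classical methods in Table 20 appear to stem from a weight constant ten times larger, since the reported design variables nearly coincide with those of the other algorithms in Table 19.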


Lastly, the results obtained by the metaheuristic algorithms used in this study are presented in Table 19. These results are also compared with the results obtained by analytical means available in the literature, shown in Table 20.

Table 19
Results for cantilever beam design problem.
Algorithms GCRA LSHADEcnEpSin GOA LSHADE HHO LSHADE_SPACMA WOA CPSOGSA RFO [84]

x1 5.389588 6.01513 5.689588 5.694297 6.01878 6.01513 5.689588 5.694297 6.00845



x2 5.020725 5.025255 4.41889 5.30344 5.025255 4.41889 5.020755 5.025255 5.30485


x3 4.361692 4.253986 4.119345 4.49587 4.261692 4.253986 4.119345 4.49587 4.49215
x4 3.212994 3.312994 3.314226 7.754706 3.312994 3.314226 7.754706 3.49896 3.49842
x5 2.040863 2.037547 2.15278 2.15428 2.15428 2.040863 2.037547 2.15278 2.14463
Best 1.3705 −35.162 1.3004 1.56E+16 1.3004 1.56E+16 1.3005 1.3004 13.34954
Worst 1.689 −31.235 1.3004 1.56E+16 1.3005 1.56E+16 1.3011 1.3004 NA
Average 1.5127 −31.935 1.3004 1.56E+16 1.3004 1.56E+16 1.3006 1.3004 NA
Std. Dev. 0.09707 0.9136 3.14E-16 2.052 6.74E-06 2.052 0.000195 3.71E-16 NA

Table 20
Standard results from classic or analytical approaches in solving cantilever beam optimization problems available in the literature [84].
Method x1 x2 x3 x4 x5 F(x)*

[93] Generalized Convex Approximation 6.01 5.3 4.49 3.49 2.15 13.3442
[93] Moving Asymptotes 6.01 5.3 4.49 3.49 2.15 13.3442


4.2.6. Results discussion of the optimization of engineering problems


For the three-bar truss problem, the GCRA, GOA, HHO, and WOA found the lowest value of the cost function at least once. Throughout 30 runs of the algorithms, the GOA returned the lowest average value of the cost function, closely followed by the proposed GCRA and the red fox optimization algorithm (RFO). We noticed that the LSHADE and its two variants used in this study returned abnormal results. This can be attributed to the parameter settings used; with better fine-tuning, we believe they will live up to their potential. Considering the welded beam problem, similar observations can be made, with GCRA returning the lowest average value of the cost function over 30 runs. The GOA was very competitive too, closely following the GCRA. The RFO and polar bear optimization algorithm (PBO) were placed 5th and 6th, respectively. Comparing these results to those of Table 14 (literature results), it can be seen that they are close to those of Siddall, Ragsdell, and David's methods.
Furthermore, in the compression spring problem, the RFO and PBO returned results close to the standard results from classic/analytical approaches in the literature shown in Table 18. It can be observed that the GCRA, GOA, HHO, WOA, and CPSOGSA returned values far above the lowest in Table 18; however, the GCRA closely followed the RFO and PBO. For the cantilever beam design problem, the GCRA, GOA, HHO, WOA, and CPSOGSA returned results far below the RFO and those in Table 20. Looking further, it can be noticed that the proposed GCRA closely followed the RFO, which is an advantage for the GCRA as a promising algorithm.
In the pressure vessel problem, the GOA found the minimum value of the cost function, with the GCRA trailing closely behind. However, the RFO's result is more aligned with the results in Table 16, demonstrating the GCRA's competitiveness against other methods. In general, the techniques used in this problem achieved lower cost function values than the Lagrange multiplier and branch-and-bound methods presented. For the gear train problem, the RFO returned the best result, followed closely by the GCRA and GOA. The top results were obtained by the RFO and GCRA, with a negligible difference between them. It is important to note that the results obtained from all heuristics vary significantly in general.
These experiments demonstrate the competitiveness of the proposed GCRA, which consistently ranked among the top two in all the engineering problems tackled in this study. While it may not be the absolute best among the evaluated heuristics, it is noteworthy that the GCRA performs comparably to renowned classical algorithms in a similar number of experiments, which is an impressive outcome.

4.3. Statistical analysis


The results in Section 4.2 provide significant statistics that offer insights into the exploration and exploitation capabilities of the proposed algorithm. However, these alone are insufficient to validate the algorithm's effectiveness. Additional statistical tests using the 'mean' values are required to demonstrate the robustness and efficacy of the proposed algorithm. Both Friedman's and Wilcoxon's signed-rank tests were employed for further analysis of the results.
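As a concrete illustration, both tests can be reproduced from a matrix of per-function mean values with SciPy. The sketch below is ours, not the authors' code; the file name and the 22 x 11 layout (functions by algorithms) are assumptions:

import numpy as np
from scipy.stats import friedmanchisquare, rankdata, wilcoxon

algorithms = ["GCRA", "ADMO", "LSHADEcnEpSin", "LSHADE", "GSK", "CPSOGSA",
              "LSHADE_SPACMA", "UMOEA", "DMO", "AOA", "WOA"]
means = np.loadtxt("cec2011_means.csv", delimiter=",")  # hypothetical 22 x 11 file

# Friedman test: ranks the algorithms on each function; a lower mean rank is better.
stat, p = friedmanchisquare(*(means[:, a] for a in range(means.shape[1])))
mean_ranks = rankdata(means, axis=1).mean(axis=0)
print("Friedman chi-square = %.3f, p = %.3g" % (stat, p))
for name, r in sorted(zip(algorithms, mean_ranks), key=lambda t: t[1]):
    print("%-15s mean rank = %.2f" % (name, r))

# Pairwise Wilcoxon signed-rank tests of each rival against GCRA (column 0).
# Note: wilcoxon() discards zero differences (ties) and raises if all of them are zero.
for a in range(1, means.shape[1]):
    w, p = wilcoxon(means[:, a], means[:, 0])
    print("%-15s vs GCRA: W = %s, p = %.3g" % (algorithms[a], w, p))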
The outcomes of Friedman's test are displayed in Table 21. In the context of Friedman's test, a lower mean rank indicates superior performance. Furthermore, Friedman's test presumes that "the mean distribution of the obtained results is identical." The significance level is typically set at α = 0.05. As per Table 21, the p-value = 0.000 is less than α, leading to the rejection of this hypothesis. The GCRA ranks first as it has the lowest mean rank. The ADMO follows closely behind the GCRA, then the LSHADEcnEpSin and LSHADE. Fig. 16 illustrates the performance ranking.
Following Friedman's test, Wilcoxon's test was conducted to provide a pairwise performance comparison between GCRA and the other algorithms. The findings of this test are summarized in Table 22. The results indicate that the GCRA significantly outperforms all the algorithms except the LSHADEcnEpSin and GSK across all considered problems. This conclusion is based on the high R+ values achieved by the GCRA. Specifically, at α = 0.05, the GCRA significantly outperformed 8 out of the 10 algorithms. These results further affirm the GCRA's performance in terms of searchability, stability, and efficiency in solving the CEC 2011 real-world optimization problems.

Table 21

Friedman's test results.


Algorithm Mean Rank Ranking

GCRA 2.59 1
ADMO 3.8 2
LSHADEcnEpSin 4.52 3
LSHADE 5 4
GSK 5.09 5
CPSOGSA 5.95 6
LSHADE_SPACMA 6.93 7
UMOEA 7.18 8
DMO 7.39 9
AOA 8.57 10
WOA 5.09 11
Test Statisticsa
N 22
Chi-Square 84.486
df 10
Asymp. Sig. <.001


Fig. 16. Graphical representation of the algorithms' performance ranking (the algorithm with the lowest value ranks highest).

4.4. Summary of results

The six engineering optimization problems and the CEC 2011 real-world test suite, which encompasses a broad spectrum of optimization problems, were utilized to evaluate the robustness and effectiveness of the GCRA. The experimental outcomes of the proposed algorithm were compared with those of ten other cutting-edge algorithms. These include the LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, and UMOEA, the four high-performing algorithms from CEC competitions. The GSK, a human-based algorithm, was also used. Finally, three swarm-based representatives (AOA, CPSOGSA, WOA) were used in this study. Further analysis of the obtained results was conducted using two statistical methods, namely Friedman's and Wilcoxon's tests.
The proposed GCRA demonstrated highly competitive performance, finding the optimal solution for most of the optimization functions used in this study. This performance can be attributed to the effective foraging method employed by the GCR during and outside the mating season, as meticulously modeled in Equations (4), (5), (8) and (9). According to Friedman's test, the GCRA ranked first, closely followed by LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, GSK, and UMOEA. Further results from Wilcoxon's signed-ranks test confirmed the GCRA's superiority and competitiveness compared to the other algorithms used in this study across all functions in the test suite.
In terms of convergence analysis, the GCRA demonstrated rapid and steady convergence in the initial few iterations for most optimization problems defined in CEC 2011, except for F1 and F3, which exhibited delayed convergence, stabilizing only in the later stages of the iterations. It was observed that the high-performing algorithms were stable around the global or near-global solutions, requiring few iterations for most functions.
CO

5. Conclusion and future work

Greater cane rats (GCR) exhibit territorial behavior, with instances of male-only conflicts in which they use their noses for combat. They are social creatures that live in groups consisting of a dominant male, multiple females, and young that can span more than one generation. The GCR's foraging behavior involves cutting canes and grasses with their specially adapted upper incisors. As highly nocturnal animals, they are smart enough to leave trails while foraging through reeds and grass, leading to food, water, and shelter. This intelligent behavior of the GCR is the main inspiration for this study. The exploration phase is realized when they leave various shelters within their territory to forage and create trails.
It is assumed that the dominant male retains the information about these trails, and the other rats subsequently update their positions based on this information. Males can also recognize the breeding season and isolate themselves from the group. It is assumed that once the group separates during this season, foraging activities are focused in areas with plentiful food sources, facilitating exploitation. Hence, the intelligent foraging trails and actions during the mating season are mathematically modeled to design the GCR algorithm and carry out optimization activities.


Table 22
Comparative results of Wilcoxon's test.
Algorithm N Mean Rank Sum of Ranks Z Asymp. Sig. (2-tailed)

ADMO - GCRA Negative Ranks 1 1.00 1.00 −2.982b 0.003


Positive Ranks 11 7.00 77.00
Ties 10
Total 22
LSHADEcnEpSin - GCRA Negative Ranks 3 10.67 32.00 −2.107b 0.035

Positive Ranks 14 8.64 121.00
Ties 5
Total 22

LSHADE - GCRA Negative Ranks 3 6.00 18.00 −3.248b 0.001
Positive Ranks 17 11.29 192.00
Ties 2
Total 22
DMO - GCRA Negative Ranks 1 6.00 6.00 −3.912b <.001
Positive Ranks 21 11.76 247.00
Ties 0
Total 22
LSHADE_SPACMA - GCRA Negative Ranks 3 5.83 17.50 −3.273b 0.001
Positive Ranks 17 11.32 192.50
Ties 2
Total 22
UMOEA - GCRA Negative Ranks 3 3.00 9.00 −3.461b <.001
Positive Ranks 16 11.31 181.00
Ties 3
Total 22
WOA - GCRA Negative Ranks 1 4.00 4.00 −3.977b <.001
Positive Ranks 21 11.86 249.00
Ties 0
Total 22
AOA - GCRA Negative Ranks 1 4.00 4.00 −3.977b <.001
Positive Ranks 21 11.86 249.00
Ties 0
Total 22
CPSOGSA - GCRA Negative Ranks 2 4.50 9.00 −3.702b <.001


Positive Ranks 19 11.68 222.00
Ties 1
Total 22
GSK - GCRA Negative Ranks 3 9.00 27.00 −2.343b 0.019
Positive Ranks 14 9.00 126.00
Ties 5

Total 22
b. Based on negative ranks.

The CEC 2011 real-world test suite, which includes a broad spectrum of optimization problems, was used to evaluate the effectiveness and robustness of GCRA. The experimental results of the proposed algorithm were compared with those of 10 other cutting-edge algorithms. The performance of the proposed GCRA was highly competitive, finding the optimal solution in most of the optimization functions used in this study. This performance can be attributed to the efficient foraging method used by the GCR during and outside the mating season, meticulously modeled in Equations (4), (5), (8) and (9). Friedman's test revealed that GCRA ranked first, closely followed by LSHADE, LSHADEcnEpSin, LSHADE_SPACMA, GSK, and UMOEA. Further results from Wilcoxon's signed-ranks test confirmed GCRA's superiority and competitiveness compared with the other algorithms used in this study for all functions in the test suite.
Also, the experiments on the six engineering problems show how competitive the proposed GCRA is, always ranking within the best two in all engineering problems solved in this study. So, in general, while it cannot be said that the proposed algorithm was the best among the examined heuristics, it can be said that the GCRA succeeds in a similar number of experiments as well-known classical algorithms, which is a very good result.
The proposed GCRA is straightforward to implement and has demonstrated its reliability, efficiency, and robustness in real-parameter optimization. The current GCRA design has proven effective in solving the CEC 2011 real-world test functions and six engineering design problems. Future work will aim to modify the GCRA to make it suitable for solving constrained multi-objective optimization problems, both discrete and continuous, and other practical optimization problems in fields like engineering and scheduling. Another promising area of research will be the integration of machine learning and artificial intelligence into individual GCRs to facilitate their evolution in subsequent generations. A comprehensive theoretical and parametric study of the GCRA is another beneficial direction. Lastly, hybridization might be a valuable strategy for enhancing the performance of the GCRA.


Ethical Approval

NA.

Availability of data and materials

All data generated or analyzed during this study are included in this article.

CRediT authorship contribution statement

Jeffrey O. Agushaka: Conceptualization. Absalom E. Ezugwu: Conceptualization. Apu K. Saha: Conceptualization.

OO
Jayanta Pal: Conceptualization. Laith Abualigah: Conceptualization. Seyedali Mirjalili: Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

The authors wish to acknowledge the funding support by the North-West University postdoctoral fellowship research grant (NWU
PDRF Fund NW.1G01487).

Appendix A.

Table A1
Classical benchmark functions
ID Type Function Dimension Bounds Global

F1 Unimodal 30 [-100,100] 0
F2 Unimodal 30 [-10,10] 0
F3 Unimodal 30 [-100,100] 0

F4 Unimodal 30 [-100,100] 0
F5 Unimodal 30 [-30,30] 0

F6 Unimodal 30 [-100,100] 0

F7 Unimodal 30 [-1.28,1.28] 0

F8 Multimodal 30 [-500,500] −4189.8

F9 Multimodal 30 [-5.12,5.12] 0

F10 Multimodal 30 [-32,32] 0



F11 Multimodal 30 [-600,600] 0



F12 Multimodal 30 [-50,50] 0


F13 Multimodal 30 [-50,50] 0


F14 Fixed-dimension multimodal 2 [-65,65] 1

F15 Fixed-dimension multimodal 4 [-5,5] 0.00030

F16 Fixed-dimension multimodal 2 [-5,5] −1.0316

F17 Fixed-dimension multimodal 2 [-2,2] 3

F18 Fixed-dimension multimodal 3 [-1,2] −3.86

F19 Fixed-dimension multimodal 6 [0,1] −3.32

F20 Fixed-dimension multimodal 4 [0,10] −10.1532
F21 Fixed-dimension multimodal 4 [0,10] −10.4028
F22 Fixed-dimension multimodal 4 [0,10] −10.5363

Table A2
CEC 2020
Type Number Functions Fi*

Unimodal Function 1 Shifted and Rotated Bent Cigar Function (CEC 2017 [4] F1) 100
Basic Functions 2 Shifted and Rotated Schwefel's Function (CEC 2014 [3] F11) 1100
 3 Shifted and Rotated Lunacek bi-Rastrigin Function (CEC 2017 [4] F7) 700
 4 Expanded Rosenbrock's plus Griewangk's Function (CEC 2017 [4] F19) 1900
Hybrid Functions 5 Hybrid Function 1 (N = 3) (CEC 2014 [3] F17) 1700
 6 Hybrid Function 2 (N = 4) (CEC 2017 [4] F16) 1600
 7 Hybrid Function 3 (N = 5) (CEC 2014 [3] F21) 2100
Composition Functions 8 Composition Function 1 (N = 3) (CEC 2017 [4] F22) 2200
 9 Composition Function 2 (N = 4) (CEC 2017 [4] F24) 2400
 10 Composition Function 3 (N = 5) (CEC 2017 [4] F25) 2500
* Search range: [-100,100]D.

References

[1] E. Sandgren, Nonlinear integer and discrete programming in mechanical design optimization, J. Mech. Des. 112 (2) (1990) 223–229.

[2] T. Dokeroglu, E. Sevinc, T. Kucukyilmaz, A. Cosar, A survey on new generation metaheuristic algorithms, Comput. Ind. Eng. 137 (2019) 106040.
[3] O.J. Adeleke, A.E.S. Ezugwu, I.A. Osinuga, A new family of hybrid conjugate gradient methods for unconstrained optimization, Statistics, Optimization &
Information Computing 9 (2) (2021) 399–417.
[4] M. Braik, A. Hammouri, J. Atwan, M.A. Al-Betar, M.A. Awadallah, White Shark Optimizer: a novel bio-inspired meta-heuristic algorithm for global
optimization problems, Knowl. Base Syst. 243 (2022) 108457.
[5] S. Mirjalili, A.H. Gandomi, S.Z. Mirjalili, S. Saremi, H. Faris, S.M. Mirjalili, Salp Swarm Algorithm: a bio-inspired optimizer for engineering design problems,
Advances in engineering software 114 (2017) 163–191.
[6] A. Faramarzi, M.H. Afshar, Application of cellular automata to size and topology optimization of truss structures, Sci. Iran. 19 (3) (2012) 373–380.

[7] A.E. Ezugwu, J.O. Agushaka, L. Abualigah, S. Mirjalili, A.H. Gandomi, Prairie dog optimization algorithm, Neural Comput. Appl 34 (22) (2022) 20017–20065.
[8] L. Abualigah, D. Yousri, M. Abd Elaziz, A.A. Ewees, M.A. Al-qaness, A.H. Gandomi, Aquila Optimizer: a novel meta-heuristic optimization Algorithm,
Comput. Ind. Eng. 157 (2021) 107250.
[9] X.S. Yang, Metaheuristic optimization, Scholarpedia 6 (8) (2011) 11472.

[10] M. Braik, M.H. Ryalat, H. Al-Zoubi, A novel meta-heuristic algorithm for solving numerical optimization problems: ali Baba and the forty thieves, Neural
Comput. Appl. 34 (1) (2022) 409–455.
[11] M.H. Sulaiman, Z. Mustaffa, M.M. Saari, H. Daniyal, S. Mirjalili, Evolutionary mating algorithm, Neural Comput. Appl. (2022) 1–30.
[12] S. Islam, S.B.S. Mugdha, S.R. Dipta, M.D.S.S. Arafat, H. Alinejad-Rokny, I. Dehzangi, MethEvo: an accurate evolutionary information-based methylation site
predictor, Neural Comput. Appl. (2022) 1–12.
[13] Y. Peng, A. Song, V. Ciesielski, H.M. Fayek, X. Chang, PRE-NAS: Predictor-Assisted Evolutionary Neural Architecture Search, 2022 arXiv preprint , p. arXiv:
2204.12726.
[14] P. Kumari, S.K. Sahana, Heuristic initialization based modified ACO (HIMACO) mimicking ant safety features for multicast routing and its parameter tuning,
Microprocess. Microsyst. (2022) 104574.
[15] P. Subashini, T.T. Dhivyaprabha, M. Krishnaveni, Synergistic fibroblast optimization, in: Artificial Intelligence and Evolutionary Computations in
Engineering Systems, 2017 Singapore.
[16] A. Salehan, A. Deldari, Corona virus optimization (CVO): a novel optimization algorithm inspired from the Corona virus pandemic, J. Supercomput. 78 (4)
(2022) 5712–5743.
[17] C.M. Rahman, T.A. Rashid, A new evolutionary algorithm: learner performance based behavior algorithm, Egyptian Informatics Journal 22 (2) (2021)
213–223.
[18] X. Chen, Y. Liu, X. Li, Z. Wang, S. Wang, C. Gao, A new evolutionary multiobjective model for traveling salesman problem, IEEE Access 7 (2019)
66964–66979.
[19] H. Salimi, Stochastic fractal search: a powerful metaheuristic algorithm, Knowl. Base Syst. 75 (2015) 1–18.


[20] P. Civicioglu, Backtracking search optimization algorithm for numerical optimization problems, Appl. Math. Comput. 219 (15) (2013) 8121–8144.
[21] J.O. Agushaka, A.E. Ezugwu, Advanced Arithmetic Optimization Algorithm for solving mechanical engineering design problems, PLoS One 16 (8) (2021)
e0255703.
[22] A. Sharma, A. Sharma, J.K. Pandey, M. Ram, Swarm Intelligence: Foundation, Principles, and Engineering Applications, CRC Press, 2022.
[23] S. Li, H. Chen, M. Wang, A.A. Heidari, S. Mirjalili, Slime mould algorithm: a new method for stochastic optimization, Future Generat. Comput. Syst. 111
(2020) 300–323.
[24] T. Dutta, S. Bhattacharyya, S. Dey, J. Platos, Border collie optimization, IEEE Access 8 (2020) 109177–109197.
[25] J. Xue, B. Shen, A novel swarm intelligence optimization approach: sparrow search algorithm, Systems Science & Control Engineering 8 (1) (2020) 22–34.
[26] M. Braik, A. Sheta, H. Al-Hiary, A novel meta-heuristic search algorithm for solving optimization problems: capuchin search algorithm, Neural Comput. Appl.
33 (7) (2021) 2515–2547.
[27] O.N. Oyelade, A.E.-S. Ezugwu, T.I. Mohamed, L. Abualigah, Ebola optimization search algorithm: a new nature-inspired metaheuristic optimization algorithm, IEEE Access 10 (2022) 16150–16177.
[28] B. Al-Khateeb, K. Ahmed, M. Mahmood, D.N. Le, Rock hyraxes swarm optimization: a new nature-inspired metaheuristic optimization algorithm, Comput.
Mater. Continua (CMC) 68 (1) (2021) 643–654.

[29] W.J. AL-kubaisy, M. Yousif, B. Al-Khateeb, M. Mahmood, D.N. Le, The red colobuses monkey: a new nature–inspired metaheuristic optimization algorithm,
Int. J. Comput. Intell. Syst. 14 (1) (2021) 1108–1118.
[30] B. Abdollahzadeh, F.S. Gharehchopogh, S. Mirjalili, African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global
optimization problems, Comput. Ind. Eng. 158 (2021) 107408.
[31] T. Rahkar Farshi, Battle royale optimization algorithm, Neural Comput. Appl. 33 (4) (2021) 1139–1157.
[32] J.S. Chou, D.N. Truong, A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean, Appl. Math. Comput. 389 (2021) 125535.
[33] D. Chen, Y. Ge, Y. Wan, Y. Deng, Y. Chen, F. Zou, Poplar optimization algorithm: a new meta-heuristic optimization technique for numerical optimization and
image segmentation, Expert Syst. Appl. 200 (2022) 117118.
[34] F.A. Hashim, A.G. Hussien, Snake Optimizer: a novel meta-heuristic optimization algorithm, Knowl. Base Syst. 242 (2022) 108320.
[35] T.S. Ayyarao, N.S.S. RamaKrishna, R.M. Elavarasan, N. Polumahanthi, M. Rambabu, G. Saini, B. Khan, B. Alatas, War strategy optimization algorithm: a new

PR
effective metaheuristic algorithm for global optimization, IEEE Access 10 (2022) 25073–25105.
[36] F.A. Hashim, E.H. Houssein, K. Hussain, M.S. Mabrouk, W. Al-Atabany, Honey Badger Algorithm: new metaheuristic algorithm for solving optimization
problems, Math. Comput. Simulat. 192 (2022) 84–110.
[37] J.S. Pan, L.G. Zhang, R.B. Wang, V. Snášel, S.C. Chu, Gannet optimization algorithm: a new metaheuristic algorithm for solving engineering optimization
problems, Math. Comput. Simulat. 202 (2022) 343–373.
[38] M. Alimoradi, H. Azgomi, A. Asghari, Trees social relations optimization algorithm: a new Swarm-Based metaheuristic technique to solve continuous and
discrete optimization problems, Math. Comput. Simulat. 194 (2022) 629–664.
[39] L. Abualigah, M. Abd Elaziz, P. Sumari, Z.W. Geem, A.H. Gandomi, Reptile Search Algorithm (RSA): a nature-inspired meta-heuristic optimizer, Expert Syst.
Appl. 191 (2021) 116158.
[40] J.O. Agushaka, A.E. Ezugwu, L. Abdualigah, Gazelle Optimization Algorithm: a novel nature-inspired metaheuristic optimizer for mechanical engineering
applications, Neural Comput. Appl. 35 (5) (2022) 4099–4131.
[41] J.O. Agushaka, A.E. Ezugwu, L. Abualigah, Dwarf mongoose optimization algorithm, Comput. Methods Appl. Mech. Eng. 391 (2022) 114570.
[42] A. Mohammadi, F. Sheikholeslam, S. Mirjalili, Nature-Inspired Metaheuristic Search Algorithms for Optimizing Benchmark Problems: Inclined Planes System
Optimization to State-Of-The-Art Methods, Archives of Computational Methods in Engineering, 2022, pp. 1–59.
[43] A.W. Mohamed, A.A. Hadi, A.K. Mohamed, Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired
algorithm, Int. J. Mach. Learn. Cybern 11 (7) (2020) 1501–1529.
[44] M.A. Al-Betar, Z.A.A. Alyasseri, M.A. Awadallah, I. Abu Doush, Coronavirus herd immunity optimizer (CHIO), Neural Comput. Appl. 33 (10) (2021)
5011–5504.

[45] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, A.H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng. 376 (2021)
113609.
[46] M. Dehghani, E. Trojovská, P. Trojovský, A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving
training process, Sci. Rep. 12 (1) (2022) 1–21.
[47] E.F. Veysari, A new optimization algorithm inspired by the quest for the evolution of human society: human felicity algorithm, Expert Syst. Appl. 193 (2022)
116468.
[48] H. Emami, Stock exchange trading optimization algorithm: a human-inspired method for global optimization, J. Supercomput. 78 (2) (2022) 2125–2174.

[49] F.A. Zeidabadi, M. Dehghani, Poa: puzzle optimization algorithm, Int. J. Intell. Eng. Syst 15 (2022) 273–281.
[50] S. Abdulhameed, T.A. Rashid, Child drawing development optimization algorithm based on child’s cognitive development, Arabian J. Sci. Eng. 47 (2) (2022)
1337–1351.
[51] A.G. Hussien, M. Amin, M. Wang, G. Liang, A. Alsanad, A. Gumaei, H. Chen, Crow search algorithm: theory, recent advances, and applications, IEEE Access 8
(2020) 173548–173565.
[52] F.A. Hashim, E.H. Houssein, M.S. Mabrouk, W. Al-Atabany, S. Mirjalili, Henry gas solubility optimization: a novel physics-based algorithm, Future Generat.
Comput. Syst. 101 (2019) 646–667.
[53] A. Faramarzi, M. Heidarinejad, B. Stephens, S. Mirjalili, Equilibrium optimizer: a novel optimization algorithm, Knowl. Base Syst. 191 (2020) 105190.
[54] F.A. Hashim, K. Hussain, E.H. Houssein, M.S. Mabrouk, W. Al-Atabany, Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems, Appl. Intell. 51 (3) (2021) 1531–1551.


[55] J.L.J. Pereira, M.B. Francisco, C.A. Diniz, G.A. Oliver, S.S. Cunha Jr, G.F. Gomes, Lichtenberg algorithm: a novel hybrid physics-based meta-heuristic for global
optimization, Expert Syst. Appl. 170 (2021) 114522.
[56] F. Asef, V. Majidnezhad, M.R. Feizi-Derakhshi, S. Parsa, Heat transfer relation-based optimization algorithm (HTOA), Soft Comput. 25 (13) (2021) 8129–8158.

[57] L. Rodriguez, O. Castillo, M. Garcia, J. Soria, A new meta-heuristic optimization algorithm based on a paradigm from physics: string theory, J. Intell. Fuzzy
Syst. 41 (1) (2021) 1657–1675.
[58] S. Talatahari, M. Azizi, M. Tolouei, B. Talatahari, P. Sareh, Crystal structure algorithm (CryStAl): a metaheuristic optimization method, IEEE Access 9 (2021)
71244–71261.
[59] T.O. Ting, X.S. Yang, S. Cheng, K. Huang, Hybrid metaheuristic algorithms: past, present, and future, Recent advances in swarm intelligence and evolutionary
computation (2015) 71–83.
[60] M. Balasubbareddy, Optimal power flow solution using ameliorated ant lion optimization algorithm, Int. J. Mech. Eng. 13 (1) (2022) 1060.
[61] B. Mallala, D. Dwivedi, Salp swarm algorithm for solving optimal power flow problem with thyristor-controlled series capacitor, Journal of Electronic Science
and Technology 20 (2) (2022) 100156.
[62] A. Askarzadeh, A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm, Comput. Struct. 169 (2016)
1–12.
[63] S.N. Chegini, A. Bagheri, F. Najafi, PSOSCALF: a new hybrid PSO based on Sine Cosine Algorithm and Levy flight for solving optimization problems, Appl. Soft
Comput. 73 (2018) 697–726.
[64] X.S. Yang, Nature-inspired Metaheuristic Algorithms, Luniver press, Cambridge, U.K, 2010.
[65] B. Morales-Castañeda, D. Zaldivar, E. Cuevas, F. Fausto, A. Rodríguez, A better balance in metaheuristic algorithms: does it exist? Swarm Evol. Comput. 54
(2020) 100671.
[66] E.G. Talbi, Metaheuristics: from Design to Implementation, John Wiley & Sons, 2009.


[67] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1) (1997) 67–82.
[68] T. Britannica, Cane rat, Encyclopedia Britannica (9 February 2018) [Online]. Available: https://www.britannica.com/animal/cane-rat. (Accessed 28
September 2022).
[69] O.A. Mustapha, E.E. Teriba, O.S. Ezekiel, A.M. Olude, A.K. Akinloye, J.O. Olopade, A study of scientific publications on the greater cane rat (Thryonomys
swinderianus, Temminck 1827), Animal models and experimental medicine 3 (1) (2020) 40–46.
[70] J. Matthews, The Value of Grasscutters, World Ark, 2008, pp. 23–24.
[71] M. Van der Merwe, Discriminating between thryonomys swinderianus and thryonomys gregorianus, Afr. Zool. 42 (2) (2007) 165–171.
[72] E.K. Adu, K.G. Aning, P.A. Wallace, T.O. Ocloo, Reproduction and mortality in a colony of captive greater cane rats, Thryonomys swinderianus, Temminck,
Trop. Anim. Health Prod. 32 (1) (2000) 11–17.
[73] G. Dhiman, M. Garg, A. Nagar, V. Kumar, M. Dehghani, A novel algorithm for global optimization: rat swarm optimizer, J. Ambient Intell. Hum. Comput. 12 (8)
(2021) 8457–8482.

[74] S. Das, P.N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World
Optimization Problems, Nanyang Technological University, Kolkata, 2010.
[75] N.H. Awad, M.Z. Ali, P.N. Suganthan, Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017.
[76] S. Rather, P. Bala, Hybridization of constriction coefficient based particle swarm optimization and gravitational search algorithm for function optimization, in:
International Conference on Advances in Electronics, Electrical, and Computational Intelligence (ICAEEC- 2019), 2019.
[77] R. Tanabe, A.S. Fukunaga, Improving the search performance of SHADE using linear population size reduction, in: 2014 IEEE Congress on Evolutionary
Computation (CEC), 2014.
[78] A.W. Mohamed, A.A. Hadi, A.M. Fattouh, K.M. Jambi, LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark
problems, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017.
[79] S.M. Elsayed, R.A. Sarker, D.L. Essam, United multi-operator evolutionary algorithms, in: 2014 IEEE Congress on Evolutionary Computation (CEC), 2014.
[80] S. Mirjalili, A. Lewis, The whale optimization algorithm, Advances in engineering software 95 (2016) 51–67.
[81] J.O. Agushaka, A.E. Ezugwu, O.N. Oyelade, O.A. Akinola, A.K. Saha, Advanced dwarf mongoose optimization for solving CEC 2011 and CEC 2017 benchmark problems, PLoS One 17 (11) (2022) e0275346.
[82] D. Połap, M. Woźniak, Polar bear optimization algorithm: meta-heuristic with fast population movement and dynamic birth and death mechanism, Symmetry 9
(10) (2017) 203.
[83] T. Ray, P. Saini, Engineering design optimization using a swarm with an intelligent information sharing among individuals, Eng. Optim. 33 (6) (2001)
735–748.
[84] D. Połap, M. Woźniak, Red fox optimization algorithm, Expert Syst. Appl. 166 (2021) 114107.
[85] E. Sandgren, NIDP in mechanical design optimization, J. Mech. Des. 112 (2) (1990) 223–229.
[86] C. Coello, Use of self-adaptive penalty approach for engineering optimization problems, Comput. Ind. 41 (2) (2000) 113–127.
[87] J.N. Siddall, Analytical Decision-Making in Engineering Design, Prentice Hall, 1972.
[88] K. Ragsdell, D. Phillips, Optimal design of a class of welded structures using geometric programming, Journal of Engineering for Industry 98 (3) (1976)
1021–1025.
[89] B. Kannan, S.N. Kramer, An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to
mechanical design, J. Mech. Des. 116 (2) (1994) 405–411.
[90] M.J. Kazemzadeh-Parsi, A modified firefly algorithm for engineering design optimization problems. Iranian Journal of Science and Technology, Transactions of
Mechanical Engineering 38 (2) (2014) 403.
[91] J.S. Arora, Introduction to Optimum Design, Elsevier Academic Press, 2004.
[92] A.D. Belegundu, A Study of Mathematical Programming Methods for Structural Optimization, 1983.
[93] H. Chickermane, H. Gea, Structural optimization using a new local approximation method, Int. J. Numer. Methods Eng. 39 (5) (1996) 829–846.

