Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem
Summary

There is ample evidence that optimization is an enormous field that pervades essentially every aspect of our day-to-day life, ranging from academic and engineering fields, going to industrial and agricultural segments, passing through social domains, and ending with commercial and business sectors. Evidently, the philosophy of optimization has emerged out of the utmost need for finding the best available solution among a set of candidate ones, without which our life would lose its vitality.

Over the last few decades, a worthy amount of interest has been focused on finding solutions for a wide range of intractable optimization problems by scientists and researchers from diversified domains, not only for academic and research objectives but also due to the existence of a wide variety of real-life applications. They indeed see the remarkable resemblance between the swarms, for instance, and the behavior of a human in solving problems, and try to come up with new goal-oriented operating methods to tackle many important real-world problems. Nature Inspired Computing (NIC), as its name implies, is the fusion of nature, by itself, and Artificial Intelligence (AI) to solve various global optimization problems. Furthermore, swarm optimization is considered the most representative of these nature-inspired algorithms. Motivated by applying natural phenomena to metaheuristics and trying to simulate the harmonious behaviors of creatures in solving problems, particularly the joint hunting behavior of the sea lions, the aim of the research work reported in this paper is twofold. On the one hand, many theoretical and practical aspects of heuristic and metaheuristic approaches, from classical to novel approaches, are discussed and covered. On the other hand, this nature-inspired paper addresses a pioneer metaheuristic optimization algorithm in the context of finding the optimal solution for the Maximum Flow Problem (MFP). To be more precise, this paper elaborates on using the Sea Lion Optimization (SLnO) Algorithm for solving the Maximum Flow Problem (MFP), hence the name "SLnO-MFP".

After the proposed SLnO-MFP algorithm is analyzed and the experimental tests are conducted on various real-case datasets, the reported practical results are presented, discussed, and compared, using the same datasets, with other algorithms that have been used to solve the same problem of interest, including the Whale Optimization Algorithm (WOA) and the Ford-Fulkerson (FF) algorithm. As the accomplishment achieved in this research is efficient and robust, the proposed algorithm proves to be a senior-level alternative for the optimization problem and, in turn, can be efficiently used to solve various optimization problems having fairly large-scale data, such as the underlying problem (i.e. the MFP).

Keywords:
Artificial Intelligence (AI), Artificial Neural Network (ANN), Global optimization, Maximum Flow Problem (MFP), Metaheuristic Algorithms, Optimization, Sea Lion Optimization (SLnO) Algorithm, Swarm Intelligence.

DOI: 10.22937/IJCSNS.2020.20.08.5

1. Introduction

Every aspect, visible or invisible, of our life encompasses inside its folds a wide range of optimization problems. Not just that, every real-world system, or even a portion of a system, can be abstracted as an optimization system and encapsulates internally one or more optimization problems. In the most basic sense, the primary interests of optimization algorithms (OAs) pivot around making something more and more effective to the greatest possible extent by recursively searching to provide more refined and scalable solutions for the problem of interest [1][2][3]. This implies that there is an ultimate need for reformulating the considered problems in terms of optimization, which, in turn, has two faces of the same coin: first, generating a set of candidate solutions for the desired problem, which is referred to as the "search space", and second and more importantly, assessing the achieved performance of these available solutions based on some quality measures which should be previously defined [4][3][5]. These quality measures, also called fitness functions, objective functions, or goodness levels, are quantifiable and revolve around maximizing some desired features and/or minimizing the undesirable ones until a predefined optimization goal is achieved [6]. Generally speaking, neither generating these solutions from the search pool nor devising their objective functions is a simple task to do; hence, most real-life optimization problems are too challenging to solve [4].
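As a concrete, minimal illustration of these two faces (an illustration only, not taken from the paper; the fitness function and the candidate pool below are hypothetical), the following Python sketch defines a quality measure and then assesses a small pool of candidate solutions against it:

# Minimal sketch: a candidate solution is assessed by a user-defined fitness
# function, and the best member of a finite search pool is selected.
def fitness(x):
    # hypothetical quality measure to be maximized
    return -(x - 3.0) ** 2 + 9.0

search_space = [0.0, 1.5, 2.0, 3.0, 4.5]   # candidate solutions (the "search pool")
best = max(search_space, key=fitness)       # assess every candidate and keep the best
print(best, fitness(best))                  # -> 3.0 9.0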
Inevitably, there is a pressing necessity for a search methodology that is used to get the in-depth information that originally exists, in one way or another, within the promising search space of the problem domain. In terms of this, there are broad ranges of searching algorithms and a number of features by which to categorize them. Some are classified according to their searching strategies, some are classified by the searching scope and area, some are classified upon the optimality of their output, some are classified by their ways of generating solutions, and so forth. Each of them is inspired, implicitly or explicitly, by an existing natural-world phenomenon or a certain sort of metaphor [6]. Nonetheless, all of them revolve around the same goal of improving effectiveness.

Broadly speaking, there are actually many types of networks that are routinely faced throughout the daily life-cycle. These various types, which have enormous practical importance in our life, include the following real-life examples: Internet, telephone, cell, highways, rail, water, sewer, electrical power, oil, and gas, to name just a few. Depending on its type, every network has a material flowing from point to point [7][8]. While each point is referred to as a node, each connected path between any two nodes is called a route or an arc [7][8]. In analogy to its type, each material has a corresponding unit to measure the flowing capacity, such as time, price, distance, quantity, and other units [7][9]. Based on this, optimization is a solution procedure in which one aims to systematically enhance the material flowing through a given network [4]. From a different perspective, this enhancement can either maximize the goodness or minimize the badness of a stated solution [4]. Without loss of generality, maximization problems are considered instead of minimization throughout this paper. In case of the need for considering minimization problems, the same methodology is simply used after reversing the sign of the calculation. To this objective, this paper is trying to reach the maximum flow capacity of the network at which the flow can be transmitted more reliably from the starting source node "s" to the target node "t". This rate is greatly related to the same network "G", which, undoubtedly, depends heavily on both the number of nodes "n" and the number of edges "m" [7][8]. Going forward, this problem itself is known as the Maximum Flow Problem (MFP), where the search space is actually represented by a graph, an example of which is shown in Fig. 1, where the first value associated with every edge represents the actual flow value and the second value represents the maximum capacity value; this will be explained later. The comparison of these networks is often defined by the amount of flow, the number of edges and vertices they have, as well as whether their flows are bidirectional or not. The values of "n" and "m" of this example are equal to seven and ten, respectively. It is worth remarking that neither "m" nor "n" is dominant over the other; both are important in driving the size of the problem. In this regard, both the number of nodes "n" and the number of edges "m", with their capacities, determine the complexity of the network "G", and, for that, the runtime complexity associated with any instance generated from "G" increases rapidly with the dimension of the network graph, i.e. the numbers of vertices and edges [10][7].
Fig. 1. An example network graph for the MFP with source node "s", target node "t", and intermediate nodes a, b, c, d, and e; each edge is labeled with its current flow value followed by its maximum capacity (e.g. 9, 14; 10, 15; 6, 25; 4, 13).
As shown in Fig. 1, the MFP is much related to the essence of the objects' movement through the desired network, where four milestones are present: a source place called a source or start node "s", a destination node called "t", one or more connected routes between the source and the target, and an associated positive number to represent the flow along every directed route that connects any two nodes [11][8]. In order to simplify the problem of interest, one can think of this problem as using pipes of different sizes for carrying liquid from a source node to a destination one. Or, one can think of this problem as using conduits to link the start and the destination nodes. Anyway, the important thing is the availability of several intermediate connecting paths, called routes or tracks, which can be followed in carrying the flow between "s" and "t". It is vitally important to mention that MFPs are part of the graph paradigm that doesn't require capturing every low-level detail of the problem under discussion. The rationale behind this relates to the fact that they are mainly based on abstract concepts where deep knowledge of the problem at hand is no longer required. And so, the ordinary user can use the metaheuristic optimization algorithm in solving many optimization problems without having to have an in-depth, exhaustive understanding of the same algorithms.

The critical thing to keep an eye on is that nodes can't transfer any matter beyond their previously defined capacity value [11][7]. This can be generalized in the long run as follows: no network can transfer more than its computed capacity value. Whenever there is more than one feasible track to be chosen between the source and the destination nodes, the MFP is regarded as a player with a considerable role in enabling a greater movement of these objects or matters. And so, the intended objective is to choose the optimal route between the source "s" and the destination "t" that has the maximum throughput, i.e. flow [7]. Even though the focus varies depending on each network, one fundamental three-sided cornerstone remains fairly viewed in a broad sense: speed up the flow to the looked-for value and reduce the time and cost. Simply, this objective can be redefined as: the more flow the network has, the more efficient and effective it will be.
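To make the notion of maximum flow concrete, the following minimal Python sketch (an illustration only, not the SLnO-MFP procedure proposed in this paper) computes the maximum s-t flow of a small capacitated graph by repeatedly pushing flow along augmenting paths, in the classical Ford-Fulkerson/Edmonds-Karp style; the node names and capacities below are hypothetical, not the instance of Fig. 1:

from collections import deque

def max_flow(capacity, s, t):
    """Augmenting-path maximum flow: push flow along shortest
    augmenting paths (found by BFS) until none remains."""
    flow = 0
    # residual capacities, initialised from the given edge capacities
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:          # BFS for an augmenting path
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                            # no augmenting path is left
        path, v = [], t                            # rebuild the s-t path found
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                          # update residual capacities
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical capacitated network (node names and capacities are illustrative only)
caps = {"s": {"a": 15, "c": 13}, "a": {"b": 14}, "b": {"t": 25},
        "c": {"d": 10}, "d": {"t": 12}, "t": {}}
print(max_flow(caps, "s", "t"))   # -> 24 for this toy instance

The sketch only illustrates the exact, deterministic view of the problem; the paper's contribution is to search the same solution space with a swarm-based metaheuristic instead.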
To this aim, the core motivation behind this research article is to get the maximum flow capacity at which this flow might be transferred reliably between the two special extremes: the source node "s" and the destination node "t". However, this amount is closely associated with the same network "G", which, in turn, relies upon the various intermediate paths between these two extremes and on both the number of nodes "n" and the number of edges "m", by which the size of the network is defined accordingly.

This research is concerned with solving optimization problems and describes a new metaheuristic optimization algorithm (namely, the Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem, SLnO-MFP) which, as a member of the swarm-intelligence family, mimics the hunting behavior of the sea lions. To the best of the author's knowledge, there is no previous literature on the usage of SLnO for solving the MFP, and this finding has not been reported yet in the optimization literature.

In line with the said objectives and in order to lay the foundation of this paper, an outline of this paper is structured as follows. After this section justifies the importance of this research and provides background knowledge on the maximum flow problem for the novice readers, Section 2 explores the depth of literature to introduce both the heuristic and the metaheuristic optimization paradigms. Section 3 surveys the literature within the research area to gain a concise overview of the other related work and, moreover, it ferrets out the objectives of the underlying problem and discusses some of the different algorithms which have been proposed to solve this problem intelligently. The model formulation and the different assumptions related to the problem of interest are provided and discussed in Section 4. In order to build an authentic depiction of the considered problem, the formulation of the theoretical and mathematical foundation of the proposed solution is introduced in Section 5. Section 6 is where the real work begins; it takes a closer look at the algorithm developed in this contribution and then walks through all the various stages which would be required to implement it. While the conducted experiments and their detailed, intensive analysis are discussed in Section 7, Section 8 concludes the project work of this research. Lastly, to close the discussion of this research article, Section 9 offers some ample research scopes and introduces a fairly wide range of promising research opportunities in furthering the aims and objectives of the research.

2. Background of Metaheuristic Optimization Algorithms

With the purpose of providing a self-explanatory paper, this section establishes preliminary knowledge of the background pertaining to the concepts of the optimization paradigm and draws an inclusive image of both the current and future status of metaheuristic research. Therefore, the following subsections discuss the different categories of optimization algorithms.

2.1 Types of Searching Algorithms

As such, there is definitely a broad range of techniques proposed in today's progressive and growing arena of optimization, each of which has its own capability, strength, weakness, objective space, detailed specifications, constraints, requirements, and other fundamental relevant features of searching. Since most of them claim a progressive style to implement, the decision about the most reliable algorithm to encompass a given problem is no longer a simple mission to be carried out. Bearing in mind that most of the complicated hard problems can be framed within the optimization borders, in which one strives harder in an effort to either minimize or maximize the achieved results within limited resources [12][13], Fig. 2 is directly interrelated to the question of which algorithm to choose and which notions to implement for a given problem of interest. In terms of reaching the most suitable one of the optimization algorithms, this figure deliberates on and then formulates the different critical factors which influence people's decisions in picking one of them. However, there are various distinct elements between them which overlap with each other. Viewed in a broad sense, the classification of these algorithms encloses, but is not limited to, the following overlapped categories:

· Problem Functionality: In terms of functionality, algorithms can be classified into two major types: problem-specific and problem-independent. While the former provides that the algorithm, as well as the code, is just tailor-made for a specific type of problem, the latter assures that the same code is used for various varieties of problems; namely, it is a generic-implementation code. While the focus of the latter search strategies is mainly concentrated on using the algorithms without any problem-dependent knowledge, the former look as if they were tailor-made for a narrow range of problems [14] (a short sketch of the generic style follows this item).
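The following minimal Python sketch (illustrative only; the routine name and the toy problem are hypothetical) shows what a problem-independent, generic-implementation code can look like: the same search skeleton is reused for any problem once an objective function and a neighborhood function are plugged in:

import random

def generic_maximize(objective, neighbors, start, iterations=1000):
    """Problem-independent skeleton: the same code serves any problem
    once an objective and a neighborhood function are supplied."""
    best = start
    for _ in range(iterations):
        candidate = random.choice(neighbors(best))
        if objective(candidate) > objective(best):
            best = candidate
    return best

# Plugging in one concrete (hypothetical) problem: maximize -(x-7)^2 over the integers
best = generic_maximize(objective=lambda x: -(x - 7) ** 2,
                        neighbors=lambda x: [x - 1, x + 1],
                        start=0)
print(best)   # typically converges to 7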
· Searching Scope: From a classification point of view, the searching scope is used as well. Different from the global techniques that are commonly used in finding global optimum quality solutions, local search techniques might become stuck in the so-called local-optimum values of the solution space when no better solution is observed in the present existing neighborhoods. Even though both techniques strive to find a solution that better optimizes the cost criterion among a set of candidate ones, the distinction between them is that the global search looks at the entire problem space as a single entity when trying to find the best possible candidate solution [15]. Even though global optimum solutions are never guaranteed with respect to the quality level of the solution, global search techniques can be counted on more for finding global optima.

In the pursuit of building on the best of what has already been achieved so far, most current state-of-the-art optimization algorithms fall within the realm of iteratively improving the outcomes in the course of the search process. Thus, by considering their ways of generating solutions, a further additional taxonomy can be made for exploring the solution space effectively.
Fig. 2 exhibits that the searching process falls into four pivot points: stochastic, exploration versus exploitation, iterated (i.e. iterative), and guided:

- The searching has a stochastic nature when a set of random variables is employed in extracting a new generation, called a potential solution [9][16]. Each generation's new values of these variables are chosen stochastically in harmony with the general paradigm. On the other side, some problems themselves evolve stochastically at different points in time, which makes the optimization hard to grasp and solve. For instance, the passengers' numbers of airlines occur stochastically, which calls for the airlines to implement statistical theories, including stochastic analysis and probability distributions, in forecasting the numbers of passengers.

- Exploration and exploitation are two supplementary activities used to explore the search space. The first one is the activity through which the search algorithm tries to explore as broad a search space as possible to evade falling into the local-optima traps. The second one represents the activity in which the searching algorithm tries to develop the finest discovered solutions through some targeted approaches. While exploitation is the optimized outcome derived out of exploration, the progress of the search for more solutions continues in both cases to find more optimal ones. [10][17][18]

- For highly efficient exploiting of the former iteration (also called trial or time-step) to the greatest possible extent, the outcomes in the guided search are frequently improved over the course of the iteration steps, where the newly generated trials are influenced by the older ones. The basic idea of that wraps around deliberating knowledge and extracting patterns from the former good-quality searching iterations in an effort to harmoniously guide the searching process in the subsequent iteration steps, hoping to be in the optimal solution direction; hence the notion of "guided" is used to continuously approximate the goal based on replacing the old solutions with the new successful ones. From a more general angle, the useful information related to the optima of the preceding iteration steps is stored to get benefit from them in the succeeding ones. That is, the key philosophy of deriving more successful trials is coupled with building up a well-stocked knowledge store containing the past trials. [10][17][16]

  Unlike the conventional local search that stops when it gets stuck by any local optima, the guided local search makes the best use of the available features of the current optima to escape away and then form another more optimal feasible solution. In the guided search's absence, an optimization algorithm (OA) inevitably takes on a longer exploratory time range that is mainly driven by trial-and-error aspects. [10][17][18]

- The progress of the searching process will continue iteratively with the same searching procedure until one or more of the predefined evaluation functions, called termination conditions or stopping criteria, imposed by the user are reached, and accordingly, the best possible candidate solution is produced. Without that, the progress of the searching mechanism will obviously be in an infinite loop [10][17]. The following are some of the possible termination conditions that may be imposed by the user to terminate the series of iterations: the maximum number of iteration steps previously defined (i.e. the number of runs) is reached, the maximum allowable CPU computational run-time (i.e. max-CPU-time) is reached, some significant evidence that an optimal solution has been achieved is encountered, the maximum number of iteration attempts that comes amid two successive developments is reached, the maximum number of iteration attempts has been reached without noticing any difference or making any forward positive progress for the problem of interest, or there is no way to get more developments for the problem of interest (i.e. there is no progress) [10][17]. Related to the first termination condition, the applications of Artificial Neural Networks (ANNs) use the name "epoch" to represent an iteration step (i.e. time-step) [10][18]. In line with the second termination condition, it is very crucial to balance between the quality of the given feasible solutions and the overall computation time frame that is utilized in generating these solutions, and to decide accordingly how to set up the timeout bound [10][18]. On the other hand, the last termination condition represents the case that, after many generations, the solutions start approaching each other, in the hope that this approaching is a good indicator that the final achieved solution is closer to the ideal solution of the problem under research [10][18]. In the report of this, there is an essential need for suitable criteria to define the quality of the acceptable solution and to decide according to that whether to stop the searching activity or not [10][17][18]. If the procedure fails to reach a visible solution or a practical compromise within the timeout bound, it is inevitably stopped [10][18]. A minimal sketch of such a loop, combining guided stochastic steps with several stopping criteria, follows this list.
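The following Python sketch is that minimal illustration (illustrative only; the function names, parameters, and toy objective are hypothetical). It combines three of the stopping criteria named above: an iteration budget, a CPU-time bound, and a no-improvement "patience" budget:

import random, time

def iterative_search(objective, propose, start,
                     max_iters=10_000, max_cpu_seconds=5.0, patience=500):
    """Hypothetical guided stochastic loop with several stopping criteria."""
    best, best_val = start, objective(start)
    stalled, t0 = 0, time.process_time()
    for _ in range(max_iters):                       # criterion 1: iteration budget
        if time.process_time() - t0 > max_cpu_seconds:
            break                                     # criterion 2: max CPU time
        if stalled >= patience:
            break                                     # criterion 3: no visible progress
        candidate = propose(best)                     # new trial guided by the current best
        val = objective(candidate)
        if val > best_val:                            # keep only successful trials
            best, best_val, stalled = candidate, val, 0
        else:
            stalled += 1
    return best, best_val

# toy usage with an illustrative one-dimensional objective
best, val = iterative_search(objective=lambda x: -(x - 2.5) ** 2,
                             propose=lambda x: x + random.uniform(-0.5, 0.5),
                             start=0.0)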
· Output Optimality: Problems are basically categorized into two different types of models: Deterministic (i.e. exact) and Stochastic [9]. While the first one is directly associated with the problems in which the different used variables are known in advance with certainty and before solving them, the second model indicates the cases where the associated variables involve a degree of uncertainty [9][18]. Upon the optimality of their output through the different runs, algorithms are generally categorized into two fields: deterministic and nondeterministic.
Within this context, the "Deterministic Algorithm", also known as an exact algorithm, describes the cases where the same algorithm, at all times of the repeated runs, will definitely produce one and only one same output, called a solution, for the given particular input data. As this grips on the reality of a single certain outcome, it is obvious that their exclusive orphan solution is the provably optimal one, and nothing else. Furthermore, since such solutions can be extracted within a fairly rational time, they are used only for problems of small-scale instances, as they are defined under the term "Deterministic". Conversely, complex large-scale instances can't generally be resolved within a rational time by using these classical exact approaches. This is caused by the fact that proceeding ahead in the real world with all possible solutions, as is the case of exact or precise approaches, is sometimes time-consuming and hard to sustain in terms of resource availability and utilization. From a computational aspect, enumerating and checking all potential candidate solutions for optimality satisfaction is systematically impossible for the majority of large-scale optimization problems, especially those involving hundreds and even thousands of variables. [19][20]

Going forward, the "non-deterministic algorithm", by contrast, means that the same algorithm may show some alternative behaviors from run to run even though the input data are the same. From another perspective, this discrepancy in the behavior is attributed to the fact that the calculation is subject to some norm of randomization and, for that, the outputs have a stochastically fluctuating behavior and may vary from one run to another. With regard to this norm of uncertainty, all the generated behaviors are considered as valid outcomes, and every one of them may be the optimal solution or close to the sole optimal one. And so, every execution of any algorithm belonging to this type hides a degree of "uncertainty" or "randomness" behind its output. Definitely, this extent of "uncertainty" is only limited to agreed sets of rules that should be defined beforehand. [9][17][21][5]

Beyond that, this category is commonly used when the tackled problem tolerates multiple possible outcomes where all of them are considered as valid ones through the solution space, without providing proof of optimality. Given that they may perform differently along various routes, the non-deterministic algorithms are extensively used in finding estimated solutions; this is specifically true when we come across credible evidence that revealing the optimal solution among a set of possible candidate ones is beyond the capability of exact algorithms and, perhaps more importantly, an optimal solution is too costly to be attained, especially in terms of time. [14][5]

· Time complexity: By considering their time-growth complexity, a further taxonomy can be made for these algorithms; Fig. 2 illustrates that algorithms can be forked into two subfields: polynomial and non-polynomial. Polynomial, as the name implies, means that the tackled problem can be initially solved from scratch in polynomial run-time by not less than one algorithm [10]. However, in the other case, the problem at hand needs non-polynomial time to be solved, which often means too long a computational time to be tolerated [10]. If the processing time is narrow, and usually it is, there is a sore need to sacrifice the seeking of solution optimality in favor of near-optimal solutions [20].

· Problem Hardness: When the last two categorizations are integrated together, another categorization emerges out of them. In view of this, the problems themselves can be categorized according to their complexity: Polynomial problems and NP-hard problems. Even though the former is directly related to the problems whose computational time grows polynomially with the problem size, the latter is relevant to the problems whose time-growth rates often grow exponentially with the size of the problem. Notwithstanding that the computational time of the NP-hard problems might not strictly increase exponentially in all cases, it is definitely not polynomial. As opposed to NP-hard problems, the time of the former category is firmly constrained by a polynomial function based mainly on the problem size. For instance, suppose that "q" is the problem size; then all the following are polynomial functions: q^2, q^3, q^4, q^5, etc. (a small numerical illustration follows the list of key factors below). Quite the opposite, there isn't any known polynomial algorithm that is capable of solving the problems that lie under the latter category. As a matter of fact, most optimization problems are classified under the second category, and they are describable as non-deterministic polynomial-time hardness (NP-hard) problems, which addresses the case that a solution for the problem under consideration can be achieved within a polynomial time by using a nondeterministic computer, without providing proof of optimality. [22][23][13][10]

The following are some of the key factors that are related to NP-hard problems [22][10]:

- It is often the case that most of the NP-hard problems are very easy to define and describe but very hard to be framed and/or solved as optimization problems.

- Searching space: These problems are usually of huge dimensions; namely, they involve a fairly wide range of possible solutions, so that they are usually very hard to be tackled.

- Solution's quality level: The good-quality assurance or excellence of the calculated results is not guaranteed. But, if the optimal solution has not been reached, it doesn't mean that a "good" one isn't achieved. All in all, this is much related to the nature of the underlying problem. In terms of time-growth complexity, the perceived relationship between the optimal solution (i.e. the best candidate solution) and the size of the problem under consideration is exponential.
As soon as the problem size begins to mount, the computational time needed for further refining the candidate solutions grows at an exponential pace. In such scenarios, these algorithms need a relatively long processing time for deriving the optimal solution and, as a result, these problems are generally not resolvable within a rational amount of computational time. In a variety of cases, extracting an approximation to the optimal solution is also hard to achieve within a rational period.

- Exhaustive search: In practice, brute-force examination of all candidate solutions may be placed into the realm of sheer impossibility.

- Time-growth complexity: Since this norm of problems is ordinarily large-scale and, as a result, demands some "expensive" time computations, they are often difficult to solve. To this end, these problems generally call for exponential resources to reach the optimum quality solution or the near-optimum one.

- Despite the fact that some of these problems are said to have a polynomial-time algorithm that solves them, as a matter of fact, no one at all has spelled out what that algorithm is!

In all these contexts, these problems are also referred to as long-term problems. To state a truth, the largest fraction of real-world optimization problems falls into the NP-hard class, or is at least NP-hard in spirit. [22][10]
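As a small numerical illustration of this contrast (not from the paper), the following snippet compares a polynomial bound q^3 with an exponential one 2^q as the problem size q mounts:

# Illustrative growth comparison: polynomial q**3 versus exponential 2**q.
for q in (10, 20, 30, 40):
    print(q, q ** 3, 2 ** q)
# q**3 grows from 1,000 to 64,000 over this range, while 2**q explodes
# from 1,024 to roughly 1.1e12.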
Fig. 2. Categories of optimization algorithms, grouped by time complexity (polynomial, non-polynomial), iteration style (iterated/iterative), number of starting solutions (single-solution methods, population-based methods), number of objectives (single-objective, multi-objective), output optimality (deterministic/exact, nondeterministic, approximation methods, heuristic techniques), problem functionality (problem-specific, generic implementation), solution domain (discrete, continuous), and problem hardness (polynomial problems, NP-hard problems).
· The Number of Objectives: Depending on the nature of the problem, some problems have a single-objective function, while there are many others that have goals with multi-objective functions, referred to as multi-objective optimization. In reality, the latter case requires to be incorporated with a weighted average to reflect the nature of the several objectives' existence. So, a multi-parameter vector is used for their fitness functions. [6]

· The Number of Starting Solutions: In conformity with the problem domain and in order to come up with these classifications, single-solution (also referred to as trajectory) versus population-based searches may be considered as an additional alternative classification element. The following core points are listed here to compare and contrast the two searching strategies (a short sketch of both styles follows the list):
- The single-solution category contains methods that start by choosing one solution randomly and then enhancing it in the course of the search process. Since these methods contain only one solution in every one of the iterations, they are also called single-point or trajectory methods. Simulated Annealing (SA) and Tabu Search (TS) methods are the leading examples of this category. In contrast to starting with a single nominal solution, the population-based category starts initially by generating a set of multiple random solutions, and then these solutions are enhanced extensively towards more superior search areas throughout the series of iteration steps. The enhancement of the population-based strategy emanates either from the recombination of more than one solution into a single one, or from reforming each solution by the use of a given strategy adopted especially to impose exploration and exploitation of the search space. [10][20][24][16][18]

- A higher exploration power is attained in the population-based methods towards finding out the overall global solution rather than staying on local ones. On the other hand, the nature of the single-solution category is considered as more exploitation oriented. [20][24][16]

- Since the abstracted knowledge about the search space in the population-based approaches is shared between many possible solutions, there may be a sudden and widespread shift in the direction of the optimal solution [16].

- The recombined solutions of the population-based methods are normally based on big guided steps, while these steps in the single-solution methods are commonly smaller guided steps; of course, the movement of each of the two alternatives towards more productive solutions and bettered outcomes is only within its corresponding own search space. Despite these solutions' improvements, these big and small guided steps come at the expense of the danger of passing close to, or missing, good solutions, where this danger is higher in the population-based approaches than in the single-solution approaches. [10][16]

- By the population-based methods, multiple possible solutions collaborate with one another to go beyond local-optima traps [16]. Namely, all new solutions are built on previous ones and provide inspiration for future ones.
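The following Python sketch (illustrative only; the names and the toy objective are hypothetical and deliberately simplified) contrasts the two strategies: a trajectory method that improves one incumbent solution, and a population-based method in which several solutions evolve together while the best member found so far is preserved:

import random

def single_solution_search(objective, neighbor, start, iters=1000):
    """Trajectory-style method: one incumbent solution, improved step by step."""
    best = start
    for _ in range(iters):
        cand = neighbor(best)
        if objective(cand) > objective(best):
            best = cand
    return best

def population_search(objective, neighbor, pop, iters=200):
    """Population-style method: many solutions evolve together and the
    best-so-far member is preserved across generations (simplified sketch)."""
    for _ in range(iters):
        new_pop = []
        for x in pop:
            cand = neighbor(x)
            new_pop.append(cand if objective(cand) > objective(x) else x)
        # elitism: keep the overall best member in the next generation
        new_pop[0] = max(pop + new_pop, key=objective)
        pop = new_pop
    return max(pop, key=objective)

# hypothetical usage on a one-dimensional toy objective
obj = lambda x: -(x - 4.0) ** 2
step = lambda x: x + random.uniform(-0.3, 0.3)
print(single_solution_search(obj, step, 0.0))
print(population_search(obj, step, [random.uniform(-10, 10) for _ in range(20)]))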
process may be caught by some local-optima traps without
With respect to the fact that there are no clear-cut boundaries having the ability to bypass them. However, it is an
that are determining where the above-named categories start important issue to look beyond these local optimas in the
and stop, there might be many varieties of hybridized hoping of finding the global optima.
categories. For instance, a further forked group may compose
of some approaches which deal directly or implicitly with the · Knowledge constraint: A shortage of sufficient
graphs. From a different point of view, the positive capabilities knowledge to design the equivalent well-organized solving
of two or more approaches may be fused together to form a methods.
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 37
Crucially, all of the above-stated critical issues are worthy enough to address the necessity for "high-level heuristics", especially for the cases where capturing every low-level detail of the considered problem is hard to attain in a reasonable amount of time. Due to these challenges, and with the advancement in alternative modeling, scientists over the past few years have constantly been trying their best to come up with new goal-oriented operating methods to solve these important real-world issues. They find their enlightenment and guidance by abstracting the structure and function of nature's laws, by itself, and the remarkable behaviors of the different creatures in solving problems; hence, "metaheuristic" algorithms arose into the vision, among which nature-inspired algorithms are actually the largest fraction. The next subsection introduces the basic concepts of metaheuristic algorithms. [10][2][28][16]

2.3 Metaheuristic

Fig. 3 illustrates the ten most leading areas that are imitated by most metaheuristic algorithms. It is obvious from this figure that insects are the most popular imitated area among these areas, where (23%) of the total published metaheuristic literature is concentrated on mimicking the living ways and the survival systems of insects. The next area, with (17%), is inspired by the natural evolution of Darwin's theory of evolution and survival (i.e. survival of the fittest). Then, the next one is animals (whales, wolves, fish, cats, monkeys, bats, and many others), which has (16%), and so on down to the percentage of (4%). To state a relevant truth, the social behavior of bees, followed by ants, are the top favorite insects that are foremost imitated and reported while searching the related metaheuristic literature. [20][27][29]

Fig. 3. The top ten leading metaheuristic areas

On the other hand, the drawing of Fig. 4 states that (93%) of the available reported metaheuristics are distributed among six disciplines, where more than half of them (about 54%) are classified as nature-inspired optimization algorithms; they are also termed bio-inspired or bio-based metaheuristics [20][27][29][28].

Fig. 4. The top six leading metaheuristics disciplines

In addition to the fact that these nature-inspired optimization algorithms are relatively easier to implement as compared to the conventional optimization techniques used earlier, they can be adopted and implemented in widely varied fields of problems covering multidisciplinary fields and objectives. Above and beyond that these optimization algorithms have more abstract concepts and rely on the usage of simple concepts of higher-level strategies (hence the term "meta"), they are heuristic, stochastic in their nature, and, perhaps more importantly, they are categorized under the iterative optimization techniques. Besides encompassing highly scalable intelligent methodologies and problem-independent algorithmic frameworks, they normally revolve around adding flexibility to the ways of utilizing control parameters that can be customized and tuned to well suit the nature of the problem under consideration. It is worthwhile considering that these techniques can eventually be implemented so that the complex working details are simply abstracted away from the end-users [21]. This high reliability and simplicity that metaheuristics offer are the principle behind their broad diffusion and their presence in numerous successful applications. [10][28][16]

Even though the global optimality of the final metaheuristic solutions among the multiple possible alternatives is not guaranteed or proven to be optimal, these techniques may be at least worthy enough to be trusted in extracting approximated solutions within reasonable computational time. In their absence, many problems that may be solved with metaheuristics would be inevitably unsolvable. This is especially true for the hard problems whose exact solutions are too hard to be achieved within rational computation time. As a matter of fact, the price to be paid for the time complexity, or so-named scalability improvement, is mainly at the expense of approximation of the optimal matching and, therefore, a balancing, as empirical as possible, between time and quality is definitely a determining factor and a radical issue. [10][20]
Table I highlights the abovementioned notions related to metaheuristic optimization techniques. As they are coined to utilize the power of nature, the source of inspiration for every metaheuristic algorithm has an attractive story behind it. Motivated by this, and as shown in Fig. 5, they can be categorized upon their common features into at least nine basic categories: [14][15][20][27][2][28][30][17][31][26]

· Evolution-inspired algorithms: These algorithms attempt to imitate the rules and laws of the natural evolution of the biological world. Regardless of their nature, these evolutionary-based optimization algorithms are regarded as generic population-based metaheuristic algorithms. The search process of this norm of algorithms has two focal stages: exploration and exploitation. The exploration phase precedes the exploitation phase, which can be regarded as the process of exploring in detail the search space. At the exploration stage, the progress of the search process is launched with a randomly generated population, which is then evolved over a number of subsequent generations. The most applicable point of these heuristics is that the next generation of individuals is shaped by collecting the best individuals and then integrating them together. Through this integration, the population is enhanced over the succeeding generations. On the basis of this, the optimizer of the exploration stage includes some design parameters that have to be randomized as much as possible to globally explore the promising solution search space. [17][18][32] Because of the stochastic nature included in the optimization process, picking the right parameters for an adequate balancing of the exploration-exploitation dilemma is a serious challenge and perhaps the most critical challenge facing the development stages of any metaheuristic algorithm [33]. The most popular leading examples of this category are Genetic Algorithms (GA), Genetic Programming (GP), and the Biogeography-Based Optimizer (BBO). [17][18][32][28]

To sum up, it is vitally important to realize that decision making by using metaheuristics implicitly involves a fundamental selection between "Exploration", by which more information is assembled that might direct us to more superior forthcoming decisions, and "Exploitation", by which the finest decision is made in the light of the existing knowledge.

· Swarm Intelligence (SI) algorithms: These optimization algorithms evolved out of the collective intelligence and the communication channels that can be observed in the social behavior of the biological populations in nature. They are used to solve most of the optimization problems that arose on the metaheuristics' horizon over recent years. A typical example of this category is the Sea Lion Optimization (SLnO) algorithm that imitates the hunting activities of the sea lions. Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) algorithms are considered as other common examples (a generic sketch of this swarm style of search is given after this list). [17][18][32]

Furthermore, these algorithms remain the most fertile research area in the field of metaheuristics. Comparing swarm-based with evolution-based algorithms, the former has some more advantages over the latter. Since evolutionary approaches have relatively more operators than swarm-based ones, they are more difficult to apply. Different from evolution-based approaches, which immediately discard any obtained piece of information related to the old iteration once a new population is generated, swarm-based algorithms normally keep these valuable pieces of information over the subsequent iterations. [17][28][32]

· Physics-based algorithms: These algorithms are mainly coined to simulate the physical phenomena in the world. The Gravitational Search Algorithm (GSA) is one of the best-known examples of this category. GSA is formulated on both the law of gravity and the law of motion. Harmony Search (HS) and Simulated Annealing (SA) are other dominant examples of this category. [18][28]

· Chemical-based mechanisms (CBM): The natural process that involves transforming unstable ingredients into stable ones is named a chemical reaction. During these interactive operations, excrescent energy exists due to the sequence of elementary interactions between these molecules. But at the end of these transformations, the unstable molecules are converted to stable ones and, naturally, with low energy stability. In this regard, scientists focus their efforts on trying to find algorithms that imitate the chemical interactions among molecules that happen during the chemical reactions and usually lead to chemical changes. Chemical Reaction Optimization (CRO), proposed by Lam and Li (2010), is one of the best-known examples of this category of algorithms. [11][21][34][29]

· Stochastic optimization (SO) algorithms: The formulation of these optimization algorithms includes not only generating random variables to be used in the progress of the searching process but also using methods that have arbitrary (i.e. random) iterate steps. However, the success of the outcome of the iteration steps can't be guaranteed. The following are broad examples of these algorithms: stochastic hill-climbing, swarm algorithms, evolutionary algorithms, genetic algorithms, and simulated annealing, to mention but a few. [29][10]

· Probabilistic-based Algorithms (PA): These algorithms are so named because probabilities play a significant role in making decisions within the different runs (i.e. iteration steps). Simulated Annealing (SA) is mostly the oldest example of this type of algorithm. [10][18][35][32]

· Artificial Immune Systems (AIS): As a sub-field of biologically-inspired computing, these artificial intelligence algorithms are mainly concerned with imitating the biological immune processes of the human immune system towards solving a broad category of different optimization problems from engineering, information technology, and mathematics. [10][28]

· Artificial neural networks (ANNs): These algorithms are one of the information processing paradigms and a subfield of the biologically-inspired computational intelligence family. Inspired by the manner in which biological neural systems process data, and based on the principle that self-learning is acquired from experience, these artificial networks need to be trained enough by using a set of examples to create adequate knowledge that can be used later for solving a wide variety of optimization problems in real life. [36][18][37][29]

· Human-based algorithms: Because there are laws governing all the internal operations of the human being, all the contained internal activities of these complicated systems operate functionally without any problems. Attracted by this motivation, scientists from all domains try to simulate the ways in which these subsystems work and come up with new goal-oriented operating methods to solve many important real-world problems. Thus, the algorithms of this category emulate the intelligence and the social behaviors of the human being and their associated activities. Teaching Learning Based Optimization (TLBO) [2][29], the Interior Search Algorithm (ISA) [2][29], Colliding Bodies Optimization (CBO) [2][29], and the Harmony Search Algorithm (HSA) [30][3][38][29] are broad examples that are classified under this category.
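As a generic illustration of the swarm style of search mentioned above (this is not the SLnO update rule of the proposed algorithm; the function, its parameters, and the toy objective are hypothetical), the following Python sketch moves every member of a population toward the best position found so far, with a small random perturbation:

import random

def swarm_optimize(objective, dim=2, members=20, iters=300, step=0.5):
    """Generic swarm-style sketch: members drift toward the best position
    found so far, plus a small random move (not the SLnO equations)."""
    swarm = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(members)]
    best = list(max(swarm, key=objective))          # copy of the current leader
    for _ in range(iters):
        for pos in swarm:
            for d in range(dim):
                # move toward the current leader with a random perturbation
                pos[d] += step * (best[d] - pos[d]) + random.gauss(0, 0.1)
        candidate = max(swarm, key=objective)
        if objective(candidate) > objective(best):  # keep the best-so-far position
            best = list(candidate)
    return best

# illustrative usage: maximize a simple concave function of two variables
print(swarm_optimize(lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2))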
From a broader perspective and under one scheme, these optimization techniques can also be categorized by some wider classifications, as follows:

· They can be categorized as being either exact (i.e. enumerative) or approximated methods [2][10].

· Under another scheme, they can also be categorized according to whether they are used in forming other hybrid metaheuristics or not [29]. Regarding this hybridization, evolutionary and nature-inspired algorithms are the algorithms that have been most extensively hybridized with each other over the last few years to solve a wide variety of optimization problems [10].

· Another, broader categorization is into conventional metaheuristics, like Genetic Algorithms (GA), which is the most famous and prevalent one, and the new-generation ones, such as the proposed algorithm of this paper [29][27]. Compared to the classical ones, these modern algorithms usually require lower computational time and memory, fewer setting parameters to fit the problem, and, moreover, are easier to implement [10][20].

· Another completely different but common classification scheme is the availability or not of local search mechanisms within their stages. Since local searches usually give the best chances for approaching the best candidate solution, this facility gives more feasible chances for the candidate solutions' improvement during the course of successive iterations [29][32].

Yet, regardless of the fact that there is a fairly wide range of heuristic and metaheuristic approaches that have been proposed so far in the spectrum of the optimization paradigm, there is still immense room for improving and/or investing in the available ones, or at least coming up with new viable algorithms and techniques like the one described in this paper. This is especially true if the following salient points are kept in view: [13][12][9][4][21][20]

· Most optimization problems that arose on the horizon over recent years are often very hard to be tackled by the conventional models and, hence, they require new ways of thinking to be solved. Anyway, creativity comes from well recognizing the problems that require further innovative usages of the optimization algorithms.

· As the scope of the optimization problems is growing extremely in size and heterogeneity, the number of optimization problems residing in diversified domains of our life is exponentially larger than most scientists and researchers have ever proposed.

· Since not all metaheuristics are reported as being successful ones, there remains a relatively substantial research gap that needs to be filled between the small number of accepted metaheuristic methods, from classical to novel approaches, and the vast number of day-to-day optimization problems that are increasingly duplicated.

· Since many of the conventional optimization algorithms used earlier may no longer be sufficient to upkeep the new needs of today's attitudes, it is vitally important to reengineer them or make a permutation for them with new applicable and practical alternatives.

· In the optimization paradigm, it seems so strange and somehow unfamiliar to find a single algorithm that performs well on most optimization problems, especially as a large fraction of them have their own circumstances, requirements, constraints, and implementation scenarios.

· It is regarded as axiomatic that the system that has been constructed to meet the high needs of scalability and reliability has more opportunities to stay functional for a longer time. So, there may be counterintuitive variations in the solution quality between the algorithms that had been implemented and evaluated only on just small or medium benchmark instances of the problem and the algorithms that had been tested on all benchmark instances of various sizes, including complex large-scale ones.

· There is a clear and distinguishable variance between theorizing that is largely based on the theoretical-academic world and implementation that is conducted upon real-world cases. In analogy with this, there may be some considerable gaps between the metaheuristic theories and their corresponding real-world implementation.
This mismatch between both of them is, of course, caused by the fact that some metaheuristics rely on just theoretical or abstract possibilities without applying them viably to real-world applications, or that some of them have been tested and then evaluated using only low-to-mid-range data without exposing them hard to large-range data. Closely related to this, a considerable fraction of these researches have been originally initiated for only research purposes without being intended for real-life applications.

More importantly, some metaheuristics are just carried out inside research labs, where some of them have been constructed based on hypothetical projections with only an academic or theoretical vision that may be far away from the factual situations. That is, carrying out lab problems merely, without exposing them to diverse real and hard tests, is subject to guesswork, and such experimentation may lead to unexpected results. On top of all that, nearly the majority of these labs are subject to some financial constraints with rare to no external support available. Inevitably, this lack of certainty may lead to unfaithful decisions and hence far-reaching problems.

· Some metaheuristics were conducted upon simulated data that are nearly different from the relevant real-life ones, and they may not initially have been designed for fully exploiting the real environment. Since they are firmly governed by purely theoretical standpoints, without having a strong empirical base or practical evidence, they could be merely a matter of theoretical tests and, for that reason, building knowledge about simulated data could be neither viable nor feasible. This, in a way or another, may be behind finding some missing environments to conduct experiments on.

Apart from the foregoing discussion, all metaheuristic optimization approaches are alike on average in terms of their performance. The extensive research studies in this field show that an algorithm may be the topmost choice for some norms of problems, but at the same time, it may become the inferior selection for other types of problems. On the other hand, since most real-world optimization problems have different needs and requirements that vary from industry to industry, there is no universal algorithm or approach that can be applied to every circumstance, and, therefore, it becomes a challenge to pick the right algorithm that sufficiently suits these essentials [21][29][12].

What's more, the metaheuristic research community frequently uses the two terms "heuristics" and "heuristic methods" interchangeably to simply give the same meaning. Likewise, the two terms "metaheuristics" and "metaheuristic methods" are also used interchangeably. Furthermore, it is recalled that the following optimization terms will be used to refer to the same metaphor: algorithm, method, and technique.
Fig. 5. The various strategies for solving optimization problems and some of their main representatives.
Since the seminal paper of FF and throughout the years, researchers and scientists have continued their goal-oriented research toward finding various techniques and methods that revolve around the mission of capturing more optimal solutions for the considered problem. Based on that, lines of advance have been initiated and a sequence of numerous algorithms was suggested, presented, and released. Because there's a large body of research studies on this line of research, and to offer a coherent narrative as an alternative to an annotated bibliography, this section is inherently selective to a certain extent, and so some of the less relevant topics haven't been presented. Some of these salient studies are, therefore, presented in the following subsections.

3.1 Chemical Reaction Optimization (CRO)

In a broad sense, scientists frequently noted that the nature of both chemical interactions, named Chemical Reactions (CRs), and optimization paradigms have high-level common attributes and details. At the starting point, the following noteworthy points highlight the common phenomena in between: [13]

· Elements are substances, also better known as materials, which cannot be reduced to something simpler by the usual chemical means.

· In its simplest terms, the basic unit in any CR is the molecule.

· When a substance is transformed from an unstable case to a more stable one, a chemical change will occur to this substance, which is referred to as a chemical reaction (CR). In fairness, this chemical process is considered a natural process and nothing else.

· A collision is the exact cause of any CR. In this regard, the molecules are considered as being the manipulated agents and, therefore, CR is a multi-agent paradigm. Each agent has a number of features, some of which are fundamental to the CR operations. However, other features may be easily attributed to the agent.

· Taking for granted that the CR event is only triggered by the sequence series of collisions in between molecules (i.e. agents) and nothing else, some rhythmic interactions between these agents may occur that lead to some changes upon these agents themselves. On the other hand, CRs can be commonly categorized into four elementary schemes that are viewed in Table II. Additionally, stability is the primary objective of any CR, which involves changes in the molecules.

· The following triple rules are directly related to energy. First, energy already exists and can't be created. Second, energy may change from one form to one or more others. Third, energy can't be destroyed. Related to the second rule, collisions usually lead to the rearranging of energies among molecules, but there may be a collision in which no energy is transferred.

· Both CRs and optimization undergo a sequence of step-by-step events. They both strive in finding the optimal solution or at least the near-optimal one.

Based on manipulating the above-mentioned observations, especially the step-wise process of searching, many researchers aim to relate the chemical reactions with the optimization paradigm and, as a result, try to embed all the common concepts and properties between them in new optimization algorithms. Consequently, they proposed many general-purpose metaheuristic potential algorithms that emulate the natural process of chemical reactions. Then, they successfully introduced what is referred to as Chemical Reaction Optimization (CRO) algorithms. [13]

For satisfiability, the core of the overall CRO-based algorithms is primarily all about the following [13]:

· Compared with the other classical algorithms, CRO offers some flexibility to be customized and controlled by the users themselves to fine-suit their specific needs or to be
utilized these algorithms with the intention of resolving a broad easily adapted to address particular problems.
range of both discrete and continuous engineering problems · In order to reach or at least approach the global optima,
which cannot be underestimated. In all cases, it is important to CRO-based algorithms are a self-adapted to reflect the
take into account that these chemical-reaction-inspired problem domain.
algorithms are often population-based and have a high ability
to be adapted to cover other problems. They are commonly · In reality, CRO has the ability to solve some optimization
problems which have not earlier been successfully tackled
44 IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020
by other metaheuristic algorithms or that have been environment, the same implementation environment was used
classified as having some run-time complicity issues. related to the computer type and model, operating system, the
function evaluation limit of the stopping criterion, and all the
· Including C++ and Java, CROs can be easily coded using
other standard measures. In most of the cases, their pilot
object-oriented programming (OOP) languages. In this
experiments gained the best result and that is why this
context, the molecules are defined as classes and the
proposed algorithm is among the current best algorithms which
elementary reaction types are defined by the methods.
can be used to solve QAP. Because it can be used as a generic
· For solving a particular problem, multiple CROs can be searching algorithm to formulate several NP-hard problems,
implemented simultaneously without any trouble. their algorithm is part of importance and remarkableness.[21]
· Since each CRO keeps up its particular relevant population After the antecedent algorithm successfully solves a variety of
size, it will not have to remain pending at any certain optimization problems, Lam and Victor (2012) presented
instant until any other CRO accomplishing its certain tasks. another expanded research for solving a wide variety of
engineering problems, such as the quadratic assignment
To be truthful, the CRO algorithm was originally devised up by problem, multimodal continuous problems, ANN training, and
Lam and Li (2010) for the purpose of fixing the combinatorial other optimization problems. During their notable research,
optimization problems [21]. Just within less than two years, they build a roadmap framework and theoretical guidelines
CRO was applied successfully to resolve a considerable recommending other users on how to customize and tune the
number of optimization problems, outperforming several other CRO's setting parameters to match the nature of the other
existing algorithms in the majority of the experimental results problems. Besides that, their effective research is considered as
[13]. Through the said research, they put the generic a tutorial and practical procedure that encourages other
formulation for any optimization problem. In terms of this, researchers in exploiting CRO in solving their research
they mathematically define the minimization objective function problems. In other words, their research study is
“f ” by utilizing Equation 1: [21] an inspiration for every optimization research which comes
()=0
along. [13]
() subject to (1)
()≤0 The study by Barham et al. (2016) introduced another
Where the following points analyze the elements of this noteworthy CRO algorithm which is conducted using JAVA
equation: programming language. This CRO achieves an overall
complexity of “O(I E2)”, where “I” and “E” indicate the
· “R”, “E”, and “I” represent the real number set, the index number of iterations and arcs of the directed-weighted graph,
set for equalities, and the index set for inequalities, respectively. They prove that the number of iterations has
respectively. assured evidence towards capturing additional optimal
solutions and approaching the most optimal ones. [34]
· = { , , , … , } and = { , , , … , } are the
vectors of variables and constraints, respectively. “n” and
3.1 Whale Optimization Algorithm (WOA)
“m” are the problem dimension and the total number of
constraints, respectively. On the relatively species-rich sea, humpback whales need a
· If a negative sign is added to “f”, then it will be the developed strategy in their hunting for together. These whales
maximization objective function. types actively hunt small fish or krill, following them
according to tight enough coherent strategy. This foraging
In order to evaluate the performance of the proposed solution, social process for self-maintaining is a unique interaction that
the simulation code has been implemented using the Microsoft hasn't been detected in other creatures yet. It is interesting to
Visual C++ programming language. The algorithm has been note that this type of social creature has no teeth and above that,
applied successfully in solving 23 large-scale instances in it has a very narrow throat and so, this is the rationale behind
which their considered datasets were categorized as being NP- that it couldn't swallow large prey as a whole. However, this
hard of the type Quadratic Assignment Problem (QAP). They type of whales has an amazing policy in attacking a great group
observed that the proposed algorithm achieved the objective of small prey and catching them, the studies find. This unique
drastically. Then, an ample computational investigation was to the concept foraging policy is called “bubble-net feeding”
carried out in which the simulation results of the proposed and it is based upon a multi-stage coordinated mechanism for
algorithm were evaluated and compared to the best performing capturing as much as possible fish at once. Once they are
three metaheuristics recommended in the literature at that time, teaming up together, they dive brilliantly beneath a large group
relating to the solutions' quality level and their computational of prey and then all begin cleverly in bubbling out to produce a
execution times. These three competitors are: Fast Ant System net made of bubbles and forcing prey to be inside. To make
(FANT), an Improved Annealing Scheme (ISA), and Robust sure that the net of bubbles surrounds all the prey, they should
Taboo Search procedure (TABU). Moreover and to offer more reinforce all the net’s weak points and, accordingly, they splash
objective among the other three compared metaheuristics and their flippers (i.e. fins) at these weak parts. By this witty tactic,
to eliminate any issues related to the variations in the execution a large group of prey is trapped tightly inside a well-organized
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 45
fence isolated from outside, and so the only remaining event to distribution level; it is about 70% of the total circuit loss. Due
do is swallowing all of them by the helping of their flippers to this, increasing the overall energy efficiency of any
that swiftly direct fish headed for their mouths. [26][40][29] distribution path is the hardest part of setting up any electric
power system. The study by Reddy et al. (2107) is based on
Whale Optimization Algorithm (WOA) is proposed by
WOA by clearly decreasing the generating plants' losing power
Mirjalili and Lewis (2016) [26] which is considered as a new
during this distribution. In the long run, this study can be used
competitive swarm-based optimization algorithm that evolved
not only in reducing the voltage and the high power loss but
mainly out of abstracting the fascinating hunting behavior of
also in lowering the cost and producing stable, efficient
the humpback whales. With this article, the researchers have
voltages by optimizing the placement and sizing the distributed
successfully created a mathematical model to match the
generators (DGs). [38]
humpback whales' feeding strategy upon which many NP-Hard
optimization problems have been solved. This mathematical Back-and-forth, Masadeh et al. (2018) suggested the
model begins initially with a population of various stochastic “MaxFlow-WOA” algorithm that is based mainly on the Whale
solutions, each of which is generated by a search agent (i.e. a Optimization Algorithm (WOA). The proposed solution was
humpback whale). Whenever the best solution is determined compared to Ford-Fulkerson's MF with respect to the accuracy
among the other ones, all the other search agents should of the results and the average computational run-time where it
arrange their current locations accordingly. On the other hand, acquired “O (E2)” as the overall time complexity. According to
this research also addresses the case where these animals may the authors’ experimental analysis, the impressive experimental
make a random search moving towards finding other better results give sufficient sound evidence and reinforce the
positions instead of remaining stuck with one of the current conclusion that “MaxFlow-WOA” is an effective metaheuristic
search solutions. [26][29] for solving the Maximum Flow Problem (MFP). [1]
In order to test and evaluate the algorithm, WOA was Combining the WOA and the rapid and the big advances in the
implemented empirically by solving 35 real-life optimization distributed parallel applications of metaheuristics, a parallel
problems of practical importance, 29 of them are mathematical whale optimization (Parallel-MaxFlow-WOA) algorithm is
and the remainders are structural design. Furthermore, WOA developed by Masadeh et al. (2020) to solve the MFP. But the
was verified to evaluate its performance using classical truth, this algorithm is considered as a more powerful and an
benchmark functions that are usually utilized in the expanded version of the sequential MaxFlow-WOA. It works
optimization literature. Each one of the considered experiments by segmenting the search space (i.e. the network graph) into
was iterated thirty times. After WOA is compared with other four segments, all of which are computed in conjunction with
conventional techniques, the gaining results were relatively each other. Then, the best maximum flow of these segments is
competitive. Due to its considerable success, this algorithm selected. The algorithm was tested on different datasets that
becomes popular from then on. Day by day, a sequence of have between 50 to 1000 vertices and the number of edges
similar research was conducted based on WOA. [26] between 502498 to 50024998. Then, the algorithm's solution
quality was evaluated for each dataset. Compared to the FF
Any electric power system has been considered to entail three
sequential algorithm, the proposed algorithm achieved a
functional zones. First of all, the electricity generation by
tangible (3.79) reduction in the overall computational running
which the energy is transformed from the available resources
time by running the segments on four-independent-parallel
into electric power. Secondly, the transmission of bulk electric
processors. As this result is a great enhancement of the
power over long distances by using high-voltage networks.
computing time, the first noticeable impression of this four-part
Thirdly, the distribution which is related to providing the end
segmentation stimulates the authors to strongly recommend
consumers' low-voltage service points from the high-voltage
applying this proposed algorithm using the distributed systems
networks. On the condition that the energy is consumed
architectures, at least to gain their parallelism powerful benefits.
directly by those end consumers as soon as it is changed into
Table III shows a computational complexity comparison
electric power in the first functional stage, the important point
between the FF, Sequential-MaxFlow-WOA, and Parallel-
related to the entire system is that there is an electrical power
MaxFlow-WOA. [40]
loss and the largest portion of this is routinely occurring at the
Table III. Complexity comparison between FF, Sequential-MaxFlow-WOA, and Parallel-MaxFlow-WOA
Within the animal kingdom, an animal itself may be the Table IV. Grey wolves' dominance structure
predator and/or the eaten prey of the others. Grey wolves as
General Solution
predators are mostly known to prey on a variety of large, Name Duties
Reaction Pattern Hierarchy
hoofed animals such as bison, mountain goats, moose, and Design the hunting
other different kinds of deer. These wolves, also formerly The dominant
plan, the place to
known as “Canis Lupus”, are living in packs with an average leader, i.e. the The best
Alpha sleep, the time to
one with the solution
size between five and twelve; they have a very strict leadership sleep or wake up, and
highest liability.
hierarchy or perhaps not surprisingly, one of the most so on.
fascinating social behaviors one has ever seen. Mirjalili et al. The Alpha's decision-
(2014) tried to mathematically simulate this dominance making assistance
hierarchy structure and the sovereignty levels during the social that rules the other
hunting practice and, for that, a going-on-down-hierarchy lower-level members.
algorithm is designed that is closely related to this. This Besides it is the
The second
pack's educator, it
algorithm is called Grey Wolf Optimization (GWO). As best
The deputy supplies alpha with
illustrated in both Fig. 6 and Table IV, the dominance level in candidate
leader and the any constructive
this suggested tactic reclines from the top towards the bottom Beta solution (i.e.
commander feedbacks rendered to
and can be carried out at four-hierarchical commanding levels: the second
alternative. carry out and support
Alpha which is the dominant leader the one with the topmost level in the
the Liability. Since it
hierarchy).
liability, Beta which is Alpha's assistance for decision-making, is the commander
Delta which controls Omega, and Omega which are under the alternative, it will be
domination of the other wolves. Granted, each team member is appointed as a leader
classified as being one of these four levels. [16] in the case of Alpha's
dysfunctionality.
The general care and
safeguard
They lead responsibilities are
Alpha omega, i.e. attached to them. The third
members of Hunters, scouts, best
Delta
Beta omega are experts, ex-alphas, candidate
Alpha's assistance for dominated by ex-betas, and solutions
decision-making
delta. caretakers are
belonging to this
Omega category.
which controls Delta
The
The working class which is under the
underneath
Omega domination of the other wolves (i.e.
Delta Subordinates level in the
Subordinates).
These subordinates are under the hierarchy.
domination of the other wolves.
employed in testing the algorithm in terms of the its leadership and encloses by five-to-twelve wolfs.
followings: Accordingly, the graph is segmented into clusters and each
five-to-twelve vertices are grouped together as one cluster.
- Exploration: in relevant to this point, the algorithm
Then, they proposed a three-stage algorithm that matched up
provides either a merit result when it has provided very
with their corresponding fishing scenario: searching for prey,
competitive outcomes compare to FEP and DE or a
encircling prey, and attacking prey. The time-growth
distinction grade when it has outperformed GSA and
complexity of this proposed algorithm is “O(|n| + |E|2)”, where
PSO, as well.
“E” indicates the number of arcs (i.e. the number of edges
- Exploitation: the algorithm provided outstanding between wolves) and “n” stands to the number of vertices (i.e.
performance and presented highly competitive wolves). After the computational run time of this algorithm
outcomes in the matter of exploiting the optimum was compared with the well-known Ford-Fulkerson' algorithm
solution with the least possible computational cost. by using the same datasets, the achieved outstanding results
were significantly more optimal. [24]
- Local optima avoidance: At this point, the algorithm
extremely has outperforms the others in at least half of 3.3 Metaheuristics' Parallelism
the benchmark functions and has also provided
successful competitive results in the second half. As the systems of the clustered parallel data processing can be
- Convergence: the behavior of the algorithm proves employed for producing high-quality results and, at the same
that it has the ability to eventually convergences at a time, surpassing the calculation speed, the artificial intelligence
point in the search space. (AI) specialists and other interrelated participants exploited this
distributed architecture in carrying out their High-Performance
To tell the whole truth, the ability of the algorithm in Computing (HPC) codes to process large-scale problems. To
making and controlling a fair balancing between the two address this, the same network graph is divided into a number
cornerstones of any metaheuristic algorithm, exploration of subgraphs based on the existing number of processors. Each
and exploitation, has resulted in successfully going subgraph contains a number of augmenting paths. Then the
beyond most local-optima traps. calculation of the whole network graph is distributed among a
· An important step forward was to proceed in evaluating the number of processors where each subgraph is computed alone
algorithm on a wide range of three large and difficult by a single processor. On the grounds of this, all the processors
instances of engineering design problems that are quite cooperate together to solve the problem simultaneously. [11]
popular among researchers around the world. These [12][41]
challenging problems are tension/compression spring, In this context, some Jordanian researchers use IMAN1
welded beam, and pressure vessel designs. The algorithm supercomputer which is located in Jordan to conduct their
indicated a high-performance capability in solving these experiments. It was assembled using 2260 Sony PlayStation3
problems. (PS3) devices that are linked together via a fiber-based network.
· The said authors also took into account proving the Combined, this supercomputer provides multiple resources,
performance of the algorithm in an authentic application high-end integrated clusters, and an open parallel and
and, in turn, the algorithm was inspected in an optical distributed computing environment. Besides that this
engineering problem which is referred to as optical buffer supercomputer has an extraordinary efficiency such as its
design. This problem is directly related to one of the capability in driving 25 trillion operations per second and
internal key elements of the optical CPUs. All over again, serving thousands of concurrent clients with sub-millisecond
the algorithm is also able to solve this real-world latency, it has also many supporting powerful tools meeting
application and can be widely adopted by the optical CPU both industrial and academic computing needs performing
industry millions of simultaneous input/output actions. [42]
With the intention of solving the MFP by utilizing the modern
To put it in a nutshell, all the above-mentioned experimental technology of IMAN1, a parallel genetic algorithm (PGA) is
analyses and the comparisons that have been made with the suggested by Surakhi et al. (2017) which is an extension to the
other approaches have guaranteed that the GWO algorithm is serial version of the algorithm [41]. Based on a real distributed
able to offer more competitive results and it is applicable in system, they exploited the HPC cluster architecture in
many challenging problems, especially those that have designing the stages for each one of the iterations to work
unknown search spaces. simultaneously in conjunction with the others. After the
network's directed-weighted graph is segmented into a set of
Since then, the pyramid of the grey wolves' commanding levels subgraphs, all the different augmenting routes from the start
has garnered a lot of attention from the researchers and, so, node “s” going to the target node “t” are altogether computed
another GWO-based research has been held by Masadeh et al. concurrently through the using of the so-popular message
(2017) for solving the maximum flow problem (MFP), named passing interface (MPI) library which is a standard library used
MaxFlow-GWO. The authors utilized the K-means technique for multi-core multi-thread parallel execution development. As
to divide the grey wolves into groups, called clusters, each has each subgraph has its own local Maximum Flow (MF) solution,
48 IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020
all the local MFs contribute together in generating the overall working process when catching information, these artificial
global MFP solution. And so, the total maximum flow value nervous models are used particularly to solve the optimization
for every one of the iterations can be produced by the simple problems that have an extreme number of inputs in which
summation of these augmenting routes. Consequently, this most of them are usually unknown. The neurons are imitated
value is subject to be enhanced from iteration to another one. in these artificial models as nodes connected with each other
Compared with the sequential version of this algorithm, the to form an artificial model viewed as a network of nodes. The
needed time was rationed by 50% and they have achieved more important point is that every connection connects two nodes is
worthy results in terms of accuracy, speed, and time efficiency. associated with a given numerical value that represents its
weight, called the neuron's activation value. These connection
In their ways of seeking high-level computation with respect to
weights are determined by feeding the training data set to the
the quality of the solutions and the response time, IMAN1 as a
input layer and adjusting these weights recursively over the
supercomputing center is used yet again by Alkhanafseh et al.
course of the training process' iterations. Since the acquired
(2017) in solving some of the chemical reactions problems. As
knowledge is learned accumulatively from the collected data,
an improvement of the standard Chemical Reaction
these neural networks need to be trained sufficiently on a
Optimization (CRO) algorithm, the authors proposed a
fairly large set of data that is relevant to the problem domain.
modified version of the serial CRO algorithm where the main
During the training process or as a so-called learning process,
problem graph was segmented into a set of subgraphs
these weights are fine-tuned based on the extracted
distributed on multiple processors; each one was responsible
knowledge; this step is regarded as the most crucial one to
for computing one augmenting path. In comparison with the
accomplish high recognition accuracy. From a purely practical
sequential CRO, good enhancement was significantly achieved
standpoint, the training process should be repeated until the
by applying the parallel version in terms of the quality of
network is capable of learning and adaptive to the different
solutions and the overall computational running times. The
inputs. [36][18]
time needed (i.e. time complexity ) to solve the maximum flow
instance for the parallel implementation of the algorithm is The objective of the learning process is to analyze,
“O(NEF * P)” where “N”, “E”, “F”, and “P” are the count of summarize, extract, and elicit the associated knowledge from
vertices (i.e. nodes) for each subgraph of the flow network, the the training data set. All these foregoing errands entail a deep
count of edges for each augmenting path between the source understanding of the basic structures of the training data set.
and the sink vertices, the maximum flow value from the source Even though exploring great amounts of data with the purpose
vertex to the sink vertex, and the number of concurrent of retrieving some relevant knowledge could be a frustrating
processors that are used in execution the algorithm, and complicated task, the acquired knowledge could be latter
respectively. [11] used successfully in detecting patterns and trends for the sake
of classifying information.
Going with the new vision of the smart digital world, El-Omari
within two research articles, (2019) and (2020), ends up that Since ANNs were introduced in the pattern recognition field
the CC digital-priceless environment is truly the most tolerable and optimizing nonlinear functions that work recursively, El-
place for hosting many complicated algorithms especially that Omari (2008) developed a new robust technique that intends
it has actually emanated out from unlimited and never-dying to optimize the segmentation of the compound images based
diverse resources for ever-sooner and inexpensive calculation on modeling the solution sample space using the ANN
and, moreover, it has the world's best price-per-performance paradigm. By building prior knowledge utilizing four
ratios for the HPC operations. The author also pointed out that interconnected ANN's as a one-model component, an image
there are genuinely thousands to millions of virtual machines can be segmented by this brilliant approach into labeled and
(VMs) that are dynamically generated and demolished to serve coherent regions of four classes: pictures, graphics, texts, and
numerous customers in easily ending their very sophisticated backgrounds. Then each region is manipulated individually
or even unmanageable tasks. Accordingly, the author pointed, according to its characteristics with the most effective
at least implicitly, to another further step forward by building compression method that either a new one or one off-the-shelf.
the needed optimization algorithms remotely as web-based In that research work, there were another three proposed
innovative services within the CC environments. From over techniques to meet the preceding stated segmentation
here, it is foreseen in the next few coming years that global objective; all of them revolve around a vivid central point by
solving for the optimization problems based on utilizing CC modeling the ANN but each one has its own layered-structure
might become a wildly-popular simple practice among topology and characteristics related to the applied evaluation
ordinary users. [12][43] criteria (i.e. activation function), the number of the hidden
layers and the number of nodes (i.e. neurons) in each one of
3.4 Artificial Neural Network (ANN) the hidden layers.
ANNs are a family of computational intelligent models that A fifth hybrid approach is further proposed in that research
strive to simulate the process of exchanging messages among work as a result of hybridizing these four stated approaches.
neural networks' biological systems that especially exist inside However, each of these proposed approaches has it is certain
the human and the animals' brains. In analogy to the brains' trade-offs such as speed, reliability, accuracy, efficiency, and
ease of use. In fairness, the main concern of these proposed
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 49
crawling, parallel implementation, multi-targets, and parallel benchmark for the metaheuristics comparison in the
implementations. [31] optimization area. On part of comparative performance
measurement in that 20-year span, this prominent study drew
3.6 Hyper-heuristic Framework an objective comparison between the metaheuristics according
to the following five critical issues: [29]
A number of researches have been made in an effort to find · How many setting parameters are required to go efficiently
solutions related to the optimization of the execution time with the optimization task of the compared algorithm?
issues. In this regard, the study by Welch and Miller (2014) Since setting every input parameter requests more time and
offered a two-level model for optimization algorithms effort to be adjusted and tuned to go with the problem
turnaround. The primary premise of this much-appreciated nature, the algorithm with the lesser parameters is, without
model is to separate the functionalities of the metaheuristic a doubt, the better choice to use. From a deeper viewpoint,
optimization algorithm from the functionalities of the problem the number of parameters in any metaheuristic is directly
itself. Broadly, this is the notion of the hyper-heuristic proportional to its complexity where the algorithm with
framework where a set of intelligent metaheuristics can be fewer parameters needs slighter variations to work well in
classified upon their shared common features into different solving other problems and, as a result, to dominate over a
types of hyper-heuristics and then combined together through larger variety of applications.
the hybridization of the hyper-heuristics framework. Then,
rather than exploring the search space of the candidate possible · Which are the stages of the metaheuristic algorithm that
solutions for a given problem, the framework of the hyper- have the ability to balance properly between the exploration
heuristics automatically fabricates an algorithm that could and exploitation strategies? As a consequence of the
professionally find a better solution by one or more of the metaheuristics' stochastic nature, optimal balancing
already metaheuristics that have been classified and stored. between these two stages becomes unquestionably a
Besides that this hybridization leads to new approaches to challenge to meet.
emerge, this combining of the positive capabilities of the
· Has the metaheuristic been used in developing other
different metaheuristics gives more chances for capturing
hybridized approaches? There is no question that the
better solutions. While two or more approaches that
algorithm that is used much in other hybridized approaches
participated in this hybridization can be fused together to form
is credible evidence of its well-built organized structure
one model, each one of them remains functioning as an
efficiency.
individual one. [5][29]
· Does the metaheuristic algorithm contain some local-search
From a broader standpoint, hyper-heuristic frameworks can be
mechanisms? Since local searches usually give the best
understood in such a way as if there were two abstracted parts
chances to keep approaching the best solutions, the
associated with each other and maintaining a high degree of
presence of this facility has a considerable influence on the
correlations: a top-level frontend and a lower-level backend.
candidate solutions improvement during the course of the
While the metaheuristics themselves are encapsulated to form
successive iterations.
the backend, the frontend which involves different types of
hyper-heuristics is the visible part that the optimization handler · Does the metaheuristic algorithm search the solution space
sees and interacts with. By this easy-to-implement way of globally for catching the optimal solution or just within the
using the hyper-heuristic frameworks, a higher level of local solution space?
abstraction is provided where the backend complexity is
shielded from the frontend and so some unwanted details are Granted, any metaheuristic algorithm satisfying the above
eliminated or hidden. This architecture allows those handlers features will have more stability to be staying used in the
their selves to focus their effort on solving the problems rather upcoming years. From another point of view, this analytical
than concentrating on the minutiae of the underlying details study could highlight some clues as to how to screen and select
related to how the metaheuristics should be implemented for the most adequate metaheuristic for a given optimization
solving the considered problem. In all sincerity, hyper-heuristic problem. [29]
does not revolutionize the field of metaheuristics, but it adds a
new easier and quicker face in dealing with them.[5] Furthermore, this research study drew attention to two other
important issues that may prevent some metaheuristics from
3.7 The new-generation metaheuristics being utilized. One is related to the lack of a solid analytical
foundation for validating many metaheuristics in which their
Finally, to conclude the discussion of this section, the study by performance evaluations are measured only by carrying out the
Dokeroglu et al. (2019) introduced a considerable selective classical ad-hoc statistical analysis without being based on real
survey to compare the most popular metaheuristic algorithms theoretical or mathematical foundations. Then the said study
that have been proposed in the last twenty years, namely also pointed out how it is difficult to find clear guidance or
between the years 2000 and 2020. This distinguishable survey robust frameworks to recommend anyone interested in how to
reviewed and then analyzed the new-generation metaheuristics adapt many metaheuristics to the considered problems. [29]
in such a way that it can be considered as an excellent
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 51
Related to the previously mentioned setting parameters issue · “S” represents the solution space. This vector represents the
debated in [29], the authors in the study by Lam and Li (2010) set of all possible candidate solutions for a given problem.
emphasized that the adjustments of many metaheuristics are [5][4]
just relying on the experience and prioritize of the researchers
· “s” indicates the set of values that are assigned to the vector
themselves without applying clear conceptual structures and
“X” and restricted by the vector of constraints “C”.
theoretical guidelines for how to set and tune their input
parameters according to a problem domain [21]. · “F” represents the objective function (i.e. objective
criterion) that is used to assess the quality level of a stated
solution. It represents the criteria used to pick out the
4. Maximum Flow Problem (MFP) optimal solution among the possible candidate ones. [4][7]
Since the efficient flow supplying should be guaranteed by a Second, with the aim of using SLnO in solving the problem of
well-defined distribution system, MFP is one of the pillars of interest, the input of the maximum flow problem (MFP)
computer engineering, mathematics, and computer science that should be adapted into a layout format that can be understood
has been deeply studied [22]. From a conceptual standpoint, by SLnO. Thus, the input should be converted into a graph.
MFP as a research area is indeed classified as being based on Broadly and from a mathematical modeling part, this can be
the fusion of three-well-known-overlapped branches that are understood in an abstracted aspect by using the following
directly interrelated with both Artificial Intelligence (AI) and bullet points that represent the problem instance:
Operations Research (OR). First, Computational Complexity
Analysis (CCA) which is pertaining to the cost of solving · The shape of the flow network is called “network
computational problems [22]. Second, Nature Inspired connectivity”. This network is a directed graph, “” that
Computing (NIC) which is an emerging research area that has a finite set of directed weighted arcs (i.e. edges), “”,
mainly concentrates its focus on solving various global and a non-empty finite set of vertices (i.e. nodes), “”. That
optimization problems based on exploiting efficiently is to say, the MFP can be defined as “G = (V, E)” where the
Chemistry, Physics, and Biology approaches [3][20]. Third, following points clarify this: [40][7][8][44]
Combinatorial Optimization Problem (COP) that tries to make - “ ” indicates the weighted directed network graph,
every great effort into achieving an “optimal” solution among a called a digraph.
set of possible candidate ones [34]. - “” denotes the group of nodes, also named vertices,
Before describing how the proposed algorithm solves the which are inside “”; their count is “n”. In an analogy
problem under consideration, it may be well to first consider with Fig. 1, = {, , , , , , } and n=7.
the matters of some notations and fundamental features. First - In order to carry flow, nodes are combined together by
of all, any type of these problems (i.e. MFPs) considerably the set of arcs “E”, also referred to as edges. The
includes the following common components: behavior of the network “G” is defined by the way that
· = { , , , … , } : This combination of variables is the set of nodes “V” are connected via this set “E” and
formally referred to as the set of the initialization by the strength of these connections, called weights or
parameters and so the problem dimension is represented by capacities (i.e. flow). All edges' weights are assumed to
“n”. It also indicates the vector of variables that are used to have strictly positive values and, conventionally, these
frame the designed methodology of the proposed algorithm. weights are set to be small numbers [29][2].
For its time convergence and to explore a much broader - A variable for each edge of the graph is introduced in
diversity inside the candidate solutions, the impressive the graph of MFPs. For instance, the edge between the
achieved results of any algorithm are highly interrelated two nodes “” and “” of Fig. 1 is represented by using
with the choice of this set. the variable “ ” where “ ∈ ”.
· = { , , , … , } represents the vector of constraints - The representation of the weights inside the set “E” is
(i.e. criterions or rules ) that are used to restrict the progress used to represent the search agents. Every edge “ ∈ ”
of the searching process. From another perspective, this
that is directed from a given node “i” towards another
vector is used for the purpose of limiting the various values
node “j” has a maximum of non-negative capacity “cij”.
that are assigned to the vector “X”. On the assumption that The total number of edges inside the network “G” is
the total number of constraints is defined by “m”, all these
“m”. By looking at Fig. 1, the total number of edges is
constraints are so sacred not to be violated while
10, hence m=10.
discovering the optimized solution. It is vitally important
for any solution to have complied with this vector of · Two special nodes in “G” are distinguishable and
constraints. Prompted by this, a feasible solution is designated in advance as follows: [44]
considered as a potential one if it certainly satisfies all the
- A source or a start node “s” in which flow is arriving.
indicated constraints along.
Unlike the other nodes, the node distributes flow to the
other nodes. It has only an incoming flow without
outgoing flow.
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 53
subgroup inside this hierarchy according to their age, gender, this foraging process behavioral, evolutionary, or both? Is
and the tasks that are entrusted to them. However, each the hunting process a one-team concerted effort?
member may be subject to this moving over and over through
· Collective behavior: Is the collective behavior of the sea
their lifetimes. The whole group or some of its subgroups may
lions centralized or decentralized? In other words, does the
hunt together which increases their chances of gaining more
collective behavior of the sea lions rely on only one sea lion
feeding prey. From this, a self-organized teamwork system is
that is responsible for making every single decision related
formed by grouping together and coordinating all the different
to the whole swarm, and so it is a centralized system, or
interactions within the lower level of the hierarchy.
several sea lions are responsible for making decisions, and
Compared to the other creatures feeding on the relatively so it is a decentralized system or as so-called self-organized.
species-rich seawater, each colony has a very strict coherent
· Communication process: How are sea lions
strategy in their fishing together. In the subject of the
communicating with each other? What are the guidelines
optimization literature, Sea Lion Optimization (SLnO)
for managing this communication process in the followed
algorithm is relatively one of the best-known dominant
direction? How do they communicate to update their
examples of this category which is based mainly on imitating
locations to be turned towards the new target position of the
both the hunting manners and associated direct communication
prey after any movement?
channels among any swarm of sea lions during their chase for
the preys [2][26].
Along the way of revealing these research concerns, the SIB, in
There is a continued need for all the swarm members to particular the sea lions which are considered the major
interact with each other by direct communication channels inspiration of this technique, is systemically the collective
upon seeing prey. Chasing and catching the detected prey is the behavior of the following attractive notions and patterns of
joint venture of all the related swarm members. In accordance behaviors: [46][2][13][31]
with their social hunting nature, moving towards an optimal
target implicitly exists in their primitiveness and all the · Solution search space: The theater of the events depends
associated activities are parts of their nature. Since it is a joint on the environment. It is the sea beach in the case of sea
hunting process, it is clear that the primary responsibility for lions. The whole graph is the search space (i.e. search
hunting implementation and success rests with all the agents agents) and the prey is the target node that these animals
themselves. are looking to reach and catch.
As do all the other nature-inspired paradigms, SLnO is a · Collective knowledge: Given that the teamwork of the sea
population-based algorithm that is based on using the fitness lions is greatly self-organized, global behavior is essentially
concepts to determine the quality of the solution. By this means, derived from and based on the authority of self-organized
the proposed algorithm starts by generating a population agents and their tendency in reaching the goal. Therefore,
containing multiple search agents. To assess these search the emergence of collective behavior is achieved by
agents, the fitness function of an efficient solution which is collecting all the local communications between all
based on the maximum value is computed for every search individuals. This leads to having a coin with two faces: the
⃑” who has the
agent. At that instant, the best search agent “ formation of the tuned-global-collective knowledge arises
best fitness among the group of candidate solutions is defined from the exchange of the different information among the
and all the related locations belonging to the other search sea lions and, on the other side, the different rhythmic
agents should be modified directly in-keeping-with this interactions and communications among individuals in the
modification. system lead to global-collective coordination. And so, the
depth of collective knowledge is based on this teamwork
Out of all of this, there is a pressing need to address the communication.
following appealing concerns related to the organizational
structure of their social behavior: [46][2][31] It is worth mentioning that a successful solution is
constructed by a subscription of all agents and a single sea
· Solution search space: What are their feasible regions, lion, on his own, has a lower possibility to efficiently solve
namely their solution search space? How are sea lions a problem. On the other hand, it is an undeniable fact that
responding to any vital action in the hope of finding more the collection of the sea lions composing the team has an
superior solutions? Is it an on-demand and self-adaptive overall stronger possibility of getting closer productive
strategy? How are these animals moving in the direction of results and better markedly solutions.
the optimum goal or near-optimum one?
· Natural laws: Behind the scenes, functions and operations
· Hunting process: Do the sea lions have a strategy for of the wholly-embedded components are regulated
feeding or it is a trivial-usual process and a sudden according to natural laws. For this well-defined reason,
inspiration? If there is a hunting strategy, is it a hunting-for- these high-level systems activate repetitively forever
together strategy? How does the hunting process work? Is without any troubles.
there a to-do list of basic tasks that should be initiated? Is
IJCSNS International Journal of Computer Science and Network Security, VOL.20 No.8, August 2020 55
· Hunting stages: To track and hunt the prey in case of the · The distance function: The best candidate solution for the
sea lions, the chronological framework for the hunting sea lions is represented by the current best location that has
events goes through five stages: the minimum distance from the target prey which the
swarm has obtained yet. All the joined members should
- Detecting and tracking phase: The sea lions have faces
keep track of this location and update their locations
with elliptical cross-sections that are different than the
accordingly. Equation 9 is used here to mathematically
other mammals which have faces with circular cross-
model this behavior which is the most significant
sections. In addition to that, they have the longest
characteristic of this technique [2]:
whiskers among all the mammals which can be moved
in all directions. Depending on the feeling of these ⃑ . ⃑
= ⃑ () − ⃑
() (9)
whiskers, these animals track and determine precisely
all the related information concerning the prey such as ⃑”, “t”, “⃑
Where “⃑
”, “ ()”, and “⃑
()” represent the
location, shape, and size. [2][47]
distance vector between the sea lion and the target prey, a
Furthermore, sea lions habitually swimming randomly random vector in [0, 1], the current iteration, the position
in a zigzag course during their searching for prey across vector of the target prey, and the position vector of the sea
the sea. As well, they utilize their whiskers to get a keen lion, respectively. It is important to draw attention that the
sense of the prey. [47] vector “
⃑ ” is duplicated when it is multiplied by the
- Searching for prey (Exploration phase): When the prey number two in order to give the search space a closer
is composed of few fish, sea lions hunt individually. opportunity to explore a more optimal solution.
Otherwise, when the prey is composed of plenty of fish, · The positions vectors: The vectors of the sea lions'
they are chasing down together and hunting together as positions in any subsequent iteration “t + 1” are depending
groups. The sea lion (search agent) who successfully on the preceding iteration which is denoted by “t”. This is
detects the position of the prey is considered as the best mathematically modeled by using Equation 10 [2]:
search agent and, in turn, this lion is assigned as the
leader that commands the hunting process. This leader ⃑
( + ) = () ⃑ − ⃑
. ⃑ (10)
starts the process of hunting by telling and guiding the
other members about the prey which is considered as Where the vector “()” points to the position of the target
the current best candidate solution [2][47]. Whenever prey and the vector “⃑
” is as it has already indicated in
the best search agent is determined as a leader among Equation 9. The vector “ ⃑ ” in this equation is reduced
the other ones, all the other search agents should linearly from “2” to “0” throughout the expanse of iteration
arrange and update their current locations accordingly. steps for the reason that this reduction drives the leader of
Nevertheless, if a better prey is detected by another the sea lions into moving in the direction of the current
search agent, then this new prey is considered as the target prey and encircle them. Conversely, an increase in
new best candidate solution. In view of that new this vector means that the sea lion leader is moving away
situation, the leader, as well as the current best from the current prey. Thus, the aptitude of the current
candidate solution is replaced. position of the leader leads to the following three cases:
- If the value of “ ⃑ ” is less than one, then the search
- Vocalization phase: many sea lions from a variety of
swarms begin to group together (i.e. forming a cluster) agent is moving in the direction of the prey and the
around the prey and so the cooperative clusters are other search agents should adjust their locations
formulated. The key significant factor of this stage is according to that.
how fast their immediate reactions to the prey ⃑” is greater than one, then the search
- If the value of “
movement as soon as the prey position is determined. agent is moving away from the prey.
On the basis of that, the sea lions are chasing down
- If the value of “⃑” is equal to zero, then the optimal
together to force the prey headed for narrow balls at the
solution has attained which means that the algorithm
shallow water near the ocean's surface and the beach.
terminates at this point.
- Attacking phase (Exploitation phase): the encircling However, if the value of (|⃑|) is greater than one or less
process which is related to the process of getting than a negative one, the search agents will move obliquely
around the target prey after determining its position. away and search for a new cluster to join it.
This process is directed by the leader of the sea lions
and requires updating the search agents' positions · The Shrinking encircling mechanism: This mechanism
according to these new circumstances. relies basically on utilizing Equation 10 that has already
been mentioned in the previous point.
- The actual feeding process: When the prey becomes
close to the surface of the ocean, the feeding process is started.

· The collective communication: Since there is a need for the sea lions (i.e. agents) to contact each other and to bring all of them closer together, particularly when they are tracing and hunting as subgroups, much of the success of the SLnO algorithm lies in the collective intelligence that is based on this collective communication. The more collective communication and interaction there is within the agents' population, the more effective the collective intelligence becomes and, in turn, the more efficient the system is in solving the problem.

The sound used as the communication language is formed by several vocalizations. Despite the smallness of their ears compared to their bodies, sea lions have the ability to clearly detect both sounds that travel in the air and sounds that travel underwater. Sea lions use this communication behavior to call up other joined members who are currently present on the beach to join the team immediately and to manage the different hunting activities, like tracking and encircling prey. In this regard, Equation 11 is formulated as [2]:

\vec{SP}_{leader} = \left| \frac{\vec{V_1}\,(1 + \vec{V_2})}{\vec{V_2}} \right|    (11)

Where the three vectors "\vec{SP}_{leader}", "\vec{V_1}", and "\vec{V_2}" represent the speed of sound of the leader of the sea lions, the speed of sound in the water medium, and the speed of sound in the air medium, respectively.

Normally, sound travels faster in solids than in liquids and slower in gases than in liquids [48]. Under normal conditions, sound travels in water nearly 4.3 times as fast as in air [48]. Otherwise speaking, the sound of the leader needs to be reflected in the two mediums, water and air, which are captured by "\vec{V_1}" and "\vec{V_2}", respectively. This sound reflection of the leader of these animals is what calls the other joined members that are inside the water or on the sea beach.

· The circular updating of positions: this behavior is mathematically formulated by using Equation 12:

\vec{SL}(t+1) = \left| \vec{P}(t) - \vec{SL}(t) \right| \cdot \cos(2 \pi m) + \vec{P}(t)    (12)

Taking into account the fact that the target fishes are the best optimal solution, the following points analyze the elements of this equation:
- The term "\left| \vec{P}(t) - \vec{SL}(t) \right|" represents the absolute distance value between the search agent, which is the sea lion, and the best optimal solution, which is the target prey.
- "m" is a real random number between "-1" and "+1".
- The term "\cos(2 \pi m)" is mathematically expressed to indicate that the sea lion (i.e. the search agent) starts the eating process by swallowing the target fishes that exist at the edges of the bait ball (i.e. the prey). Thus, it moves in a circular shape around the best optimal solution (i.e. the target prey).

· The global optimizer: In order to solve the MFP, the proposed SLnO algorithm involves two activities: exploration and exploitation. In the exploitation phase, the joined members modify their locations in light of the best search agent's position. In the exploration phase, on the other hand, the locations of the joined members (search agents) are updated in accordance with the position of a sea lion that has been chosen randomly. Therefore, the generalized mathematical formulation of this phase is expressed by using both Equation 13 and Equation 14 (a short sketch after this list illustrates these update rules):

\vec{Dist} = \left| 2\vec{B} \cdot \vec{SL}_{rnd}(t) - \vec{SL}(t) \right|    (13)

\vec{SL}(t+1) = \vec{SL}_{rnd}(t) - \vec{Dist} \cdot \vec{C}    (14)

Where "\vec{SL}_{rnd}(t)" is used here to point to a sea lion that is selected randomly from the present population. It should be stressed that when the vector "\vec{C}" is bigger than one, this equation is used for detecting the global optimal solution. Because of that, this algorithm is considered to be a global optimizer.

At the early phase of the iteration steps, Equation 14 demands that the sea lions randomly proceed around each other. On the other hand, Equation 12 permits other sea lions to reposition themselves or move in a circular shape in the direction of the best search agent, which is the reason why this proposed algorithm has high exploitation. In addition to this high exploitation, this algorithm also has high exploration and the capability to escape local-optima traps.

· Related to graph theory, the sea lions are represented by agents and, in turn, this algorithm is a multi-agent algorithm. The following points follow from this:
- The maximum flow problem is considered one of the well-known basic optimization problems in weighted directed graphs. It is a type of network optimization problem in flow graph theory.
- The SLnO graph built for the sea lions is usually bidirectional.
- The weight at each edge (arc) interconnecting two vertices (nodes) represents the flow capacity of this arc.
- The inputs should be converted to a graph with nodes and weighted edges.

· All the operations and data-processing activities are ordinarily goal-oriented and real-time functions.

· All the team members (i.e. agents) are accelerating toward discovering better solutions. Much, if not all, of the success of the team seems to rest upon the tendency of all team members to hurtle past their target.

· Since the supervision of SIB is a self-organized natural system, it is a decentralized system, which means that making decisions at the different hunting levels is rooted in the team environment, not only in their leaders. In other words, the fine-tuned vision comes from the fact that all the
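To make the interplay of Equations 12, 13, and 14 concrete, the following minimal Python sketch applies one SLnO-style position update to a single search agent: when the magnitude of C exceeds one it takes the random (exploration) step of Equations 13 and 14, otherwise it performs the circle update of Equation 12. The function name, the way B and C are supplied, and the random-number generator are illustrative assumptions rather than the exact implementation evaluated in this paper.

```python
import numpy as np

def update_position(SL, P, SL_rnd, C, rng):
    """One SLnO-style position update for a single search agent (illustrative sketch).

    SL     : current position of the sea lion (search agent)
    P      : position of the best search agent (target prey)
    SL_rnd : position of a randomly selected sea lion
    C      : scalar coefficient; |C| > 1 triggers the exploration step (Eqs. 13-14),
             otherwise the circle update of Eq. 12 is applied
    """
    if abs(C) > 1:                                     # exploration phase
        B = 2 * rng.random(SL.shape)                   # assumed random weight vector in [0, 2)
        dist = np.abs(B * SL_rnd - SL)                 # Eq. 13: distance to a random sea lion
        return SL_rnd - dist * C                       # Eq. 14: move relative to the random sea lion
    m = rng.uniform(-1.0, 1.0)                         # random number in [-1, +1]
    return np.abs(P - SL) * np.cos(2 * np.pi * m) + P  # Eq. 12: circle update around the prey

rng = np.random.default_rng(0)
SL, P, SL_rnd = rng.random(4), rng.random(4), rng.random(4)
print(update_position(SL, P, SL_rnd, C=1.7, rng=rng))  # exploration step
print(update_position(SL, P, SL_rnd, C=0.4, rng=rng))  # exploitation (circle) step
```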
The main algorithm, which contains the population initialization of the search agents (sea lions), is depicted in Fig. 10. In this algorithm, a bunch of initial candidate solutions is generated randomly. Overall, this set is so-named a swarm. Next, two of these search agents are selected randomly, one to refer to the source and the other to refer to the destination, which is actually the prey itself. Then, the fitness function for every search agent of the population is computed. For that purpose, the distance between each sea lion and each cluster leader is computed and, according to the nearest leader, the sea lion is assigned to the nearest group (i.e. cluster).
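A rough sketch of this initialization and grouping step is given below: a random swarm is generated, a few agents are picked as cluster leaders, and every agent is assigned to the cluster of its nearest leader, with Euclidean distance standing in for the fitness measure. The array shapes, the number of clusters, and the use of Euclidean distance are assumptions made for illustration only; they are not taken from Fig. 10.

```python
import numpy as np

def assign_to_clusters(agents, leaders):
    """Assign every search agent to the cluster of its nearest leader (illustrative sketch).

    agents  : (n_agents, dim) array of sea-lion positions
    leaders : (n_clusters, dim) array of randomly chosen cluster leaders
    Returns one cluster index per agent.
    """
    # Euclidean distance from every agent to every leader, used here as the fitness value.
    dists = np.linalg.norm(agents[:, None, :] - leaders[None, :, :], axis=2)
    return np.argmin(dists, axis=1)  # index of the nearest leader for each agent

rng = np.random.default_rng(1)
swarm = rng.random((20, 3))                                     # 20 randomly generated candidate solutions
leaders = swarm[rng.choice(len(swarm), size=4, replace=False)]  # 4 randomly selected cluster leaders
print(assign_to_clusters(swarm, leaders))
```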
On the other hand, the enhancement needed to find the best solution for this kind of optimization problem is considered the most worthy aspect of this algorithm. In order to avoid falling into local optima and to converge to the best solution within the predefined time, this algorithm also addresses the case where the sea lions make a random search, moving toward other, better positions instead of remaining stuck at one of the current search solutions (i.e. one of the local optima); this is what is known as "exploration". With this feature, as soon as the best solution has been determined among the others, all the other search agents rearrange their current locations accordingly.

6.2 Fitness Function

The goodness of the overall solution is evaluated thoroughly by the quality of each possible position, which is determined by using the fitness function. The choice of the fitness function has, therefore, a great impact on the selection of the candidate solutions and on the direct evaluation process for identifying the solutions' qualities based on their degree of efficiency. Based on that, there is a need to re-compute the fitness function for each one of the search agents in every new trial. Thus, the best search agent is chosen as the leader, and the locations of all the other search agents are updated accordingly.

By using its special vocalization to tell them about the prey, the leader sends a vocal message to the other sea lions that exist on the shore or under the water. In accordance with that, all the sea lions that have heard the vocalization of their leader join the cluster and then update their locations toward the leader's position depending on the value of "\left| \vec{SP}_{leader} \right|". Hence, the general steps for updating the positions of these animals are clearly depicted in Fig. 11 and Fig. 12.

The philosophy behind clustering is the decomposition of the original flow problem into a number of tractable subproblems (i.e. local MFs). Then, a range of near-optimal solutions to each one of these smaller subproblems is calculated. After that, the collection of solutions for these subproblems is combined altogether to create a global solution, namely the global MF.

After the initialization stage and the determination of the fitness function, the proposed algorithm can proceed forward to the clustering of the network graph. This clustering is used for the purpose of finding the overall solution for a given network graph, where the global search space (global MF) is broken down into a set of local search spaces (local MFs), each referred to as a cluster. Each cluster contains a number of separated subnetworks, and each subnetwork is composed of a group of nodes and their edges.

To start, a leader is selected randomly for each cluster. Then the fitness function for each search agent is computed to check whether it should join any cluster or not. More precisely, each particular agent (sea lion) is managed in the sense that it is identified to which cluster it belongs. So, according to the value of the fitness function, each sea lion is identified as joining this group or another.

In order to get the overall global maximum flow, the local maximum flow is computed for each specific cluster and then the overall summation of them is computed. This is illustrated in Fig. 13.

6.4 Maximum Flow Function

As mentioned in the preceding subsection, the local MFP is calculated for each cluster by calling the MFP function that was introduced by Ford and Fulkerson (FF). The MFP function relies on augmentation paths in residual graphs to find the maximum flow from the source to the destination (i.e. the source-to-sink path).

The local MFP is computed for each cluster using the FF technique, which returns the local maximum flow for each specific cluster. Then, the global maximum flow of the network is calculated using Equation 15:

maxFlow_{global} = \sum_{i=1}^{N} maxFlow_i    (15)

Where "maxFlow_i" and "N" represent the maximum flow for the ith cluster and the count of the clusters, respectively. At this point, the algorithm shown in Fig. 14 is used to calculate the local maximum flow (i.e. maxFlow_i) for each cluster.
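The decomposition expressed in Equation 15 can be sketched as follows: each cluster is treated as an independent flow network, a BFS-based Ford-Fulkerson (Edmonds-Karp style) routine returns its local maximum flow, and the global maximum flow is their sum. The capacity matrices below are invented toy clusters, and the routine is a generic textbook implementation rather than the exact MFP function of Fig. 14.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp) on a residual graph."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left: current flow is maximal
            return flow
        # Bottleneck capacity along the augmenting path, then residual-graph update.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Two toy clusters, each given as (capacity matrix, source node, sink node).
clusters = [
    ([[0, 3, 2, 0], [0, 0, 1, 3], [0, 0, 0, 2], [0, 0, 0, 0]], 0, 3),
    ([[0, 4, 0], [0, 0, 5], [0, 0, 0]], 0, 2),
]
# Equation 15: the global maximum flow is the sum of the local maximum flows.
global_max_flow = sum(max_flow(c, s, t) for c, s, t in clusters)
print(global_max_flow)
```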
[Flowchart: update B, C, and SL for each search agent in turn; select the next agent until the last agent has been processed, then end.]
As shown in Table V, the average speedup (Speedup_{avg}) of execution is calculated by finding the row-wise summation of the numeric values of the speedup (i.e. the last column) multiplied by the network size (i.e. the second column) of every experiment, and then dividing by the total of the network sizes (i.e. the summation of the second column). This is expressed in Equation 17:

Speedup_{avg} = \frac{\sum_{j=1}^{M} (Speedup_j \times n_j)}{\sum_{j=1}^{M} n_j}    (17)

Where "Speedup_j" and "n_j" denote the speedup and the network size of the jth experiment, respectively, and "M" is the number of experiments.

In a nutshell, it is clearly observed from this comparison that the proposed model SLnO-MFP performs best and gives better performance results than FF in terms of speed and time efficiency; it is faster than FF by an average of (7.4902) times. Furthermore, there is a dramatic increase in the difference in speed between the two algorithms when large-sized network instances are used. It is important to note that the perceived complexity is remarkably behind this speedup and has a strong influence on the implementation of any metaheuristic algorithm. As reflected in the plot of Fig. 15, the execution complexity is a quadratic polynomial. More precisely, it begins to mount when the number of nodes increases.

To make a further comparison and evaluation, the impact of the network sizes on the speedup of both algorithms is examined, where the first and last columns of Table V are graphically depicted in Fig. 16 for the two algorithms.

Fig. 16. The relative speedup of SLnO-MFP in comparison with the FF algorithm (x-axis: network size, i.e. No. of nodes, 100-1000; y-axis: speedup relative to the average run time of FF)
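A minimal sketch of the size-weighted averaging used in Equation 17 (and reused in the same spirit by Equations 20, 21, and 23); the speedup values and network sizes below are invented placeholders, not entries of Table V.

```python
def weighted_average(values, sizes):
    """Size-weighted mean: sum(value_j * size_j) / sum(size_j), as in Equation 17."""
    return sum(v * n for v, n in zip(values, sizes)) / sum(sizes)

# Hypothetical per-experiment speedups and their network sizes (not Table V data).
speedups = [2.1, 4.8, 7.9, 10.2]
sizes = [100, 300, 600, 1000]
print(round(weighted_average(speedups, sizes), 4))
```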
7.2 Relative Estimation Error Rate

Table VI illustrates the estimated-theoretical (T_{estimated}) and the actual-experimental (T_{actual}) run times of the SLnO-MFP algorithm together with a Relative Estimation Error rate (REE), which is calculated by using Equation 18:

REE = \frac{T_{actual} - T_{estimated}}{T_{estimated}}    (18)

Since the "analysis run time" represents "the estimated theoretical run time", Equation 18 can be redrafted as expressed in Equation 19:

REE = \frac{Experimental\ run\ time - Theoretical\ run\ time}{Theoretical\ run\ time}    (19)

According to the statistical analysis of the sixth column of Table VI, it is seen that the proposed algorithm has low error rates compared to the FF algorithm.

Like the notion of Equation 17, the Mean Relative Estimation Error (MREE) is calculated and shown in this table by summing up the products between each element of the last column and the second column, and then dividing this summation by the summation of the second column. This is expressed by using Equation 20:

MREE = \frac{\sum_{j=1}^{M} (REE_j \times n_j)}{\sum_{j=1}^{M} n_j}    (20)

Where "REE_j", "n_j", and "M" represent the relative error of the jth dataset, computed as indicated by Equation 18, the jth dataset's network size, and the count of the elements in the data set, respectively.

The result of this equation, namely Equation 20, was calculated using the seventh column of Table VI and is shown in the last cell of the same column and table. Here again, it is noteworthy that the proposed algorithm is able to reduce the error rate to (0.8183). Hence, this result proves that this proposed algorithm is highly comparable in terms of the error rate.

In analogy with Equation 19, the average of the squared errors over all network sizes, which is referred to as the Mean Square Error (MSE), was also selected as an authentic validation measurement, as shown in Equation 21:

MSE = \frac{\sum_{j=1}^{M} (REE_j^2 \times n_j)}{\sum_{j=1}^{M} n_j}    (21)

The result of this equation was calculated using the last column of Table VI and is shown in the last cell of the same column and table. By analyzing this calculated value, it is easily seen that the proposed model has achieved a very worthy result, where the recorded MSE value is (0.7363). Remarkably, this slight difference occurs due to the randomness inside the equations that are used in building the algorithm, like the ninth and the fourteenth equations. Yet again, this result also ensures that SLnO-MFP is highly comparable in terms of the MSE.

Furthermore, the last cell of Table VI is related to the deviation of the MSE, termed the DMSE, which is calculated by taking the square root of the MSE. It should be noted that the MSE and DMSE are calculated in the same way as the variance and standard deviation are usually computed, respectively. So, they obviously have the same unit of measurement as the estimated quantities of variance and standard deviation.

Additionally, Fig. 17 exhibits a visualization of the enhancement that is accomplished in this work compared to the FF technique in terms of execution time. In view of this, it is relatively clear that the proposed technique has accomplished better performance, especially for resolving networks of large-sized instances.
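Putting Equations 18 through 21 together, the sketch below computes the relative estimation error of each experiment and then the size-weighted MREE, MSE, and DMSE; the run times and sizes are invented placeholders rather than the contents of Table VI, and the MSE weighting follows the same size-weighted form assumed above.

```python
import math

def ree(experimental, theoretical):
    """Relative Estimation Error, Equations 18-19."""
    return (experimental - theoretical) / theoretical

# Hypothetical theoretical/experimental run times and network sizes (not Table VI data).
theoretical = [0.8, 1.9, 3.5, 6.0]
experimental = [0.9, 2.3, 4.1, 5.4]
sizes = [100, 300, 600, 1000]

errors = [ree(e, t) for e, t in zip(experimental, theoretical)]
mree = sum(r * n for r, n in zip(errors, sizes)) / sum(sizes)        # Equation 20
mse = sum((r ** 2) * n for r, n in zip(errors, sizes)) / sum(sizes)  # Equation 21 (assumed weighting)
dmse = math.sqrt(mse)                                                # square root of the MSE
print(round(mree, 4), round(mse, 4), round(dmse, 4))
```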
7.3 Results Accuracy and Discussion

In order to examine the obtained results and to make a comparative analysis between the maximum flow values of the proposed SLnO-MFP algorithm and the FF algorithm, two major factors related to the overall performance evaluation are taken into consideration: the average execution time and the accuracy of the results. While the first evaluation of performance has been presented in the first subsection, the second evaluation is described here in this subsection.

As clearly viewed in the comparison of Table VII, the maximum flow values were calculated for both techniques using ten experiments, each with a different network size. After those experiments were conducted, the accuracy comparison for both techniques was calculated by using Equation 22 as a measure of accuracy (Acc):

Acc = \left( 1 - \frac{maxFlow_{FF} - maxFlow_{SLnO\text{-}MFP}}{maxFlow_{FF}} \right) \times 100\%    (22)

Where "maxFlow_{FF}" represents the maximum flow value obtained using the FF technique and "maxFlow_{SLnO-MFP}" indicates the maximum flow value obtained using the suggested algorithm (SLnO-MFP).

In order to compute the overall validation accuracy of the SLnO-MFP algorithm, the average accuracy over all the network sizes in the dataset was calculated by utilizing Equation 23:

Overall\ accuracy = \frac{\sum_{j=1}^{M} (Acc_j \times n_j)}{\sum_{j=1}^{M} n_j}    (23)

Where "n_j" represents the jth dataset's size, and "Acc_j" is computed as indicated by Equation 22. The result of this equation was calculated and is shown in the last row of Table VII. As represented in this comparative analysis, the proposed algorithm is able to attain a high accuracy percentage of (94.3205%).
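As a rough illustration of Equations 22 and 23, the following sketch scores each experiment against the FF maximum flow and then forms the size-weighted overall accuracy; the maximum-flow values and network sizes are invented placeholders, not the contents of Table VII.

```python
def accuracy(max_flow_ff, max_flow_slno):
    """Per-experiment accuracy, Equation 22 (in percent)."""
    return (1 - (max_flow_ff - max_flow_slno) / max_flow_ff) * 100

# Hypothetical maximum-flow values for FF and SLnO-MFP (not Table VII data).
ff_values = [120, 240, 410, 655]
slno_values = [114, 229, 388, 610]
sizes = [100, 300, 600, 1000]

accs = [accuracy(f, s) for f, s in zip(ff_values, slno_values)]
overall = sum(a * n for a, n in zip(accs, sizes)) / sum(sizes)  # Equation 23
print([round(a, 2) for a in accs], round(overall, 4))
```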
It is observed that the main intuition behind the overall accuracy relates to the difference between the maximum flow value of the proposed algorithm and the maximum flow value of the FF technique, which stems from the way the search space captures the network graph, since the SLnO-MFP technique divides the network graph into a number of subgraphs.

These outstanding findings prove that the algorithm has superior performance and that it is a viable alternative algorithm that can be efficiently used to solve many large-scale optimization problems, as in the case of this underlying problem (MFP).

As a final point, the empirical evidence substantiates that this proposed algorithm scales proportionally with the problem size, in both memory and time. The interesting interpretation behind this issue is highly associated with the network complexity. Anyway, this solution can be applied successfully to other styles of optimization problems, and the outcomes presented here have far-reaching consequences in many other domains.

Fig. 17. FF versus SLnO-MFP in terms of execution time (x-axis: network size, i.e. No. of nodes, 100-1000; y-axis: average run time in seconds; series: theoretical run time, experimental run time, relative error, MSE)

Table VII. The accuracy results of SLnO-MFP compared with the FF algorithm
· Cloud Computing (CC): Due to the voluminous amounts of today's data and the advances of the Internet, CC applications are nowadays more prevalent than they were a few years ago and, in turn, CC is the most endurable place for hosting and activating the optimization community. Since each cloud-based application is premised on the faith that a large pool of scalable, parallel, and distributed computing resources is granted to thousands and even millions of customers by the cloud vendor, it is time to take an important step forward and devote efforts to integrating the metaphors of metaheuristics so that they fully cope with the CC platform. This is particularly relevant to the case of hosting the presented algorithm. [12][43][51]

· Hybridization: Since it turns out that things work differently with the hybridization of two or more exact, heuristic, or metaheuristic techniques, many promising outlets and opportunities for further research could be opened by using adaptive hybridization. From another direction, the degree of this adaptivity should be used as a crucial tool that goes with the problem complexity. [20]

Acknowledgments

The author is grateful to WISE University, Amman, Jordan for the financial support granted to cover the publication fee of this research article. Secondly, the author would like to express his cordial thanks to Dr. Adel Hamdan, Eng. Nabeel Abuhamdeh, and Dr. Raja Masadeh for the great support and assistance rendered to carry out this research work. Finally, special gratitude goes to the editor and the honorable anonymous reviewers of IJCSNS for their perceptive comments, valuable suggestions, and magnificent efforts that helped the author to improve this paper.

References

[1] R. Masadeh, A. Alzaqebah, and A. Sharieh, "Whale Optimization Algorithm for Solving the Maximum Flow Problem," Journal of Theoretical and Applied Information Technology (JATIT), vol. 96, no. 8, pp. 2208-2220, 2018.

[2] R. Masadeh, B. A. Mahafzah, and A. Sharieh, "Sea Lion Optimization Algorithm," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 10, no. 5, pp. 388-395, 2019, doi: 10.14569/ijacsa.2019.0100548.

[3] P. Sindhuja, P. Ramamoorthy, and M. S. Kumar, "A Brief Survey on Nature Inspired Algorithms: Clever Algorithms for Optimization," Asian Journal of Computer Science and Technology (AJCST), vol. 7, no. 1, pp. 27-32, 2018.

[4] P. F. Felzenszwalb and R. Zabih, "Dynamic Programming and Graph Algorithms in Computer Vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 4, pp. 1-51, 2010.

[5] P. Ryser-Welch and J. F. Miller, "A Review of Hyper-Heuristic Frameworks," in Electronic Village Online - Evo20 Workshop, American International School of Bucharest (AISB), 2014, pp. 1-7.

[6] M. Q. Al-Shammari and R. C. Muniyandi, "Optimised Tail-based Routing for VANETs using Multi-Objective Particle Swarm Optimisation with Angle Searching," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 11, no. 6, pp. 224-232, 2020.

[7] G. Ausiello, P. G. Franciosa, I. Lari, and A. Ribichini, "Max Flow Vitality in General and st-planar Graphs," Networks, vol. 74, no. 1, pp. 70-78, 2019, doi: 10.1002/net.21878.

[8] "Ford-Fulkerson Algorithm for Maximum Flow Problem," GeeksforGeeks, A Computer Science Portal for Geeks, 2020. https://www.geeksforgeeks.org/ford-fulkerson-algorithm-for-maximum-flow-problem/ (accessed Jul. 18, 2020).

[9] M. Al-Ta'ee, N. K. T. El-Omari, and W. Al Kasasbeh, Information Systems Analysis and Design, 1st ed. Amman, Jordan: Dar Al-Massira for Printing-Publishing, ISBN: 978-9957-069-483, pp. 1-527, 2013.

[10] S. Consoli, "The Development and Application of Metaheuristics for Problems in Graph Theory: A Computational Study," School of Information Systems, Computing and Mathematics, Brunel University, West London, pp. 1-222, 2008.

[11] M. Y. Alkhanafseh, M. Qatawneh, and H. A. Ofeishat, "A Parallel Chemical Reaction Optimization Algorithm for MaxFlow Problem," International Journal of Computer Science and Information Security (IJCSIS), vol. 15, no. 6, pp. 19-32, 2017.

[12] N. K. T. El-Omari, "Cloud IoT as a Crucial Enabler: a Survey and Taxonomy," Modern Applied Science, vol. 13, no. 8, pp. 86-149, 2019, doi: 10.5539/mas.v13n8p86.

[13] A. Y. S. Lam and V. O. K. Li, "Chemical Reaction Optimization: a Tutorial," Memetic Computing, vol. 4, no. 1, pp. 3-17, 2012, doi: 10.1007/s12293-012-0075-1.

[14] A. Prakasam and N. Savarimuthu, "Metaheuristic Algorithms and Polynomial Turing Reductions: A Case Study Based on Ant Colony Optimization," in International Conference on Information and Communication Technologies (ICICT), Karachi, Pakistan, 2015, vol. 46, pp. 388-395, doi: 10.1016/j.procs.2015.02.035.

[15] C. B. Oscar, P. P. Parra, and B. Hernández Ocana, "On combining numerical optimization techniques with a belief merging approach," in Eleventh Latin American Workshop on New Methods of Reasoning (LANMR), vol. 2264, paper 5, 2018, pp. 51-62, doi: 10.1016/j.inffus.2016.02.006.

[16] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey Wolf Optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014, doi: 10.1016/j.advengsoft.2013.12.007.

[17] J. Brownlee, Clever Algorithms: Nature-Inspired Programming Recipes, 1st ed. United States: Lulu Publishing, ISBN-10: 1446785068, ISBN-13: 978-1446785065, doi: 10.5281/zenodo.3566253, pp. 1-438, 2012.

[18] N. K. T. El-Omari, "A Hybrid Approach for Segmentation and Compression of Compound Images," The Arab Academy for Banking and Financial Sciences, pp. 1-201, 2008.

[19] G. Du, X. Liang, and C. Sun, "Scheduling Optimization of Home Health Care Service Considering Patients' Priorities and Time Windows," Sustainability, vol. 9, no. 253, pp. 1-22, 2017, doi: 10.3390/su9020253.
[20] K. Hussain, M. N. Mohd Salleh, S. Cheng, and Y. Shi, "Metaheuristic Research: a Comprehensive Survey," Artificial Intelligence Review, vol. 52, no. 4, pp. 2191-2233, 2019, doi: 10.1007/s10462-017-9605-z.

[21] A. Y. S. Lam and V. O. K. Li, "Chemical-Reaction-Inspired Metaheuristic for Optimization," IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 381-399, 2010, doi: 10.1109/TEVC.2009.2033580.

[22] W. J. Hopp and M. L. Spearman, Factory Physics, 3rd ed. United States: Waveland Press, ISBN-13: 978-1577667391, ISBN-10: 1577667395, pp. 1-746, 2008.

[23] R. Pierce, "Polynomials," Math Is Fun. https://www.mathsisfun.com/algebra/polynomials.html (accessed Jul. 17, 2020).

[24] R. Masadeh, A. Sharieh, and A. Sliet, "Grey Wolf Optimization Applied to the Maximum Flow Problem," International Journal of Advanced and Applied Sciences, vol. 4, no. 7, pp. 95-100, 2017, doi: 10.21833/ijaas.2017.07.014.

[25] Wikipedia Contributors, "Heuristic (Computer Science)," Wikipedia, The Free Encyclopedia, 2019. https://en.wikipedia.org/wiki/Heuristic_(computer_science) (accessed Jul. 17, 2020).

[26] S. Mirjalili and A. Lewis, "The Whale Optimization Algorithm," Advances in Engineering Software, vol. 95, pp. 51-67, 2016, doi: 10.1016/j.advengsoft.2016.01.008.

[27] Z. H. Ahmed, "A Comparative Study of Eight Crossover Operators for the Maximum Scatter Travelling Salesman Problem," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 11, no. 6, pp. 317-329, 2020, doi: 10.14569/IJACSA.2020.0110642.

[28] R. M. T. Masa'deh, "New Sea Animal Inspired Metaheuristic Approach for Task Scheduling in Cloud Computing," Department of Computer Science, The University of Jordan (UJ), pp. 1-242, 2019.

[29] T. Dokeroglu, E. Sevinc, T. Kucukyilmaz, and A. Cosar, "A Survey on New Generation Metaheuristic Algorithms," Computers and Industrial Engineering, Elsevier, vol. 137, no. 106040, pp. 1-69, 2019, doi: 10.1016/j.cie.2019.106040.

[30] I. Abu Doush et al., "Harmony Search Algorithm for Patient Admission Scheduling Problem," Journal of Intelligent Systems, vol. 29, no. 1, pp. 1-25, 2018, doi: 10.1515/jisys-2018-0094.

[31] S. M. Saab, N. K. T. El-Omari, and H. H. Owaied, "Developing Optimization Algorithm Using Artificial Bee Colony System," Ubiquitous Computing and Communication Journal, vol. 4, no. 5, pp. 15-19, 2009.

[32] H. Pirim, E. Bayraktar, and B. Eksioglu, Tabu Search: A Comparative Study, 1st ed. IntechOpen Limited, ISBN: 9783902613349, ISSN: 03038467, PMID: 27022619, pp. 1-27, 2008.

[33] F. Sáez, "Productivity Strategies: Exploration vs Exploitation," FacileThings Blog, 2020. https://facilethings.com/blog/en/exploration-vs-exploitation (accessed Sep. 11, 2020).

[34] R. Barham, A. Sharieh, and A. Sliet, "Chemical Reaction Optimization for Max Flow Problem," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 7, no. 8, pp. 189-196, 2016, doi: 10.14569/ijacsa.2016.070826.

[35] N. K. T. El-Omari, A. H. Al-Omari, A. M. H. Al-Ibrahim, and T. Alwada, "Text-Image Segmentation and Compression using Adaptive Statistical Block Based Approach," International Journal of Engineering and Advanced Technology (IJEAT), vol. 6, no. 4, pp. 1-9, 2017.

[36] N. K. T. El-Omari, A. H. Omari, O. F. Al-Badarneh, and H. Abdel-Jaber, "Scanned Document Image Segmentation Using Back-Propagation Artificial Neural Network Based Technique," International Journal of Computers and Communications, vol. 6, no. 4, pp. 183-190, 2012.

[37] S. Alghyaline, N. K. T. El-Omari, R. M. Al-Khatib, and H. Al-Kharbshh, "RT-VC: an Efficient Real-Time Vehicle Counting Approach," Journal of Theoretical and Applied Information Technology (JATIT), vol. 97, no. 7, pp. 2062-2075, 2019.

[38] P. D. P. Reddy, V. C. V. Reddy, and T. G. Manohar, "Whale Optimization Algorithm for Optimal Sizing of Renewable Resources for Loss Reduction in Distribution Systems," Renewables: Wind, Water, and Solar, vol. 4, no. 1, pp. 1-13, 2017, doi: 10.1186/s40807-017-0040-1.

[39] L. R. Ford and D. R. Fulkerson, "Maximal Flow Through a Network," Canadian Journal of Mathematics (CJM), vol. 8, no. 3, pp. 399-404, 1956.

[40] R. Masadeh, A. Alzaqebah, B. Smadi, and E. Masadeh, "Parallel Whale Optimization Algorithm for Maximum Flow Problem," Modern Applied Science, vol. 14, no. 3, pp. 30-44, 2020, doi: 10.5539/mas.v14n3p30.

[41] O. M. Surakhi and H. A. Ofeishat, "A Parallel Genetic Algorithm for Maximum Flow Problem," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 8, no. 6, pp. 159-164, 2017.

[42] R. L. Brandt, "Jordan Unveils PS3-based Supercomputer," High Performance Computing (HPC), HPCwire, 2020. https://www.hpcwire.com/2013/03/07/jordan_s_25_teraflop_playstation_3/ (accessed Jun. 12, 2020).

[43] N. K. T. El-Omari, "An Efficient Two-level Dictionary-based Technique for Segmentation and Compression of Compound Images," Modern Applied Science, vol. 14, no. 4, pp. 52-89, 2020, doi: 10.5539/mas.v14n4p52.

[44] CP-Algorithms, "Maximum Flow - Ford-Fulkerson and Edmonds-Karp," 2020. https://cp-algorithms.com/graph/edmonds_karp.html (accessed Aug. 01, 2020).

[45] Wikipedia Contributors, "Swarm Behaviour," Wikipedia, The Free Encyclopedia, 2020. https://en.wikipedia.org/wiki/Swarm_behaviour (accessed Jul. 05, 2020).

[46] B. N. Vachaku, "A Reflective Swarm Intelligence Algorithm," IOSR Journal of Computer Engineering (IOSR-JCE), vol. 14, no. 4, pp. 44-48, 2013.

[47] Wikipedia Contributors, "Sea Lion," Wikipedia, The Free Encyclopedia, 2020. https://en.wikipedia.org/w/index.php?title=Sea_lion&oldid=966030165 (accessed Jul. 25, 2020).

[48] Wikipedia Contributors, "Speed of Sound," Wikipedia, The Free Encyclopedia, 2020. https://en.wikipedia.org/w/index.php?title=Speed_of_sound&oldid=968529357 (accessed Jul. 24, 2020).
Author’s Profile
Nidhal K. T. El-Omari received his B.Sc. in
Computer Science and his M. Eng. degree in
Computer Engineering in 1986 and 2005,
respectively, both from Yarmouk University, Irbid-
Jordan. In 1989, he received his Higher Diploma of
Branch Automation Officer from the Department of
Defense (DoD), Fort Gordon / Georgia-USA. In
2008, he received a doctorate in Computer
Information Systems and Image Processing from
The Arab Academy for Banking and Financial
Science (AABFS), Amman-Jordan.
He joined the Information Technology Directorate of the Jordanian
Ministry of Defense in 1986 and retired in 2009. During those 24 years,
he chaired a number of IT-related departments including the Systems
Follow-up Department, Technical Support Department, and Automation
Department. He has been at the Faculty of IT of WISE University since 2009, during which time he has worked as the director of the Computer Center, the Chair of the Department of Computer Science and Basic Science, and the head of the Department of Software Engineering. Since 2015, he has been an Associate Professor. His research interests include, but are
not limited to, image compression & segmentation, evolutionary
computation, heuristic optimization, and methodologies for building both
secure and strategical-efficient software. Dr. El-Omari has authored/co-
authored two computer books and published more than thirty research
articles and conference papers in top-quality journals and conference
proceedings. He can be reached via e-mail at [email protected] or [email protected].