The H5N1 Algorithm - T N Hoa
Thang Le-Xuan1, Hoa Tran Ngoc 2*, Bui Tien Thanh 2*
1 DX Laboratory, The University of Transportation and Communications Limited Company (UCT), Vietnam.
2 Faculty of Civil Engineering, University of Transport and Communications, Hanoi, Vietnam.
Abstract:
This research proposes a novel metaheuristic algorithm motivated by the H5N1 avian influenza virus
infection of poultry and humans, taking into account its genetic structure and replication mechanism. The
algorithm aims to discover optimal solutions for optimization problems by mimicking the adaptive behavior
and evolutionary process of the H5N1 virus. The H5N1 algorithm is presented in-depth in this article, including
its main components and operators. Two variants of the algorithm, namely S-H5N1 and M-H5N1, are
introduced to address Single-Objective Problems (SOPs) and Multi-Objective Optimization Problems (MOPs),
respectively. The effectiveness of the algorithm is assessed using a set of benchmark tasks, demonstrating its
superiority over other cutting-edge optimization algorithms. The empirical findings demonstrate the H5N1
algorithm's exceptional convergence and its ability to generate solutions of superior quality. The proposed
algorithm presents a highly promising approach to address complex optimization problems that pose
significant challenges to conventional algorithms.
1 Introduction
Meta-heuristic techniques have experienced a remarkable surge in popularity over the past decade. This
surge can be attributed to several factors including their flexibility, gradient-free mechanisms, and ability to
avoid local optima. Metaheuristics tackle optimization problems by treating the inputs and outputs as a "black
box", obviating the need to compute derivatives of the objective over the search space. This characteristic endows them with
remarkable adaptability in addressing a wide range of problem domains. As stochastic optimization methods,
these algorithms leverage random operators to mitigate the risk of becoming trapped in local solutions,
particularly in the presence of real-world problems characterized by numerous local optima.
Metaheuristic algorithms have primarily been classified into four main groups: Evolutionary Algorithms
(EAs), Swarm-based Algorithms (SAs), Physics-based Algorithms (PAs), and Human-based Algorithms
(HAs). Within the branch of Evolutionary Algorithms, these algorithms take inspiration from the principles of
natural evolution to tackle optimization problems. Noteworthy algorithms in this category encompass the
Genetic Algorithm (GA) [1], Evolutionary Strategies (ES) [2], Genetic Programming (GP) [3], Differential
Evolution (DE) [4] and so on. Each of these algorithms incorporates concepts derived from Darwinian
evolutionary theory, adapting them for the purpose of optimization tasks.
This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4519770
Moving on to the realm of Swarm-based Algorithms, these approaches find inspiration in the collective
behavior observed in nature, such as the flocking of birds, the organization of ant colonies, and the swarming
of bees. Prominent methodologies within this group include Particle Swarm Optimization (PSO) [5], Cuckoo
Search (CS) [6], Ant Colony Optimization (ACO) [7], [8], and Artificial Bee Swarm (ABS) [9]. These
algorithms emulate the collective behavior of species in their search for optimal solutions. For example, ACO
mirrors the behavior of ants in finding the shortest path between their colony and a food source, while PSO
emulates the navigation and predation behavior of birds. Each of these swarm intelligence-based algorithms
effectively harnesses the dynamics of simple creatures' social behaviors to solve complex optimization
problems, thereby replicating nature's sophisticated problem-solving strategies in computational environments.
Moving into the realm of Physics-based Algorithms, these methodologies are developed based on the
principles of physics, incorporating concepts such as gravitation, electrostatic interaction, thermal diffusion,
and so forth. Notable algorithms within this subset include the Gravitational Search Algorithm (GSA) [10],
Charged System Search (CSS) [11], Big Bang-Big Crunch (BB-BC) [12], and Central Force Optimization
(CFO) [13]. Each algorithm within this group applies a specific physical law to guide the process of searching
for optimal solutions. These physics-inspired algorithms extrapolate natural phenomena into a computational
environment, embodying the fundamental principles of physics to metaphorically represent the process of
optimization.
Moreover, within the domain of Human-based Algorithms, these techniques simulate human behaviors
and decision-making processes. Noteworthy algorithms within this domain include the Group Search
Optimizer (GSO) [14], Tabu Search (TS) [15], Harmony Search (HS) [16], and Teaching-Learning Based
Optimization (TLBO) [17]. Each of these algorithms emulates the search, learning, and optimization processes
observed in humans. These techniques showcase a remarkable convergence of artificial intelligence and human
cognitive processes, exemplifying an adaptive search mechanism inspired by collective human learning and
decision-making procedures.
However, it is crucial to acknowledge that, as demonstrated by the No-Free-Lunch (NFL) theorem [18],
no single algorithm can universally solve all optimization problems. This emphasizes the inherent limitations
of any proposed algorithms, including those discussed in this study. In simple terms, different metaheuristic
algorithms tend to exhibit similar performance when confronted with diverse optimization problems. In solving
a particular set of problems, the efficacy of an algorithm does not ensure its success when applied to different
test problems. The performance and suitability of an algorithm may vary depending on the characteristics of
the problem at hand. Therefore, thorough evaluation and adaptation are essential when applying algorithms to
diverse problem domains. This underscores the significance of developing innovative and specialized
algorithms that are specifically tailored for specific fields.
Based on the evidence, this research paper introduces a novel metaheuristic algorithm that draws
inspiration from the operational mechanisms of viruses. Rigorous testing on diverse optimization benchmarks
heralds the emergence of a novel branch of metaheuristics known as "virus-based algorithms (VAs)". These
VAs specifically leverage noteworthy characteristics and operational mechanisms observed in
viruses, encompassing their transmission dynamics, dispersal characteristics, genetic properties, and mutative
capabilities that enable them to adapt to hostile environments. The paper extensively presents two versions of
the algorithm: S-H5N1 for SOPs and M-H5N1 as an extension catering to MOPs. The operational principles and the innovative aspects of S-H5N1 and M-H5N1 are comprehensively detailed in Section 3. A comprehensive review
of relevant literature and previous works accompanies the proposed algorithms, shedding light on the
underlying mathematical model and the inspiration behind the proposed solution. Moreover, qualitative and
quantitative results are provided to effectively demonstrate the algorithm's efficacy. Furthermore, the practical
application of this algorithm to address a range of challenging real-life problems is demonstrated.
2 Related works
In this section, an overview of cutting-edge stochastic optimization is provided, encompassing various
branches. The primary focus of this section is to highlight the difficulties and notable research carried out in
addressing SOPs and MOPs.
A single-objective optimization problem can be stated in the following general form:
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒: 𝑓(𝑥)
𝑆𝑢𝑏𝑗𝑒𝑐𝑡 𝑡𝑜: 𝑔𝑖(𝑥) ≥ 0, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑛}
ℎ𝑖(𝑥) = 0, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑚}
𝑙𝑏𝑖 ≤ 𝑥𝑖 ≤ 𝑢𝑏𝑖, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑑}
Where 𝑛 signifies the number of inequality constraints, 𝑚 represents the number of equality constraints, and 𝑑 denotes the number of variables. The lower bound of the 𝑖th variable is represented by 𝑙𝑏𝑖, while the upper bound of the 𝑖th variable is denoted by 𝑢𝑏𝑖.
The primary challenge in addressing optimization problems stems from the extensive number of
variables involved. The search space is defined by a combination of variables, objectives, variable ranges, and
constraints relevant to the given problems. While the space can be visually represented and observed in 1D,
2D, and 3D forms on Cartesian coordinates, it becomes increasingly difficult to visualize dimensions beyond
3. This inherent limitation presents the initial hurdle to overcome.
In reality, variables can be either continuous or discretized, resulting in continuous or discrete search
spaces. Hence, the range of variables must be defined within distinct search spaces for each respective problem.
This poses a challenge when determining the variable range for each problem because while many optimization
problems involve constraints on variables, there are instances where a specific range cannot be defined.
Furthermore, search spaces need to have specific constraints as they can generate undesired gaps in the
problem where certain regions are not suitable. Consequently, solutions that violate constraint regions are
referred to as infeasible solutions, while solutions within the constraint regions are called feasible solutions.
The terms "feasible" and "infeasible" are employed to describe the regions within and outside the constraint
regions, respectively, in the search space. Equipping a problem with a constrained search space can potentially
lead to reduced algorithm performance compared to an unconstrained space [19]. Therefore, the handling of
constraints necessitates the provision of suitable operators in optimization techniques [20].
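As one illustration of such a constraint-handling operator, a static penalty can fold constraint violations into the objective so that any unconstrained optimizer can be applied; the penalty weight and the constraint functions below are hypothetical choices, not a prescription from this paper.

```python
def penalized(f, gs, hs, rho=1e6):
    """Wrap objective f with a static penalty for inequality constraints
    g_i(x) >= 0 and equality constraints h_i(x) = 0 (illustrative choice)."""
    def F(x):
        viol = sum(max(0.0, -g(x)) for g in gs) + sum(abs(h(x)) for h in hs)
        return f(x) + rho * viol
    return F

# Hypothetical problem: minimize x^2 subject to x - 1 >= 0.
F = penalized(lambda x: x * x, gs=[lambda x: x - 1], hs=[])
print(F(2.0), F(0.0))  # feasible point unpenalized; infeasible point penalized
```

Any infeasible point is pushed far above every feasible one, so the search is steered back into the feasible region.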
The speed at which an optimization algorithm advances towards the global minimum, referred to as
convergence speed, is of great significance. Striving for rapid convergence may lead to being stuck at local
optima, whereas sudden changes in solutions can help evade local optima but decelerate progress towards the
global minimum. Striking a balance between these factors poses a significant challenge for algorithms
addressing real-world problems. Convergence speed plays a critical role in attaining a precise approximation
of the global minimum.
However, addressing convergence speed is merely one of the challenges encountered by optimization
algorithms. Additional hurdles encompass dealing with deceptive optima [18], [21], isolated optima [22], uncertainty [23], noisy/dynamic objective functions [24], [25], and reliable optima [26].
Optimization algorithms can be divided into two broad categories: deterministic and stochastic. Deterministic algorithms, which form the first category, trace the same path from a given starting point on every run; their principal weakness is a susceptibility to becoming trapped at local optima, resulting from their inherent lack of stochastic behavior when addressing optimization issues.
In contrast, stochastic optimization algorithms, which belong to the second category, utilize random
tn
operators. This introduction of randomness leads to diverse solutions, even with the same initial point, resulting
in a slightly lower level of reliability compared to deterministic algorithms. Nonetheless, stochastic behavior
enables them to avoid local minima, representing a significant advantage of stochastic algorithms.
Random optimization algorithms bifurcate into two types: individual and collective. The first type of
optimization begins and proceeds with a single solution, which is randomly modified and enhanced through
predefined steps or upon meeting a termination criterion. This category includes acclaimed algorithms such as
TS[15] and Simulated Annealing (SA)[27]. These algorithms provide the benefit of reduced computational
expense and a decreased need for function evaluations. In contrast, collective methods entail creating
multiple random solutions and iteratively improving them during the optimization procedure. In the search
space, the solution set cooperatively ascertains the optimal global value. This sector includes notable
algorithms like PSO[5], DE[4] and ACO[8]. The main advantage of collective algorithms is their ability to
reduce the likelihood of getting trapped at local minima due to the larger number of solutions considered.
A notable advantage of these algorithms is their straightforwardness, as the majority of them mimic
simple patterns observed in groups of animals like flocks, swarms, schools, or herds. It is quite apparent that
the real search space consists of a substantial quantity of local solutions. Consequently, the amalgamation of
solutions and the randomness element aid in uncovering enhanced solutions for practical problems. Collective
algorithms exhibit remarkable adaptability, demonstrating their capability to collectively handle various search
space challenges. Treating the problem as a black box, stochastic collective algorithms focus solely on its
inputs, outputs, and constraints. As a result, they eliminate the requirement for gradient information.
Stochastic collective algorithms generally conduct their search through a two-phase process comprising exploration and exploitation. During the exploration phase, the main objective is to
identify promising regions within the search space while avoiding local solutions. After a sufficient period of
exploration, the solutions commence a pattern of gradual alterations and execute localized maneuvers – a
process identified as exploitation. Herein, the principal objective is to augment the precision of the most
favourable solutions ascertained during the exploration phase. Although circumvention of local minima may
transpire within the exploitation phase, it should be noted that the range of the search during this phase does
not match the expanse covered during the exploration phase. Consequently, solutions tend to dodge local
solutions proximal to the global optimum value [28].
To summarize, algorithms for SOPs, whether deterministic or stochastic, are pivotal in providing
effective solutions to intricate problems. Deterministic algorithms exhibit consistency, whereas their stochastic
counterparts bring to the table a valuable flexibility with their capacity to bypass local minima. Random
optimization algorithms, both individual and collective, showcase distinct strengths and challenges. Thanks to
these advantages, such random optimization algorithms have experienced considerable development and
application across various sectors. Individual algorithms earn praise for their reduced computational cost,
while collective ones stand out with broad-ranging applications and exceptional adaptability [5], [15], [27].
Both varieties abide by a dual-phase method, exploring and exploiting the search space to locate optimal
solutions. The transition between these phases is typically orchestrated by adaptive mechanisms, ensuring the
best approximation of the solution. The nuanced variations, coupled with their specific advantages, certify the
ongoing relevance and significance of these algorithms in diverse scientific and industrial applications [29].
MOPs, also referred to as multi-objective optimization (MOO), present a set of distinct challenges due
to the need for simultaneous optimization of multiple objective functions, which frequently exhibit conflicting
objectives. It is constructed as follows:
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒: 𝐹(𝑥) = [𝑓1(𝑥), 𝑓2(𝑥), …, 𝑓𝑙(𝑥)]
𝑆𝑢𝑏𝑗𝑒𝑐𝑡 𝑡𝑜: 𝑔𝑖(𝑥) ≥ 0, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑛} (2-7)
ℎ𝑖(𝑥) = 0, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑚}
𝑙𝑏𝑖 ≤ 𝑥𝑖 ≤ 𝑢𝑏𝑖, {𝑖|𝑖 ∈ ℤ, 1 ≤ 𝑖 ≤ 𝑑}
Where 𝑙 denotes the number of objectives, 𝑛 represents the number of inequality constraints, 𝑚 indicates the number of equality constraints, and 𝑑 denotes the number of variables. The lower bound of the 𝑖th variable is represented by 𝑙𝑏𝑖, while the upper bound of the 𝑖th variable is denoted by 𝑢𝑏𝑖.
One of the biggest challenges in MOPs is dealing with trade-offs among objectives [30]. Due to the
conflicting nature of objectives (e.g., minimizing cost while maximizing performance), enhancing one
objective may result in the degradation of another. Solutions are pursued based on the concept of Pareto
optimality, which signifies a state where no objective can be improved without sacrificing the performance of
at least one other objective. Identifying the Pareto frontier, which includes all Pareto optimal solutions, is
computationally challenging and can become intractable for high-dimensional problems.
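The Pareto-dominance relation described above can be sketched in a few lines; the objective vectors below are hypothetical, and all objectives are assumed to be minimized.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective vectors (both objectives minimized)
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # (3, 4) and (5, 5) are dominated and dropped
```

This brute-force filter is quadratic in the number of points, which is one concrete reason identifying the Pareto frontier becomes expensive for large populations.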
The escalating challenge of scalability becomes evident as the quantity of objectives within a multi-
objective optimization (MOO) problem magnifies [31]. This escalation is often referred to as the "curse of
dimensionality"[32], a phenomenon that exponentially amplifies the intricacy of the problem, thus
complicating the identification of optimal solutions. This intensification of complexity is particularly evident
when the objectives under consideration do not possess the attribute of independence. Concurrently, the
expansion of the Pareto front exhibits an exponential correlation with the burgeoning number of objectives.
This growth leads to a significantly heightened difficulty in discerning and accurately representing all Pareto
optimal solutions. The profound scalability challenges in the realm of MOO arise as a direct result of the
amplification in the complexity and dimensionality of the problem space.
Decision-making and expressing preferences pose substantial challenges, particularly when dealing with
a set of Pareto optimal solutions. The decision-maker, based on their preferences, must discern the most
applicable solution. However, articulating preferences in high-dimensional spaces is nontrivial due to cognitive
overload, as noted by Sweller[33]. Furthermore, preferences are seldom static. As Tsoukiàs[34] highlights,
preferences can fluctuate over time or depending on the context, introducing an additional layer of complexity.
These complexities necessitate the development of interactive and adaptive MOO algorithms capable of
managing dynamic preferences [25].
Real-world optimization problems frequently encounter uncertainties and noisy evaluations, which can
significantly influence the efficiency of MOO algorithms and the quality of the solutions[24]. Consequently,
it is crucial for these algorithms to exhibit robustness against uncertainties and have the capacity to handle
noisy data. Robust MOO algorithms aim to identify solutions that perform optimally even under uncertain
conditions, thereby enhancing reliability and applicability[25].
Numerous MOO algorithms exist, each with unique strengths and weaknesses, making selection
challenging. Performance assessment in MOO is complex, typically involving comparison of sets of solutions
rather than individual ones. Multiple performance metrics have been proposed, such as hypervolume indicator
or inverted generational distance, yet no consensus exists on the most effective metric[35].
The aforementioned challenges underscore the intricate nature of multi-objective problems,
necessitating the utilization of advanced methodologies and tools for their effective resolution. Ongoing
advancements in computational methods, machine learning, and decision analysis are actively contributing to
the continuous progression within this domain.
In general, MOO methods follow either a pre-image (a priori) approach or a post-image approach. The pre-image approach, also known as Scalarization techniques, has been acknowledged as
an effective means of converting a multi-objective problem into a SOP. This is achieved by transforming the
multiple objective functions into a single scalar-valued function, enabling the application of standard single-
objective optimization algorithms. Numerous scalarization methods are available, with the Weighted Sum and
the ε-constraint method being particularly prevalent in practice [36], [37].
Weighted Sum method (WSM):
The WSM, or the Linear WSM, is perhaps the simplest and the most common scalarization method [36].
This approach involves assigning weights to each of the objective functions, typically determined by an expert,
indicating the relative importance of each objective. A single objective is then formed by taking the weighted
sum of the objectives.
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒: 𝐹(𝑥) = ∑_{𝑖=1}^{𝑙} 𝑤𝑖𝑓𝑖(𝑥), 𝑖 ∈ ℤ (2-9)
Here, 𝑙 denotes the number of objectives, 𝑤𝑖 is the weight assigned to the 𝑖th objective function, and 𝑓𝑖
is the original 𝑖th objective function. While this method is straightforward and intuitive, it comes with the
limitation that it may fail to find Pareto-optimal solutions when the objective functions are non-convex, or the
Pareto front is non-convex. Moreover, it is unable to find solutions in the non-convex parts of the Pareto front
[38].
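As an illustration of Eq. (2-9), the sketch below scalarizes a hypothetical bi-objective problem; the objective functions, the equal weights, and the crude grid search are illustrative stand-ins for an expert's weights and a real single-objective optimizer.

```python
# Hypothetical bi-objective problem: f1 = x^2 and f2 = (x - 2)^2, both minimized.
def f1(x): return x * x
def f2(x): return (x - 2) ** 2

def weighted_sum(x, w=(0.5, 0.5)):
    # Eq. (2-9): F(x) = sum_i w_i * f_i(x)
    return w[0] * f1(x) + w[1] * f2(x)

# A coarse grid search stands in for any single-objective optimizer.
grid = [i / 100 for i in range(0, 201)]
best = min(grid, key=weighted_sum)
print(best)  # equal weights balance the two objectives around x = 1
```

Re-running with different weight vectors traces different points of the (convex part of the) Pareto front.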
𝜺-constraint method
An alternative to the WSM is offered by the ε-constraint method. In this approach, one objective is
chosen as the "optimization objective," while the remaining objectives are considered constraints with
specified threshold values (ε-values). The problem is subsequently solved for various ε-values, resulting in
different trade-off solutions along the Pareto front.
𝑀𝑖𝑛𝑖𝑚𝑖𝑧𝑒: 𝑓1(𝑥) (2-10)
𝑆𝑢𝑏𝑗𝑒𝑐𝑡 𝑡𝑜: 𝑓𝑖(𝑥) ≤ 𝜀𝑖, {𝑖|𝑖 ∈ ℤ, 2 ≤ 𝑖 ≤ 𝑛}, 𝑥 ∈ 𝛺
Where 𝑛 denotes the number of objectives, 𝑓1(𝑥) is the optimization objective; 𝑓2(𝑥), …, 𝑓𝑛(𝑥) are the objectives treated as constraints; 𝜀2, …, 𝜀𝑛 are the thresholds for the constraint objectives; and 𝛺 represents the feasible region defined by the set of constraints.
This method has the capability to generate the entire Pareto front, thus it can provide more decision-
making information than the Weighted Sum Method. However, one of its main drawbacks is that it often
requires a substantial amount of computational resources due to the need to solve the problem multiple times
for different 𝜀-values [39].
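The ε-constraint sweep can be sketched as follows; the objectives are hypothetical, and a feasibility filter over a coarse grid stands in for a proper constrained solver.

```python
# Hypothetical bi-objective problem: minimize f1 subject to f2 <= eps.
def f1(x): return x * x
def f2(x): return (x - 2) ** 2

grid = [i / 100 for i in range(0, 201)]

def solve(eps):
    """Minimize f1 over grid points satisfying the constraint f2(x) <= eps."""
    feasible = [x for x in grid if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

# Sweeping eps traces distinct trade-off points along the Pareto front.
front = [(f1(x), f2(x)) for eps in (4.0, 1.0, 0.25) for x in [solve(eps)]]
print(front)
```

Each ε-value requires a full solve, which illustrates the method's main computational drawback noted above.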
The multi-objective formulation of the problem is retained by the a posteriori approach. This approach
offers a key advantage: it allows the determination of the Pareto-optimal set in a single run. Another notable benefit is the ability to approximate several shapes of the Pareto-optimal front without the need for expert intervention in determining weights. In contrast to a priori methods, this approach offers multiple distinct
solutions for the decision maker. However, posteriori optimization is confronted with the drawback of
requiring the consideration of multiple objectives and the utilization of a specialized mechanism for
determining the set and front of Pareto-optimal solutions. Consequently, this increases the complexity and
computational cost associated with a posteriori optimization. Despite these drawbacks, the benefits of a
posteriori MOO outweigh its limitations. Consequently, the primary emphasis of this study will be on the
optimization of multiple objectives using the a posteriori approach.
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) sorts the population into non-dominated fronts and performs selection between these ranked individuals. A lower rank indicates a higher probability of selection and a greater contribution towards the creation of a new population. Additionally, a crowding operator is integrated into
NSGA-II, aiding in the preservation of a diverse distribution of solutions that are non-dominated across all
objectives.
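One common form of crowding operator assigns each solution a crowding distance within its front; the sketch below is an illustrative computation on hypothetical objective vectors, not NSGA-II's full procedure.

```python
def crowding_distance(front):
    """Per-solution crowding distance: sum over objectives of the normalized
    gap between each solution's neighbors when sorted by that objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if hi == lo:
            continue
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (front[c][k] - front[a][k]) / (hi - lo)
    return dist

front = [(0.0, 4.0), (1.0, 1.0), (2.25, 0.25)]
print(crowding_distance(front))  # boundary points receive infinite distance
```

Solutions in sparsely populated regions get larger distances and are preferred, which is how the operator preserves a diverse spread along the front.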
Another widely used method, the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), decomposes the given problem into numerous subproblems, each functioning as an individual single-objective instance. The
distinctive feature of MOEA/D compared to pure decomposition methods is how these subproblems are
optimized. Each subproblem doesn't operate in isolation; instead, it draws information from its neighboring
subproblems during the optimization process. Evolutionary operators are then utilized to combine and evolve
these subproblems across subsequent generations. For more enhancements and variations on this algorithm,
readers are referred to the study detailed in [44].
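A common decomposition choice in MOEA/D-style methods is the Tchebycheff scalarizing function; the weight vector and assumed ideal point z* below are hypothetical, and this is only one of several decomposition schemes.

```python
def tchebycheff(fx, weights, z_star):
    """Tchebycheff scalarizing function used in decomposition-based MOO:
    g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|. Each weight vector defines
    one single-objective subproblem."""
    return max(w * abs(f - z) for w, f, z in zip(weights, fx, z_star))

# Hypothetical objective vector, weight vector, and ideal point z*.
print(tchebycheff((2.0, 3.0), (0.5, 0.5), (0.0, 0.0)))
```

Varying the weight vector across the population yields the grid of neighboring subproblems that MOEA/D optimizes cooperatively.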
Alternative optimization techniques anchored in Differential Evolution (DE) [45] and Particle Swarm
Optimization (PSO) [46] are also achieving prominence in the field of MOO. Yet these methods are not without
their intricacies, bringing forth distinct challenges. One such challenge involves the convoluted task of
identifying the optimal solution amidst a set of non-dominated solutions. Another challenge lies in fostering a
diverse portfolio of solutions, which is vital for achieving a balanced and comprehensive exploration of the
problem space.
To confront and surmount these challenges, the scientific community has put forth a range of innovative
strategies. The roulette wheel selection method [47] is one such strategy, leveraging principles of probability
to ensure each solution has a chance of being selected, thereby promoting diversity. Crowding methods [48],
on the other hand, prioritize the preservation of population diversity in an attempt to foster a broad range of
solutions. Another approach, the sigma method [46], attempts to regulate the selection pressure, thereby
preventing premature convergence and maintaining diversity.
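Roulette-wheel selection, the first of these strategies, can be sketched as follows; the fitness values are hypothetical, and fitness maximization with non-negative values is assumed.

```python
import random

def roulette_select(fitness, rng=random.Random(0)):
    """Roulette-wheel selection: each index is chosen with probability
    proportional to its (non-negative) fitness value."""
    total = sum(fitness)
    pick = rng.random() * total
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if pick <= acc:
            return i
    return len(fitness) - 1  # guard against floating-point round-off
```

Because even low-fitness solutions retain a nonzero selection probability, the operator promotes diversity rather than always exploiting the current best.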
To wrap up, tackling MOPs remains a formidable task due to the inherent trade-offs between multiple
conflicting objectives. Nonetheless, the advent of sophisticated algorithms, coupled with appropriate handling
strategies, has made it possible to seek out efficient and effective solutions for these complex problems. It's
worth noting that this work emphasizes a posteriori MOO, given that the advantages it presents outweigh the
shortcomings when compared to a priori methods.
The challenges outlined above, together with the absence of any existing optimization algorithm inspired by the H5N1 virus in the reference materials, serve as the primary motivations for this work.
The following section will discuss the main sources of inspiration and propose a mathematical model
for simulating an operating mechanism of the H5N1 virus. Additionally, two optimization algorithms will be
introduced to address optimization problems with both single-objective and multi-objective aspects.
3 H5N1 algorithms
3.1 Inspiration
H5N1 is an Influenza A virus subtype that has an HA 5 protein and an NA 1 protein. The name H5N1
refers to the subtypes of surface antigens present on the virus: hemagglutinin type 5 and neuraminidase type
1. H5N1 virus belongs to the Orthomyxoviridae family [49], with a shape similar to a round particle, with a
diameter of about 80-120 nanometers and a length of about 200-300 nanometers. H5N1 virus has a lipid
membrane surrounding a layer of protein, called hemagglutinin (𝐻), and another protein layer, called
neuraminidase (𝑁). 𝐻 and 𝑁 are surface proteins of the virus, playing an important role in the invasion and
spread of the virus in human and animal bodies. The shape of a H5N1 is shown in Figure 1.
Figure 1. The shape of H5N1
H5N1 is a highly pathogenic strain of avian influenza virus that has caused several outbreaks in birds
and humans. This virus is known for its ability to cause severe respiratory illness and even death in infected
individuals. The study of H5N1 virus has been ongoing for several decades, but its behavior and characteristics
are still not fully understood [50].
This paper focuses on the remarkable characteristics of the H5N1 virus, specifically its captivating
capacity to spread, mutate, and adapt to different environments. Among the most intriguing aspects of the
H5N1 virus is its rapid transmission among avian and poultry populations. This virus exhibits a high degree
of contagion among birds, posing a significant threat as it can swiftly infect and decimate entire flocks.
Although the precise mechanism of transmission remains partially elusive, current knowledge suggests that
the virus is primarily spread through contact with infected birds, contaminated surfaces, or even through
airborne particles.
Another interesting aspect of H5N1 virus is its ability to mutate and adapt to new environments [51].
This makes it a constant threat to both animal and human populations, as it can rapidly evolve and develop
rin
Figure 2. Transmission pathways of the H5N1 virus: reservoir hosts (migratory birds); poultry (chickens, ducks and geese, and other terrestrial birds); spill-over hosts, including non-migratory wild birds (Falconiformes, e.g. eagles; Passeriformes, e.g. crows, magpies, myna; Psittaciformes; Ciconiiformes, e.g. herons, egrets; Gruiformes, e.g. coots) and mammals (cats and other felids, viverrids, stone marten, pigs, and humans).
3.2 Mathematical model
In this subsection, a novel model of the H5N1 virus is introduced with the aim of tackling optimization
problems. To establish the mathematical representation of the H5N1 virus, the population affected by the virus
is initially classified into two distinct groups: poultry and humans. The H5N1 virus mainly spreads to poultry
and livestock, and then to humans with a lower transmission rate through direct contact with poultry and their
excrement, contact with contaminated household items or food, or contact with infected individuals.
Similar to other metaheuristic techniques, the H5N1 virus population is situated within a 𝑑-dimensional search space, where 𝑑 represents the number of variables, and the locations of all viruses are maintained within a two-dimensional matrix known as 𝑋 = {𝑥𝑖𝑗}. The virus's movement and exploration of new locations are guided
by various parameters, such as the transmission rate to poultry or humans, represented by the parameter 𝑃1. A
survival parameter, 𝑃2, is utilized to evaluate the host's immune response, and if the host's immune response
is robust, the virus will mutate to create a new strain that is more adapted to the host. If the host's immune
response is weak, the virus will adjust its position within the H5N1 virus population. Moreover, the intended
target of the virus, which could be a host such as a poultry or a human, is symbolized as 𝐹𝑔, representing the
global best solution in the search space. Similarly, the local best solution for each virus group is denoted as
𝐹𝑝. Therefore, the H5N1 algorithm can serve as a model for describing the H5N1 virus's spread and optimizing
control strategies.
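As with most population-based metaheuristics, the position matrix 𝑋 can be initialized uniformly at random within the variable bounds; the sketch below is a generic initialization with hypothetical population size and bounds, not the authors' reference code.

```python
import random

def init_population(n, d, lb, ub, rng=random.Random(0)):
    """Initialize an n x d virus-position matrix X = {x_ij}, with each
    variable j drawn uniformly from its bounds [lb[j], ub[j]]."""
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(n)]

# Hypothetical search space: 5 viruses, 3 variables, bounds [-1, 1].
X = init_population(5, 3, lb=[-1.0] * 3, ub=[1.0] * 3)
```

The matrix rows then serve as the candidate solutions that the update rules of the algorithm move through the search space.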
As previously mentioned, viruses have the ability to mutate when their host dies or encounters a hostile
environment. Therefore, it is necessary to have an equation that simulates the mutation of the virus when
encountering adverse conditions. To achieve this, a mutation method based on permuting the best positions in
the virus population is proposed in this paper. These permutations enable the algorithm to explore and exploit
optimal positions within the population more effectively, rather than relying solely on a single global optimum.
This helps avoid the algorithm getting trapped in local optima. The permutation method proposed in this paper
is presented by equations (3-1)-(3-5) as follows:
𝑎1 = (𝑗1,𝑗2,…,𝑗𝑛) (3-1)
v
where 𝑗1,𝑗2,…,𝑗𝑛 are random integers from [1,𝑛].
re
Calculate the vector 𝑟𝑡1by rotating the indices in the array ro according to the shift 𝜎1:
Create an array 𝛼2 by shifting the indices in the array 𝛼1 according to the array 𝑟𝑡1, where each element
is incremented by 1 unit:
𝛼2 = 𝛼1(𝑟𝑡1 + 1) (3-3)
Calculate the array 𝑟𝑡2 by rotating the indices in the array 𝑟𝑜 based on the shift 𝜎2, as given by Eq. (3-4).
Create an array 𝛼3 by shifting the indices in the array 𝛼2 according to the array 𝑟𝑡2, with each element
incremented by 1 unit:
𝛼3 = 𝛼2(𝑟𝑡2 + 1) (3-5)
Retrieve the corresponding rows in the matrix 𝐹𝑝 based on the indices in arrays 𝛼1, 𝛼2, and 𝛼3 to generate the permuted populations 𝑝𝑚𝑝1, 𝑝𝑚𝑝2, and 𝑝𝑚𝑝3.
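In code, the permutation chain of Eqs. (3-1)–(3-5) can be sketched as follows. Since Eqs. (3-2) and (3-4) are shown only graphically, the rotation is assumed to be a circular shift by a random offset, and 0-based NumPy indexing replaces the paper's "+1" offset.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
Fp = rng.random((n, 3))                             # local-best matrix, one row per virus

alpha1 = rng.integers(0, n, size=n)                 # Eq. (3-1): random index vector
rt1 = np.roll(np.arange(n), rng.integers(1, n))     # Eq. (3-2): rotate indices by shift sigma1 (assumed circular)
alpha2 = alpha1[rt1]                                # Eq. (3-3): re-index alpha1 via rt1
rt2 = np.roll(np.arange(n), rng.integers(1, n))     # Eq. (3-4): rotate indices by shift sigma2 (assumed circular)
alpha3 = alpha2[rt2]                                # Eq. (3-5): re-index alpha2 via rt2

# Rows of Fp selected by the three index arrays give the permuted populations
pmp1, pmp2, pmp3 = Fp[alpha1], Fp[alpha2], Fp[alpha3]
```

Because 𝛼2 and 𝛼3 are only reorderings of 𝛼1, all three permuted populations draw from the same set of local-best rows, which is what lets the update rules exploit several good positions rather than a single global optimum.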
An alternate formulation is proposed to adjust the location of the virus, which can be used to simulate
the behavior of the H5N1 virus in a population:
𝑥𝑖𝑗 = { 𝑝𝑚𝑝𝑖3,𝑗 + 𝑅𝑖𝑗(𝐹𝑔,𝑗 − 𝐹𝑝𝑖,𝑗),  𝑃𝑎𝑡𝑡𝑎𝑐𝑘 < 𝑃1
      𝑥𝑖𝑗 + [𝑅𝑖𝑗(𝑝𝑚𝑝𝑖1,𝑗 − 𝑝𝑚𝑝𝑖2,𝑗) + 𝑅𝑖𝑗(𝐹𝑔,𝑗 − 𝑥𝑖𝑗)] / 2,  𝑃𝑎𝑡𝑡𝑎𝑐𝑘 ≥ 𝑃1 (3-6)
Eq. (3-6) is used if the probability of the virus attacking poultry, denoted as 𝑃𝑎𝑡𝑡𝑎𝑐𝑘, is less than 𝑃1. In that case, the virus position is updated using a formula that involves the third permuted population 𝑝𝑚𝑝3, the global best position 𝐹𝑔, the local best position 𝐹𝑝, and a matrix 𝑅 containing random numbers, ∀𝑅𝑖𝑗 ∈ [0,1]. Otherwise, if 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 is greater than or equal to 𝑃1, the virus position is updated using a formula that involves the current position 𝑥, the first and second permuted populations 𝑝𝑚𝑝1 and 𝑝𝑚𝑝2, and 𝐹𝑔.
Due to the fast spreading tendency of the virus within the poultry population compared to other populations, 𝑃1 is often selected with values ranging from 0.8 to 0.95 to achieve the best results. The corresponding infection rate when attacking poultry, 𝑃1, is therefore higher, and an equation is needed to help the viruses in the population infect individuals quickly and accurately. When 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 < 𝑃1, the virus is guided to evolve in the direction of the most optimal value within the population.
On the other hand, when hosts die, encounter unfavorable environments, or possess strong resistance that can eliminate viruses, especially in human bodies, the viruses utilize their mutation and diffusion abilities to seek new environments in which to survive. This is the ability to explore. The diffusion is randomly selected based on their previous position, as shown in the second equation of Eq. (3-6).
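The two update rules of Eq. (3-6) can be sketched for a single virus as below. All vectors are illustrative stand-ins: `pmp1_i`, `pmp2_i`, `pmp3_i` denote the rows of the permuted populations assigned to virus 𝑖, and `Fp_i` its local best.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
x = rng.random(d)                       # current position of virus i
Fg = rng.random(d)                      # global best position
Fp_i = rng.random(d)                    # local best of virus i's group
pmp1_i, pmp2_i, pmp3_i = rng.random((3, d))  # rows of the permuted populations
P1 = 0.9
P_attack = rng.random()                 # probability of attacking poultry
R = rng.random(d)                       # random coefficients R_ij in [0, 1]

if P_attack < P1:                       # first rule: exploit around the permuted best
    x = pmp3_i + R * (Fg - Fp_i)
else:                                   # second rule: diffuse from the previous position
    x = x + (R * (pmp1_i - pmp2_i) + R * (Fg - x)) / 2
```

The first branch relocates the virus near a permuted local-best row, pulled by the gap between the global and local bests; the second averages a random difference of permuted rows with a pull toward 𝐹𝑔, which keeps some of the previous position and thus explores.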
Although the assault on the host organism is effectively captured in Eq. (3-6), the way the virus mutates in response to unfavorable conditions in both poultry and human environments has not yet been modeled mathematically. As a result, an equation is needed to capture the mutation characteristic of the virus:
𝑥𝑜𝑙𝑑 = 𝑥,  𝑥𝑖,𝑚𝑢𝑡𝑎𝑡𝑒,𝑗 = 𝑐 ∗ (𝑥𝑖,𝑛𝑒𝑤,𝑗 + 𝑥𝑖,𝑜𝑙𝑑,𝑗) ∗ 𝑟3 ∗ 𝑤 / 2 (3-7)
In Eq. (3-7), 𝑥𝑜𝑙𝑑 represents the previous position of 𝑥, 𝑥𝑛𝑒𝑤 refers to the new search position of 𝑥 searched within a radius of 𝑟2, and 𝑥𝑚𝑢𝑡𝑎𝑡𝑒 denotes the location of the virus following mutation. 𝑥𝑚𝑢𝑡𝑎𝑡𝑒 is computed from the mean value of 𝑥𝑛𝑒𝑤 and 𝑥𝑜𝑙𝑑, with stochasticity and weighting incorporated to ensure convergence. To describe the environment and the challenges that the virus encounters, two parameters, 𝑃𝑎𝑑𝑎𝑝𝑡 and 𝑃2, are utilized. 𝑃𝑎𝑑𝑎𝑝𝑡 is a random parameter that characterizes the adaptability of the virus, while 𝑃2 represents the percentage of host survival after infection or the mutation rate of the virus, i.e., the probability of virus adaptation. If 𝑃𝑎𝑑𝑎𝑝𝑡 ≤ 𝑃2, the environment is favorable for the virus to grow and Eq. (3-6) is used. Conversely, if 𝑃𝑎𝑑𝑎𝑝𝑡 > 𝑃2, the environment is unfavorable and the virus must mutate to survive, using Eq. (3-7).
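The mutation step of Eq. (3-7) can be sketched as below. How 𝑥𝑛𝑒𝑤 is sampled within the radius 𝑟2 is not fully specified in the text, so a uniform perturbation around 𝑥𝑜𝑙𝑑 is assumed; the coefficient values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
x_old = rng.random(d)               # previous position of the virus
c, w, r2 = 0.5, 0.8, 0.1            # illustrative values of c, w, and the search radius r2
r3 = rng.random()                   # random scalar in [0, 1]

# Assumed sampling of x_new within radius r2 around x_old
x_new = x_old + r2 * (2 * rng.random(d) - 1)

# Eq. (3-7): mutated position from the weighted mean of x_new and x_old
x_mutate = c * (x_new + x_old) * r3 * w / 2
```

Because both 𝑐 and 𝑤 shrink over the run, the mutated position is drawn from an ever-tighter neighborhood of the current position, which is what makes mutation a survival mechanism rather than a restart.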
One of the most significant parameters in S-H5N1 is the weight coefficient 𝑤, which controls the exploitation behavior of the virus. It is defined as follows:
𝑤 = 𝑤 ∗ 𝑤𝑑 (3-8)
𝑤𝑑 = 𝑤𝑚𝑖𝑛 + (𝑤𝑚𝑎𝑥 − 𝑤𝑚𝑖𝑛) ∗ 𝑒^(−𝑡/𝑇)
where 𝑡 denotes the 𝑡-th iteration out of the maximum number of iterations 𝑇. The variable 𝑤 represents the weight factor, 𝑤𝑑 is the damping of the weight, and 𝑤𝑚𝑖𝑛 and 𝑤𝑚𝑎𝑥 represent the minimum and maximum weights, respectively.
Additionally, the parameter 𝑐 plays a pivotal role. It is computed using formula (3-9) and is instrumental in managing the virus state, striving to strike an equilibrium between exploration (searching new possibilities) and exploitation (making use of known resources), thereby preventing the algorithm from becoming overly focused on exploitation due to the influence of the weight factor 𝑤. The distribution of 𝑐 throughout the computation process is depicted in Figure 3.
𝑐 = 𝑟4 ∗ 𝑒^(−4∗𝑡/𝑇) (3-9)
Moreover, since mutations do not always yield favorable outcomes, a coefficient is needed to control the adaptiveness of the virus. The formula is presented below:
𝑝 = 1 / (1 + 𝑒^(−10∗(𝑡/𝑇 − 0.5))) + 𝑟5 ∗ (1 − 𝑝) (3-10)
The equation above includes the parameter 𝑝, which controls the adaptability of the virus after mutation, and 𝑟5, a random parameter between 0 and 1. The purpose is to control the adaptability of the virus after mutation based on the number of iterations. Finally, to update the position after mutation, a parameter 𝑝𝑠 with a random value between 0 and 1 is used to determine whether to update the mutated value to the global best value 𝐹𝑔.
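The three schedules of Eqs. (3-8)–(3-10) can be written compactly as one per-iteration update. Since Eq. (3-10) is written implicitly, the 𝑝 entering the 𝑟5·(1 − 𝑝) term is interpreted here as the previous iteration's value, which is an assumption.

```python
import math
import random

random.seed(1)

def coefficients(t, T, w, p_prev, w_min=0.0, w_max=1.0):
    """One iteration of the coefficient updates in Eqs. (3-8)-(3-10)."""
    wd = w_min + (w_max - w_min) * math.exp(-t / T)     # damping of the weight
    w = w * wd                                          # Eq. (3-8)
    c = random.random() * math.exp(-4 * t / T)          # Eq. (3-9), r4 ~ U(0, 1)
    p = 1.0 / (1.0 + math.exp(-10 * (t / T - 0.5))) \
        + random.random() * (1.0 - p_prev)              # Eq. (3-10), r5 ~ U(0, 1)
    return w, c, p
```

Note that the sigmoid term of 𝑝 stays near 0 for the first half of the run and saturates toward 1 in the second half, matching the behavior described for Figure 3.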
[Figure 3: curves of the coefficients 𝑐 and 𝑝 (value, 0–1) versus iterations 𝑇 (0–500); graphical data not recoverable as text.]
Figure 3 depicts the distribution of 𝑝 and 𝑐. It can be observed that 𝑝 is well balanced, as it spreads evenly from 0 to 1 during the first half of the algorithm. This allows the mutated value 𝑥𝑚𝑢𝑡𝑎𝑡𝑒 to be selected easily in the initial stage. During the second half, however, the distribution becomes more concentrated and closer to 1, placing less emphasis on exploring the search space. This is a significant factor in preventing the algorithm from becoming overly focused on exploitation, which is advantageous in expanding the search space.
Before the global best value is updated, the current position of the virus is updated according to Eq. (3-11), where 𝑚𝑥 is the vector of mutation coefficients of the current virus; this vector is updated using a random vector with random coefficients under the permutation condition 𝑝𝑐 of 𝛼3, and 𝑚𝑥𝑝 is the vector of mutation coefficients in the virus population.
Finally, the algorithm will be updated with the global best value using the following equation:
[𝐹𝑔, 𝑋𝑔] = { [𝐹(𝑥𝑚𝑢𝑡𝑎𝑡𝑒), 𝑋𝑚𝑢𝑡𝑎𝑡𝑒], 𝑃𝑎𝑑𝑎𝑝𝑡 ≤ 𝑃2 and 𝑝𝑠 ≤ 𝑝
             [min(𝐹(𝑥)), 𝑋], otherwise (3-12)
The H5N1 algorithm has been mathematically modeled using equations (3-6) to (3-12).
3.2.2 Pseudo-code of S-H5N1
The H5N1 algorithm commences with the generation of a stochastic collection of tentative solutions,
referred to as populations of virus. As the algorithm proceeds along its iterative trajectory, it endeavours to
approximate the feasible positions of proximate optimal solutions by perpetually oscillating between the stages
of exploitation and exploration in a random fashion. Every solution refines its spatial orientation in accordance
with the most superior solution identified hitherto. To modulate the equilibrium between exploration and
exploitation, the algorithm employs the parameters 𝑤 and 𝑐, wherein 𝑤 undergoes a decrement from 1 to 0,
and c experiences an increment from 0 to 1. Additionally, the parameter 𝑝 is instated to govern mutation, its
value descending from 1 to 0 in order to confine the scope of mutations and selectively preserve only those
values that contribute positively, as opposed to indiscriminately integrating all values. The H5N1 algorithm
culminates its operation upon meeting the specified termination criteria. Algorithm 1 delineates the pseudo-
code of the H5N1 algorithm, and a comprehensive depiction of the H5N1 procedure is illustrated in Figure 4.
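The overall loop can be summarized in the following skeleton. This is a deliberately simplified sketch: the permutation machinery of Eqs. (3-1)–(3-5) and the full mutation bookkeeping are reduced to their net effect of contracting the population toward the best environment, and all parameter values are assumptions rather than the paper's tuned settings.

```python
import numpy as np

def s_h5n1_sketch(fitness, lb, ub, N=30, T=300, P1=0.9, P2=0.5, seed=0):
    """Simplified S-H5N1 skeleton: attack moves contract the population toward
    the best environment; a hostile host triggers a small mutation around it."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    X = rng.uniform(lb, ub, (N, d))
    fit = np.array([fitness(x) for x in X])
    g = int(fit.argmin())
    Fg, Xg = float(fit[g]), X[g].copy()
    w = 1.0
    for t in range(T):
        c = rng.random() * np.exp(-4 * t / T)          # Eq. (3-9)
        for i in range(N):
            R = rng.random(d)
            if rng.random() < P1:                      # attack (P_attack < P1)
                if rng.random() <= P2:                 # favorable host
                    X[i] = Xg + R * (Xg - X[i])        # cf. first rule of Eq. (3-6)
                else:                                  # hostile host: mutate near Xg
                    X[i] = Xg + c * w * (rng.random(d) - 0.5)
            else:                                      # diffusion (exploration)
                X[i] = X[i] + R * (Xg - X[i]) / 2      # cf. second rule of Eq. (3-6)
            X[i] = np.clip(X[i], lb, ub)               # boundary check
            f = fitness(X[i])
            if f < Fg:                                 # keep the best environment
                Fg, Xg = float(f), X[i].copy()
        w *= 0.99                                      # damped weight, cf. Eq. (3-8)
    return Fg, Xg
```

Even in this stripped-down form, the interplay of a contracting attack move, a shrinking mutation radius, and elitist retention of 𝐹𝑔 reproduces the exploitation–exploration alternation the pseudocode describes.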
5: While (t < T) do
6:   Calculate 𝑐 using Eq. (3-9)
7:   Calculate 𝑤𝑑 using the second equation in Eq. (3-8)
8:   Generate 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 and 𝑃𝑎𝑑𝑎𝑝𝑡 randomly
12:  for (𝑖 = 1: N) do
13:    Get 𝑚𝑥 where 𝑠𝑚 < 𝑝𝑐
14:    Get 𝑚𝑥𝑝 where 𝑠𝑚 < 𝑝𝑐
15:    if 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 < 𝑃1
16:      if 𝑃𝑎𝑑𝑎𝑝𝑡 < 𝑃2
17:        The first rule in Eq. (3-6) is used to update the position of the 𝑖th solution
18:      else
19:        Use Eq. (3-7) to update the position of the 𝑖th solution
20:        Calculate 𝑝 using Eq. (3-10)
21:        Use the first equation in Eq. (3-12) to update the global best solution
22:      endif
23:    else
24:      if 𝑃𝑎𝑑𝑎𝑝𝑡 < 𝑃2
25:        The second rule in Eq. (3-6) is used to update the position of the 𝑖th solution
26:      else
27:        Use Eq. (3-7) to update the position of the 𝑖th solution
28:        Calculate 𝑝 using Eq. (3-10)
29:        Use the first equation in Eq. (3-12) to update the global best solution
30:      endif
31:    endif
32:  endfor
33:  Update position 𝑥 of the virus using Eq. (3-11)
34:  Use the second equation in Eq. (3-12) to update the global best solution
35:  Update 𝑤 using the first equation in Eq. (3-8)
36: end while
37: Get the best solution 𝒙
[Figure 4: flowchart of the H5N1 procedure — Start → initialize the H5N1 parameters → initialize the candidate solutions → while t < T (or other criterion): iterate; otherwise return the best solution and stop.]
3.2.3 The computational complexity of H5N1
The Big 𝑂 notation is employed to assess the computational complexity of the H5N1 algorithm, which
is inherently influenced by three key components: the algorithm's initialization process, the evaluation of the
fitness function, and the subsequent updating of solutions. Analyzing the complexity of these components
provides valuable insights into the efficiency and scalability of the H5N1 algorithm in tackling optimization
problems.
In the initial population evaluation loop, the loop runs 𝑁 times, where 𝑁 is the number of solutions; hence it has a complexity of 𝑂(𝑁). Inside this loop, the fitness function is called once for each individual, so the overall complexity depends on the complexity of the fitness function. If the cost of one fitness evaluation is 𝑂(𝐹), then the complexity of this loop becomes 𝑂(𝑁 ∗ 𝐹). The main loop runs 𝐼 times, where 𝐼 is the number of iterations. Inside this loop there are multiple operations, including the population update loop, which runs 𝑁 times. Each execution of this inner loop involves several mathematical operations, conditional checks, and calls to the cost function. Assuming that the mathematical operations, conditional checks, and random number generation can be done in constant time, i.e., 𝑂(1), and that the complexity of the fitness function is 𝑂(𝐹), the complexity of the population update loop is 𝑂(𝑁 ∗ 𝐹). The mutation operation and boundary checks also run 𝑁 times, but they involve only mathematical operations and conditional checks, hence their complexity is 𝑂(𝑁). Combining these, the complexity of the main loop is 𝑂(𝐼 ∗ (𝑁 ∗ 𝐹 + 𝑁)) = 𝑂(𝐼 ∗ 𝑁 ∗ (𝐹 + 1)). Therefore, combining all the components, the overall time complexity is 𝑂(𝑁 ∗ 𝐹 + 𝐼 ∗ 𝑁 ∗ (𝐹 + 1)).
Regarding space complexity, it can be considered 𝑂(𝑁 ∗ 𝐷), where 𝐷 represents the dimension of the problem. This is under the assumption that other variables and data structures take up significantly less space in comparison.
3.2.4 Observation
In the H5N1 virus model, the virus spreads randomly and infects at a predetermined rate, with a
significantly higher infection rate observed in poultry compared to humans. Therefore, the virus seeks out the
most favorable environment for transmission, which corresponds to the most unfavorable conditions for the
host. As a result, the virus exhibits a tendency to be attracted to this optimal environment. If this optimal
environment is replaced with the global optimum value, the virus will adjust its dispersal mechanism to align
with this value. However, the challenge lies in the fact that the global optimum value for optimization problems
is uncertain (the position cannot be determined). Therefore, the algorithm operates under the assumption that
the environment with the most favorable solution achieved thus far represents the global optimum value.
Algorithm 1 presents the pseudocode for the H5N1 algorithm. Upon examination, it is evident that the
algorithm commences by initializing crucial parameters required for the virus, such as the infection rate and
step size coefficient. These parameters play a pivotal role in facilitating the virus's adaptation to the specific
optimization problem under consideration. Next, the H5N1 virus algorithm initializes a batch of random values
within a limited value range of solutions. Subsequently, the algorithm proceeds to calculate the fitness of each
virus and identifies the virus with the highest fitness, thereby designating it as the best environment for virus
development. The position of this best virus is then assigned to the variables 𝐹 and 𝑋, serving as a favorable
environment source for the virus population and guiding their subsequent movement direction. Meanwhile,
the coefficient 𝑐 is updated by employing a specific method. Similarly, for each dimension, the position of the
virus with the best environment is updated. During this stage, the coefficient c undergoes an update process,
which follows Eq.(3-9). Additionally, for each dimension, the position of the virus associated with the best
environment experiences an update using Eq.(3-12). When the virus encounters an unfavorable environment,
it undergoes mutation to ensure survival using Eq. (3-7). It should be noted that not all mutations are beneficial,
hence the selection of the result after mutation to update the value of the best virus is managed by the parameter
𝑝, which is computed based on Eq.(3-10). Then, the virus's latest position for that iteration is updated. If any
virus goes beyond the search space, it is brought back to the boundary by randomly replacing the variables 𝑝𝑚𝑝1 and 𝑝𝑚𝑝2. Additionally, the weight 𝑤, which controls the convergence of the algorithm, is also updated using Eq. (3-8). All the aforementioned steps, except for initialization, are performed
iteratively until the termination criterion is satisfied. To assess the effectiveness of the H5N1 virus model and
the proposed H5N1 algorithm in solving optimization problems, several observations are presented below:
The H5N1 algorithm ensures the preservation of the best solution obtained throughout the
optimization process. This solution is assigned to the best environmental variable, guaranteeing that
it is never lost, even if the population size decreases.
The H5N1 algorithm dynamically updates the position of the best virus and directs the remaining
viruses towards the environmental source that represents the best solution obtained thus far. This
ensures that the algorithm continues to converge towards improved solutions, even as the population
size decreases.
The H5N1 algorithm has the ability to mutate due to the virus's survival feature when encountering
unfavorable environments. As a result, local optima within the population are effectively addressed, as the virus mutates to enhance its survival capability.
The H5N1 algorithm relies on only three main control parameters (𝑐, 𝑤, 𝑝).
However, this algorithm does not address the primary challenges faced by
MOO problems, which are similar to the common issues encountered by single-objective algorithms such as:
Information loss: Similar to the Scalarization technique, the design of H5N1 for SOPs causes the
transformation of multiple objectives into a single objective through linear or nonlinear combinations.
However, this leads to the loss of important information in the original objectives and fails to
accurately reflect the relationships among the objectives. Additionally, storing solutions is also a
challenge as H5N1, being a single-objective algorithm, only stores the best environment rather than a
set of solutions.
Inability to identify all Pareto-optimal solutions: For MOPs, there can exist multiple optimal solutions
(known as the Pareto optimal set), where it is not possible to improve one objective without
compromising others. However, the H5N1 algorithm designed for SOPs does not possess operators to
automatically discover all Pareto-optimal solutions.
Inability to simultaneously handle conflicting objectives: Single-objective algorithms sometimes face
the challenge of maximizing one objective while simultaneously minimizing another objective. This
can lead to the absence of a clear solution. The H5N1 algorithm also lacks the capability to handle
multiple conflicting objectives simultaneously, which requires the algorithm to be equipped with
trade-off operators.
Given the aforementioned considerations, the first step is to equip H5N1 with a suitable solution
repository and environment. The repository preserves the finest non-dominated solutions acquired throughout
the optimization process. It is designed with a predetermined capacity to accommodate a restricted quantity of
non-dominated solutions per case. Therefore, an issue that needs to be addressed is how the algorithm handles
the situation when the number of stored solutions exceeds the maximum size (memory full).
To tackle the memory limitation, a straightforward method involves the random removal of a solution
from the repository, followed by its replacement with a newly found non-dominated solution. However, this
method may not result in a well-distributed Pareto optimal solution and may lead to bias towards a particular
direction in the Pareto front. A better approach is to use selective deletion, where a selection operator is
incorporated into the repository update process. In this approach, the selection operator calculates the distance
between non-dominated solutions and identifies the nearest dominating solution. The removal of a solution
from the repository in a multi-objective algorithm proves most advantageous when it resides within a densely
populated region, thereby enhancing the distribution of solutions and promoting diversity throughout the
iteration process. In this study, a distance calculation formula is derived starting from the Euclidean distance
formula, where the Euclidean distance between two solutions 𝐹 and 𝐹′ is computed as the square root of the
sum of squared differences of their corresponding objectives. The formula is represented as follows:
𝐷(𝐹, 𝐹′) = √(∑(𝐹 − 𝐹′)²) (3-13)
where 𝐹 and 𝐹′ are two vectors representing the previous and current solutions in the objective space, respectively. By applying Eq. (3-13), we can determine the distance 𝐷 in the objective space. Subsequently, the radius 𝑟 is computed using the following equation:
𝑟 = 10 ∗ max(𝐷)² / 𝑅𝑒𝑝_𝑠𝑖𝑧𝑒 (3-14)
where 𝑅𝑒𝑝_𝑠𝑖𝑧𝑒 is the size of the repository. The radius 𝑟 aids in determining the necessary distance for storing solutions. Consequently, when encountering a densely populated area, if the distance among solutions
is less than 𝑟, they will be assigned a ranking. Once rankings are assigned to each member of the repository based on the quantity of nearby solutions, Tournament Selection techniques (TSt) [57] are utilized to select among them. The probability of a solution being eliminated from the repository increases with a higher rank, which indicates a larger number of neighboring solutions. Figure 5 provides an illustration of this storage updating mechanism. It is important to emphasize that while the vicinity must be defined for all solutions, this diagram specifically focuses on only four non-dominated solutions.
Figure 5: The repository's update mechanism upon reaching full capacity.
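The distance-based ranking described above can be sketched as follows. The objective matrix is illustrative, and the radius follows Eq. (3-14) as printed; a member's rank counts how many other repository members lie within that radius, so crowded members receive high ranks.

```python
import numpy as np

def crowding_rank(objs, rep_size):
    """Rank repository members by neighbour count within radius r.
    objs: (n, m) array of objective vectors; higher rank = denser region."""
    diff = objs[:, None, :] - objs[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))        # Eq. (3-13): pairwise distances
    r = 10 * D.max() ** 2 / rep_size             # Eq. (3-14), as printed
    rank = (D < r).sum(axis=1) - 1               # neighbours within r, excluding self
    return rank

# Example: three clustered points and one isolated point in a 2-objective space
objs = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
ranks = crowding_rank(objs, rep_size=100)
```

In this example the three clustered points each see two neighbours within 𝑟 (rank 2), while the isolated point has rank 0 and would therefore be protected from removal.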
As mentioned earlier, the H5N1 algorithm faces limitations in determining the complete Pareto front
during the optimization of MOPs. This is primarily attributed to the existence of multiple solutions of equal
optimality within the multi-objective search space. We continue to randomly select a suitable environment
from the storage repository. However, as explained earlier, we should exclude solutions in densely populated
regions, as this is a superior approach compared to random selection for removal. To accomplish this, a hybrid
ranking and TSt combinatorial process is implemented for the maintenance operator of the storage repository.
During repository maintenance, solutions with higher ranks, indicating densely populated regions, are more
likely to be selected for further consideration or removal. Conversely, neighborhoods with lower ranks,
indicating less population, in the storage repository have a higher probability of being selected as good
environment sources for non-dominated solutions. This is the difference in the probability of selecting non-dominated solutions. For example, in Figure 6, the non-dominated solutions without any neighboring solutions have the highest probability of being chosen as environment sources. The processing procedure of the MH5N1 algorithm is presented in Algorithm 2.
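The rank-biased tournament selection used for both repository maintenance and environment selection can be sketched as below. A generic binary tournament is assumed, since the details of [57] are not reproduced here: removal favors high crowding ranks, while environment selection favors low ones.

```python
import random

random.seed(3)

def tournament(indices, ranks, k=2, prefer_high=True):
    """k-way tournament on crowding rank.
    prefer_high=True picks crowded members (candidates for removal);
    prefer_high=False picks sparse members (good environment sources)."""
    contenders = random.sample(indices, k)
    return (max if prefer_high else min)(contenders, key=lambda i: ranks[i])

# Ranks from the crowding step (illustrative): three crowded members, one sparse
ranks = {0: 2, 1: 2, 2: 2, 3: 0}
victim = tournament(list(ranks), ranks, prefer_high=True)    # removal candidate
source = tournament(list(ranks), ranks, prefer_high=False)   # environment source
```

Running the same operator with opposite preferences is what simultaneously thins dense regions and steers the population toward sparse parts of the Pareto front.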
Algorithm 2 exemplifies the MH5N1 algorithm, which initiates a sequence of coefficients to shape the
structure of the search space and its management. Subsequently, potential solutions are randomly initialized
by the algorithm. These solutions are then subjected to evaluation through the fitness function to determine the
optimal solution for the current, local, and global contexts. If the storage repository is not at full capacity, non-
dominated solutions are added to it. However, when the repository reaches its maximum capacity, a
management and maintenance process is executed, addressing potential solutions in densely populated regions.
Initially, the potential solutions undergo ranking based on distance calculations using Eq. (3-13) and (3-14),
followed by the application of TSt to select solutions for removal. This iterative process continues until a
predetermined number of densely populated solutions are eliminated, enabling the addition of non-dominated
solutions to the repository. The repository update immediately follows the maintenance process, where a
solution is chosen from the non-dominated solutions in the repository with the least dense neighborhood.
Similar to the repository maintenance, this selection process involves ranking solutions and utilizing TSt.
Subsequent steps follow a similar approach to the H5N1 algorithm. Finally, the repository is further updated
with additional values, and the process iterates until the termination condition is met. Multiple observations
are made to evaluate the effectiveness of the MH5N1 algorithm.
⁻ The storage repository operates to store the non-dominated solutions obtained so far, ensuring that
these solutions are never lost even if the entire population decreases during an iteration.
⁻ Solutions with densely populated neighborhoods are eliminated during the repository maintenance,
and the calculation of distances using Eq.(3-13) and (3-14) improves the coverage and uniformity
of the set of solutions that are not dominated by any other solution in terms of all objectives.
⁻ MH5N1 inherits the exploration and exploitation capabilities of H5N1, allowing for a balanced
exploration-exploitation trade-off.
⁻ In addition to the main control parameters like H5N1 (𝑐, 𝑝, 𝑤), the size of the storage repository is
also included here.
As alluded to in subsection 3.2.3, the computational complexity of the H5N1 algorithm was established as 𝑂(𝑁 ∗ 𝐹 + 𝐼 ∗ 𝑁 ∗ (𝐹 + 1)). For the MH5N1 algorithm, the number of objectives rises, so it becomes crucial to factor this in, referred to as 𝑀. The calculation involves an escalating quantity of solutions owing to the task of managing the storage repository, resulting in the total solutions being evaluated as 𝑁². Consequently, the complexity of the MH5N1 algorithm is determined to be 𝑂(𝑁 ∗ 𝐹 + 𝐼 ∗ 𝑁 ∗ (𝐹 + 1) + 𝐼 ∗ 𝑀 ∗ 𝑁²). In the upcoming sections, we delve into a thorough investigation
and substantiate these assertions through practical experimentation on both benchmark and real-world
problems.
11: end if
12: Update 𝐹𝑔, 𝑋𝑔 using TSt
13: Calculate 𝑐 using Eq. (3-9)
14: Calculate 𝑤𝑑 using the second equation in Eq. (3-8)
15: Generate 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 and 𝑃𝑎𝑑𝑎𝑝𝑡 randomly
16: Generate 𝜎, 𝛼1, 𝛼2, 𝛼3, 𝑟𝑡1, 𝑟𝑡2, 𝑝𝑚𝑝1, 𝑝𝑚𝑝2, 𝑝𝑚𝑝3 using Eq. (3-1) – Eq. (3-5)
17: Generate 𝑝𝑐 under the permutation condition of 𝛼3
18: Generate stochastic mutation coefficients 𝑠𝑚
19: for (𝑖 = 1: N) do
20:   Get 𝑚𝑥 where 𝑠𝑚 < 𝑝𝑐
21:   Get 𝑚𝑥𝑝 where 𝑠𝑚 < 𝑝𝑐
22:   if 𝑃𝑎𝑡𝑡𝑎𝑐𝑘 < 𝑃1
23:     if 𝑃𝑎𝑑𝑎𝑝𝑡 < 𝑃2
24:       The first rule in Eq. (3-6) is used to update the position of the 𝑖th solution
25:     else
26:       Use Eq. (3-7) to update the position of the 𝑖th solution
27:       Calculate 𝑝 using Eq. (3-10)
28:       Use the first equation in Eq. (3-12) to update the global best solutions
29:     end if
30:   else
31:     if 𝑃𝑎𝑑𝑎𝑝𝑡 < 𝑃2
32:       The second rule in Eq. (3-6) is used to update the position of the 𝑖th solution
33:     else
37:     end if
38:   end if
39: end for
functions [59]. In addition, for addressing MOO problems, we employed the ZDT [60] and CEC2009 [61] function suites. The outcomes are then juxtaposed with those derived from the following algorithms:
Arithmetic Optimization Algorithm (AOA) [53]
Cuckoo Search (CS)[6]
Particle Swarm Optimization (PSO)[5]
Biogeography-based Optimization (BBO)[62]
Genetic Algorithm (GA)[1]
Gravitational Search Algorithm (GSA)[63]
Moth-Flame Optimization (MFO)[64]
Grey Wolf Optimizer (GWO) [65]
Salp Swarm Algorithm (SSA) [66]
To validate the theoretical claims discussed earlier, a diverse range of experiments have been conducted.
The evaluation process began by utilizing a set of qualitative metrics to assess the superiority of H5N1 over
other similar algorithms. Following that, quantitative measurements were collected to precisely quantify the
degree of improvement provided by H5N1 when compared to its counterparts. Finally, the efficacy of the
H5N1 algorithm was compared to similar algorithms present in the literature. It is important to mention that
both single and multi-target challenging benchmark problems were employed in this section. The following
subsections offer comprehensive details on the experimental setup for each phase, present the obtained
outcomes, and provide a thorough analysis of the results.
The benchmark test functions are divided into three classes: unimodal, multimodal, and composite functions [67], [68]. The portrayal of diverse test problem classes is depicted in
Figure 6. For unimodal functions, the presence of solely one global optimum and the absence of local optima
can be observed. Hence, these types of functions are particularly suitable for the assessment of the convergence
rate and exploitation behavior of optimization algorithms. In contrast, the existence of one or more local optima
in multimodal test tasks renders them more arduous. To evaluate the capability to circumvent local optima and
explore the search space, multimodal and composite functions are deemed ideal. Lastly, composite test
functions are typically more intricate than multimodal functions and more closely emulate genuine search
spaces. Seven of the aforementioned test functions have been chosen as case studies for the ensuing section.
The geometrical representations and configurations of the test functions are illustrated in Figure 6 and
[Figure 6: geometrical representations of representative test functions F1, F8, and CF15.]
Table 1 presents the search history of the H5N1 algorithm, providing insights into its exploration and exploitation of the search space while solving all test functions. Examining the search history in Table 1 reveals that the search populations concentrate around the optimal regions of the corresponding search space. This is most evident in unimodal functions, where the optimal area has a dense population of individuals surrounding it. Furthermore, upon closer observation, it can be seen that for multimodal and composite functions, although the number of individuals surrounding local optimal points is high, the number of individuals surrounding global optimal points still dominates, indicating that the H5N1 algorithm performs well in escaping from local regions within the large search spaces of multimodal and composite test functions.
The second qualitative result was obtained from the trajectories of search agents. This specific
trajectory curve was collected as a representative sample to illustrate the solution process for various test
functions, as shown in Table 1. This figure illustrates that the trajectory of the first individual experiences
abrupt changes in the early iterations, gradually transitioning to more gradual changes, and ultimately
demonstrating a monotonic behavior in the later iterations. This indicates that the initial individuals are directed
towards exploring the search space, and as the optimization progresses, they gradually converge towards the
minimum point of the search space. Additionally, it can be observed that the initial individuals converge faster
to the single-peak function compared to multi-peak and composite functions, which can be easily understood
as the complexity of multi-peak and composite functions is higher than that of single-peak functions.
Another interesting observation is the initial decline in population productivity during the optimization process, particularly due to the impact of the exploration process, in which Equation (3-6) exerts the strongest influence on the population when the infection and transmission rates for poultry are very high. However, as the optimization process progresses, the curve becomes smoother and shrinks due to the virus's lower impact on the human population with a lower infection rate, requiring more frequent splitting and mutation. Due to the mutability of the virus, a sudden jump in the curve can be observed after a certain period, which is caused by the virus's mutation process.
[Figure: per-function panels showing the H5N1 search coefficients and objective values over 500 iterations for benchmark functions F1-F13; the plotted data could not be recovered from the extraction]
Despite the observed improvement in population performance during optimization by H5N1, it cannot be unequivocally asserted that the estimate of the global optimum also improves; convergence plots are therefore useful in this regard. Examining the convergence curves in Table 1, it can be observed that the H5N1 algorithm improves the accuracy of its estimate of the global optimum over the iterations. However, this improvement is not consistently evident in all plots; in some cases the results show variability or a lack of significant progress. This observation highlights that H5N1 exhibits diverse convergence behaviors when tackling different problem types.
Although these observations qualitatively characterize the algorithm, it is still necessary to quantitatively measure its performance and evaluate its efficiency on standard benchmark functions. To this end, classic techniques for quantitative assessment in optimization are employed, namely the mean value and the standard deviation (STD). These metrics allow comparison with other studies, and the best solutions obtained after 30 runs are recorded. The mean value reflects the overall performance of the H5N1 algorithm, while the STD indicates the consistency of that performance. Together, the two metrics provide insight into the performance of the H5N1 algorithm across the benchmark functions.
However, these metrics measure only the overall performance of the algorithm and cannot provide insight into each individual run. To address this, the Wilcoxon rank-sum test is applied; it compares the runs independently and strengthens the reliability of the conclusions. In this study, the statistical analysis follows a specific procedure in which a p-value threshold of 0.05 is used to determine whether there is significant evidence against the null hypothesis. The best-performing algorithm on each test function is identified, and independent pairwise comparisons are conducted with the other algorithms. For instance, if H5N1 is the best-performing algorithm, pairwise comparisons are conducted between H5N1/AOA, H5N1/CS, and the other combinations.
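The pairwise comparison can be sketched as follows. This is a minimal normal-approximation implementation of the two-sided Wilcoxon rank-sum test (not the authors' code), applied to hypothetical per-run results; for ~30 runs per sample the normal approximation is standard.

```python
import math
from itertools import chain

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation;
    tied values receive average ranks."""
    combined = sorted(chain(x, y))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    w = sum(ranks[v] for v in x)               # rank sum of first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical best-fitness samples from 30 runs of two algorithms
a = [1e-5 + 1e-6 * k for k in range(30)]   # consistently smaller values
b = [1e-2 + 1e-3 * k for k in range(30)]
print(rank_sum_p(a, b) < 0.05)  # True: significant difference
```

A p-value below 0.05 rejects the null hypothesis that the two algorithms' runs come from the same distribution, which is exactly the criterion used in the tables.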
To evaluate the performance of the H5N1 algorithm on challenging problems, a set of benchmark functions is employed, comprising the functions used in the previous section along with 33 additional functions. The dimensionality of the problems is set to 30. The mathematical formulations of these benchmark functions are provided in detail in Appendix A.
To validate the obtained results, a comparative analysis is conducted between the H5N1 algorithm and a collection of established and contemporary algorithms, namely AOA, CS, PSO, BBO, GA, GSA, MFO, GWO, and SSA. To ensure a fair and consistent comparison, the key control parameters of these algorithms, including the total number of search agents and the maximum iteration threshold, are set uniformly: the number of search agents is fixed at 30 and the maximum number of iterations at 500. This standardization ensures that the performance evaluation is conducted under identical conditions for all algorithms. The values of the remaining control parameters are taken from the latest versions of the respective source codes, so that each algorithm operates at its best performance level. Each algorithm is run 30 times on each benchmark function, and the results are presented in Table 2 and Table 3.
The results on unimodal, multimodal, and composite functions demonstrate the superiority of the H5N1 algorithm over the other algorithms on the majority of test functions. The rankings of the algorithms are calculated for each function, and the average ranks are presented in the last row of Table 2 and Table 3. The best average rank of H5N1 indicates that it performs better on average than the other algorithms, and the standard deviation confirms the stability of this superiority. Furthermore, the p-values in Table 4 and Table 5, obtained from the Wilcoxon test for F1-F23 on the unimodal and multimodal functions, show that most of these values are smaller than 0.05, except for the value on function F16 for MFO. This demonstrates that the superiority of H5N1 is statistically significant.
Std 0 1.967E+01 1.928E-04 1.684E-01 1.075E+02 1.543E+01 1.269E+03 1.422E-25 4.762E-03 0
Mean 8.181E-203 4.186 5.485E-05 3.852E-02 5.860E-01 2.453E-09 3.434 5.508E-18 3.675E-04 0
F4
Rank 2 10 5 7 8 4 9 3 6 1
Std 0 1.142 9.527E-05 1.423E-02 2.165E-01 6.930E-10 5.980 8.665E-18 1.891E-03 0
Mean 6.638 1.012E+02 5.891 1.722E+01 2.796E+01 3.105E+01 3.130E+03 6.605 1.474E+02 3.852E+00
F5
Rank 4 8 2 5 6 7 10 3 9 1
Std 3.576E-01 7.638E+01 4.608 6.101E+01 2.779E+01 6.021E+01 1.642E+04 5.690E-01 4.504E+02 3.400E-01
Mean 2.445E-02 1.381 1.366E-32 1.234E-04 7.119E-03 9.914E-18 3.368E-13 3.320E-06 8.570E-10 1.035E-22
F6
Rank 9 10 1 7 8 3 4 6 5 2
Std 1.156E-02 7.379E-01 4.248E-32 9.130E-05 1.411E-02 3.907E-18 4.860E-13 1.295E-06 2.851E-10 1.205E-12
Mean 7.885E-05 2.677E-02 3.547E-03 3.179E-03 3.704E-03 1.266E-02 9.503E-03 6.243E-04 1.320E-02 1.089E-04
F7
Rank 1 10 5 4 6 8 7 3 9 2
Std 8.514E-05 8.342E-03 2.097E-03 1.618E-03 1.917E-03 6.627E-03 6.980E-03 4.301E-04 1.207E-02 1.825E-04
Mean -2.825E+03 -2.935E+03 -3.175E+03 -3.255E+03 -3.700E+03 -1.463E+03 -3.317E+03 -2.696E+03 -2.780E+03 -4.103E+03
F8
Rank 7 6 5 4 2 10 3 9 8 1
Std 2.519E+02 1.847E+02 2.418E+02 3.735E+02 1.779E+02 2.217E+02 3.525E+02 3.657E+02 3.952E+02 8.190E+01
F10
Mean 4.441E-16 3.787 5.536E-15 4.769E-03 2.029E-02 4.162E-09 1.566E-07 7.076E-15 7.607E-01 4.441E-16
Rank 1.5 10 3 7 8 5 6 4 9 1.5
Mean 3.915E-06 7.046E-01 1.016E-01 5.321E-02 5.920E-02 3.312 1.830E-01 2.029E-02 2.119E-01 0
F11
Rank 2 9 6 4 5 10 7 3 8 1
Std 2.145E-05 1.284E-01 5.094E-02 3.035E-02 3.103E-02 1.955 1.033E-01 1.811E-02 1.102E-01 0
Mean 7.285E-01 8.553E+03 1.915 4.332E-01 2.054E-01 4.057 1.888E+07 1.047E-01 1.205E+01 1.804E-07
F12
Rank 5 9 6 4 3 7 10 2 8 1
Std 2.768E-02 1.493E+04 1.175 8.075E-01 2.006E-01 1.262 6.483E+07 3.023E-02 4.272 1.205E-03
Mean 8.458E-01 2.995E-01 7.325E-04 9.844E-06 8.850E-04 2.177E-04 5.127E-03 1.345E-02 5.096E-03 8.606E-24
F13
Rank 10 9 4 2 5 3 7 8 6 1
Std 1.434E-01 1.481E-01 2.788E-03 7.527E-06 2.809E-03 1.192E-03 9.002E-03 3.486E-02 6.191E-03 1.778E-02
Mean rank 3.65 9.15 4.15 5.46 6.08 6.15 7.46 4.15 7.38 1.35
Final rank 2 10 3 5 6 7 9 3 8 1
F14 Mean 11.326 0.998 1.097 5.798 0.998 5.565 2.316 4.586 1.164 1.064
Rank 10 2 4 9 1 8 6 7 5 3
Std 2.317 1.770E-10 3.995E-01 4.131 6.987E-11 3.967 1.819 4.356 3.768E-01 1.828
F15 Mean 0.02 0.00 0.00 0.01 0.00 0.01 0.00 0.00 0.00 0.00
Rank 10 2 6 9 5 8 3 7 4 1
Std 2.331E-02 1.823E-04 6.021E-03 7.560E-03 4.909E-03 3.889E-03 1.380E-03 7.535E-03 3.577E-03 2.617E-04
F16 Mean -1.03 -1.03 -1.03 -1.03 -1.03 -1.03 -1.03 -1.03 -1.03 -1.03
Rank 9 6 2 5 8 2 2 10 4 7
Std 1.689E-07 6.880E-10 6.712E-16 5.099E-13 1.871E-07 4.879E-16 6.775E-16 4.680E-06 2.691E-14 7.046E-09
F17 Mean 0.41 0.40 0.40 0.40 0.40 0.40 0.40 0.40 0.40 0.40
Rank 10 7 2 6 8 2 2 9 4 5
F18 Mean 9.30 3.00 3.00 11.10 3.90 3.00 3.00 3.00 3.00 3.00
Rank 9 6 2 10 8 4 2 7 5 2
Std 1.161E+01 2.211E-10 1.125E-15 1.759E+01 4.930 3.857E-15 1.327E-15 2.883E-05 3.531E-13 6.850E-16
F19 Mean -3.85 -3.86 -3.86 -3.86 -3.86 -3.86 -3.86 -3.86 -3.86 -3.86
Rank 10 7 3 3 8 3 3 9 6 3
Std 4.112E-03 4.739E-10 2.668E-15 2.157E-15 3.976E-07 2.340E-15 2.710E-15 1.815E-03 1.011E-10 2.710E-15
F20 Mean -3.04 -3.32 -3.27 -3.29 -3.29 -3.32 -3.23 -3.25 -3.22 -3.29
Rank 10 1 6 3 5 2 8 7 9 4
Std 1.115E-01 3.040E-03 7.206E-02 5.115E-02 5.542E-02 2.173E-02 7.117E-02 1.087E-01 5.844E-02 3.627E-02
F21 Mean -3.57 -10.14 -6.06 -4.32 -8.73 -5.42 -6.90 -9.06 -8.23 -6.36
Rank 10 1 7 9 3 8 5 2 4 6
Std 1.083 1.758E-02 3.503 2.802 2.691 3.663 3.420 2.265 3.055 2.552
F22 Mean -3.51 -10.39 -8.12 -6.63 -9.10 -10.16 -7.55 -10.22 -8.28 -8.08
Rank 10 1 6 9 4 3 8 2 5 7
Std 1.003 1.324E-02 3.337 3.621 2.696 1.334 3.579 9.701E-01 3.123 2.517
F23 Mean -4.40 -10.52 -8.52 -5.92 -9.28 -9.58 -6.91 -10.08 -8.52 -9.55
Rank 10 1 7 9 5 3 8 2 6 4
Std 1.644 1.919E-02 3.434 3.619 2.873 2.515 3.769 1.751 3.408 2.323
F24 Mean 1181.71 447.79 497.19 515.94 433.64 417.26 423.21 398.57 335.85 385.58
Rank 10 7 8 9 6 4 5 3 1 2
Std 1.553E+02 4.583E+01 1.837E+02 1.293E+02 1.337E+02 2.028E+02 9.987E+01 1.614E+02 9.520E+01 9.768E+01
F25 Mean 1190.42 497.09 471.97 469.72 495.31 455.76 455.84 491.59 429.51 433.58
Rank 10 9 6 5 8 3 4 7 1 2
Std 1.812E+02 5.992E+01 1.680E+02 1.191E+02 1.286E+02 2.434E+02 1.052E+02 1.756E+02 1.239E+02 1.094E+02
F26 Mean 910.00 927.31 998.24 1027.00 995.02 976.30 938.74 955.49 926.50 910.00
F27 Mean 910.00 926.36 994.61 1029.26 997.90 975.21 936.95 970.23 923.92 910.00
F28 Mean 910.00 927.09 982.96 1012.96 1003.82 951.45 950.28 967.23 925.24 910.00
F29 Mean 1763.18 1091.40 1430.26 1040.22 1134.88 1164.35 1472.33 1327.15 950.38 892.12
Rank 10 4 8 3 5 6 9 7 2 1
Std 2.024E+01 1.582E+02 1.305E+02 3.037E+02 3.427E+02 3.310E+02 2.603E+01 1.960E+02 2.160E+02 1.312E+02
F30 Mean 1907.78 1337.51 1328.91 1518.45 1479.23 1310.36 1347.86 1396.16 1387.96 1296.49
Rank 10 4 3 9 8 2 5 7 6 1
Std 9.894E+01 3.446E+01 5.988E+01 6.853E+01 8.007E+01 2.539E+01 6.837E+01 4.768E+01 6.886E+01 2.235E+01
F31 Mean 1765.27 1262.83 1434.44 1148.86 1299.41 1326.10 1469.56 1282.33 1164.89 930.34
Rank 10 4 8 2 6 7 9 5 3 1
Std 2.111E+01 1.477E+02 8.904E+01 3.213E+02 3.162E+02 2.701E+02 1.211E+01 1.734E+02 2.890E+02 1.425E+02
F32 Mean 1689.28 1207.08 1214.68 1166.20 671.40 658.65 1214.64 1124.60 867.23 460.00
Rank 10 7 9 6 3 2 8 5 4 1
Std 3.484E+01 8.834E+01 4.297E+01 5.081E+02 4.300E+02 3.967E+02 4.200E+01 3.140E+02 4.797E+02 7.595E-07
F33 Mean 1449.41 1397.07 1456.68 1578.45 1542.65 1548.45 1364.10 1509.74 1472.59 1265.97
Rank 4 3 5 10 8 9 2 7 6 1
Std 3.816E+01 3.575E+01 8.282E+01 2.148E+01 4.677E+01 1.073E+01 3.095E+01 6.315E+01 9.652E+01 4.666
Mean Rank 8.325 4.2 5.85 7.3 6.25 4.8 5.2 6.1 4.2 2.775
Final Rank 10 2 6 9 8 4 5 7 2 1
Table 4. Wilcoxon rank-sum test comparing H5N1 with other algorithms on F1-F23 functions. (continued).
F AOA CS PSO BBO GA
F4
F6 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 +
F19
Table 5. Wilcoxon rank-sum test comparing H5N1 with other algorithms on F1-F23 functions.
F6 16.91E-18 1 + 16.91E-18 1 + 443.04E-12 1 + 16.91E-18 1 + 16.91E-18 1 +
F10 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 + 16.91E-18 1 +
F13 16.91E-18 1 + 122.59E-6 1 + 1.18E-9 1 + 16.91E-18 1 + 16.91E-18 1 +
F17 N/A 0 = N/A 0 = N/A 0 = 16.91E-18 1 + 50.73E-18 1 +
F18 583.50E-12 1 + 16.91E-18 1 + 58.17E-15 1 + 16.91E-18 1 + 16.91E-18 1 +
F19 N/A 0 = 524.25E-18 1 + N/A 0 = 16.91E-18 1 + 16.91E-18 1 +
F23 542.31E-9 1 + 139.46E-12 1 + 11.04E-6 1 + 22.86E-15 1 + 14.85E-15 1 +
re
4.3 Multi-dimensional analysis
Given that real-world problems often involve a large number of variables, it is imperative to rigorously assess the H5N1 algorithm's adaptability as the dimensionality of the optimization problem increases. This is a standard benchmarking test, widely used in the optimization literature, that reveals the effect of dimensionality on H5N1's performance. It is intended to validate the algorithm's capabilities not merely on low-dimensional problems but also on high-dimensional ones.
Firstly, the algorithm is executed on the 13 test functions with dimensionalities of 30, 100, 500, and 1000. Secondly, to determine the algorithm's trend when solving problems of increasing dimensionality, we rely on the initial results for the 13 test functions to select two representative functions that show the result trend most clearly: F5 (unimodal) and F12 (multimodal), each employed with a varying number of parameters.
All scenarios are run 30 times per function, with 500 iterations and a virus population size of 200. The comprehensive results of these trials are presented in Figure 7 and Table 6. This rigorous testing approach ensures that the algorithm's robustness and efficiency are fully examined.
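The protocol above (several dimensionalities, a fixed run count, and fixed iteration and population budgets) can be sketched as a small harness. Here `random_search` is only a placeholder optimizer standing in for H5N1, `sphere` stands in for benchmark F1, and the budgets are scaled down so the sketch runs quickly; none of this is the authors' code.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark (the classic F1): sum of squared coordinates."""
    return float(np.sum(x * x))

def random_search(f, dim, pop, iters, rng):
    """Placeholder optimizer illustrating the experimental protocol;
    it samples `pop` random points per iteration and keeps the best."""
    best = np.inf
    for _ in range(iters):
        pts = rng.uniform(-100, 100, size=(pop, dim))
        best = min(best, min(f(p) for p in pts))
    return best

# Paper's protocol: dims 30/100/500/1000, 30 runs, 500 iterations,
# population 200 -- scaled down here for a quick demonstration.
dims, runs, iters, pop = [30, 100], 3, 20, 10
for d in dims:
    rng = np.random.default_rng(0)
    bests = [random_search(sphere, d, pop, iters, rng) for _ in range(runs)]
    print(f"dim={d:4d}  mean={np.mean(bests):.3e}  std={np.std(bests, ddof=1):.3e}")
```

With a constant population and iteration budget, the mean best value typically worsens as the dimension grows, which is the trend the section discusses.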
F Mean Std Rank (Dim = 30) Mean Std Rank (Dim = 100) Mean Std Rank (Dim = 500) Mean Std Rank (Dim = 1000)
F1 0 0 1 0 0 1 0 0 1 0 0 1
F2 0 0 1 0 0 1 0 0 1 0 0 1
F3 0 0 1 0 0 1 0 0 1 0 0 1
F4 0 0 1 0 0 1 0 0 1 0 0 1
F6 1.08E-12 1.90E-12 1 2.94 6.89E-01 2 7.75E+01 2.44 3 1.95E+02 2.65 4
F8 -4.09E+03 1.08E+02 1 -2.16E+04 1.47E+03 2 -4.48E+04 3.75E+03 3 -6.28E+04 4.74E+03 4
F9 0 0 1 0 0 1 0 0 1 0 0 1
F11 0 0 1 0 0 1 0 0 1 0 0 1
F13 1.63E-11 5.64E-11 1 8.25 4.72E-01 2 4.89E+01 3.20E-01 3 9.89E+01 4.61E-01 4
Rank 1 2 3 4
Table 6 presents the performance of the H5N1 algorithm as a function of dimensionality. As the number of dimensions increases, both the algorithm's convergence rate and the accuracy of its results decrease. This is entirely reasonable, considering that the number of solutions remains constant throughout the experiment.
A notable point is that the algorithm's performance on certain functions changes minimally; in some cases, such as functions F1, F2, F3, F4, F9, F10, and F11, the results do not change at all despite the increase in the number of parameters. This could potentially be addressed by increasing the population size involved in the search process, that is, the number of viral solutions.
The results in Table 6 also explain our choice of functions F5 and F12 for the subsequent experiment, in which the number of computational dimensions is increased at a finer granularity. The rationale is that the results for some functions in the set showed almost no change as the dimensionality increased.
In the next experiment, the results in Figure 7 visually illustrate that the performance of the H5N1 algorithm decreases as the number of densely packed computational dimensions grows. This is again understandable, given that the number of solutions remains constant throughout the experiment.
Figure 7. Results for several problem dimensions using the H5N1 algorithm on F5 and F12
4.4 Results and discussion of MH5N1
4.4.1 ZDT test problems
Similar to the single-objective H5N1 approach, this section uses a range of test functions to assess the effectiveness of the H5N1 algorithm in handling MOPs, and the steps for quantitative and qualitative testing are discussed. Accordingly, we use the ZDT test function set [69], comprising three standard test functions: ZDT1, ZDT2, and ZDT3. Furthermore, the modified ZDT1-LF and ZDT2-3O functions, which feature a linear Pareto front and three objectives respectively, are sourced from references [66], [70]. The mathematical models of these functions can be found in Appendix B.
v
F24 (C-F15) Rotated Hybrid Composition Function | M, R, N, S | 30 | [-5,5]^D | x* = o1, F16(x*) = f_bias16 = 120
F25 (C-F16) Rotated Hybrid Composition Function with Noise in Fitness | M, R, N, S | 30 | [-5,5]^D | x* = o1, F17(x*) = f_bias17 = 120
F26 (C-F17) Rotated Hybrid Composition Function | M, R, N, S | 30 | [-5,5]^D | x* = o1, F18(x*) = f_bias18 = 10
F27 (C-F18) Rotated Hybrid Composition Function with narrow basin global optimum | M, N, S | 30 | [-5,5]^D | x* = o1, F19(x*) = f_bias19 = 10
F28 (C-F19) Rotated Hybrid Composition Function with Global Optimum on the Bounds | M, N, S | 30 | [-5,5]^D | x* = o1, F20(x*) = f_bias20 = 10
F29 (C-F20) Rotated Hybrid Composition Function | M, R, N, S | 30 | [-5,5]^D | x* = o1, F21(x*) = f_bias21 = 360
F30 (C-F21) Rotated Hybrid Composition Function with High Condition Number Matrix | M, N, S | 30 | [-5,5]^D | x* = o1, F22(x*) = f_bias22 = 360
F31 (C-F22) Non-Continuous Rotated Hybrid Composition Function | M, N, S | 30 | [-5,5]^D | x* = o1, F23(x*) ≈ f_bias23 = 360
F32 (C-F23) Rotated Hybrid Composition Function | M, R, N, S | 30 | [-5,5]^D | x* = o1, F24(x*) = f_bias24 = 260
F33 (C-F24) Rotated Hybrid Composition Function without bounds | M, N, S | 30 | [2,5]^D | x* = o1 is outside of the initialization range, F25(x*) = f_bias25 = 260
Note: (M: Multimodal, R: Rotated, N: Non-Separable, S: Scalable, D: Dimension).
To validate the results, we select two well-established multi-objective algorithms, MOPSO and NSGA-II. These algorithms are tested under consistent parameters: a population of 60 search individuals and a fixed budget of 100 iterations. As in the single-objective experiments, each algorithm is executed 30 times on each function, ensuring robust and statistically meaningful results. In addition, a summary metric is needed to quantitatively compare the algorithms on the same problem. In this paper, we use the inverted generational distance (IGD) proposed by Sierra and Coello Coello [71], calculated according to the following formula:
IGD = √( ∑_{i=1}^{n} d_i² ) / n        (4-1)

where n denotes the number of true Pareto optimal solutions and d_i is the Euclidean distance between the i-th true Pareto optimal solution and the nearest solution obtained by the algorithm in the reference set.
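A direct implementation of Equation (4-1) can be sketched as follows; the reference and obtained fronts below are placeholder toy arrays, not results from the paper.

```python
import numpy as np

def igd(true_front, obtained_front):
    """Inverted generational distance per Eq. (4-1): the square root of
    the sum of squared distances from each true Pareto point to its
    nearest obtained point, divided by the number n of true points."""
    true_front = np.asarray(true_front, dtype=float)
    obtained = np.asarray(obtained_front, dtype=float)
    # pairwise Euclidean distances, shape (n_true, n_obtained)
    diffs = true_front[:, None, :] - obtained[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)  # nearest obtained point
    return np.sqrt((d ** 2).sum()) / len(true_front)

# Toy check: a front that coincides with the reference gives IGD = 0
ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(ref, ref))                 # 0.0
print(igd(ref, [[0.0, 1.0]]) > 0)    # True: partial coverage is penalized
```

Lower IGD indicates both better convergence to and better coverage of the true Pareto front, which is why a single scalar suffices for the comparisons in the tables.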
The qualitative and quantitative results are presented in Figure 8-Figure 12 and in Table 7 and Table 8, respectively:
Table 7. Results on the ZDT problems using MH5N1, MOPSO and NSGA-II.
MH5N1 0.0014 0.0006 0.0013 0.0010 0.0023 0.0337 0.7141 0.0114 0.0009 0.0734
MOPSO 0.0080 0.0085 0.0013 0.0009 0.0321 0.1258 0.0487 0.1157 0.0311 0.1996
NSGA-II 0.0946 0.0168 0.0907 0.0662 0.1284 0.2169 0.0243 0.2172 0.1257 0.2589
MH5N1 0.0249 0.0004 0.0247 0.0242 0.0259 0.0071 0.0147 0.0014 0.0009 0.0564
MOPSO 0.0283 0.0058 0.0242 0.0242 0.0324 0.0535 0.0531 0.0435 0.0010 0.2152
NSGA-II 0.0468 0.0077 0.0459 0.0333 0.0660 0.1376 0.0232 0.1374 0.0855 0.1760
Algorithm Mean Standard deviation Median Best Worst
MH5N1 0.0070 0.0016 0.0072 0.0052 0.0098
MOPSO 0.0079 0.0027 0.0072 0.0051 0.0143
NSGA-II 0.0262 0.0030 0.0274 0.0190 0.0294
Table 7 presents the quantitative results of the algorithms. Notably, they demonstrate the superiority of MH5N1 on all ZDT problems compared with the other two algorithms: the mean value of MH5N1 is the lowest for every function, while that of NSGA-II is the highest.
MH5N1 1 1 1 2 1 1.20 1
MOPSO 2 2 2 1 2 1.80 2
NSGA-II 3 3 3 3 3 3.00 3
For easier comparison, Table 8 ranks the values achieved by the three algorithms for each type of statistic. The average of these ranks (mean rank) is then calculated and serves as the basis for the final ranking of the three algorithms on the problems of the ZDT test suite. The results indicate that MH5N1 ranks first on all ZDT functions, followed by MOPSO and finally NSGA-II. To delve deeper, we combine this result with the qualitative results presented in Figure 8 to Figure 12.
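The rank-aggregation procedure (rank the algorithms on each problem, average the ranks, then rank the averages) can be sketched as below. The per-problem mean values are illustrative placeholders loosely based on Table 7, not a complete reproduction of it:

```python
import numpy as np

# Hypothetical per-problem IGD means for three algorithms on five ZDT
# problems (rows: problems, columns: MH5N1, MOPSO, NSGA-II).
algos = ["MH5N1", "MOPSO", "NSGA-II"]
means = np.array([
    [0.0014, 0.0080, 0.0946],
    [0.0249, 0.0283, 0.0468],
    [0.0070, 0.0079, 0.0262],
    [0.0012, 0.0009, 0.0907],   # one problem where MOPSO ranks first
    [0.0023, 0.0321, 0.1284],
])

# rank within each problem (1 = best, i.e. lowest mean IGD)
per_problem = means.argsort(axis=1).argsort(axis=1) + 1
mean_rank = per_problem.mean(axis=0)
final = mean_rank.argsort().argsort() + 1   # rank of the mean ranks
for a, m, f in zip(algos, mean_rank, final):
    print(f"{a:8s} mean rank {m:.2f} -> final rank {f}")
```

With these placeholder numbers the mean ranks come out as 1.20, 1.80, and 3.00, reproducing the final ordering MH5N1 > MOPSO > NSGA-II reported in Table 8.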
The ZDT1 function has a convex Pareto optimal front and is a standard first challenge for multi-objective methods. Examining the Pareto optimal fronts depicted in Figure 8, it is evident that the MH5N1 and MOPSO algorithms exhibit superior convergence compared to NSGA-II: both show no gap between the obtained PF and the true PF, whereas NSGA-II leaves a significant distance between the two. The stored solutions are evenly distributed for both MH5N1 and MOPSO. From Figure 8 alone it is hard to tell which of the two is superior, but on closer examination one can see a single small gap in the middle of the MH5N1 results, while MOPSO exhibits three gaps distributed at the beginning, middle, and end of its results. Hence, on the ZDT1 problem the MH5N1 algorithm outperforms MOPSO in terms of distribution. This qualitative result also partially explains the quantitative results in Table 7, where MH5N1 slightly surpasses MOPSO and is significantly superior to NSGA-II.
[Figure 8. Pareto optimal fronts obtained by the algorithms for ZDT1]
In contrast to ZDT1, the ZDT2 function provides an excellent opportunity to compare the convergence and coverage of the algorithms on a concave front instead of a convex one. Figure 9 clearly shows that the convergence and coverage of MH5N1 are entirely superior to the rest. While MH5N1 easily converges to and covers the entire true front of ZDT2, MOPSO achieves good coverage but leaves a significant gap between the true front and its obtained front, and NSGA-II can neither cover nor converge. This is consistent with Table 7, where the value for MH5N1 is much smaller than for the other two algorithms.
Figure 9. Results of Pareto front by algorithms for ZDT2.
The ZDT3 function is an optimization problem whose Pareto optimal front comprises a series of disconnected regions. Comparing the results in Figure 10, the picture is similar: NSGA-II again delivers the worst results, as its solutions cannot converge to the Pareto optimal area, although they do distribute along the optimal region. Conversely, MH5N1 performs exceptionally well, reaching the optimal region and distributing harmoniously across all areas of the Pareto optimal front. MOPSO also distributes well across the optimal region, and even appears to have a more even distribution than MH5N1; however, its convergence is significantly weaker, with evident gaps between the true optimal areas and its solutions.
[Figure 10. Pareto optimal fronts obtained by the algorithms for ZDT3]
The ZDT1 function with a linear front is used to test coverage, since a linear front facilitates a clear assessment of this attribute. MH5N1 and MOPSO yield remarkably similar results, as both converge and distribute fairly uniformly across the generated plane; only the quantitative values in Table 7, where MH5N1 attains superior values, separate them. As in the previous problems, NSGA-II delivers significantly inferior results, which is evident in Figure 11 from the substantial gap between its solutions and the true optimal front.
Figure 11. Results of Pareto front by algorithms for ZDT1 with linear front.
We have traversed ZDT1, ZDT2, ZDT3, and ZDT1 with a linear front, functions which, although differing in shape, share the characteristic of having a Pareto optimal front defined over two objectives. In practice, situations arise that require solutions for three objectives; hence the three-objective version of ZDT2, an extension of ZDT2, was developed.
Figure 12. Results of Pareto front by algorithms for ZDT2 (tri-objective problem).
Looking at Table 7 and Table 8, we observe that MH5N1 performs well, outperforming MOPSO and significantly surpassing NSGA-II, which fails to locate the optimal region: NSGA-II can neither approach the coverage area nor converge close to the Pareto optimal front. In contrast, the Pareto optimal solutions in Figure 12 indicate that MOPSO achieves uniform coverage across all facets of the ZDT surface, but its convergence struggles, as the results display a distance between the optimal region and the obtained values. Unlike MOPSO, MH5N1 exhibits moderate coverage but excellent convergence, as its solutions coincide with the problem's Pareto optimal front; this explains the value differences in Table 7. The results substantiate the assertion that the MH5N1 algorithm accurately approximates the true Pareto optimal front in tri-objective optimization problems.
4.4.2 CEC2009 test problems
The CEC2009 test suite is widely regarded as one of the most formidable challenges for optimization algorithms. It is renowned for its steep slopes, rotations, compositions, translations, and the intricate interplay between continuous and discontinuous spaces. In this subsection, the MH5N1 algorithm is employed to solve these test problems, and a comprehensive comparative analysis is conducted to assess its performance against the MSSA [66], MOPSO, and MOEA/D algorithms. The qualitative and quantitative results are depicted in Figure 13 and Table 9. In addition, this study provides statistical comparisons and rankings of the MH5N1 results against the other algorithms, as shown in Table 10.
The results demonstrate that MH5N1 outperforms the other algorithms, including recent ones such as MSSA. The key advantages of the proposed MH5N1 algorithm over MSSA, MOPSO, and MOEA/D are its high convergence and broad coverage. The qualitative results in Figure 13 show that the Pareto fronts of all functions are approximated very well, and the quantitative results in Table 9 are superior for all functions.
Table 9: Statistical results on UF1 - UF10 in the CEC2009 problems using MH5N1, MSSA, MOPSO and
MOEA/D.
Problems UF1 UF2 UF3
Metric MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D
Mean 0.0014 0.1024 0.1370 0.1871 0.0014 0.0576 0.0604 0.1223 0.0015 0.2628 0.3139 0.2886
Median 0.0014 0.1026 0.1317 0.1828 0.0013 0.0580 0.0483 0.1201 0.0015 0.2424 0.3080 0.2892
STD. 0.0001 0.0062 0.0440 0.0507 0.0002 0.0048 0.0276 0.0107 0.0002 0.0727 0.0447 0.0159
Worst 0.0016 0.1093 0.2278 0.2464 0.0018 0.0657 0.1305 0.1436 0.0019 0.4005 0.3777 0.3129
Best 0.0011 0.0897 0.0899 0.1265 0.0010 0.0479 0.0369 0.1048 0.0011 0.1711 0.2564 0.2634
Problems UF4 UF5 UF6
Metric MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D
Mean 0.0014 0.0902 0.1360 0.0681 0.1206 0.6659 2.2023 1.2914 0.0738 0.1903 0.6475 0.6881
Median 0.0014 0.0891 0.1343 0.0684 0.1206 0.6931 2.1257 1.3376 0.0738 0.1962 0.5507 0.6984
STD. 0.0001 0.0040 0.0073 0.0021 0.0000 0.0986 0.5530 0.1348 0.0000 0.0457 0.2661 0.0553
Worst 0.0016 0.0984 0.1518 0.0703 0.1206 0.7914 3.0383 1.4674 0.0739 0.2666 1.2428 0.7401
Best 0.0011 0.0855 0.1273 0.0646 0.1206 0.4495 1.4647 1.1230 0.0737 0.1163 0.3793 0.5523
Problems UF7 UF8 UF9 UF10
Metric MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO
Mean 0.0009 0.0690 0.3539 0.0111 0.2743 0.5367 0.0121 0.4441 0.4885 0.0111 0.9769 1.6371
Median 0.0008 0.0686 0.3873 0.0111 0.2655 0.5364 0.0120 0.4222 0.4145 0.0111 0.9190 1.5916
pe
STD 0.0001 0.0059 0.2044 0.0001 0.0447 0.1825 0.0003 0.1084 0.1444 0.0001 0.2189 0.2987
Worst 0.0011 0.0796 0.6151 0.0114 0.3794 0.7963 0.0127 0.6422 0.7221 0.0113 1.3142 2.1622
Best 0.0007 0.0610 0.0540 0.0110 0.2249 0.2453 0.0116 0.2849 0.3335 0.0110 0.6082 1.2200
Table 10: Statistical ranking results on UF1 - UF10 in the CEC2009 problems using MH5N1, MSSA,
MOPSO and MOEA/D.
Problems UF1 UF2 UF3
Metric MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D
Mean 1.00 2.00 3.00 4.00 1.00 2.00 3.00 4.00 1.00 2.00 4.00 3.00
Median 1.00 2.00 3.00 4.00 1.00 3.00 2.00 4.00 1.00 2.00 4.00 3.00
STD 1.00 2.00 3.00 4.00 1.00 2.00 4.00 3.00 1.00 4.00 3.00 2.00
Worst 1.00 2.00 3.00 4.00 1.00 2.00 3.00 4.00 1.00 4.00 3.00 2.00
Best 1.00 2.00 3.00 4.00 1.00 3.00 2.00 4.00 1.00 2.00 3.00 4.00
Mean rank 1.00 2.00 3.00 4.00 1.00 2.40 2.80 3.80 1.00 2.80 3.40 2.80
Final rank 1.00 2.00 3.00 4.00 1.00 2.00 3.00 4.00 1.00 2.00 3.00 2.00
Problems UF4 UF5 UF6
Metric MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D MH5N1 MSSA MOPSO MOEA/D
Mean 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 3.00 4.00
Median 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 3.00 4.00
STD 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 4.00 3.00
Worst 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 4.00 3.00
Best 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 3.00 4.00
Mean rank 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 3.40 3.60
Final rank 1.00 3.00 4.00 2.00 1.00 2.00 4.00 3.00 1.00 2.00 3.00 4.00
Problems UF7 UF8 UF9 UF10
Metric MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO MH5N1 MSSA MOPSO
Mean 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00
Median 1.00 2.00 3.00 1.00 2.00 3.00 1.00 3.00 2.00 1.00 2.00 3.00
STD 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00
Worst 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00
Best 1.00 3.00 2.00 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00
Mean rank 1.00 2.20 2.80 1.00 2.00 3.00 1.00 2.20 2.80 1.00 2.00 3.00
Final rank 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00 1.00 2.00 3.00
Figure 13: Pareto fronts obtained by MH5N1 on the CEC2009 problems (UF1 – UF10).
5 Real-world applications
This portion of the study applies the developed H5N1 and MH5N1 methodologies to a variety of
practical applications. The H5N1 algorithm is utilized to solve five conventional engineering design
problems, while the MH5N1 method is adopted to tackle the two-objective disc brake design problem. The
first task, the three-bar truss design problem, operates within a significantly restricted search space [72]. The
structural variables involved in this task are illustrated in Figure 14. The mathematical problem is presented
in Appendix C. Table 11 displays the outcome of employing the H5N1 method on this problem. The results
indicate that this approach holds its own when compared with traditional and stochastic optimization methods
described in past research.
Figure 14. Three-bar truss design problem
Table 11. Comparison of the results of several algorithms for the three-bar truss design problem
Algorithm x1 x2 g1(x) g2(x) g3(x) Optimal cost
Ray & Saini [76] 0.7950000 0.3950000 N/A N/A N/A 264.3000000
The primary objective of the welded beam design problem [77] is to ascertain the lowest manufacturing
cost by identifying the optimum values of a set of given variables. These include four optimization
variables, as depicted in Figure 15: the weld thickness 𝑥1(ℎ), the length of the attached portion of the bar
𝑥2(𝑙), the bar's height 𝑥3(𝑡), and the bar's thickness 𝑥4(𝑏). These variables must comply with seven
restrictions. A mathematical delineation of this problem is provided in Appendix C. The results of applying
the H5N1 technique to this problem are demonstrated in Table 12.
Figure 15. Welded beam design problem
Table 12. Comparison of the results of several algorithms for the welded beam design problem
Algorithm H5N1 SSA [66] CPSO [78] Coello & Montes [19] WOA [52] CDE [79] Coello [81] Coello [80] Siddall [81] Ragsdell [82] Deb [77]
𝑥1(ℎ) 0.20573 0.20570 0.20237 0.20599 0.20540 0.20314 0.20880 0.18290 0.24440 0.24550 0.24890
𝑥2(𝑙) 3.47049 3.47140 3.54421 3.47133 3.48429 3.54300 3.42050 4.04830 6.21890 6.19600 6.17300
𝑥3(𝑡) 9.03662 9.03660 9.04821 9.02022 9.03743 9.03350 8.99750 9.36660 8.29150 8.27300 8.17890
𝑥4(𝑏) 0.20573 0.20570 0.20572 0.20648 0.20628 0.20618 0.21000 0.20590 0.24440 0.24550 0.25330
𝑔1(𝑥) 0.00000 N/A -12.83980 -0.07409 N/A -44.57857 -0.33781 -408.73277 -5743.50200 -5743.82650 -5758.60380
𝑔2(𝑥) 0.00000 N/A -1.24747 -0.26623 N/A -44.66353 -353.90260 -2105.91421 -4.01521 -4.71510 -255.57690
𝑔3(𝑥) 0.00000 N/A -0.00150 -0.00050 N/A -0.00304 -0.00120 -0.02306 0.00000 0.00000 -0.00440
𝑔4(𝑥) -3.43298 N/A -3.42935 -3.43004 N/A -3.42373 -3.41187 -3.32153 -3.02256 -3.02029 -2.98287
𝑔5(𝑥) -0.08073 N/A -0.07938 -0.08099 N/A -0.07814 -0.08380 -0.05788 -0.11940 -0.12050 -0.12390
𝑔6(𝑥) -0.23554 N/A -0.23554 -0.23551 N/A -0.23556 -0.23565 -0.23703 -0.23424 -0.23421 -0.23416
𝑔7(𝑥) 0.00000 N/A - - N/A - - -160.58645 -3490.46940 -3604.27500 -4465.27090
The pressure vessel design problem involves four variables in the manufacturing process: the vessel's
shell thickness 𝑥1(𝑇𝑠), the head's thickness 𝑥2(𝑇ℎ), the inner radius of the vessel 𝑥3(𝑅), and the length of
the vessel without the heads 𝑥4(𝐿). The outcomes generated from deploying the H5N1 approach to this
problem are presented in Table 13. The mathematical representation of this problem is fully articulated in
Appendix C.
41
This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4519770
ed
iew
Figure 16. Pressure vessel design problem
Table 13. Comparison of the results of several algorithms for the pressure vessel design problem.
Algorithm 𝑥1(𝑇𝑠) 𝑥2(𝑇ℎ) 𝑥3(𝑅) 𝑥4(𝐿) 𝑔1(𝑥) 𝑔2(𝑥) 𝑔3(𝑥) 𝑔4(𝑥) 𝑓(𝑥)
H5N1 0.78058 0.38635 40.43444 198.42283 -0.00019 -0.00060 -77.53741 -41.57717 5892.58762
AOA [53] 0.83037 0.41621 42.75127 169.34540 N/A N/A N/A N/A 6048.78440
HPSO [84] 0.81250 0.43750 42.09845 176.63660 N/A N/A N/A N/A 6059.71433
CDE [79] 0.81250 0.43750 42.09841 176.63769 N/A N/A N/A N/A 6059.73400
WOA [52] 0.81250 0.43750 42.09827 176.63900 N/A N/A N/A N/A 6059.74100
The five-stage cantilever beam design problem involves ten design variables and eleven constraints [89].
Appendix C-C.4 presents the ranges of the design variables and the constants utilized in this problem. The
results derived from applying the H5N1 technique to this problem are outlined in Table 14.
Figure 17. Cantilever beam design problem
Table 14. Comparison of the results of several algorithms for the cantilever beam design problem
Algorithm H5N1 ADEA [90] dBA [89] PSGA [91] SR [91] GA [92] GA [26] BA [89]
𝑥1(𝑏1) 3.05649 3.05770 3.04668 3.00000 3.10321 3.00000 3.00000 3.43593
𝑥2(ℎ1) 61.12975 61.15460 60.93361 60.00000 60.29433 60.00000 60.00000 56.98366
𝑥3(𝑏2) 2.81437 2.81330 2.81981 3.10000 2.79583 3.10000 3.10000 3.01897
𝑥4(ℎ2) 56.28738 56.26530 56.39611 55.00000 55.87571 55.00000 55.00000 54.81524
𝑥5(𝑏3) 2.52386 2.52360 2.52973 2.60000 2.56392 2.60000 2.60000 2.79102
𝑥6(ℎ3) 50.47711 50.47170 50.59009 50.00000 51.26774 50.00000 50.00000 51.81262
𝑥7(𝑏4) 2.20457 2.20460 2.20516 2.28866 2.24726 2.28370 2.30820 4.55338
𝑥8(ℎ4) 44.09142 44.09111 44.10232 45.61715 44.13850 45.55070 45.04880 36.22565
𝑥9(𝑏5) 1.74976 1.74980 1.74976 1.74982 1.79172 1.75320 1.81180 1.71773
To handle the constraints of the tension/compression spring design problem, a penalty function is employed
in a manner akin to that in reference [94]. The data in Table 15 demonstrate the advantages of employing the
proposed H5N1 approach on this problem as well.
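The penalty idea referred to above can be sketched as follows. This is a minimal illustration only: the quadratic form and the weight `1e6` are assumptions for demonstration, not the exact scheme of reference [94].

```python
# A quadratic static-penalty scheme (illustrative): each violated
# constraint g_i(x) <= 0 adds weight * max(0, g_i(x))^2 to the objective,
# so infeasible candidates compare unfavourably in selection.
def penalized_fitness(f, constraints, x, weight=1e6):
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + weight * violation

# Toy example: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x ** 2
g = lambda x: 1.0 - x
feasible = penalized_fitness(f, [g], 2.0)    # no penalty: plain f(2) = 4
infeasible = penalized_fitness(f, [g], 0.0)  # violation 1 -> 0 + 1e6
```

Any metaheuristic can use such a penalized fitness unchanged, since the constraint handling is folded into the objective value.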
Figure 18. Tension/compression spring design problem
Table 15. Comparison of the results of several algorithms for the tension/compression spring design problem.
Algorithm 𝑥1(𝑑) 𝑥2(𝐷) 𝑥3(𝑃) 𝑔1(𝑥) 𝑔2(𝑥) 𝑔3(𝑥) 𝑔4(𝑥) Optimal cost 𝑓(𝑥)
H5N1 0.051726 0.357657 11.231553 0 0 -4.056560 -0.727078 0.012663
CDE [79] 0.051609 0.354714 11.410831 N/A N/A N/A N/A 0.012670
CPSO [78] 0.051728 0.357644 11.244543 -0.000845 -0.000013 -4.051300 -0.727090 0.012675
PSO [78] 0.051728 0.357644 11.244543 N/A N/A N/A N/A 0.012675
SSA [66] 0.051207 0.345215 12.004032 N/A N/A N/A N/A 0.012676
GSA [66] 0.050276 0.323680 13.525410 N/A N/A N/A N/A 0.012702
Coello [81] 0.051480 0.351661 11.632201 -0.002080 -0.000110 -4.026318 -4.026318 0.012705
Kannan [96] 0.050000 0.315900 14.250000 -0.000014 -0.003782 -3.938302 -0.756067 0.012833
The two-objective disc brake design problem involves four variables, as illustrated in Figure 19: the inner
radius of the discs 𝑥1(𝑅𝑖), the outer radius of the discs 𝑥2(𝑅𝑜), the engaging force 𝑥3(𝐹), and the number of
friction surfaces 𝑥4(𝑛). The parameters and constants used in this problem, along with their respective
ranges, are given in Appendix C-C.6. The results yielded from the application of the MH5N1 method to this
problem are summarized in Table 16.
Figure 19. Disc brake design problem
Table 16. Comparison of the results of several algorithms for the disc brake design problem
Algorithm (best for) 𝑥1(𝑅𝑖) 𝑥2(𝑅𝑜) 𝑥3(𝐹) 𝑥4(𝑛) 𝑓1(𝑥) 𝑓2(𝑥)
Plain stochastic [97]
 𝑓1(𝑥): 62.6 83.5 2920.9 11 1.79 2.77
 𝑓2(𝑥): 70.4 106.6 2948.4 11 3.76 2.24
 𝑓2(𝑥): 78.7 108.3 2988.3 11 3.25 2.11
Hybrid Immune-Hill Climbing algorithm (HIHC) [99]
 𝑓1(𝑥): N/A N/A N/A N/A 0.137 25.87
 𝑓2(𝑥): N/A N/A N/A N/A 2.816 2.083
NSGA-II [98]
 𝑓1(𝑥): 55 75 2736.72 2 0.1274 16.83
 𝑓2(𝑥): 79.99 109.99 2999.99 11 3.3459 2.071
PAHS [98]
 𝑓1(𝑥): 57.95 78.57 2736.72 2 0.1274 17.38
 𝑓2(𝑥): 79.99 109.99 2999.99 11 3.3459 2.071
6 Conclusions
This study proposed the H5N1 algorithm and its multi-objective variant MH5N1 to solve SOPs
and MOPs. In H5N1, the optimal environment that the virus aims to locate corresponds to the best solutions
achieved so far. To achieve a balance between exploration and exploitation, H5N1 incorporates a range of
coefficients and adaptive mechanisms. The MH5N1 algorithm, in turn, incorporates a repository
where the non-dominated solutions obtained thus far are stored for further utilization and analysis. When the
repository is full, solutions located in densely populated areas are eliminated, while non-dominated solutions
in sparsely populated regions are selected as favorable environment sources.
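The repository-update rule described above can be sketched as follows. This is a minimal illustration, assuming a nearest-neighbour distance in objective space as the crowding measure; the density mechanism actually used by MH5N1 may differ.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, capacity):
    """Insert a candidate into a bounded non-dominated archive.

    Dominated candidates are rejected; members dominated by the candidate
    are removed. On overflow, the member in the most crowded region
    (smallest squared distance to its nearest neighbour) is discarded.
    """
    if any(dominates(m, candidate) for m in archive):
        return archive
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    if len(archive) > capacity:
        def crowding(m):
            return min(sum((a - b) ** 2 for a, b in zip(m, o))
                       for o in archive if o is not m)
        archive.remove(min(archive, key=crowding))
    return archive

arc = []
for p in [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (2.5, 2.5), (0.5, 5.0)]:
    arc = update_archive(arc, p, capacity=3)
```

After the loop the archive holds at most three mutually non-dominated points, with the most crowded members pruned first.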
A series of experiments were conducted to validate the effectiveness of the proposed algorithms. Various
qualitative indicators such as search history, trajectory, average fitness, and convergence curve were utilized.
H5N1 was employed to optimize a set of standard functions. The observations and conclusions drawn from
the experiments indicate that H5N1 exhibits the capability to explore the most promising regions within the
search space. Furthermore, it gradually adjusts the positions of the host, representing the virus environment,
particularly in the final iterations. H5N1 also demonstrates the ability to enhance the average fitness across all
accessible environments and consistently improves upon the best solution found during the optimization
process.
Moreover, the H5N1 algorithm was applied to solve a collection of high-dimensional test functions,
which included unimodal, multimodal, and composite functions. The purpose was to showcase its effectiveness
in tackling problems with numerous variables and diverse characteristics. Through a statistical test, the results
were compared against various established and recent algorithms. The findings and analysis strongly suggest
that H5N1 has the ability to identify global optimum values for a broad range of standard unimodal,
multimodal, and composite functions, outperforming several optimization algorithms utilized in this study.
In order to evaluate the effectiveness of the MH5N1 algorithm, a comprehensive set of widely
recognized multi-objective test functions was employed. The obtained results were compared with those
achieved by other state-of-the-art MOAs such as MOPSO, NSGA-II, and MOEA/D. Based on the analysis and
findings, it can be deduced that the MH5N1 algorithm is capable of effectively approximating the true Pareto
optimal front, regardless of its shape or complexity. Furthermore, a significant conclusion can be drawn that
the MH5N1 algorithm exhibits a remarkable characteristic of achieving a harmonious convergence and
coverage. This attribute empowers the algorithm to effectively discover precise and evenly-distributed Pareto
optimal solutions across all objectives, thereby substantiating its excellence in MOPs.
The outcomes of the test function evaluations have demonstrated the potential of the H5N1 and MH5N1
algorithms in tackling real-world problems, substantiating their efficacy in practical applications. The
effectiveness of these algorithms has been verified by their ability to discover optimal solutions for the utilized
problem instances. By examining the results from real-world applications, we can conclude that both the
SH5N1 and MH5N1 algorithms are capable of effectively solving problems in real-world scenarios with
unknown search spaces.
Through simulations, results analysis, extensive searches, in-depth discussions, and comprehensive
conclusions, it can be asserted that the SH5N1 algorithm and MH5N1 hold notable advantages compared to
existing optimization algorithms in the literature, warranting their application to diverse problem domains.
This research also unveils several potential research avenues for further exploration. It is recommended
to investigate the application of the SH5N1 and MH5N1 algorithms in solving SOPs and MOPs across various
fields. Additionally, proposing binary versions of these algorithms could be a valuable contribution to the
optimization literature. While the present work briefly touches upon constraint optimization using the proposed
algorithms, further research should investigate the impact of different constraint handling methods on the
performance of SH5N1 and MH5N1.
Finally, the MATLAB source codes of the H5N1 algorithm are publicly available at
7 Acknowledgement
This work is funded by Vingroup and supported by Innovation Foundation (VINIF) under project code
VINIF.2021.DA00192.
Appendix A. Single-objective problems employed in this study.
Table 17: Unimodal functions [58]
Function | Dimensions | Boundary | $f_{\min}$
$F_3(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 20 | [-100, 100] | 0
$F_4(\mathbf{x}) = \max\{|x_i|,\ 1 \le i \le n\}$ | 20 | [-100, 100] | 0
$F_5(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2 \right]$ | 20 | [-30, 30] | 0
Table 18: Multimodal functions [58]
Function | Dimensions | Boundary | $f_{\min}$
$F_{10}(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 20 | [-32, 32] | 0
$F_{11}(\mathbf{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 20 | [-600, 600] | 0
$F_{12}(\mathbf{x}) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$ | 20 | [-50, 50] | 0
where $y_i = 1 + \frac{x_i + 1}{4}$ and
$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a < x_i < a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$
$F_{13}(\mathbf{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 20 | [-50, 50] | 0
Table 19: Fixed-dimension multimodal functions [58]
Function | Dimensions | Boundary | $f_{\min}$
$F_{14}(\mathbf{x}) = \left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}$ | 2 | [-65, 65] | 1
$F_{15}(\mathbf{x}) = \sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | 4 | [-5, 5] | 0.00030
$F_{16}(\mathbf{x}) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [-5, 5] | -1.0316
$F_{17}(\mathbf{x}) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [-5, 5] | 0.398
$F_{22}(\mathbf{x}) = -\sum_{i=1}^{7}\left[(X - \mathfrak{a}_i)(X - \mathfrak{a}_i)^T + \mathfrak{c}_i\right]^{-1}$ | 4 | [0, 10] | -10.4028
$F_{23}(\mathbf{x}) = -\sum_{i=1}^{10}\left[(X - \mathfrak{a}_i)(X - \mathfrak{a}_i)^T + \mathfrak{c}_i\right]^{-1}$ | 4 | [0, 10] | -10.5363
Table 20: Composite functions from CEC 2005 employed in this study [59]
Function | Properties | Dimensions | Boundary | $f_{\min}$
F24 (C-F15) Rotated Hybrid Composition Function | M, R, N, S | 30 | $[-5, 5]^D$ | $x^* = o_1$, $F16(x^*) = f\_bias_{16} = 120$
F25 (C-F16) Rotated Hybrid Composition Function with Noise in Fitness | M, R, N, S | 30 | $[-5, 5]^D$ | $x^* = o_1$, $F17(x^*) = f\_bias_{17} = 120$
F33 (C-F24) Rotated Hybrid Composition Function without bounds | M, N, S | 30 | $[2, 5]^D$ | $x^* = o_1$ is outside of the initialization range, $F25(x^*) = f\_bias_{25} = 260$
Note: (M: Multimodal, R: Rotated, N: Non-Separable, S: Scalable, D: Dimension).
Appendix B. Multi-objective problems employed in this study
Table 21: The ZDT problems utilized in this study [66], [70]
Functions | Equations
ZDT1:
$f_1(x) = x_1$, $f_2(x) = g(x)\left(1 - \sqrt{f_1(x)/g(x)}\right)$, $g(x) = 1 + \frac{9}{N-1}\sum_{i=2}^{N} x_i$
Where: $0 \le x_i \le 1$, $\{i \mid i \in \mathbb{Z}, 1 \le i \le 30\}$
ZDT2:
$f_1(x) = x_1$, $f_2(x) = g(x)\left(1 - \left(f_1(x)/g(x)\right)^2\right)$, $g(x) = 1 + \frac{9}{N-1}\sum_{i=2}^{N} x_i$
Where: $0 \le x_i \le 1$, $\{i \mid i \in \mathbb{Z}, 1 \le i \le 30\}$
ZDT3:
$f_1(x) = x_1$, $f_2(x) = g(x)\left(1 - \sqrt{f_1(x)/g(x)} - \frac{f_1(x)}{g(x)}\sin(10\pi f_1(x))\right)$, $g(x) = 1 + \frac{9}{29}\sum_{i=2}^{N} x_i$
Where: $0 \le x_i \le 1$, $1 \le i \le 30$
ZDT4:
$f_1(x) = x_1$, $f_2(x) = g(x)\left(1 - \sqrt{f_1(x)/g(x)}\right)$, $g(x) = 1 + 10(N-1) + \sum_{i=2}^{N}\left(x_i^2 - 10\cos(4\pi x_i)\right)$
Where: $0 \le x_1 \le 1$, $-5 \le x_i \le 5$ for $2 \le i \le N$
Table 22. CEC 2009 test problems (UF1 - UF10) [61]
Function | Equations
UF1
Minimized:
$f_1(x) = x_1 + \frac{2}{|J_1|}\sum_{j\in J_1}\left(x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
$f_2(x) = 1 - \sqrt{x_1} + \frac{2}{|J_2|}\sum_{j\in J_2}\left(x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
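UF1 can be implemented directly from its definition; a minimal sketch (1-based index sets, as in the table):

```python
import math

def uf1(x):
    """CEC2009 UF1 objectives; x[0] in [0, 1], remaining variables in [-1, 1]."""
    n = len(x)
    J1 = [j for j in range(2, n + 1) if j % 2 == 1]   # odd 1-based indices
    J2 = [j for j in range(2, n + 1) if j % 2 == 0]   # even 1-based indices
    term = lambda j: (x[j - 1]
                      - math.sin(6 * math.pi * x[0] + j * math.pi / n)) ** 2
    f1 = x[0] + 2.0 / len(J1) * sum(term(j) for j in J1)
    f2 = 1.0 - math.sqrt(x[0]) + 2.0 / len(J2) * sum(term(j) for j in J2)
    return f1, f2

# On the Pareto set x_j = sin(6*pi*x1 + j*pi/n), both sums vanish and the
# front is f2 = 1 - sqrt(f1).
n, x1 = 10, 0.36
x = [x1] + [math.sin(6 * math.pi * x1 + j * math.pi / n) for j in range(2, n + 1)]
f1, f2 = uf1(x)
```

The remaining UF problems follow the same pattern with different shape terms and y-transformations.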
UF2
Minimized:
$f_1(x) = x_1 + \frac{2}{|J_1|}\sum_{j\in J_1} y_j^2$
$f_2(x) = 1 - \sqrt{x_1} + \frac{2}{|J_2|}\sum_{j\in J_2} y_j^2$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = \begin{cases} x_j - \left[0.3x_1^2\cos\left(24\pi x_1 + \frac{4j\pi}{n}\right) + 0.6x_1\right]\cos\left(6\pi x_1 + \frac{j\pi}{n}\right), & j \in J_1 \\ x_j - \left[0.3x_1^2\cos\left(24\pi x_1 + \frac{4j\pi}{n}\right) + 0.6x_1\right]\sin\left(6\pi x_1 + \frac{j\pi}{n}\right), & j \in J_2 \end{cases}$
UF3
Minimized:
$f_1(x) = x_1 + \frac{2}{|J_1|}\left(4\sum_{j\in J_1} y_j^2 - 2\prod_{j\in J_1}\cos\left(\frac{20y_j\pi}{\sqrt{j}}\right) + 2\right)$
$f_2(x) = 1 - \sqrt{x_1} + \frac{2}{|J_2|}\left(4\sum_{j\in J_2} y_j^2 - 2\prod_{j\in J_2}\cos\left(\frac{20y_j\pi}{\sqrt{j}}\right) + 2\right)$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = x_j - x_1^{0.5\left(1.0 + \frac{3(j-2)}{n-2}\right)},\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
UF4
Minimized:
$f_1(x) = x_1 + \frac{2}{|J_1|}\sum_{j\in J_1} h(y_j)$
$f_2(x) = 1 - x_1^2 + \frac{2}{|J_2|}\sum_{j\in J_2} h(y_j)$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right),\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
$h(y_j) = \frac{|y_j|}{1 + e^{2|y_j|}},\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
UF5
Minimized:
$f_1(x) = x_1 + \left(\frac{1}{2N} + \varepsilon\right)\left|\sin(2N\pi x_1)\right| + \frac{2}{|J_1|}\sum_{j\in J_1} h(y_j)$
$f_2(x) = 1 - x_1 + \left(\frac{1}{2N} + \varepsilon\right)\left|\sin(2N\pi x_1)\right| + \frac{2}{|J_2|}\sum_{j\in J_2} h(y_j)$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right),\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
$h(y_j) = 2y_j^2 - \cos(4\pi y_j) + 1$
$N = 10,\ \varepsilon = 0.1$
UF6
Minimized:
$f_1(x) = x_1 + \max\left\{0,\ 2\left(\frac{1}{2N} + \varepsilon\right)\sin(2N\pi x_1)\right\} + \frac{2}{|J_1|}\left(4\sum_{j\in J_1} y_j^2 - 2\prod_{j\in J_1}\cos\left(\frac{20y_j\pi}{\sqrt{j}}\right) + 1\right)$
$f_2(x) = 1 - x_1 + \max\left\{0,\ 2\left(\frac{1}{2N} + \varepsilon\right)\sin(2N\pi x_1)\right\} + \frac{2}{|J_2|}\left(4\sum_{j\in J_2} y_j^2 - 2\prod_{j\in J_2}\cos\left(\frac{20y_j\pi}{\sqrt{j}}\right) + 1\right)$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right),\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
$N = 2,\ \varepsilon = 0.1$
UF7
Minimized:
$f_1(x) = \sqrt[5]{x_1} + \frac{2}{|J_1|}\sum_{j\in J_1} y_j^2$
$f_2(x) = 1 - \sqrt[5]{x_1} + \frac{2}{|J_2|}\sum_{j\in J_2} y_j^2$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ j \text{ is odd},\ 2 \le j \le n\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ j \text{ is even},\ 2 \le j \le n\}$
$y_j = x_j - \sin\left(6\pi x_1 + \frac{j\pi}{n}\right),\ \{j \mid j \in \mathbb{Z},\ 2 \le j \le n\}$
UF8
Minimized:
$f_1(x) = \cos(0.5x_1\pi)\cos(0.5x_2\pi) + \frac{2}{|J_1|}\sum_{j\in J_1}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
$f_2(x) = \cos(0.5x_1\pi)\sin(0.5x_2\pi) + \frac{2}{|J_2|}\sum_{j\in J_2}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
$f_3(x) = \sin(0.5x_1\pi) + \frac{2}{|J_3|}\sum_{j\in J_3}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-1) \bmod 3 = 0\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-2) \bmod 3 = 0\}$, $J_3 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ j \bmod 3 = 0\}$
UF9
Minimized:
$f_1(x) = 0.5\left[\max\left\{0,\ (1+\varepsilon)\left(1 - 4(2x_1 - 1)^2\right)\right\} + 2x_1\right]x_2 + \frac{2}{|J_1|}\sum_{j\in J_1}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
$f_2(x) = 0.5\left[\max\left\{0,\ (1+\varepsilon)\left(1 - 4(2x_1 - 1)^2\right)\right\} - 2x_1 + 2\right]x_2 + \frac{2}{|J_2|}\sum_{j\in J_2}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
$f_3(x) = 1 - x_2 + \frac{2}{|J_3|}\sum_{j\in J_3}\left(x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right)\right)^2$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-1) \bmod 3 = 0\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-2) \bmod 3 = 0\}$, $J_3 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ j \bmod 3 = 0\}$
$\varepsilon = 0.1$
UF10
Minimized:
$f_1(x) = \cos(0.5x_1\pi)\cos(0.5x_2\pi) + \frac{2}{|J_1|}\sum_{j\in J_1}\left(4y_j^2 - \cos(8\pi y_j) + 1\right)$
$f_2(x) = \cos(0.5x_1\pi)\sin(0.5x_2\pi) + \frac{2}{|J_2|}\sum_{j\in J_2}\left(4y_j^2 - \cos(8\pi y_j) + 1\right)$
$f_3(x) = \sin(0.5x_1\pi) + \frac{2}{|J_3|}\sum_{j\in J_3}\left(4y_j^2 - \cos(8\pi y_j) + 1\right)$
Where: $J_1 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-1) \bmod 3 = 0\}$, $J_2 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ (j-2) \bmod 3 = 0\}$, $J_3 = \{j \mid j \in \mathbb{Z},\ 3 \le j \le n,\ j \bmod 3 = 0\}$
$y_j = x_j - 2x_2\sin\left(2\pi x_1 + \frac{j\pi}{n}\right),\ \{j \mid j \in \mathbb{Z},\ 3 \le j \le n\}$
Appendix C. Real-world problems utilised in this study
C.1. Three-bar truss design problem
Minimized: $f(x) = \left(2\sqrt{2}x_1 + x_2\right) \times l$
Subject to:
$g_1(x) = \frac{\sqrt{2}x_1 + x_2}{\sqrt{2}x_1^2 + 2x_1x_2}P - \sigma \le 0$
$g_2(x) = \frac{x_2}{\sqrt{2}x_1^2 + 2x_1x_2}P - \sigma \le 0$
$g_3(x) = \frac{1}{x_1 + \sqrt{2}x_2}P - \sigma \le 0$
Where: $0 \le x_1, x_2 \le 1$
C.2. Welded beam design problem
$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}$
$\tau' = \frac{P}{\sqrt{2}x_1x_2}, \quad \tau'' = \frac{MR}{J}$
$M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$
$J = 2\left\{\sqrt{2}x_1x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$
$\sigma(x) = \frac{6PL}{x_4x_3^2}, \quad \delta(x) = \frac{4PL^3}{Ex_3^3x_4}$
$P_c(x) = \frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$
$P = 6000\ \text{lb},\ L = 14\ \text{in},\ E = 30 \times 10^6\ \text{psi},\ G = 12 \times 10^6\ \text{psi}$
$g_3(x) = x_1 - x_4 \le 0$
$g_4(x) = 0.10471x_1^2 + 0.04811x_3x_4(14 + x_2) - 5 \le 0$
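The intermediate quantities above can be evaluated numerically as a sketch. The design used below is the H5N1 solution reported in Table 12; the limits τ_max = 13600 psi and σ_max = 30000 psi mentioned in the comment are the values commonly used for this benchmark and are assumptions here, since the excerpt lists only P, L, E and G.

```python
import math

# Constants for the welded beam problem as given above.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6

def welded_beam_responses(x1, x2, x3, x4):
    """Shear stress tau, bending stress sigma, tip deflection delta and
    buckling load Pc for a design x = (h, l, t, b)."""
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                      # tau'
    M = P * (L + x2 / 2.0)
    R = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2
               * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2))
    tau_pp = M * R / J                                          # tau''
    tau = math.sqrt(tau_p ** 2
                    + 2.0 * tau_p * tau_pp * x2 / (2.0 * R)
                    + tau_pp ** 2)
    sigma = 6.0 * P * L / (x4 * x3 ** 2)
    delta = 4.0 * P * L ** 3 / (E * x3 ** 3 * x4)
    pc = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / L ** 2
          * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    return tau, sigma, delta, pc

# At the H5N1 design from Table 12, tau and sigma sit near the usual
# limits (assumed tau_max = 13600 psi, sigma_max = 30000 psi) and Pc ~ P.
tau, sigma, delta, pc = welded_beam_responses(0.20573, 3.47049, 9.03662, 0.20573)
```

This near-activeness of the stress and buckling constraints is typical of high-quality welded beam solutions.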
C.3. Pressure vessel design problem
$g_2(x) = -x_2 + 0.00954x_3 \le 0$
$g_4(x) = x_4 - 240 \le 0$
C.4. Cantilever beam design problem
Subject to:
$g_1 = \frac{6Pl}{x_9x_{10}^2} - \sigma_{\max} \le 0; \quad g_2 = \frac{6P(2l)}{x_7x_8^2} - \sigma_{\max} \le 0$
$g_3 = \frac{6P(3l)}{x_5x_6^2} - \sigma_{\max} \le 0; \quad g_4 = \frac{6P(4l)}{x_3x_4^2} - \sigma_{\max} \le 0$
$g_5 = \frac{6P(5l)}{x_1x_2^2} - \sigma_{\max} \le 0$
$g_6 = \frac{Pl^3}{E}\left(\frac{244}{x_1x_2^3} + \frac{148}{x_3x_4^3} + \frac{76}{x_5x_6^3} + \frac{28}{x_7x_8^3} + \frac{4}{x_9x_{10}^3}\right) - \delta_{\max} \le 0$
$g_7 = \frac{x_2}{x_1} - 20 \le 0; \quad g_8 = \frac{x_4}{x_3} - 20 \le 0; \quad g_9 = \frac{x_6}{x_5} - 20 \le 0$
$g_{10} = \frac{x_8}{x_7} - 20 \le 0; \quad g_{11} = \frac{x_{10}}{x_9} - 20 \le 0$
C.5. Tension/compression spring design problem
$g_2(x) = \frac{4x_2^2 - x_1x_2}{12566\left(x_2x_1^3 - x_1^4\right)} + \frac{1}{5108x_1^2} - 1 \le 0$
$g_3(x) = 1 - \frac{140.45x_1}{x_2^2x_3} \le 0$
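The spring problem can be evaluated as follows. The excerpt above shows only g2 and g3; the objective f and constraints g1 and g4 in this sketch are taken from the standard statement of this benchmark and should be treated as assumptions, although at the H5N1 design from Table 15 they reproduce the reported values.

```python
def spring(x1, x2, x3):
    """Weight and constraints of the tension/compression spring problem.

    g2 and g3 follow the formulation above; f, g1 and g4 use the standard
    statement of the benchmark (assumed, not shown in the excerpt).
    """
    f = (x3 + 2.0) * x2 * x1 ** 2
    g1 = 1.0 - x2 ** 3 * x3 / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return f, [g1, g2, g3, g4]

# H5N1 solution from Table 15: g1 and g2 are active (~0), and f, g3, g4
# match the tabulated values.
f, gs = spring(0.051726, 0.357657, 11.231553)
```

This agreement with Table 15 is a useful sanity check on the reconstructed constraint expressions.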
C.6. Disc brake design problem with two objectives
Minimized: 𝑓1(𝑥) = 4.9 × 10―5(𝑥22 ― 𝑥21)(𝑥4 ― 1), in kg.
$g_2(x) = 30 - 2.5(x_4 + 1) \ge 0$
$g_3(x) = 0.4 - \frac{x_3}{3.14\left(x_2^2 - x_1^2\right)} \ge 0$
$g_5(x) = \frac{2.66 \times 10^{-2}x_3x_4\left(x_2^3 - x_1^3\right)}{x_2^2 - x_1^2} - 900 \ge 0$
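The two objectives of the disc brake problem can be sketched as follows. Only f1 appears explicitly in the excerpt; the stopping-time objective f2 uses the standard statement of this benchmark and is therefore an assumption.

```python
def disc_brake(x1, x2, x3, x4):
    """Objectives of the two-objective disc brake problem.

    f1 (brake mass, kg) follows the formulation above; f2 (stopping time, s)
    is not shown in the excerpt and uses the standard statement of the
    benchmark (assumed).
    """
    f1 = 4.9e-5 * (x2 ** 2 - x1 ** 2) * (x4 - 1.0)
    f2 = 9.82e6 * (x2 ** 2 - x1 ** 2) / (x3 * x4 * (x2 ** 3 - x1 ** 3))
    return f1, f2

# NSGA-II best-f1 design from Table 16: mass matches the tabulated 0.1274 kg.
f1, f2 = disc_brake(55.0, 75.0, 2736.72, 2.0)
```

Because the two objectives conflict (lighter brakes stop more slowly), the problem is a natural target for the MH5N1 repository mechanism.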
References
[1] A. H. Wright, “Genetic Algorithms for Real Parameter Optimization,” in Foundations of Genetic
Algorithms, G. J. E. Rawlins, Ed., Elsevier, 1991, pp. 205–218. doi: 10.1016/B978-0-08-050684-
5.50016-1.
[2] I. Rechenberg, “Evolutionsstrategie,” Optimierung technischer Systeme nach Prinzipien derbiologischen
Evolution, 1973.
[3] J. R. Koza, Genetic programming II, vol. 17. MIT press Cambridge, 1994.
[4] K. V. Price, “Differential Evolution,” in Handbook of Optimization: From Classical to Modern
Approach, I. Zelinka, V. Snášel, and A. Abraham, Eds., in Intelligent Systems Reference Library. Berlin,
Heidelberg: Springer, 2013, pp. 187–214. doi: 10.1007/978-3-642-30504-7_8.
[5] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN’95-international
conference on neural networks, IEEE, 1995, pp. 1942–1948.
[6] X.-S. Yang and S. Deb, “Cuckoo Search via Lévy flights,” in 2009 World Congress on Nature &
Biologically Inspired Computing (NaBIC), Dec. 2009, pp. 210–214. doi:
10.1109/NABIC.2009.5393690.
[7] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms for discrete optimization,” Artificial
life, vol. 5, no. 2, pp. 137–172, 1999, doi: 10.1162/106454699568728.
[8] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization,” IEEE computational intelligence
[15] F. Glover, “Tabu search—part I,” ORSA Journal on computing, vol. 1, no. 3, pp. 190–206, 1989.
[16] Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony
search,” simulation, vol. 76, no. 2, pp. 60–68, 2001.
[17] R. V. Rao, V. J. Savsani, and D. P. Vakharia, “Teaching–learning-based optimization: a novel method
for constrained mechanical design optimization problems,” Computer-aided design, vol. 43, no. 3, pp.
303–315, 2011.
[18] D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE transactions on
evolutionary computation, vol. 1, no. 1, pp. 67–82, 1997.
[19] C. A. Coello Coello and E. Mezura Montes, “Constraint-handling in genetic algorithms through the use
of dominance-based tournament selection,” Advanced Engineering Informatics, vol. 16, no. 3, pp. 193–
203, Jul. 2002, doi: 10.1016/S1474-0346(02)00011-3.
[20] H. Tran-Ngoc et al., “Model Updating for a Railway Bridge Using a Hybrid Optimization Algorithm
Combined with Experimental Data,” presented at the Proceedings of 1st International Conference on
Structural Damage Modelling and Assessment: SDMA 2020, 4-5 August 2020, Ghent University,
Belgium, Springer, 2021, pp. 19–30.
[21] B. Freisleben and P. Merz, “Fitness landscape analysis and memetic algorithms for the quadratic
assignment problem,” IEEE Trans. Evol. Computat., vol. 4, no. 4, pp. 337–352, Nov. 2000, doi:
10.1109/4235.887234.
[22] K. Deb and H. Jain, “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-
Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints,” IEEE Trans.
Evol. Computat., vol. 18, no. 4, pp. 577–601, Aug. 2014, doi: 10.1109/TEVC.2013.2281535.
[23] H. Tran-Ngoc, S. Khatir, T. Le-Xuan, G. De Roeck, T. Bui-Tien, and M. Abdel Wahab, “Finite element
model updating of a multispan bridge with a hybrid metaheuristic search algorithm using experimental
data from wireless triaxial sensors,” Engineering with Computers, vol. 38, no. S3, pp. 1865–1883, Aug.
2022, doi: 10.1007/s00366-021-01307-9.
[24] H.-G. Beyer and B. Sendhoff, “Robust optimization – A comprehensive survey,” Computer Methods in
Applied Mechanics and Engineering, vol. 196, no. 33–34, pp. 3190–3218, Jul. 2007, doi:
10.1016/j.cma.2007.03.003.
[25] Y. Jin and J. Branke, “Evolutionary Optimization in Uncertain Environments—A Survey,” IEEE Trans.
Evol. Computat., vol. 9, no. 3, pp. 303–317, Jun. 2005, doi: 10.1109/TEVC.2005.846356.
[26] A. C. C. Lemonge and H. J. C. Barbosa, “An adaptive penalty scheme for genetic algorithms in structural
optimization,” Int. J. Numer. Meth. Engng., vol. 59, no. 5, pp. 703–736, Feb. 2004, doi:
10.1002/nme.899.
[27] S. Kirkpatrick, C. D. Gelatt Jr, and M. P. Vecchi, “Optimization by simulated annealing,” science, vol.
220, no. 4598, pp. 671–680, 1983.
[28] M. Črepinšek, S.-H. Liu, and M. Mernik, “Exploration and exploitation in evolutionary algorithms: A
survey,” ACM computing surveys (CSUR), vol. 45, no. 3, pp. 1–33, 2013.
[29] T. Bäck, D. B. Fogel, and Z. Michalewicz, “Handbook of evolutionary computation,” Release, vol. 97,
no. 1, p. B1, 1997.
[30] Y. Collette and P. Siarry, Multiobjective optimization: principles and case studies. Springer Science &
Business Media, 2004.
[31] E. Zitzler, Evolutionary algorithms for multiobjective optimization: Methods and applications, vol. 63.
Shaker Ithaca, 1999.
[32] J.-H. Chen, D. E. Goldberg, S.-Y. Ho, and K. Sastry, “Fitness Inheritance In Multi-objective
Optimization.,” in GECCO, Citeseer, 2002, pp. 319–326.
[33] J. Sweller, “Cognitive Load During Problem Solving: Effects on Learning,” Cognitive Science, vol. 12,
no. 2, pp. 257–285, Apr. 1988, doi: 10.1207/s15516709cog1202_4.
[34] A. Tsoukiàs, “On the concept of decision aiding process: an operational perspective,” Ann Oper Res, vol.
154, no. 1, pp. 3–27, Jul. 2007, doi: 10.1007/s10479-007-0187-z.
[35] C. A. C. Coello, G. B. Lamont, and D. A. Van Veldhuizen, Evolutionary algorithms for solving multi-
[38] Y. Haimes, “On a bicriterion formulation of the problems of integrated system identification and system
optimization,” IEEE transactions on systems, man, and cybernetics, no. 3, pp. 296–297, 1971.
[39] R. E. Steuer, “Multiple criteria optimization,” Theory, Computation, and Application, 1986.
[40] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm:
NSGA-II,” IEEE transactions on evolutionary computation, vol. 6, no. 2, pp. 182–197, 2002.
[41] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, “A fast elitist non-dominated sorting genetic algorithm
for multi-objective optimization: NSGA-II,” in Parallel Problem Solving from Nature PPSN VI: 6th
International Conference Paris, France, September 18–20, 2000 Proceedings 6, Springer, 2000, pp.
849–858.
[42] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,”
IEEE Transactions on evolutionary computation, vol. 11, no. 6, pp. 712–731, 2007.
[43] H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and
NSGA-II,” IEEE transactions on evolutionary computation, vol. 13, no. 2, pp. 284–302, 2008.
[44] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, “Multiobjective evolutionary
algorithms: A survey of the state of the art,” Swarm and Evolutionary Computation, vol. 1, no. 1, pp.
32–49, Mar. 2011, doi: 10.1016/j.swevo.2011.03.001.
[45] B. V. Babu and M. M. L. Jehan, “Differential evolution for multi-objective optimization,” in The 2003
Congress on Evolutionary Computation, 2003. CEC’03., IEEE, 2003, pp. 2696–2703.
[46] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm
optimization (MOPSO),” in Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS’03 (Cat.
No. 03EX706), IEEE, 2003, pp. 26–33.
[47] G. Dhiman et al., “MOSOA: A new multi-objective seagull optimization algorithm,” Expert Systems
with Applications, vol. 167, p. 114150, 2021.
[48] R. A. Santana, M. R. Pontes, and C. J. Bastos-Filho, “A multiple objective particle swarm optimization
approach using crowding distance and roulette wheel,” in 2009 Ninth International Conference on
Intelligent Systems Design and Applications, IEEE, 2009, pp. 237–242.
[49] J. M. Peiris, M. D. De Jong, and Y. Guan, “Avian influenza virus (H5N1): a threat to human health,”
Clinical microbiology reviews, vol. 20, no. 2, pp. 243–267, 2007, doi: 10.1128/cmr.00037-06.
[50] B. Olsen, V. J. Munster, A. Wallensten, J. Waldenström, A. D. Osterhaus, and R. A. Fouchier, “Global
patterns of influenza A virus in wild birds,” science, vol. 312, no. 5772, pp. 384–388, 2006.
[51] Y. Watanabe, M. S. Ibrahim, Y. Suzuki, and K. Ikuta, “The changing nature of avian influenza A virus
(H5N1),” Trends in Microbiology, vol. 20, no. 1, pp. 11–20, Jan. 2012, doi: 10.1016/j.tim.2011.10.003.
[52] S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in engineering software, vol.
95, pp. 51–67, 2016, doi: 10.1016/j.advengsoft.2017.07.002.
[53] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, and A. H. Gandomi, “The arithmetic optimization
algorithm,” Computer methods in applied mechanics and engineering, vol. 376, p. 113609, 2021, doi:
10.1016/j.cma.2020.113609.
[54] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747,
2016, doi: 10.48550/arXiv.1609.04747.
[55] D. E. Goldberg, Genetic Algorithms. Pearson Education India, 2013.
[56] F. Martínez-Álvarez et al., “Coronavirus optimization algorithm: a bioinspired metaheuristic based on
the COVID-19 propagation model,” Big data, vol. 8, no. 4, pp. 308–322, 2020, doi:
10.1089/big.2020.0051.
[57] T. Blickle, “Tournament selection,” Evolutionary computation, vol. 1, pp. 181–186, 2000.
[58] X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on
Evolutionary computation, vol. 3, no. 2, pp. 82–102, 1999.
[59] P. N. Suganthan, N. Hansen, J. J. Liang, and K. Deb, “Problem definitions and evaluation criteria for
the CEC 2005 special session on real-parameter optimization,” Nanyang Technological University,
Technical Report, 2005.
[60] S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a
scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–
506, Oct. 2006, doi: 10.1109/TEVC.2005.861417.
[61] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, “Multiobjective optimization test
instances for the CEC 2009 special session and competition,” University of Essex, Colchester, UK and
Nanyang technological University, Singapore, special session on performance assessment of multi-
objective optimization algorithms, technical report, vol. 264, pp. 1–30, 2008.
[62] D. Simon, “Biogeography-Based Optimization,” IEEE Transactions on Evolutionary Computation, vol.
12, no. 6, pp. 702–713, Dec. 2008, doi: 10.1109/TEVC.2008.919004.
[65] S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey Wolf Optimizer,” Advances in Engineering Software,
vol. 69, pp. 46–61, Mar. 2014, doi: 10.1016/j.advengsoft.2013.12.007.
[66] S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, and S. M. Mirjalili, “Salp Swarm
Algorithm: A bio-inspired optimizer for engineering design problems,” Advances in Engineering
Software, vol. 114, pp. 163–191, Dec. 2017, doi: 10.1016/j.advengsoft.2017.07.002.
This preprint research paper has not been peer reviewed. Electronic copy available at: https://ssrn.com/abstract=4519770
[67] V. Kumar, J. K. Chhabra, and D. Kumar, “Parameter adaptive harmony search algorithm for unimodal
and multimodal optimization problems,” Journal of Computational Science, vol. 5, no. 2, pp. 144–155,
Mar. 2014, doi: 10.1016/j.jocs.2013.12.001.
[68] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm
optimizer for global optimization of multimodal functions,” IEEE transactions on evolutionary
computation, vol. 10, no. 3, pp. 281–295, 2006.
[69] E. Zitzler, K. Deb, and L. Thiele, “Comparison of Multiobjective Evolutionary Algorithms: Empirical
Results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, Jun. 2000, doi:
10.1162/106365600568202.
[70] S. Mirjalili, “Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-
objective, discrete, and multi-objective problems,” Neural Comput & Applic, vol. 27, no. 4, pp. 1053–
1073, May 2016, doi: 10.1007/s00521-015-1920-1.
[71] M. R. Sierra and C. A. Coello Coello, “Improving PSO-Based Multi-objective Optimization Using
Crowding, Mutation and ∈-Dominance,” in Evolutionary Multi-Criterion Optimization, C. A. Coello
Coello, A. Hernández Aguirre, and E. Zitzler, Eds., in Lecture Notes in Computer Science. Berlin,
Heidelberg: Springer, 2005, pp. 505–519. doi: 10.1007/978-3-540-31880-4_35.
[72] A. Sadollah, A. Bahreininejad, H. Eskandar, and M. Hamdi, “Mine blast algorithm: A new population
based algorithm for solving constrained engineering optimization problems,” Applied Soft Computing,
vol. 13, no. 5, pp. 2592–2612, May 2013, doi: 10.1016/j.asoc.2012.11.026.
[73] H. Liu, Z. Cai, and Y. Wang, “Hybridizing particle swarm optimization with differential evolution for
constrained numerical and engineering optimization,” Applied Soft Computing, vol. 10, no. 2, pp. 629–
640, Mar. 2010, doi: 10.1016/j.asoc.2009.08.031.
[74] M. Zhang, W. Luo, and X. Wang, “Differential evolution with dynamic stochastic selection for
constrained optimization,” Information Sciences, vol. 178, no. 15, pp. 3043–3074, Aug. 2008, doi:
10.1016/j.ins.2008.02.014.
[75] T. Ray and K. M. Liew, “Society and civilization: An optimization algorithm based on the simulation of
social behavior,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 4, pp. 386–396, Aug.
2003, doi: 10.1109/TEVC.2003.814902.
[76] T. Ray and P. Saini, “Engineering design optimization using a swarm with an intelligent information
sharing among individuals,” Engineering Optimization, vol. 33, no. 6, pp. 735–748, Aug. 2001, doi:
10.1080/03052150108940941.
[77] K. Deb, “Optimal design of a welded beam via genetic algorithms,” AIAA Journal, vol. 29, no. 11, pp.
2013–2015, Nov. 1991, doi: 10.2514/3.10834.
[78] Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained
engineering design problems,” Engineering Applications of Artificial Intelligence, vol. 20, no. 1, pp. 89–
99, Feb. 2007, doi: 10.1016/j.engappai.2006.03.003.
[79] F. Huang, L. Wang, and Q. He, “An effective co-evolutionary differential evolution for constrained
optimization,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 340–356, Mar. 2007, doi:
10.1016/j.amc.2006.07.105.
[80] C. A. Coello Coello, “Use of a self-adaptive penalty approach for engineering optimization problems,”
Computers in Industry, vol. 41, no. 2, pp. 113–127, Mar. 2000, doi: 10.1016/S0166-3615(99)00046-9.
[81] C. A. Coello Coello, “Constraint-handling using an evolutionary multiobjective optimization
technique,” Civil Engineering Systems, vol. 17, no. 4, pp. 319–346, 2000.
[83] K. Deb, “GeneAS: A robust optimal design technique for mechanical component design,” Evolutionary
algorithms in engineering applications, pp. 497–514, 1997.
[84] Q. He and L. Wang, “A hybrid particle swarm optimization with a feasibility-based rule for constrained
optimization,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 1407–1422, Mar. 2007, doi:
10.1016/j.amc.2006.07.134.
[85] K. Deb, “GeneAS: A Robust Optimal Design Technique for Mechanical Component Design,” in
Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz, Eds., Berlin,
Heidelberg: Springer Berlin Heidelberg, 1997, pp. 497–514. doi: 10.1007/978-3-662-03423-1_27.
[86] M. Mahdavi, M. Fesanghary, and E. Damangir, “An improved harmony search algorithm for solving
optimization problems,” Applied Mathematics and Computation, vol. 188, no. 2, pp. 1567–1579, May
2007, doi: 10.1016/j.amc.2006.11.033.
[87] B. K. Kannan and S. N. Kramer, “An Augmented Lagrange Multiplier Based Method for Mixed Integer
Discrete Continuous Optimization and Its Applications to Mechanical Design,” Journal of Mechanical
Design, vol. 116, no. 2, pp. 405–411, Jun. 1994, doi: 10.1115/1.2919393.
[88] P. B. Thanedar and G. N. Vanderplaats, “Survey of Discrete Variable Optimization for Structural
Design,” J. Struct. Eng., vol. 121, no. 2, pp. 301–306, Feb. 1995, doi: 10.1061/(ASCE)0733-
9445(1995)121:2(301).
[89] A. Chakri, H. Ragueb, and X.-S. Yang, “Bat Algorithm and Directional Bat Algorithm with Case
Studies,” in Nature-Inspired Algorithms and Applied Optimization, X.-S. Yang, Ed., in Studies in
Computational Intelligence, vol. 744. Cham: Springer International Publishing, 2018, pp. 189–216. doi:
10.1007/978-3-319-67669-2_9.
[90] J. L. Patel, P. B. Rana, and D. I. Lalwani, “Optimization of five stage cantilever beam design and three
stage heat exchanger design using amended differential evolution algorithm,” Materials Today:
Proceedings, vol. 26, pp. 1977–1981, 2020, doi: 10.1016/j.matpr.2020.02.432.
[91] M. K. Dhadwal, S. N. Jung, and C. J. Kim, “Advanced particle swarm assisted genetic algorithm for
constrained optimization problems,” Comput Optim Appl, vol. 58, no. 3, pp. 781–806, Jul. 2014, doi:
10.1007/s10589-014-9637-0.
[92] H. S. Bernardino, H. J. C. Barbosa, A. C. C. Lemonge, and L. G. Fonseca, “A new hybrid AIS-GA for
constrained optimization problems in mechanical engineering,” in 2008 IEEE Congress on Evolutionary
Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China: IEEE, Jun.
2008, pp. 1455–1462. doi: 10.1109/CEC.2008.4630985.
[93] J. S. Arora, Introduction to optimum design. Elsevier, 2004.
[94] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, 2nd ed. Frome: Luniver Press, 2010.
[95] C. A. Coello Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary
algorithms: a survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering,
vol. 191, no. 11–12, pp. 1245–1287, Jan. 2002, doi: 10.1016/S0045-7825(01)00323-1.
[96] A. D. Belegundu, A study of mathematical programming methods for structural optimization. The
University of Iowa, 1982.
[97] A. Osyczka and S. Kundu, “A modified distance method for multicriteria optimization, using genetic
algorithms,” Computers & Industrial Engineering, vol. 30, no. 4, pp. 871–882, Sep. 1996, doi:
10.1016/0360-8352(96)00038-1.