Improved TSO Algorithm for Engineering Design Problems
Harun GEZİCİ
ABSTRACT: Tuna Swarm Optimization (TSO) is a metaheuristic optimization algorithm (MHA) inspired by the hunting strategies of tuna fish. TSO is able to solve some optimization problems successfully. However, it suffers from premature convergence and from being caught in local minimum traps. This study proposes a mathematical model aiming to eliminate these disadvantages and to increase the performance of TSO. The basic philosophy of the proposed method is to focus not on the single best solution but on the best ones. The proposed algorithm has been compared to six current and popular MHAs from the literature. Using classical test functions for a preliminary evaluation is a frequently preferred method in the field of optimization. Therefore, all the algorithms were first applied to ten classical test functions, and the results were interpreted through the Wilcoxon statistical test. The results indicate that the proposed algorithm is successful. Following that, all the algorithms were applied to three engineering design problems, which is the main purpose of this article. The original TSO performs weakly on the design problems. With optimal costs of 1.74 for the welded beam design problem, 1581.47 for the speed reducer design problem, and 38.455 for the I-beam design problem, the proposed algorithm is the most successful one. This supports the idea that the method proposed in this article successfully improves the performance of TSO.
Keywords: Tuna Swarm Optimization, Swarm-Based Metaheuristic Algorithm, Engineering Design
Problems.
Gezici, H. (2023). Improved Tuna Swarm Optimization Algorithm for Engineering Design Problems. Journal of
Materials and Mechatronics: A (JournalMM), 4(2), 424-445.
1. INTRODUCTION
Today, real-world problems are described by complex mathematical equations that include many parameters. In the field of optimization, these mathematical equations are called objective functions (Noureddine, 2015). Depending on the kind of problem, the output of the objective function may be required to be a minimum or a maximum (Mareli and Twala, 2018). At the same time, these problems have many constraints, which generally concern the interrelation of the parameters in the objective function. The overall purpose of optimization is to determine the parameters of the objective function optimally under the given constraints (Hashim et al., 2022). In the early stages of optimization studies, gradient descent (GD) methods were used. GD is unlikely to be preferred by researchers because of its incapability in solving nonlinear design problems. Besides, for engineering problems with wide search spaces, its computation times are long and it is not able to present optimum solutions (S. Kumar et al., 2023). As a result of such disadvantages of GD, researchers focused on metaheuristic algorithms (MHAs) (Feng et al., 2021). Through the search procedures within their structure, MHAs aim to find the most reasonable result within the most reasonable period of time without exhaustively scanning the search space. MHAs are classified into four subgroups depending on their source of inspiration:
Evolution-based algorithms: These are developed by drawing inspiration from the biological behaviours of living creatures and are based on evolutionary operators such as crossover and mutation. Primary evolution-based algorithms are genetic algorithms (Mirjalili, 2019), differential evolution (Deng et al., 2021), genetic programming (F. Zhang et al., 2021), evolutionary strategies (Rosso et al., 2022), and evolutionary programming (Gul et al., 2021).
Swarm-based algorithms: These are developed by drawing inspiration from the social behaviours of animals, such as insects and birds, within their groups. Particle swarm optimization (PSO) (Gad, 2022), ant colony optimization (Wu et al., 2023), the grey wolf optimizer (GWO) (Mirjalili et al., 2014), monarch butterfly optimization (G.-G. Wang et al., 2019), the earthworm optimizer (G.-G. Wang et al., 2018), the moth search algorithm (G.-G. Wang, 2018), the firefly algorithm (V. Kumar and Kumar, 2021), artificial bee colony (Öztürk et al., 2020), the bat algorithm (BA) (Y. Wang et al., 2019), and tuna swarm optimization (Xie et al., 2021) are some examples of MHAs in this group.
Physics-based algorithms: These are developed from various physical laws. Simulated annealing (Amine, 2019), the gravitational search algorithm (Rashedi et al., 2009), nuclear reaction optimization (Wei et al., 2019), the water cycle algorithm (Korashy et al., 2019), the sine cosine algorithm (SCA) (Abualigah and Diabat, 2021), big bang-big crunch (Mbuli and Ngaha, 2022), black hole (Abdulwahab et al., 2019), and harmony search (Abualigah et al., 2020) are examples of this group.
Human-based algorithms: These are developed by drawing inspiration from the social behaviours of humans. Teaching-learning-based optimization (Li et al., 2019), social evolution and learning optimization (M. Kumar et al., 2018), the group teaching optimizer (Y. Zhang and Jin, 2020), the heap-based optimizer (Askari, Saeed, et al., 2020), the political optimizer (Askari, Younas, et al., 2020), tabu search (Prajapati et al., 2020), the exchange market algorithm (Jafari et al., 2020), and the brain storm optimizer (Xue et al., 2022) are examples of this group.
MHAs are stochastic. They have two search procedures: exploration and exploitation (Raja et al., 2022). During the exploration phase, MHAs determine the promising regions of the search space.
During the exploitation phase, the determined regions are surveyed in detail. One of the most significant requirements for an MHA to succeed is the balance between exploitation and exploration (S. Kumar et al., 2023). In some MHAs, this balance is maintained by a randomly determined probability switch that ranges between 0 and 1 (Ramachandran et al., 2022). In others, exploration is performed in the early iterations and exploitation in the later iterations (Xie et al., 2021). Some optimization problems have one local minimum, which is also the global minimum. Other problems have more than one local minimum, of which only one is the global minimum; these problems are therefore more difficult to solve. Most of the MHAs developed to solve this kind of problem suffer from premature convergence and from being caught in local minimum traps. Moreover, as stated by the no free lunch theorem, no single optimization algorithm can solve all optimization problems (Wolpert and Macready, 1997). Hence, researchers tend either to develop new optimizers or to increase the efficiency of the available ones.
This study presents an improved version of the recently published swarm-based TSO. The proposed algorithm is named Improved TSO (ITSO). It specifically addresses the premature convergence problem of TSO. In addition, the local search procedure is improved in order to prevent the algorithm from being trapped in local minima. The improvement consists of focusing on the three best points of the search space rather than on the best one alone. This method eliminates the problems caused by premature convergence by increasing the efficiency of TSO's global search capability. Furthermore, as the method focuses on the three best solutions, it helps to avoid local minimum traps. The contributions of this study are as follows.
It introduces a method that allows TSO to escape from premature convergence and local minimum traps.
It improves the local search procedure of TSO and presents an improved version of the algorithm.
The proposed algorithm is tested on 10 classical test functions and 3 engineering design problems, and the results are evaluated through the Wilcoxon test.
The proposed algorithm is compared to popular MHAs from the literature.
In earlier studies, TSO was shown to be successful in solving optimization problems. However, when applied to real-world problems, it exhibits some failures. Hence, researchers have worked to increase its performance. This literature review concentrates on studies that use methods to increase the performance of TSO. In a study on the parameter identification of photovoltaic cells, the researchers propose a chaotic variant of TSO (C. Kumar and Magdalin Mary, 2022). In that study, two parameters determined by the iteration number and other randomly determined parameters are assigned through a tent chaotic map. The researchers state that the results are more successful than those of the competing algorithms. However, the study does not provide information on how other chaotic maps affect the performance of TSO. Besides, no change is made to the mathematical model of TSO. In another study, on the parameter estimation of photovoltaic batteries, the researchers present a hybrid algorithm combining TSO and the differential evolution algorithm (Tan et al., 2022). In order to increase the population diversity and convergence efficiency of the proposed algorithm, the study employs strategies such as mutation, crossover factor ranking, and linear reduction of the population. The researchers report that the improved algorithm outperforms its competitors. This study, too, makes no change to the mathematical model of TSO. In another study, focused on wind speed estimation, a modified TSO is hybridized with a long short-term memory strategy (Tuerxun et al., 2022). In that study, a tent chaotic map is used to increase the diversity of the initial population of TSO. Moreover, TSO has been used for image segmentation as well (J. Wang et al., 2022). Like the previous one, this study also uses a tent chaotic map to increase the diversity of the initial population of TSO. TSO is also used in a study dealing with the path planning of an autonomous underwater vehicle (Yan et al., 2023). That study presents a TSO based on reinforcement learning and emphasizes that reinforcement learning strengthens the weak decision-making of TSO. In another study, addressing TSO's premature convergence and its tendency to be caught in local optimum traps, the researchers adapt a circle chaotic map and Levy flight to TSO (W. Wang and Tian, 2022). The circle chaotic map is used to increase the diversity of the initial population, while Levy flight is integrated into the mathematical model of TSO. It is reported that these changes increase the performance of TSO. PID control is widely used to regulate engine speed, and TSO has been used to determine the PID coefficients (Guo et al., 2022). The researchers indicate that TSO performs better than conventional methods (Ashraf et al., 2022; Fu and Zhang, 2022).
Having reviewed all these studies, it is observed that two methods are used to increase the performance of TSO. The first is using chaotic maps to diversify the initial population. The second is determining the parameters within the mathematical model of TSO in various ways. In other studies, TSO is used as is. In this study, by contrast, the mathematical model of TSO itself is changed. The main objective of this change is to focus not on the single best solution but on the best ones. This approach allows TSO to escape from premature convergence and local minimum traps.
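As an illustration of the first approach above, the following is a minimal Python sketch of tent-map-based population initialization; the function name, the bounds, and the map parameter mu are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def tent_map_population(pop_size, dim, lb, ub, mu=0.7, seed=0):
    """Initialize a population with a tent chaotic map instead of uniform sampling.

    The tent map x_{k+1} = x_k/mu if x_k < mu else (1 - x_k)/(1 - mu) produces a
    chaotic sequence in (0, 1), which is then scaled to the search bounds.
    """
    rng = np.random.default_rng(seed)
    x = rng.random(dim)  # random seed values in (0, 1), one per dimension
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.where(x < mu, x / mu, (1.0 - x) / (1.0 - mu))  # tent map step
        pop[i] = lb + x * (ub - lb)  # scale chaotic values to [lb, ub]
    return pop

# Example: 30 individuals in a 10-dimensional space bounded by [-100, 100]
population = tent_map_population(30, 10, -100.0, 100.0)
```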
The rest of the study is organized as follows. In the second section, TSO is introduced and the proposed algorithm is described. In the third section, the computational results are presented. In the last section, the findings of the study are evaluated.
2.1 Tuna Swarm Optimization
TSO models the spiral and parabolic foraging strategies of a school of tuna fish. The mathematical model of the spiral motion of TSO is given in Equation (1). Some parameters in this equation are calculated by Equations (2), (3), (4), and (5).
\[
X_i^{t+1} =
\begin{cases}
\alpha_1 \cdot \left(X_{best}^{t} + \beta \cdot \left|X_{best}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t}, & i = 1 \\
\alpha_1 \cdot \left(X_{best}^{t} + \beta \cdot \left|X_{best}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_{i-1}^{t}, & i = 2, \ldots, NP
\end{cases}
\tag{1}
\]

\[
\alpha_1 = a + (1 - a) \cdot \frac{t}{t_{max}}
\tag{2}
\]

\[
\alpha_2 = (1 - a) - (1 - a) \cdot \frac{t}{t_{max}}
\tag{3}
\]

\[
\beta = e^{bl} \cdot \cos(2\pi b)
\tag{4}
\]

\[
l = e^{3\cos\left(\left(\left(t_{max} + 1/t\right) - 1\right)\pi\right)}
\tag{5}
\]
Here, t is the current iteration, t_max is the maximum iteration, and b is a random number uniformly distributed between 0 and 1. α1 and α2 are weight coefficients controlling the tendency of the tuna to follow each other, and the constant a determines the character of this tendency. X_i^{t+1} is the position of the i-th individual at iteration t+1. β is the spiral movement term and l is its parameter. The most important weakness of the spiral movement appears when the followed tuna fails in the hunt. In such a case, the tuna continue hunting by choosing a random location. This allows each tuna to scan a wider area and gives TSO a stronger global search capability. The mathematical model of this hunting strategy is given in Equation (6).
\[
X_i^{t+1} =
\begin{cases}
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t}, & i = 1 \\
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_{i-1}^{t}, & i = 2, \ldots, NP
\end{cases}
\tag{6}
\]
Here, X_rand^t is a randomly picked individual from the group. Some MHAs conduct global searches in the early stages of the search process and local searches in the later stages. This approach was embraced in the design of TSO. Hence, as the iteration number increases, TSO changes the reference point of the spiral movement from a random individual to the best one. The final mathematical model of the spiral foraging strategy is as follows (Equation (7)).
\[
X_i^{t+1} =
\begin{cases}
\begin{cases}
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t}, & i = 1 \\
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_{i-1}^{t}, & i = 2, \ldots, NP
\end{cases}
, & rand \ge \dfrac{t}{t_{max}} \\[2ex]
\begin{cases}
\alpha_1 \cdot \left(X_{best}^{t} + \beta \cdot \left|X_{best}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t}, & i = 1 \\
\alpha_1 \cdot \left(X_{best}^{t} + \beta \cdot \left|X_{best}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_{i-1}^{t}, & i = 2, \ldots, NP
\end{cases}
, & rand < \dfrac{t}{t_{max}}
\end{cases}
\tag{7}
\]
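The spiral update of Equation (7) can be sketched as follows. This is a hedged Python illustration, not the authors' implementation: the function name spiral_update is illustrative, the population is assumed to be a NumPy array, t is assumed to start at 1, and l follows the form of Equation (5).

```python
import numpy as np

def spiral_update(pop, best, t, t_max, a=0.7, rng=None):
    """Spiral foraging step of TSO, Equation (7); assumes t starts at 1.

    Early in the run (rand >= t/t_max) a tuna spirals around a randomly
    chosen individual; later it spirals around the best individual.
    """
    rng = rng or np.random.default_rng()
    NP, dim = pop.shape
    alpha1 = a + (1 - a) * t / t_max          # Equation (2)
    alpha2 = (1 - a) - (1 - a) * t / t_max    # Equation (3)
    new_pop = np.empty_like(pop)
    for i in range(NP):
        ref = pop[rng.integers(NP)] if rng.random() >= t / t_max else best
        b = rng.random()
        l = np.exp(3 * np.cos(((t_max + 1 / t) - 1) * np.pi))  # Equation (5), assumed form
        beta = np.exp(b * l) * np.cos(2 * np.pi * b)            # Equation (4)
        follow = pop[i] if i == 0 else pop[i - 1]  # the first tuna follows itself, the others the preceding one
        new_pop[i] = alpha1 * (ref + beta * np.abs(ref - pop[i])) + alpha2 * follow
    return new_pop
```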
The mathematical model of the parabolic foraging strategy is given in Equation (8); the parameter p in it is calculated by Equation (9).

\[
X_i^{t+1} =
\begin{cases}
X_{best}^{t} + rand \cdot \left(X_{best}^{t} - X_i^{t}\right) + TF \cdot p^2 \cdot \left(X_{best}^{t} - X_i^{t}\right), & rand < 0.5 \\
TF \cdot p^2 \cdot X_i^{t}, & rand \ge 0.5
\end{cases}
\tag{8}
\]

\[
p = \left(1 - \frac{t}{t_{max}}\right)^{\left(t / t_{max}\right)}
\tag{9}
\]
Here, TF is a random number that takes the value 1 or -1.
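A matching sketch of the parabolic update of Equations (8) and (9), under the same assumptions as above (Python; parabolic_update is an illustrative name and each rand is drawn independently):

```python
import numpy as np

def parabolic_update(pop, best, t, t_max, rng=None):
    """Parabolic foraging step of TSO, Equations (8) and (9)."""
    rng = rng or np.random.default_rng()
    p = (1 - t / t_max) ** (t / t_max)        # Equation (9)
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        TF = 1 if rng.random() < 0.5 else -1  # TF randomly takes the value 1 or -1
        if rng.random() < 0.5:
            new_pop[i] = (best + rng.random() * (best - pop[i])
                          + TF * p**2 * (best - pop[i]))  # move toward the best tuna
        else:
            new_pop[i] = TF * p**2 * pop[i]               # random rescaling of the current position
    return new_pop
```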
2.2 Improved Tuna Swarm Optimization
In TSO, the best solution is the location of the prey to be caught. The tuna try to approach the prey by following each other. This prevents the search space from being scanned efficiently. In particular, TSO's focusing only on the best solution at higher iteration numbers leads it to be caught in local optimum traps. In order to improve the performance of TSO, this study proposes a new local search procedure inspired by GWO.
In GWO, the hierarchical order of the wolves is represented by Alpha, Beta, and Gamma wolves (Mirjalili et al., 2014). The Alpha wolf leads the pack. The Beta wolves are the best Alpha candidates and also enable communication between the pack and the Alpha wolf. The Gamma wolves are tertiary wolves that assist the Alpha and Beta wolves in managing the pack. In GWO, the three best solutions are represented by the Alpha, Beta, and Gamma wolves, and the positions of all other wolves are updated with respect to the positions of these three (A. Kumar et al., 2017).
There is no evidence that tuna have a hierarchical order. However, during hunting, the hunting school can make sudden changes of direction, especially when the hunters are close to the prey. This behaviour suggests that the local search procedure of TSO can be improved. Depending on the position of the prey, the hunting school has countless possible changes of direction. However, since evaluating so many possibilities would increase the solution time of the algorithm, their number should be limited to a reasonable value. In this study, inspired by GWO, the three best solution vectors are used to update the locations of the tuna. The new mathematical model of the proposed ITSO is given in Equations (10) and (11).
\[
X_i^{t+1} =
\begin{cases}
\begin{cases}
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t}, & i = 1 \\
\alpha_1 \cdot \left(X_{rand}^{t} + \beta \cdot \left|X_{rand}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_{i-1}^{t}, & i = 2, \ldots, NP
\end{cases}
, & rand \ge \dfrac{t}{t_{max}} \\[2ex]
\begin{cases}
X_1 = \alpha_1 \cdot \left(X_{\alpha}^{t} + \beta \cdot \left|X_{\alpha}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_2 = \alpha_1 \cdot \left(X_{\beta}^{t} + \beta \cdot \left|X_{\beta}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_3 = \alpha_1 \cdot \left(X_{\gamma}^{t} + \beta \cdot \left|X_{\gamma}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_i^{t+1} = \dfrac{X_1 + X_2 + X_3}{3}
\end{cases}
, \; i = 1, \ldots, NP, & rand < \dfrac{t}{t_{max}}
\end{cases}
\tag{10}
\]

\[
X_i^{t+1} =
\begin{cases}
\begin{cases}
X_1 = \alpha_1 \cdot \left(X_{\alpha}^{t} + \beta \cdot \left|X_{\alpha}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_2 = \alpha_1 \cdot \left(X_{\beta}^{t} + \beta \cdot \left|X_{\beta}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_3 = \alpha_1 \cdot \left(X_{\gamma}^{t} + \beta \cdot \left|X_{\gamma}^{t} - X_i^{t}\right|\right) + \alpha_2 \cdot X_i^{t} \\
X_i^{t+1} = \dfrac{X_1 + X_2 + X_3}{3}
\end{cases}
, \; i = 1, \ldots, NP, & rand < 0.5 \\[2ex]
TF \cdot p^2 \cdot X_i^{t}, \; i = 1, \ldots, NP, & rand \ge 0.5
\end{cases}
\tag{11}
\]
In these equations, X_α, X_β, and X_γ represent the best, the second-best, and the third-best solution, respectively. X_1, X_2, and X_3 are the location vectors obtained from X_α, X_β, and X_γ, respectively. The new position vector is determined as the mean of X_1, X_2, and X_3. The other parameters in the equations are calculated as in section 2.1. The pseudocode of ITSO is given in Algorithm 1.
Algorithm 1. TSO and ITSO pseudocode

TSO pseudocode:
Input: NP: population size, t_max: maximum iteration
Output: X_best: the best individual, f_best: its fitness value
Initialize the random population of tunas (X_i, i = 1, 2, ..., NP) and assign parameters a and z
While (t < t_max)
    Calculate the fitness values and update X_best
    For (each tuna) do
        Update α1, α2, p using Equations (2), (3), (9)
        If (rand < z) then
            Update X_i^{t+1} using Equation (1)
        Else if (rand ≥ z) then
            If (rand < 0.5) then
                If (t/t_max < rand) then
                    Update X_i^{t+1} using Equation (6)
                Else if (t/t_max ≥ rand) then
                    Update X_i^{t+1} using Equation (1)
            Else if (rand ≥ 0.5) then
                Update X_i^{t+1} using Equation (8)
    t = t + 1
Return: X_best, f_best

ITSO pseudocode:
Input: NP: population size, t_max: maximum iteration
Output: X_α: the best individual, f_α: its fitness value
Initialize the random population of tunas (X_i, i = 1, 2, ..., NP) and assign parameters a and z
While (t < t_max)
    Calculate the fitness values and update X_α, X_β, and X_γ
    For (each tuna) do
        Update α1, α2, p using Equations (2), (3), (9)
        If (rand < z) then
            Update X_i^{t+1} using Equation (1)
        Else if (rand ≥ z) then
            If (rand < 0.5) then
                If (t/t_max < rand) then
                    Update X_i^{t+1} using Equation (10)
                Else if (t/t_max ≥ rand) then
                    Update X_i^{t+1} using Equation (10)
            Else if (rand ≥ 0.5) then
                Update X_i^{t+1} using Equation (11)
    t = t + 1
Return: X_α, f_α
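To make the modification concrete, the following Python sketch implements one ITSO position update according to Equations (10) and (11). The function and helper names (itso_update, spiral_term) are illustrative assumptions, t is assumed to start at 1, and the full algorithm additionally needs the initialization and fitness bookkeeping of Algorithm 1.

```python
import numpy as np

def spiral_term(ref, x_i, alpha1, alpha2, beta, follow):
    """One spiral component: alpha1*(ref + beta*|ref - x_i|) + alpha2*follow."""
    return alpha1 * (ref + beta * np.abs(ref - x_i)) + alpha2 * follow

def itso_update(pop, x_alpha, x_beta, x_gamma, t, t_max, a=0.7, rng=None):
    """ITSO position update over the whole school, Equations (10) and (11).

    x_alpha, x_beta, x_gamma are the three best solutions found so far,
    following the paper's notation.
    """
    rng = rng or np.random.default_rng()
    NP = len(pop)
    alpha1 = a + (1 - a) * t / t_max                  # Equation (2)
    alpha2 = (1 - a) - (1 - a) * t / t_max            # Equation (3)
    p = (1 - t / t_max) ** (t / t_max)                # Equation (9)
    new_pop = np.empty_like(pop)
    for i in range(NP):
        b = rng.random()
        l = np.exp(3 * np.cos(((t_max + 1 / t) - 1) * np.pi))  # Equation (5), assumed form
        beta = np.exp(b * l) * np.cos(2 * np.pi * b)            # Equation (4)
        # mean of spirals around the three best solutions (core of Eqs. (10)/(11))
        leaders = (x_alpha, x_beta, x_gamma)
        x123 = [spiral_term(ref, pop[i], alpha1, alpha2, beta, pop[i]) for ref in leaders]
        three_best_mean = sum(x123) / 3.0
        if rng.random() < 0.5:                        # Equation (10)
            if rng.random() >= t / t_max:             # early iterations: spiral around a random tuna
                ref = pop[rng.integers(NP)]
                follow = pop[i] if i == 0 else pop[i - 1]
                new_pop[i] = spiral_term(ref, pop[i], alpha1, alpha2, beta, follow)
            else:                                     # later iterations: follow the three best
                new_pop[i] = three_best_mean
        else:                                         # Equation (11)
            if rng.random() < 0.5:
                new_pop[i] = three_best_mean
            else:
                TF = 1 if rng.random() < 0.5 else -1  # TF is randomly 1 or -1
                new_pop[i] = TF * p**2 * pop[i]       # parabolic shrink, as in Equation (8)
    return new_pop
```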
[Table: classical benchmark test functions F1-F10 with their search ranges, dimensions, and optima. Among the recoverable entries are F5, the Rosenbrock function,
\[
F_5(z) = \sum_{i=1}^{dim-1}\left[100\left(z_{i+1} - z_i^2\right)^2 + \left(z_i - 1\right)^2\right],
\]
range [-30, 30], dim = 30, optimum 0, and F7, the Kowalik function,
\[
F_7(z) = \sum_{i=1}^{11}\left[a_i - \frac{z_1\left(b_i^2 + b_i z_2\right)}{b_i^2 + b_i z_3 + z_4}\right]^2,
\]
range [-5, 5], dim = 4, optimum ≈ 0.0003075.]
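For reference, the two benchmark functions that survive in the table above can be written as follows (a Python sketch; the Kowalik coefficients a_i and b_i are the standard published values, which the paper is assumed to use).

```python
import numpy as np

def f5_rosenbrock(z):
    """F5: Rosenbrock function, range [-30, 30], dim 30, optimum 0."""
    z = np.asarray(z)
    return np.sum(100.0 * (z[1:] - z[:-1] ** 2) ** 2 + (z[:-1] - 1.0) ** 2)

# Standard Kowalik data (assumed to match the values used in the paper)
KOWALIK_A = np.array([0.1957, 0.1947, 0.1735, 0.1600, 0.0844, 0.0627,
                      0.0456, 0.0342, 0.0323, 0.0235, 0.0246])
KOWALIK_B = 1.0 / np.array([0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16])

def f7_kowalik(z):
    """F7: Kowalik function, range [-5, 5], dim 4, optimum ~ 0.0003075."""
    z1, z2, z3, z4 = z
    model = z1 * (KOWALIK_B**2 + KOWALIK_B * z2) / (KOWALIK_B**2 + KOWALIK_B * z3 + z4)
    return np.sum((KOWALIK_A - model) ** 2)
```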
In the second experiment, the population size is set to 100. Table 4 gives the minimum, average, and worst results of all algorithms on the classical test functions, and the convergence curves of the algorithms are given in Figure 2. In the minimum value metric, the proposed algorithm is the most successful in 6 of 10 functions (F1-F4, F6, F7), while BA is successful in
2 functions (F8-F10). TSO and CMA-ES are each successful in one function (F5 and F9, respectively). In the mean value metric, the proposed algorithm is the most successful in 8 of 10 functions (F1-F4, F7-F10), and CMA-ES is successful in two (F5, F6). In the worst value metric, the proposed algorithm is the most successful, giving the best results in 8 functions (F1-F4, F7-F10), while CMA-ES is successful in two (F5, F6).
In the third experiment, the population size is set to 200. Table 5 gives the minimum, average, and worst results of all algorithms on the classical test functions, and the
convergence curves of the algorithms are given in Figure 3. In the minimum value metric, the proposed algorithm is the most successful in 6 of 10 functions (F1-F5, F7), while CMA-ES is successful in 4 functions (F6, F8-F10). In the mean value metric, the proposed algorithm is the most successful in 8 of 10 functions (F1-F4, F7-F10), and CMA-ES is successful in two (F5, F6). In the worst value metric, the proposed algorithm presents the best results in 8 of 10 functions (F1-F4, F7-F10), while CMA-ES solves 2 functions best (F5, F6).
The raw results of an MHA inform us about its success; however, whether that success is statistically meaningful must also be established. The Wilcoxon test is a non-parametric test frequently used for comparing optimization algorithms. The significance level of the Wilcoxon test is set to 5%. Table 6 gives the results of the comparison between the proposed algorithm and its competitors. In the tables, "+" indicates that the proposed algorithm is better than its competitor, while "-" indicates that it is worse.
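As a sketch of how such a pairwise comparison can be computed with SciPy's wilcoxon (the run arrays below are illustrative placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Illustrative placeholders for the best values of 30 independent runs per algorithm
itso_runs = 1.74 + 0.01 * rng.random(30)
tso_runs = 2.25 + 0.05 * rng.random(30)

stat, p_value = wilcoxon(itso_runs, tso_runs)
# "+" in Table 6 corresponds to the proposed algorithm being significantly
# better: p < 0.05 together with the lower (better) median result.
print(f"p = {p_value:.4g}, significantly different: {p_value < 0.05}")
```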
Figure 4. Engineering design problems: a) welded beam design problem, b) speed reducer design problem, c) I-beam design problem
The objective function (Equation (13)) and the constraints (Equations (14), (15), and (16)) of the problem are given in the following equations.
The results of all algorithms on the welded beam design problem are given in Table 7. Examining the results, it is observed that, with an optimal cost of 1.74, the proposed algorithm is the most successful one. GWO is the second and CMA-ES the third most successful algorithm, with optimal costs of 1.74 and 1.835, respectively. With an optimal cost of 2.25, TSO is among the algorithms with the worst results.
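The paper does not describe its constraint-handling mechanism; a common way to hand constrained design problems like these to an MHA is a static penalty, and the following is a minimal, hedged sketch of such a wrapper (the function name and penalty weight are illustrative assumptions). Each candidate x proposed by the optimizer is then scored by the penalized objective, which pushes infeasible designs out of the population.

```python
def penalized(f_cost, constraints, x, weight=1e6):
    """Static-penalty wrapper: objective plus weight * sum of squared violations.

    Each g in `constraints` must satisfy g(x) <= 0; positive values are violations.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f_cost(x) + weight * violation
```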
Minimize:
\[
f_{cost}(\vec{x}) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right)
\tag{17}
\]

Subject to:
\[
\begin{aligned}
& g_1(\vec{x}) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \quad
g_2(\vec{x}) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, \quad
g_3(\vec{x}) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \\
& g_4(\vec{x}) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0, \quad
g_5(\vec{x}) = \frac{1}{110 x_6^3} \sqrt{\left(\frac{745 x_4}{x_2 x_3}\right)^2 + 16.9 \times 10^6} - 1 \le 0, \\
& g_6(\vec{x}) = \frac{1}{85 x_7^3} \sqrt{\left(\frac{745 x_5}{x_2 x_3}\right)^2 + 157.5 \times 10^6} - 1 \le 0, \quad
g_7(\vec{x}) = \frac{x_2 x_3}{40} - 1 \le 0, \\
& g_8(\vec{x}) = \frac{5 x_2}{x_1} - 1 \le 0, \quad
g_9(\vec{x}) = \frac{x_1}{12 x_2} - 1 \le 0, \quad
g_{10}(\vec{x}) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0, \\
& g_{11}(\vec{x}) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0
\end{aligned}
\tag{18}
\]

Range:
\[
2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.3 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5 \le x_7 \le 5.5
\tag{19}
\]
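A direct Python transcription of Equations (17), (18), and (19) is sketched below (written from the formulas above; the function names and the 7-element vector layout are illustrative assumptions).

```python
import numpy as np

def speed_reducer_cost(x):
    """Objective of Equation (17)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def speed_reducer_constraints(x):
    """Constraints g1..g11 of Equation (18); each value must be <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
        np.sqrt((745 * x4 / (x2 * x3))**2 + 16.9e6) / (110 * x6**3) - 1,
        np.sqrt((745 * x5 / (x2 * x3))**2 + 157.5e6) / (85 * x7**3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ]

# Variable bounds of Equation (19)
LB = np.array([2.6, 0.7, 17.0, 7.3, 7.3, 2.9, 5.0])
UB = np.array([3.6, 0.8, 28.0, 8.3, 8.3, 3.9, 5.5])
```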
The results of all algorithms on the speed reducer design problem are given in Table 8. With an optimal cost of 1581.47, the proposed algorithm is the most successful one, while BA is second with an optimal cost of 1581.494 and CS is third with 1582.539. With an optimal cost of 1612.18, TSO produces the worst result.
The results of all algorithms on the I-beam design problem are given in Table 9. With an optimal cost of 38.455, the proposed algorithm is the most successful one, while GWO is second with 38.61 and CMA-ES is third with 38.668. TSO is the algorithm with the worst result, with an optimal cost of 44.777.
4. CONCLUSION
TSO is a swarm-based MHA developed by drawing inspiration from the hunting strategies of tuna fish. The biggest disadvantage of TSO is that it gets caught in local minimum traps. In order to solve this problem, this article proposes a new local search procedure. The main philosophy of the proposed approach is to focus not only on the best solution but on the best ones. The proposed algorithm is applied to 10 classical test functions and to the welded beam, speed reducer, and I-beam design problems. The results indicate the success of the proposed algorithm. The findings of this study are as follows.
Like many other MHAs, TSO focuses on the best result. Focusing not only on the best solution but on the best ones allows it to avoid local minimum traps.
The results on the classical test functions indicate that the proposed algorithm is successful at solving unimodal and multimodal functions.
Besides, the statistical results confirm that the improved algorithm's success on the classical test functions is meaningful.
In the engineering design problems, TSO is not able to present competitive results; the proposed algorithm, however, is the most successful one in all design problems. This indicates that the method proposed in this article to improve the performance of TSO is successful.
As far as we know, this study is the first to propose a change in the mathematical model of TSO. Considering the results, it can be suggested that TSO is an algorithm open to further improvement. Future studies could address the following:
Redesigning the mathematical models of TSO's spiral and parabolic search strategies; moreover, TSO could be improved so that it does not need initial parameters.
Analysing the parameters of TSO on a wide scale by determining them through 2- or 3-dimensional chaotic maps.
Finding out its weaknesses by applying it to more real-world problems.
Integrating TSO into machine learning and artificial intelligence so that it can be applied in fields such as image processing.
Applying TSO to the optimization of unmanned aerial vehicles and to mission planning problems, where it could generate successful results.
5. CONFLICT OF INTEREST
The author confirms that, to the best of his knowledge, there is no conflict of interest or common interest with any institution/organization or person that may affect the review process of the paper.
6. REFERENCES
Abdulwahab H. A., Noraziah A., Alsewari A. A., Salih S. Q., An Enhanced Version of Black Hole
Algorithm via Levy Flight for Optimization and Data Clustering Problems. IEEE Access 7,
142085-142096, 2019.
Abualigah L., Diabat A., Advances in Sine Cosine Algorithm: A comprehensive survey. Artificial
Intelligence Review 54(4), 2567-2608, 2021.
Abualigah L., Diabat A., Geem Z. W., A Comprehensive Survey of the Harmony Search Algorithm
in Clustering Applications. Applied Sciences 10(11), 3827, 2020.
Ahmadianfar I., Bozorg-Haddad O., Chu X., Gradient-based optimizer: A new metaheuristic
optimization algorithm. Information Sciences 540, 131-159, 2020.
Amine K., Multiobjective Simulated Annealing: Principles and Algorithm Variants. Advances in
Operations Research 2019, e8134674, 2019.
Ashraf H., Elkholy M. M., Abdellatif S. O., El‑Fergany A. A., Synergy of neuro-fuzzy controller
and tuna swarm algorithm for maximizing the overall efficiency of PEM fuel cells stack
including dynamic performance. Energy Conversion and Management:X 16, 100301, 2022.
Askari Q., Saeed M., Younas I., Heap-based optimizer inspired by corporate rank hierarchy for
global optimization. Expert Systems with Applications 161, 113702, 2020.
Askari Q., Younas I., Saeed M., Political Optimizer: A novel socio-inspired meta-heuristic for
global optimization. Knowledge-Based Systems 195, 105709, 2020.
Deng W., Shang S., Cai X., Zhao H., Song Y., Xu J., An improved differential evolution algorithm
and its application in optimization problem. Soft Computing 25(7), 5277-5298, 2021.
Feng Y., Deb S., Wang G.-G., Alavi A. H., Monarch butterfly optimization: A comprehensive
review. Expert Systems with Applications 168, 114418, 2021.
Fu C., Zhang L., A novel method based on tuna swarm algorithm under complex partial shading
conditions in PV system. Solar Energy 248, 28-40, 2022.
Gad A. G., Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review.
Archives of Computational Methods in Engineering 29(5), 2531-2561, 2022.
Gandomi A. H., Yang X.-S., Alavi A. H., Talatahari S., Bat algorithm for constrained optimization
tasks. Neural Computing and Applications 22(6), 1239-1255, 2013.
Gul F., Rahiman W., Alhady S. S. N., Ali A., Mir I., Jalil A., Meta-heuristic approach for solving
multi-objective path planning for autonomous guided robot using PSO–GWO optimization
algorithm with evolutionary programming. Journal of Ambient Intelligence and Humanized
Computing 12(7), 7873-7890, 2021.
Guo S.-M., Guo J.-K., Gao Y.-G., Guo P.-Y., Fu-Jun a H., Wang S.-C., Lou Z.-C., Zhang X.,
Research on Engine Speed Control Based on Tuna Swarm Optimization. Journal of
Engineering Research and Reports 23(12), 272-280, 2022.
Hansen N., Müller S. D., Koumoutsakos P., Reducing the Time Complexity of the Derandomized
Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary
Computation 11(1), 1-18, 2003.
Hashim F. A., Houssein E. H., Hussain K., Mabrouk M. S., Al-Atabany W., Honey Badger
Algorithm: New metaheuristic algorithm for solving optimization problems. Mathematics and
Computers in Simulation 192, 84-110, 2022.
Jafari A., Khalili T., Babaei E., Bidram A., A Hybrid Optimization Technique Using Exchange
Market and Genetic Algorithms. IEEE Access 8, 2417-2427, 2020.
Korashy A., Kamel S., Youssef A.-R., Jurado F., Modified water cycle algorithm for optimal
direction overcurrent relays coordination. Applied Soft Computing 74, 10-25, 2019.
Kumar A., Pant S., Ram M., System Reliability Optimization Using Gray Wolf Optimizer
Algorithm. Quality and Reliability Engineering International 33(7), 1327-1335, 2017.
Kumar C., Magdalin Mary D., A novel chaotic-driven Tuna Swarm Optimizer with Newton-
Raphson method for parameter identification of three-diode equivalent circuit model of solar
photovoltaic cells/modules. Optik 264, 169379, 2022.
Kumar M., Kulkarni A. J., Satapathy S. C., Socio evolution & learning optimization algorithm: A
socio-inspired optimization methodology. Future Generation Computer Systems 81, 252-272,
2018.
Kumar S., Yildiz B. S., Mehta P., Panagant N., Sait S. M., Mirjalili S., Yildiz A. R., Chaotic marine
predators algorithm for global optimization of real-world engineering problems. Knowledge-
Based Systems 261, 110192, 2023.
Kumar V., Kumar D., A Systematic Review on Firefly Algorithm: Past, Present, and Future.
Archives of Computational Methods in Engineering 28(4), 3269-3291, 2021.
Li S., Gong W., Yan X., Hu C., Bai D., Wang L., Gao L., Parameter extraction of photovoltaic
models using an improved teaching-learning-based optimization. Energy Conversion and
Management 186, 293-305, 2019.
Mareli M., Twala B., An adaptive Cuckoo search algorithm for optimisation. Applied Computing
and Informatics 14(2), 107-115, 2018.
Mbuli N., Ngaha W. S., A survey of big bang big crunch optimisation in power systems. Renewable
and Sustainable Energy Reviews 155, 111848, 2022.
Mirjalili S., SCA: A Sine Cosine Algorithm for solving optimization problems. Knowledge-Based
Systems 96, 120-133, 2016.
Mirjalili S., Evolutionary Algorithms and Neural Networks, Springer International Publishing, First
Edition, United States, pp. 43-55, 2019.
Mirjalili S., Mirjalili S. M., Lewis A., Grey Wolf Optimizer. Advances in Engineering Software 69,
46-61, 2014.
Noureddine S., An optimization approach for the satisfiability problem. Applied Computing and
Informatics 11(1), 47-59, 2015.
Öztürk Ş., Ahmad R., Akhtar N., Variants of Artificial Bee Colony algorithm and its applications
in medical image processing. Applied Soft Computing 97, 106799, 2020.
Prajapati V. K., Jain M., Chouhan L., Tabu Search Algorithm (TSA): A Comprehensive Survey,
3rd International Conference on Emerging Technologies in Computer Engineering: Machine
Learning and Internet of Things (ICETCE), Jaipur/India, February 7-8, 2020, pp: 1-8.
Raja B. D., Patel V. K., Yildiz A. R., Kotecha P., Performance of scientific law-inspired
optimization algorithms for constrained engineering applications. Engineering Optimization
55(10), 1798-1812, 2023.
Rajabioun R., Cuckoo Optimization Algorithm. Applied Soft Computing 11(8), 5508-5518, 2011.
Ramachandran M., Mirjalili S., Nazari-Heris M., Parvathysankar D. S., Sundaram A., Charles
Gnanakkan C. A. R., A hybrid Grasshopper Optimization Algorithm and Harris Hawks
Optimizer for Combined Heat and Power Economic Dispatch problem. Engineering
Applications of Artificial Intelligence 111, 104753, 2022.
Rashedi E., Nezamabadi-pour, H., Saryazdi S., GSA: A Gravitational Search Algorithm.
Information Sciences 179(13), 2232-2248, 2009.
Rosso M. M., Cucuzza R., Aloisio A., Marano G. C., Enhanced Multi-Strategy Particle Swarm
Optimization for Constrained Problems with an Evolutionary-Strategies-Based Unfeasible
Local Search Operator. Applied Sciences 12(5), 2285, 2022.
Tan M., Li Y., Ding D., Zhou R., Huang C., An Improved JADE Hybridizing with Tuna Swarm
Optimization for Numerical Optimization Problems. Mathematical Problems in Engineering
2022, e7726548, 2022.
Tuerxun W., Xu C., Guo H., Guo L., Zeng N., Cheng Z., An ultra-short-term wind speed prediction
model using LSTM based on modified tuna swarm optimization and successive variational
mode decomposition. Energy Science & Engineering 10(8), 3001-3022, 2022.
Wang G.-G., Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization
problems. Memetic Computing 10(2), 151-164, 2018.
Wang G.-G., Deb S., Coelho L. D. S., Earthworm optimisation algorithm: a bio-inspired
metaheuristic algorithm for global optimisation problems. International Journal of Bio-
Inspired Computation 12(1), 1-22, 2018.
Wang G.-G., Deb S., Cui Z., Monarch butterfly optimization. Neural Computing and Applications
31(7), 1995-2014, 2019.
Wang J., Zhu L., Wu B., Ryspayev A., Forestry Canopy Image Segmentation Based on Improved
Tuna Swarm Optimization. Forests 13(11), 1746, 2022.
Wang W., Tian J., An Improved Nonlinear Tuna Swarm Optimization Algorithm Based on Circle
Chaos Map and Levy Flight Operator. Electronics 11(22), 3678, 2022.
Wang Y., Wang P., Zhang J., Cui Z., Cai X., Zhang W., Chen J., A Novel Bat Algorithm with
Multiple Strategies Coupling for Numerical Optimization. Mathematics 7(2), 135, 2019.
Wei Z., Huang C., Wang X., Han T., Li Y., Nuclear Reaction Optimization: A Novel and Powerful
Physics-Based Algorithm for Global Optimization. IEEE Access 7, 66084-66109, 2019.
Wolpert D. H., Macready W. G., No free lunch theorems for optimization. IEEE Transactions on
Evolutionary Computation 1(1), 67-82, 1997.
Wu L., Huang X., Cui J., Liu C., Xiao W., Modified adaptive ant colony optimization algorithm
and its application for solving path planning of mobile robot. Expert Systems with
Applications 215, 119410, 2023.
Xie L., Han T., Zhou H., Zhang Z.-R., Han B., Tang A., Tuna Swarm Optimization: A Novel
Swarm-Based Metaheuristic Algorithm for Global Optimization. Computational Intelligence
and Neuroscience 2021, e9210050, 2021.
Xue Y., Zhang Q., Zhao Y., An improved brain storm optimization algorithm with new solution
generation strategies for classification. Engineering Applications of Artificial Intelligence
110, 104677, 2022.
Yan Z., Yan J., Wu Y., Cai S., Wang H., A novel reinforcement learning based tuna swarm
optimization algorithm for autonomous underwater vehicle path planning. Mathematics and
Computers in Simulation 209, 55-86, 2023.
Zhang F., Mei Y., Nguyen S., Zhang M., Tan K. C., Surrogate-Assisted Evolutionary Multitask
Genetic Programming for Dynamic Flexible Job Shop Scheduling. IEEE Transactions on
Evolutionary Computation 25(4), 651-665, 2021.
Zhang Y., Jin Z., Group teaching optimization algorithm: A novel metaheuristic method for solving
global optimization problems. Expert Systems with Applications 148, 113246, 2020.