Article

Novel Improved Salp Swarm Algorithm: An Application for Feature Selection

1 Faculty of Informatics and Computing, Singidunum University, Danijelova 32, 11010 Belgrade, Serbia
2 Human Language Technology Research Center, University of Bucharest, 010014 Bucharest, Romania
3 Department of Computer Engineering and Technology, Guru Nanak Dev University, Amritsar 143005, India
* Author to whom correspondence should be addressed.
Submission received: 21 January 2022 / Revised: 5 February 2022 / Accepted: 16 February 2022 / Published: 22 February 2022
(This article belongs to the Section Intelligent Sensors)

Abstract:
We live in a period when smart devices gather a large amount of data from a variety of sensors, and decisions are often taken based on these data in a more or less autonomous manner. Still, many of the inputs do not prove to be essential in the decision-making process; hence, it is of utmost importance to find the means of eliminating the noise and concentrating on the most influential attributes. In this sense, we put forward a method based on the swarm intelligence paradigm for extracting the most important features from several datasets. The topic of this paper is a novel implementation of an algorithm from the swarm intelligence branch of the machine learning domain for improving feature selection. The combination of machine learning with metaheuristic approaches has recently created a new branch of artificial intelligence called learnheuristics. This approach benefits both from the capability of feature selection to find the solutions that have the greatest impact on accuracy and performance, and from the well-known ability of swarm intelligence algorithms to efficiently comb through a large search space of solutions. The latter is used as a wrapper method in feature selection and the improvements are significant. In this paper, a modified version of the salp swarm algorithm for feature selection is proposed. This solution is verified on 21 datasets with the K-nearest neighbors classification model. Furthermore, the performance of the algorithm is compared to the best algorithms with the same test setup, resulting in a smaller number of selected features and better classification accuracy for the proposed solution. Therefore, the proposed method tackles feature selection and demonstrates its success with many benchmark datasets.

1. Introduction

The fields of big data, cryptography, and computer science in general are all influenced by the domain of optimization and, to some extent, even rely on it. The field of optimization is broad and employs a large variety of techniques. Although there is a large number of optimization solutions, in most cases there is room for further improvement and new algorithms can lead to better results. What is more, some optimization methods prove to be suitable for a certain class of problems, while others perform better for other types. Consequently, when proposing a new optimization technique, it needs to be thoroughly tested in order to identify its strengths and weaknesses with respect to solution quality when dealing with different types of problems.
Nature-inspired algorithms have been widely applied in recent years to solving a wide range of mathematical and engineering non-deterministic polynomial-time hard (NP-hard) optimization problems [1] due to their high robustness and efficiency in exploiting and exploring vast search space domains. Of all nature-inspired approaches, evolutionary algorithms (EA) and swarm intelligence metaheuristics stand out the most, and they have been effectively applied to different NP-hard real-world challenges [2,3,4]. The EA approaches conduct a search process by adopting reproduction, crossover and mutation operators from natural evolution, while swarm intelligence mimics the collective intelligent behavior of groups of organisms found in nature, such as flocks of birds, schools of fish, and colonies of ants and bees. Both families of methods belong to the group of artificial intelligence optimization techniques. Various metaheuristics were reviewed and considered as candidates for improvement. The most recent from the reviewed set are the grey wolf optimizer (GWO), red deer algorithm (RDA) [5], ant lion optimizer (ALO) [6], grasshopper optimization algorithm (GOA) [7], multi-verse optimizer (MVO) [8], moth-flame optimization algorithm (MFO) [9], social engineering optimizer (SEO) [10], dragonfly algorithm (DA) [11], whale optimization algorithm (WOA) [12], Harris hawks optimization (HHO) [13], and sine cosine algorithm (SCA) [14]. While the mentioned algorithms have all shown notable performance improvements, none are without shortcomings. In the field of swarm metaheuristics, the basic solutions tend to favor either the exploration or the exploitation phase. There have been attempts in the domain to create a solution that performs equally well in both phases from the outset, such as the elaborate SCA. Nevertheless, even the SCA has undergone modifications and achieved better performance than its original version [15]. Hence, the true potential of swarm metaheuristics is achieved through hybridization. This modification method relies on the principle of fusing the original algorithm with another one. This is usually achieved by incorporating a principle from an algorithm that performs better in the phase that is underserved by the solution being improved upon. The dynamics of the field dictate constant improvement and a search for new solutions and new ways to improve the existing ones. The authors opted to improve the SSA because of its robustness combined with its simplicity. The algorithm is easy to implement and fine-tuning modifications are even suggested by its authors.
The expansion of data availability and computer processing power in recent decades has led to interaction between the fields of nature-inspired metaheuristics and machine learning, an artificial intelligence subdomain and a crucial tool for data science. Machine learning models can be efficiently utilized to find patterns and make predictions from huge amounts of data that may at first glance appear uncorrelated. However, large datasets are usually packed with inessential and redundant data that negatively influence machine learning performance in terms of computational complexity and accuracy. Such datasets are usually described as "high-dimensional", and this phenomenon is known as the curse of dimensionality [16].
Therefore, finding relevant information (features) from large datasets is crucial for tackling the above mentioned issue and it is known as the dimensionality reduction challenge in the modern computer science literature [17]. The process of dimensionality reduction is usually employed in the data pre-processing phase of machine learning and it encompasses two approaches: feature extraction and feature selection. By using feature extraction, new variables are derived from the primary dataset [18], while feature selection chooses a subset of significant variables for further use [19].
The aim of feature selection is to find the most informative subset from high-dimensional datasets by removing redundant and irrelevant features, thereby improving the classification and prediction accuracy of the machine learning model. According to G. Chandrashekar et al. [20], all feature selection methods can be split into three groups: filter, wrapper and embedded. Wrapper methods utilize learning algorithms to evaluate a feature subset by training a model; they are the most efficient, but also the most computationally demanding. Filter methods do not rely on a training system, but apply a measure to assign a score to feature subsets. This group is generally less computationally expensive than the wrapper family, but generates a universal set (not tuned to a particular predictive model) since it does not include model training. Finally, the embedded methods use feature selection as a part of the model construction procedure, that is, the algorithms execute feature selection during model training. The embedded methods are more precise than the filter ones and, regarding computational difficulty, lie between wrappers and filters.
Nature-inspired algorithms, especially swarm intelligence metaheuristics [21,22], have been successfully applied as wrapper methods for feature selection in machine learning, and this is one point where machine learning and optimization metaheuristics intersect. If a dataset contains n_f features, there is a total of 2^{n_f} possible feature subsets and, since n_f is typically a large number for high-dimensional datasets, this challenge is considered NP-hard. Consequently, given that swarm intelligence has proved to be a robust and efficient optimizer for solving NP-hard challenges, its application as a wrapper feature selection method is straightforward.
Notwithstanding that many swarm intelligence applications for feature selection can be found by surveying recent literature sources, considering the no free lunch (NFL) theorem [23], there is still space for improvements in this domain. The NFL theorem, which has been proved, states that no universal algorithm exists that can solve all optimization problems. Accordingly, an approach that efficiently solves feature selection for all datasets does not exist. The NFL theorem motivates researchers to improve and adjust current algorithms, or propose new ones, to solve various problems, including the feature selection challenge.
Therefore, the motivation behind the proposed study is to further enhance feature selection in machine learning by employing an improved salp swarm algorithm (SSA), which was developed and evaluated for the purpose of this research. The SSA belongs to the family of swarm intelligence metaheuristics and was proposed in 2017 by Mirjalili et al. [24]. The basic SSA is enhanced by including an additional mechanism and by hybridization with another well-known swarm intelligence metaheuristic.
Guided by established practice from the modern literature, before its application to feature selection, the proposed enhanced SSA is firstly tested and evaluated on a recognized test-bed with challenging instances of functions having 30 dimensions from the Congress on Evolutionary Computation 2013 (CEC2013) benchmark suite [25]. This also allows a direct comparison of the obtained results with the outputs of a large variety of state-of-the-art (SOTA) metaheuristics. Afterwards, it is adapted as a wrapper-based approach for feature selection and validated against 21 well-known datasets retrieved from University of California, Irvine (UCI) repository [26].
The scientific contributions of proposed study can be summed as follows:
  • the proposed improved SSA algorithm overcomes some observed deficiencies and establishes better performance than the original SSA;
  • the proposed method proves to be promising and competitive with other SOTA metaheuristics according to the CEC2013 testing results; and
  • compared to other SOTA approaches, improvements in addressing the feature selection issue in machine learning in terms of classification accuracy and the number of selected features are established.
Based on the above, the method proposed in this study tackles the feature selection challenge and demonstrates its success on many benchmark datasets.
The organization of the manuscript is as follows. Section 2 covers some of the most notable SOTA approaches from the domain of swarm intelligence, as well as from the area of hybrid methods between swarm algorithms and machine learning. In Section 3, the original SSA is presented first, then its drawbacks are indicated and finally details of the proposed algorithm are provided. Section 4 and Section 5 present simulations with standard CEC2013 instances along with feature selection experiments including comparative analysis and discussion with other recent SOTA algorithms. Finally, a summary and future research plans are examined in Section 6.

2. Related Works

There are several recent good survey studies that present the challenges that appear within feature selection in various fields of machine learning, as well as indicate the most prominent methods to achieve the task. Some very inspiring reads are [20,27], as well as the more recent work [28]. These also thoroughly present the complexity of the feature selection task and the manner in which dimensionality reduction can be achieved for various datasets, ideas that are also briefly discussed in the introduction section of the current article. Another work that presents a survey for the same problem is [29]. That study concentrates especially on evolutionary computation approaches for achieving the goal, so it is more closely linked with the current work. A review of studies on feature selection that is narrowed down to methodologies involving swarm intelligence algorithms is found in [30].
The two most popular evolutionary computation approaches in feature selection are genetic algorithms (GAs) and particle swarm optimization (PSO), and for both there has been an increasing trend in the number of studies using them over the last couple of decades [29]. They are both applied in wrapper approaches alongside various classification algorithms, such as support vector machines [31,32,33], K-nearest neighbors [34,35,36], artificial neural networks [37,38], decision trees [39], and so forth.
In [31], a real-world regression task regarding combustion processes in industry is considered, where support vector regression is employed for obtaining an optimal carbon monoxide concentration in the exhaust gases based on other characteristics. Besides a GA for feature selection, two more methods from Bayesian statistics are tested, but the GA approach proves to be superior. Another case of a successful combination between a GA and an SVM for classification is presented in [32], where the GA is used both for feature selection and for fine-tuning the parameters of the SVM. In [33], a dataset of medical microscopy images is considered; features are first extracted from the images, then reduced by feature selection, and eventually an SVM is applied for achieving automated diagnosis.
In [34], a bee-inspired optimization algorithm is used as the metaheuristic that takes care of the optimization; several benchmark datasets are used and the results are compared to the cases when a GA, a PSO or an ant colony optimization algorithm is used. The approach in [36] integrates an evolutionary algorithm with a local search technique, and the authors claim very good performance for medium- to large-sized datasets.
In [37], a real-world credit dataset collected at a Croatian bank is used; a GA combined with an ANN is applied to it and then further tested on a UCI database. Applications to medical data are presented in [38], where various classifiers (SVM, artificial neural networks, K-nearest neighbor, linear regression) are optimized via a genetic algorithm with respect to both parameter optimization and feature selection.
Finally, in [39] an application to medical images performs feature extraction, as in [33] above, and then feature selection is carried out using a GA. Various classifiers such as SVM, ANN and decision trees are used for the final prediction. Another example of feature selection tackled by swarm intelligence is [40], where the PSO algorithm is improved with an innovative initialization mechanism and solution-update process and validated on 20 popular datasets.
SSA has also been used to address the feature selection problem. Some of the efficient improvements of the basic SSA include the solution of feature weighting with the minimum distance problem [41], solving the feature selection problem through hybridization with opposition-based learning heuristics [42], and the improvement of accuracy, reliability and convergence time for the feature selection problem with the introduction of an inertia weight control parameter [43]. SSA has also been successfully modified and applied in other application domains recently, such as the green home health care routing problem [44], health care supply chains [45], crop disease detection [46] and the power systems unit commitment task [47], to name a few.
Nature is the source of inspiration in the case of swarm intelligence algorithms. Machine learning techniques benefit from good compatibility with the main principle of swarm intelligence: employing an immense number of units that are individually incapable of solving the problem. These algorithms are often applied on their own because of their well-known strong performance. Furthermore, their full potential is reached by incorporating hybridization techniques. The real-world applications of swarm intelligence solutions are vast, ranging from clustering, node localization, and energy preservation in wireless sensor networks [48,49,50,51], through the scheduling of cloud tasks [2,52], the prediction of COVID-19 cases based on machine learning [53,54], MRI classification optimization [55,56], and text document clustering [57], to the optimization of artificial neural networks [58,59,60,61].

3. Proposed Method

This section first introduces basic details of the original SSA metaheuristics. Afterwards, the observed drawbacks of the basic version are elaborated and mechanisms that are able to overcome its deficiencies are proposed. Finally, solutions for improving SSA are put forward.

3.1. Basic Salp Swarm Algorithm

The SSA [24] algorithm was motivated by the group of animals called salps, which are small, barrel-shaped, transparent aquatic creatures. The individual units of this species bind together with the goal of finding the safest paths to food sources. These interesting creatures link up one behind another, forming a chain.
The first unit in the chain is the leader, and its behavior models the exploration and exploitation of the optimization algorithm's search process. The leader decides where the group will go in search of paths and food in its area. The leader's position is changed towards the direction of the food source, which represents the current best solution.
The units' positions in the D-dimensional search space are mathematically described by a two-dimensional matrix labeled X, while the food source (current best solution) is labeled F. The following function updates the leader's position in the j-th dimension [24]:
x_j^1 = \begin{cases} F_j + c_1((ub_j - lb_j)c_2 + lb_j), & c_3 \geq 0.5 \\ F_j - c_1((ub_j - lb_j)c_2 + lb_j), & c_3 < 0.5, \end{cases} \quad (1)
where x_j^1 denotes the leader's position, F_j represents the position of the current best solution (food source) in the j-th dimension, the upper and lower search space boundaries in the j-th dimension are, respectively, ub_j and lb_j, while c_1, c_2 and c_3 denote pseudo-random numbers drawn from the interval [0, 1].
The parameters c_2 and c_3 determine the step size and dictate whether the new position will be generated towards negative or positive infinity. However, the most important parameter is considered to be c_1, because it directly influences the exploration-exploitation balance, which is one of the most important factors for search process efficiency. The parameter c_1 is calculated as [24]:
c_1 = 2 e^{-(4l/L)^2}, \quad (2)
where the current iteration is represented as l and the maximum iterations in a run are denoted as L.
The position of followers is updated with the following equation that represents Newton’s law of motion [24]:
x_j^i = \frac{1}{2} a t^2 + V_0 t, \quad (3)
where x_j^i denotes the position of the i-th follower in the j-th dimension, i \geq 2, t represents time, a = \frac{V_{final}}{V_0} with V = \frac{x - x_0}{t}, and V_0 is the initial speed.
Since time in an optimization process is modeled by iterations, the difference between consecutive iterations equals 1 and V_0 = 0 at the beginning, so Equation (3) can be reformulated as:
x_j^i = \frac{1}{2}\left(x_j^i + x_j^{i-1}\right). \quad (4)
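As a minimal illustration of Equations (1), (2) and (4), the following Python sketch (with numpy) updates one leader and its followers; the population size, search bounds and the sphere objective used to pick the food source are assumptions made only for this example, not values from the paper.

import numpy as np

def update_leader(F, lb, ub, l, L):
    # Equation (2): c1 balances exploration and exploitation over iterations
    c1 = 2 * np.exp(-(4 * l / L) ** 2)
    c2 = np.random.rand(len(F))                  # per-dimension step factor
    c3 = np.random.rand(len(F))                  # per-dimension direction factor
    step = c1 * ((ub - lb) * c2 + lb)
    # Equation (1): move towards or away from the food source F
    return np.where(c3 >= 0.5, F + step, F - step)

def update_followers(X):
    # Equation (4): each follower moves to the midpoint between itself and its predecessor
    X_new = X.copy()
    for i in range(1, len(X)):                   # index 0 is the leader
        X_new[i] = 0.5 * (X[i] + X[i - 1])
    return X_new

# toy usage: 5 salps in a 3-dimensional search space, sphere function as a stand-in objective
lb, ub = np.full(3, -10.0), np.full(3, 10.0)
X = lb + (ub - lb) * np.random.rand(5, 3)
F = X[np.argmin(np.sum(X ** 2, axis=1))]
X[0] = update_leader(F, lb, ub, l=1, L=100)
X = update_followers(X)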

3.2. Cons of the Original Algorithm and Proposed Improved Approach

It is common for basic optimization algorithms to have certain deficiencies, and that is also the case with the SSA. The observed cons of the basic SSA can be summarized as follows: insufficient exploration, average exploitation power (a conditional drawback) and the intensification-diversification trade-off.
In general, any optimization algorithm can be improved by applying small modifications, for example, minor changes to the search equation or additional mechanisms, and/or significant changes through hybridization with another algorithm. For the purpose of this study, the basic SSA was improved by including a novel mechanism, as well as by hybridization with another well-known optimization metaheuristic.
Based on the findings from previous research [62,63], as well as on extensive simulations with challenging CEC2013 benchmark instances [25] conducted for the purpose of this study, it was discovered that the diversification process of the basic SSA exhibits some deficiencies, which leads to an inappropriate intensification-diversification balance that is, on average, skewed towards exploitation.
First of all, SSA exploration is controlled only by the dynamic parameter c_1 according to Equation (2): at the beginning of a run the search is shifted towards exploration, while in later iterations it slides towards exploitation. However, this mechanism is applied only to the leader, which moves relative to F (the current best solution), and the whole search process depends to some extent on luck. Followers are updated according to Equation (4), which is essentially exploitation between their previous and current positions. If the algorithm is lucky and manages to find a region of the search space where the optimum solution resides, then the search process will eventually converge and satisfactory solution quality will be obtained. Conversely, the search will get stuck in sub-optimal regions and the best solutions will be located far from the global optimum at the end of a run.
Therefore, a solution for the above-mentioned issue would be to improve exploration in early iterations. To achieve this goal, an exploration replacement mechanism is incorporated into the basic SSA in the following way: in the first rmp iterations, the wrs worst solutions from the population are rejected and replaced with randomly generated solutions within the upper and lower bounds of the search space according to the expression:
x_j = lb_j + (ub_j - lb_j) \cdot rnd, \quad (5)
where rnd is a pseudo-random number drawn from a uniform distribution.
The same expression is utilized in the initialization phase, where the starting random population is generated. This mechanism introduces two additional control parameters: the replacement mechanism point (rmp), which determines when (in terms of l) the replacement mechanism will be triggered, and the number of worst replaced solutions (wrs), which controls how many of the worst solutions will be replaced with random ones. If rmp = L, the enhanced exploration is performed throughout the whole run; similarly, if rmp = 0, the SSA search is executed as in its basic version.
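A minimal sketch of the described replacement mechanism follows (Python with numpy; the population size, bounds and the sphere fitness are illustrative assumptions). It replaces the wrs worst solutions with random ones generated by Equation (5) while the current iteration l is below rmp.

import numpy as np

def replace_worst(X, fitness, lb, ub, l, rmp, wrs):
    # mechanism is active only in the first rmp iterations
    if l >= rmp:
        return X
    worst = np.argsort(fitness)[-wrs:]           # indices of the wrs worst (largest) fitness values
    X[worst] = lb + (ub - lb) * np.random.rand(wrs, X.shape[1])   # Equation (5)
    return X

# toy usage: 10 solutions in 4 dimensions, minimizing the sphere function
lb, ub = np.full(4, -5.0), np.full(4, 5.0)
X = lb + (ub - lb) * np.random.rand(10, 4)
fit = np.sum(X ** 2, axis=1)
X = replace_worst(X, fit, lb, ub, l=3, rmp=30, wrs=2)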
Further analysis of the original SSA also determined that the exploitation procedure for the followers (Equation (4)) is relatively simple, depending only on their current and previous positions. To overcome this, hybridization with another recently proposed metaheuristic, the SCA [14], is performed. In each iteration, the followers are updated either by using the basic SSA equation (Equation (4)) or the SCA search expression for an individual i and component j:
x_j^i = \begin{cases} x_j^i + r_1 \cdot \sin(r_2) \cdot |r_3 P_j - x_j^i|, & r_4 < 0.5 \\ x_j^i + r_1 \cdot \cos(r_2) \cdot |r_3 P_j - x_j^i|, & r_4 \geq 0.5, \end{cases} \quad (6)
where r_1, r_2, r_3 and r_4 are four randomly generated values from the interval [0, 1], P_j represents the j-th component of a random individual from the population, |\cdot| indicates the absolute value, and \sin and \cos are the standard trigonometric functions.
Similarly to the original SSA, the SCA employs the following formula to adjust the intensification-diversification balance:
r_1 = a - l \frac{a}{L}, \quad (7)
where the parameter a represents a constant.
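The SCA-style follower update of Equations (6) and (7) can be sketched as follows (Python with numpy). The constant a = 2 and the per-dimension draws of r2, r3 and r4 are assumptions made for illustration; the paper only states that these values come from the interval [0, 1].

import numpy as np

def sca_update(x_i, P, l, L, a=2.0):
    # Equation (7): r1 decreases linearly with the iteration counter
    r1 = a - l * (a / L)
    r2 = np.random.rand(len(x_i))
    r3 = np.random.rand(len(x_i))
    r4 = np.random.rand(len(x_i))
    sin_move = x_i + r1 * np.sin(r2) * np.abs(r3 * P - x_i)
    cos_move = x_i + r1 * np.cos(r2) * np.abs(r3 * P - x_i)
    # Equation (6): choose the sine or cosine branch per component, depending on r4
    return np.where(r4 < 0.5, sin_move, cos_move)

# toy usage with a 3-dimensional follower and a random individual P
x_new = sca_update(np.array([0.5, -1.2, 3.0]), np.array([1.0, 0.0, -2.0]), l=10, L=100)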
To control whether the followers' positions will be updated using the basic SSA or the SCA search, a pseudo-random number φ is used, as shown in Algorithm 1.
Reflecting the introduced modifications, the proposed enhanced SSA is named SSA with replacement mechanism and SCA search (SSARM-SCA). Its pseudo-code is shown in Algorithm 1, and the flowchart of the algorithm is shown in Figure 1.
Algorithm 1: Pseudocode of SSARM-SCA.
Initialize population X by using Equation (5)
repeat
  Compute the objective function for each solution x i
  Update the best salp (solution) (F = X_b)
  for  i = 1 : N  do
   if  i = = 1  then
     Update the position of salp using Equation (1)
   end if
   if  ϕ < 0.5  then
     Update followers by using SSA search and Equation (4)
   else
     Update followers by using SCA search and Equation (6)
   end if
  end for
  Sort all individuals according to fitness
  if  l < rmp  then
    Replace wrs worst solutions by random ones using Equation (5).
  end if
  Update c 1 using Equation (2)
  Update r 1 using Equation (7)
until ( l < L )
Return the best solution F.
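To make the control flow of Algorithm 1 concrete, a compact, self-contained Python sketch of the SSARM-SCA loop on a toy sphere objective is given below. The bound clipping, the constant a = 2 and the parameter values are assumptions made for illustration only and are not part of the pseudocode above.

import numpy as np

def ssarm_sca(obj, lb, ub, N=30, L=200, rmp=20, wrs=5, a=2.0):
    dim = len(lb)
    X = lb + (ub - lb) * np.random.rand(N, dim)          # Equation (5): random initialization
    for l in range(1, L + 1):
        fit = np.array([obj(x) for x in X])              # evaluate all solutions (FFEs)
        F = X[np.argmin(fit)].copy()                     # best salp (food source)
        c1 = 2 * np.exp(-(4 * l / L) ** 2)               # Equation (2)
        r1 = a - l * (a / L)                             # Equation (7)
        for i in range(N):
            if i == 0:                                   # leader update, Equation (1)
                c2, c3 = np.random.rand(dim), np.random.rand(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, F + step, F - step)
            elif np.random.rand() < 0.5:                 # phi < 0.5: SSA follower update, Equation (4)
                X[i] = 0.5 * (X[i] + X[i - 1])
            else:                                        # phi >= 0.5: SCA follower update, Equation (6)
                P = X[np.random.randint(N)]
                r2, r3, r4 = np.random.rand(3, dim)
                move = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * P - X[i])
                X[i] = X[i] + move
        X = np.clip(X, lb, ub)                           # keep solutions inside the search space (assumed)
        if l < rmp:                                      # replacement mechanism, Equation (5)
            fit = np.array([obj(x) for x in X])
            X[np.argsort(fit)[-wrs:]] = lb + (ub - lb) * np.random.rand(wrs, dim)
    fit = np.array([obj(x) for x in X])
    return X[np.argmin(fit)], fit.min()

best, best_val = ssarm_sca(lambda x: np.sum(x ** 2), np.full(10, -100.0), np.full(10, 100.0))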

3.3. Complexity and Limitations of Proposed Method

The most computationally expensive operation during a metaheuristic algorithm's execution is the fitness function evaluation (FFE). Accordingly, as established in the most relevant and contemporary computer science publications, the complexity of the algorithm is measured in terms of the utilized FFEs [64].
The complexity of both the basic SSA and the proposed SSARM-SCA algorithm is the same: O(NP) + O(2 · NP · T), where NP denotes the number of solutions in the population, while T represents the number of iterations. In each iteration, the proposed algorithm performs the search by utilizing either the SSA or the SCA search equations. In the first rmp iterations the wrs worst solutions are replaced by pseudo-random solutions; however, this does not add additional costs in terms of FFEs, as all solutions in the population are evaluated at the beginning of each iteration.
When the FFE is considered, the proposed SSARM-SCA algorithm is not more complex than the basic SSA metaheuristic. The algorithm is slightly more complex if the number of floating-point operations is taken into account; however, this can be disregarded in comparison to the FFE cost and is therefore not relevant for the algorithm's complexity.

4. Validation of the Proposed Method for Standard CEC2013 Benchmarks

Following good practice from the modern literature, the proposed SSARM-SCA is first tested on challenging CEC2013 benchmark instances [25] with 30 dimensions (D = 30) before being adapted for the practical feature selection challenge. With the goal of enabling comparative analysis with other SOTA approaches, whose results are published in recent papers, the same experimental conditions in terms of control parameters as in [65] are kept.
The CEC2013 benchmark suite contains 28 functions that are split into three groups based on their characteristics. Test instances 1 to 5 are unimodal, benchmarks 6 to 20 are multimodal, and finally, instances 21 to 28 belong to the category of composite functions. Details of the functions employed in the simulations are given in Table 1.
Besides the proposed method and the original SSA, for the purpose of comparative analysis, all methods shown in [65] are also implemented and evaluated. All algorithms are tested with 50 individuals in the population (N = 50) and the number of fitness function evaluations maxFFEs of 3 × 10^5 is set as the termination condition, as in [65].
The SSARM-SCA is compared to practical genetic algorithm (RGA) [66], gravitational search algorithm (GSA) [67], disruption GSA (D-GSA) [68], black hole GSA (BH-GSA) [69], clustered GSA (C-GSA) [70] and attractive repulsive GSA (AR-GSA) [65].
Specific SSARM-SCA control parameters are set as follows: rmp = 3 × 10^2 according to the expression maxFFEs/1000 and wrs = 10 using the formula N/5. The values of these parameters were determined empirically. The dynamic parameter c_1 for the original SSA and SSARM-SCA is adjusted according to Equation (2), and the parameter r_1 of SSARM-SCA is adjusted throughout the run by expression (7). It is noted that in those expressions, instead of l and L, the FFEs and maxFFEs are used, respectively. The other methods implemented for the purpose of comparison are tested with the control parameters suggested in [65].
All algorithms are executed in 51 independent runs and the following metrics in terms of objective function values are captured: best, median, worst, mean and standard deviation. The comparative analysis results are split into three tables based on the function types as follows: Table 2 shows results for unimodal, Table 3 presents metrics for multimodal and Table 4 depicts results for composite CEC2013 instances. The best results for each metric are marked in bold in all tables.
First of all, the results obtained for all methods for the purpose of this study are similar to those in [67]; therefore, this research validates the results reported in [67]. From the comparative analysis results, the superiority of the proposed SSARM-SCA can be unambiguously determined. For most of the benchmarks, including all three types (unimodal, multimodal and composite), on average, the SSARM-SCA obtains the best results for all four indicators among all other SOTA metaheuristics. Specifically, when compared to the original SSA, the improvements in terms of convergence speed and results' quality are substantial.
More insight regarding the convergence speed can be obtained from Figure 2. In the presented figure, convergence speed graphs for some methods included in the analysis are given for 2 unimodal (F1 and F4), 4 multimodal (F7, F12, F14 and F18) and 2 composite (F24 and F28) benchmarks. The provided graphs confirm clear improvements of the proposed SSARM-SCA over the original SSA and other SOTA methods in terms of convergence.
However, to more objectively determine the robustness and efficiency of one approach over others, the results should also be compared in terms of statistical tests. For that reason, the Friedman test [71,72], a ranked two-way analysis of variances, was conducted as the primary method over the proposed method and the other methods implemented for this research.
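For readers who want to reproduce this kind of analysis, a minimal sketch of the Friedman test with scipy is given below; the result matrix is synthetic and purely illustrative, not data from this study.

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
results = rng.random((28, 8))    # rows: 28 benchmark functions, columns: 8 algorithms (synthetic values)

# scipy expects one sample (column) per algorithm
stat, p_value = friedmanchisquare(*[results[:, k] for k in range(results.shape[1])])
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.3e}")

# average ranks per algorithm (rank 1 = lowest objective value in a row)
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
print("average ranks:", ranks.mean(axis=0))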
The results achieved by the 8 implemented algorithms over the 28 functions from the CEC2013 benchmark set, including the Friedman and the aligned Friedman test, are presented in the Table 5 and Table 6, respectively.
As observed in Table 6, the proposed SSARM-SCA outperformed all of the other candidates, as well as the basic SSA, which averaged a ranking of 133.463, while the proposed SSARM-SCA obtained an average ranking of 56.838.
Furthermore, the research in [73] suggests that the performance assessment based on the χ² value can be improved upon; hence, Iman and Davenport's test [74] is used as well. The results of this test are summarized in Table 7.
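For reference, the Iman-Davenport statistic is commonly derived from the Friedman \chi^2_F value through the standard form below, where N is the number of test cases and k the number of compared algorithms; this is the textbook formula rather than a value taken from the present study.

F_{ID} = \frac{(N - 1)\,\chi^2_F}{N(k - 1) - \chi^2_F}, \qquad F_{ID} \sim F\big((k - 1),\,(k - 1)(N - 1)\big)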
The results show a value of 2.230 × 10^1, which is significantly larger than the F-distribution critical value (F(9, 9 × 10) = 2.058 × 10^0). Additionally, the null hypothesis is rejected by Iman and Davenport's test. The Friedman statistic scored χ_r² = 1.407 × 10^1, which is also larger than the corresponding critical value at the significance level α = 0.05.
The final conclusion is that the null hypothesis can be rejected and that the proposed SSARM-SCA clearly outperforms its competitors.
The rejection of the null hypothesis by both performed statistical tests is followed by the next type of test, Holm's step-down procedure, which is a non-parametric post-hoc method. The findings of these experiments are displayed in Table 8.
The p value is the main sorting reference for all the methods, and the values are compared against α/(k − i), where k denotes the degree of freedom and i the ordinal number of the algorithm.
This research utilized the α parameter at the levels of 0.05 and 0.1. It should be noted that the values of the p parameter are displayed in scientific notation.
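A small Python sketch of Holm's step-down procedure as described above is given below; the p values are synthetic placeholders, not the values from Table 8.

import numpy as np

def holm_step_down(p_values, alpha=0.05):
    # sort p values ascending; the i-th smallest (i counted from 0) is compared
    # against alpha / (k - i); rejection stops at the first non-significant comparison
    k = len(p_values)
    order = np.argsort(p_values)
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):
        if p_values[idx] <= alpha / (k - i):
            reject[idx] = True
        else:
            break
    return reject

p = np.array([1e-5, 3e-4, 2e-3, 1.2e-2, 4.0e-2, 8.0e-2, 2.0e-1])   # synthetic p values
print(holm_step_down(p, alpha=0.05))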
The summary of testing with Holm's method, based on the results provided in Table 8, shows that the proposed solution achieves an improvement at both levels of significance.

5. Feature Selection Experiments

Feature selection belongs to the group of binary problems; hence, the well-known V-shaped transfer function is used for mapping continuous search space variables to the discrete values 0 and 1. Therefore, if a dataset consists of n_f features, one solution is represented as a binary array of length n_f. This is how the proposed SSARM-SCA was adapted for this problem and, to distinguish the binary version from its continuous counterpart, it is referred to as bSSARM-SCA.
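A hedged sketch of the continuous-to-binary mapping is shown below. The specific V-shaped member used here, |tanh(x)|, and the simple thresholding rule are assumptions, since the paper does not state which member of the V-shaped family or which binarization rule was applied.

import numpy as np

def v_transfer(x):
    # one common member of the V-shaped transfer function family (assumed)
    return np.abs(np.tanh(x))

def to_binary(solution, rng):
    # a feature is selected when a uniform random number falls below the transfer value
    probs = v_transfer(solution)
    return (rng.random(len(solution)) < probs).astype(int)

rng = np.random.default_rng(42)
continuous = rng.normal(size=10)      # candidate solution in continuous space
mask = to_binary(continuous, rng)     # 1 = feature selected, 0 = feature dropped
print(mask)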
The efficiency of the proposed method for the feature selection challenge was compared to the SOTA metaheuristics presented in [22]. For that reason, similar experimental conditions as in [22] were established. However, instead of using L = 70 with N = 8 as in [22], maxFFEs was used as the termination condition and it was set to 560 (N · L). This approach is more reasonable since different optimization algorithms consume a different number of FFEs in each iteration, and the FFE is the most expensive calculation in the optimization process. The other SSARM-SCA control parameters were as follows: rmp = 56 according to the formula maxFFEs/10 and wrs = 2 using the expression round(N/3).
The bSSARM-SCA performance was tested on 21 UCI datasets which are often used for benchmarking (Table 9). All datasets are split into training and testing sets using the train_test_split rule in an 80%:20% proportion. Each solution's fitness is calculated on the training set by utilizing the K-nearest neighbors (KNN) classifier and the following fitness function F, as in [22]:
F = \alpha \, ER(D) + \beta \frac{R}{C}, \quad (8)
where ER(D) represents the classification error rate, R is the number of selected features, and C is the total number of features. The parameters \alpha and \beta establish the relative influence of ER(D) and R on the fitness function and they sum to 1 (\alpha = 1 - \beta).
From the formulated fitness function it can be seen that the classification error rate, as well as the number of selected features, are taken into consideration and that the problem is formulated as a minimization optimization challenge. In this study, α is set to 0.9, while β is adjusted to 0.1.
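The fitness evaluation of Equation (8) can be sketched with scikit-learn's KNN classifier as follows; the internal validation split, the number of neighbors and the random data standing in for a UCI dataset are assumptions made for illustration.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def fitness(mask, X_train, y_train, alpha=0.9, beta=0.1, k=5):
    # Equation (8): F = alpha * ER(D) + beta * R / C for a binary feature mask
    C = len(mask)                       # total number of features
    R = int(mask.sum())                 # number of selected features
    if R == 0:
        return 1.0                      # worst fitness when no feature is selected
    X_sel = X_train[:, mask.astype(bool)]
    # classification error estimated on an internal validation split (an assumption)
    X_tr, X_val, y_tr, y_val = train_test_split(X_sel, y_train, test_size=0.2, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    error_rate = 1.0 - knn.score(X_val, y_val)
    return alpha * error_rate + beta * (R / C)

rng = np.random.default_rng(0)
X, y = rng.random((200, 30)), rng.integers(0, 2, 200)
print(fitness(rng.integers(0, 2, 30), X, y))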
At the end of a run, the solution with the best fitness is determined and the results of its evaluation on the testing set are reported. All experiments were conducted in 20 independent runs. All methods, including the SOTA metaheuristics used in the comparative analysis shown in [22], along with the original bSSA and bSSARM-SCA, are implemented in Python using the numpy, pandas, scikit-learn and matplotlib libraries. Moreover, the same performance metrics as in [22] are shown and, for all implemented methods, a V-shaped transfer function is used for mapping the continuous to the binary search space. The algorithms proposed in [22] were tested with the control parameters suggested in the original papers.
Finally, as proposed in [40], three different initialization methods were employed in order to more objectively evaluate the proposed method: small, mixed and large. In small initialization, all individuals are generated at the beginning of a run with a small number of selected features (about 1/3), while in large initialization individuals employ most of the features ([2/3, 1]). In the mixed initialization experiments, the generated solutions take into account about 2/3 of all features in the dataset.
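The three initialization strategies can be sketched as follows; the exact per-feature selection probabilities used here are assumptions based on the proportions quoted above.

import numpy as np

def init_population(n_solutions, n_features, strategy, rng):
    # selection probability per feature, following the proportions quoted in the text (assumed)
    if strategy == "small":
        p = 1.0 / 3.0                      # roughly 1/3 of features selected
    elif strategy == "large":
        p = rng.uniform(2.0 / 3.0, 1.0)    # most features selected ([2/3, 1])
    elif strategy == "mixed":
        p = 2.0 / 3.0                      # about 2/3 of all features
    else:
        raise ValueError("unknown strategy")
    return (rng.random((n_solutions, n_features)) < p).astype(int)

rng = np.random.default_rng(1)
pop = init_population(n_solutions=8, n_features=20, strategy="small", rng=rng)
print(pop.mean())    # average fraction of selected features, close to the target proportion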
In all three experiments, the mean fitness and accuracy obtained over 20 runs are used as performance metrics, and the expressions used for their calculation are given in Equation (9) and Equation (10), respectively.
Avg(f) = \frac{1}{Run} \sum_{i=1}^{Run} f_i^{*}, \quad (9)
where the average fitness is denoted as Avg(f), f_i^{*} designates the fitness of the best individual in run i, while Run represents the total number of runs.
Avg(c) = \frac{1}{Run} \sum_{i=1}^{Run} \frac{1}{N} \sum_{j=1}^{N} Match(C_j, L_j), \quad (10)
where Avg(c) represents the average classification accuracy, N marks the number of instances in the test set, C_j represents the classifier output for instance j, and L_j denotes the reference class corresponding to instance j.
As already noted, for the purpose of comparative analysis, besides the original SSA adapted for binary optimization problems (bSSA), the following algorithms, whose results are shown in [22], are also used: the whale optimization algorithm (WOA) [12], bWOA with a sigmoidal transfer function (bWOA-S) and with a hyperbolic tangent transfer function (bWOA-V) [22], three versions of the binary ant lion optimizer (BALO) [22], particle swarm optimization (PSO) [75], binary grey wolf optimization (bGWO) and the binary dragonfly algorithm (bDA) [11].
Mean fitness and classification accuracy for all three initialization strategies and the 21 UCI datasets are shown in Table 10, Table 11, Table 12, Table 13, Table 14 and Table 15. In all provided tables, the best results are marked in bold.
From the provided experimental results, a few important remarks can be made. First, results similar to those reported in [22] were obtained for WOA, bWOA-S, bWOA-V, BALO1, BALO2, BALO3, PSO, bGWO and bDA; therefore, the validity of the previous study is confirmed (it is noted that, due to the stochastic nature of metaheuristics, exactly the same results could not be generated). Second, the proposed hybrid bSSARM-SCA outscores the original SSA for most datasets and benchmark instances; hence, performance improvements over the basic implementation are clear. Finally, when compared to all other SOTA approaches encompassed by the comparative analysis, the proposed bSSARM-SCA on average obtained the best results and proved to be a robust method for tackling the feature selection challenge in terms of the employed fitness function and classification accuracy.
The formulated fitness function takes into account the number of selected features, however only with a weight coefficient of 0.1 (parameter β = 0.1 in expression (8)). For that reason, to further validate the proposed method, the average proportion of selected features (selection size) over 20 runs and all three initialization strategies is shown in Table 16.
Similar to the results for the average fitness and classification accuracy, from Table 16 it can be concluded that, on average, the proposed bSSARM-SCA metaheuristic managed to significantly reduce the number of selected features, which in turn has implications for the classifier's computational efficiency. Therefore, by performing feature selection with bSSARM-SCA, the classification computational time can be substantially reduced. In terms of average selection size, only the bDA managed to outscore the method proposed in this study, and only on some test instances.
A box-and-whiskers diagram visualizing the average classification error (ER) for all datasets and the three initialization strategies is shown in Figure 3. From the presented diagram, the stability of the proposed bSSARM-SCA can be clearly observed. For example, when compared with the basic SSA, which in some runs misses promising regions of the search space, the superiority of the algorithm proposed in this study is evident.
Finally, to show the performances of the proposed bSSARM-SCA algorithm and compare it to other SOTA SSA versions, the authors have implemented binary versions of three novel SSA modifications. The accuracy of the bSSARM-SCA over 21 datasets was compared to opposition based learning and inertia weight ISSA (bISSA1), proposed by [41], opposition based learning and local search ISSA (bISSA2) proposed in [42], and inertia weight ISSA (bISSA3) given in [43]. Again, it is worth noting that the authors have independently implemented all three mentioned binary ISSA variants and executed the experiments with 21 observed datasets. The obtained results are shown in Table 17, where the best result is marked bold for each category (small, large or mixed initialization). The simulation findings clearly show the superiority of the proposed bSSARM-SCA method, that obtained the best results on 15 out of 21 observed datasets. The second best method was bISSA2 [42], which obtained the best results on four datasets, while the bISSA1 method [41] achieved the best accuracy on two datasets.

6. Conclusions

The research proposed in this study presents a novel SSA algorithm that addresses observed deficiencies of its original implementation. By hybridizing the basic algorithm with the well-known SCA metaheuristic and by incorporating a guided replacement mechanism, a novel SSARM-SCA metaheuristic is devised.
Guided by established practice from the modern literature, before its application to feature selection, the proposed enhanced SSA is firstly tested and evaluated on a recognized test-bed with challenging instances of functions having 30 dimensions from the CEC2013 benchmark suite. Afterwards, it is adapted as a wrapper-based approach for feature selection and validated against 21 well-known datasets retrieved from UCI.
According to the experimental findings and a rigorous comparative analysis with other recent SOTA approaches, the proposed SSARM-SCA proves to be an efficient optimizer that significantly improves the convergence speed and results' quality over the basic SSA and other SOTA algorithms. Moreover, the obtained results show that the proposed method manages to establish better classification accuracy while utilizing a smaller number of features; therefore, it also manages to improve the solution to the feature selection challenge.
The proposed SSARM-SCA algorithm does not increase the complexity of the basic SSA implementation in terms of FFEs, while offering significantly better performance for this particular problem. However, according to the no free lunch theorem, the limitation of the proposed solution is that there are no guarantees that it would perform well for other optimization problems.
Possible directions of future research include testing the devised SSARM-SCA algorithm on other practical datasets from different application domains, and also applying it to other optimization problems, such as wireless sensor network optimization and task scheduling in cloud-based systems.

Author Contributions

Conceptualization, N.B. (Nebojsa Bacanin), M.Z. and C.S.; methodology, N.B. (Nebojsa Budimirovic), A.C., A.P. and C.S.; software, N.B. (Nebojsa Budimirovic), A.C., M.Z.; validation, N.B. (Nebojsa Bacanin) and C.S.; formal analysis, M.Z. and A.P.; investigation, N.B. (Nebojsa Budimirovic) and N.B. (Nebojsa Bacanin); data curation, C.S., N.B. (Nebojsa Bacanin) and A.C.; writing—original draft preparation, A.P. and M.Z.; writing—review and editing, M.Z., C.S. and A.C.; visualization, N.B. (Nebojsa Budimirovic), M.Z. and A.C.; supervision, N.B. (Nebojsa Bacanin) and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Catalin Stoean was supported by a grant of the Romanian Ministry of Education and Research, CCCDI-UEFISCDI,411PED/2020, project number PN-III-P2-2.1-PED-2019-2271, within PNCDI III.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Korani, W.; Mouhoub, M. Review on Nature-Inspired Algorithms. Oper. Res. Forum 2021, 2, 1–26. [Google Scholar] [CrossRef]
  2. Bezdan, T.; Zivkovic, M.; Tuba, E.; Strumberger, I.; Bacanin, N.; Tuba, M. Multi-objective Task Scheduling in Cloud Computing Environment by Hybridized Bat Algorithm. J. Intell. Fuzzy Syst. 2020, 42, 718–725. [Google Scholar]
  3. Strumberger, I.; Minovic, M.; Tuba, M.; Bacanin, N. Performance of elephant herding optimization and tree growth algorithm adapted for node localization in wireless sensor networks. Sensors 2019, 19, 2515. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Katyara, S.; Shaikh, M.F.; Shaikh, S.; Khand, Z.H.; Staszewski, L.; Bhan, V.; Majeed, A.; Shah, M.A.; Zbigniew, L. Leveraging a genetic algorithm for the optimal placement of distributed generation and the need for energy management strategies using a fuzzy inference system. Electronics 2021, 10, 172. [Google Scholar] [CrossRef]
  5. Fathollahi-Fard, A.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. Red deer algorithm (RDA): A new nature-inspired meta-heuristic. Soft Comput. 2020, 24, 14637–14665. [Google Scholar] [CrossRef]
  6. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  7. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper Optimization Algorithm: Theory, Variants, and Applications. IEEE Access 2021, 9, 50001–50024. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Mirjalili, S.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  9. Okwu, M.; Tartibu, L. Moths–Flame Optimization Algorithm. In Metaheuristic Optimization: Nature-Inspired Algorithms Swarm and Computational Intelligence, Theory and Applications; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2021; Volume 927, pp. 115–123. [Google Scholar] [CrossRef]
  10. Fathollahi-Fard, A.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. The Social Engineering Optimizer (SEO). Eng. Appl. Artif. Intell. 2018, 72, 267–293. [Google Scholar] [CrossRef]
  11. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  14. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  15. Abualigah, L.; Diabat, A. Advances in sine cosine algorithm: A comprehensive survey. Artif. Intell. Rev. 2021, 54, 2567–2608. [Google Scholar] [CrossRef]
  16. Trunk, G. A Problem of Dimensionality: A Simple Example. Pattern Anal. Mach. Intell. IEEE Trans. 1979, PAMI-1, 306–307. [Google Scholar] [CrossRef] [PubMed]
  17. Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative. J. Mach. Learn. Res. 2009, 10, 13. [Google Scholar]
  18. Levine, M.D. Feature extraction: A survey. Proc. IEEE 1969, 57, 1391–1407. [Google Scholar] [CrossRef]
  19. Dhiman, G.; Oliva, D.; Kaur, A.; Singh, K.K.; Vimal, S.; Sharma, A.; Cengiz, K. BEPO: A novel binary emperor penguin optimizer for automatic feature selection. Knowl.-Based Syst. 2021, 211, 106560. [Google Scholar] [CrossRef]
  20. Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
  21. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663. [Google Scholar] [CrossRef]
  22. Hussien, A.G.; Oliva, D.; Houssein, E.H.; Juan, A.A.; Yu, X. Binary Whale Optimization Algorithm for Dimensionality Reduction. Mathematics 2020, 8, 1821. [Google Scholar] [CrossRef]
  23. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  24. Mirjalili, S.; Gandomi, A.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  25. Li, X.; Engelbrecht, A.; Epitropakis, M.G. Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization; Techtechnical Report; Evolutionary Computation and Machine Learning Group, RMIT University: Melbourne, Australia, 2013. [Google Scholar]
  26. Dua, D.; Graff, C. Uci machine learning repository. In The Absenteeism at Work Dataset Was Donated by Andrea Martiniano, Ricardo Pinto Ferreira, and Renato Jose Sassi; University of California: Irvine, CA, USA, 2017. [Google Scholar]
  27. Miao, J.; Niu, L. A Survey on Feature Selection. Procedia Comput. Sci. 2016, 91, 919–926. [Google Scholar] [CrossRef] [Green Version]
  28. Dhal, P.; Azad, C. A comprehensive survey on feature selection in the various fields of machine learning. Appl. Intell. 2022, 28, 4543–4581. [Google Scholar] [CrossRef]
  29. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A Survey on Evolutionary Computation Approaches to Feature Selection. IEEE Trans. Evol. Comput. 2016, 20, 606–626. [Google Scholar] [CrossRef] [Green Version]
  30. Brezočnik, L.; Fister, I., Jr.; Podgorelec, V. Swarm Intelligence Algorithms for Feature Selection: A Review. Appl. Sci. 2018, 8, 1521. [Google Scholar] [CrossRef] [Green Version]
  31. Rebolledo, M.; Stoean, R.; Eiben, A.E.; Bartz-Beielstein, T. Hybrid Variable Selection and Support Vector Regression for Gas Sensor Optimization. In Bioinspired Optimization Methods and Their Applications, Proceedings of the 9th International Conference (BIOMA 2020), Brussels, Belgium, 19–20 November 2020; Filipič, B., Minisci, E., Vasile, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 281–293. [Google Scholar]
  32. Tao, Z.; Huiling, L.; Wenwen, W.; Xia, Y. GA-SVM based feature selection and parameter optimization in hospitalization expense modeling. Appl. Soft Comput. 2019, 75, 323–332. [Google Scholar] [CrossRef]
  33. Stoean, C. In Search of the Optimal Set of Indicators when Classifying Histopathological Images. In Proceedings of the 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, Romania, 24–27 September 2016; pp. 449–455. [Google Scholar] [CrossRef]
  34. Marinaki, M.; Marinakis, Y. A bumble bees mating optimization algorithm for the feature selection problem. Int. J. Mach. Learn. Cybern. 2016, 7, 519–538. [Google Scholar] [CrossRef]
  35. Kashef, S.; Nezamabadi-pour, H. An advanced ACO algorithm for feature subset selection. Neurocomputing 2015, 147, 271–279. [Google Scholar] [CrossRef]
  36. Jeong, Y.S.; Shin, K.S.; Jeong, M.K. An evolutionary algorithm with the partial sequential forward floating search mutation for large-scale feature selection problems. J. Oper. Res. Soc. 2015, 66, 529–538. [Google Scholar] [CrossRef]
  37. Oreski, S.; Oreski, G. Genetic algorithm-based heuristic for feature selection in credit risk assessment. Expert Syst. Appl. 2014, 41, 2052–2064. [Google Scholar] [CrossRef]
  38. Winkler, S.M.; Affenzeller, M.; Jacak, W.; Stekel, H. Identification of Cancer Diagnosis Estimation Models Using Evolutionary Algorithms: A Case Study for Breast Cancer, Melanoma, and Cancer in the Respiratory System. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO’11), Dublin, Ireland, 12–16 July 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 503–510. [Google Scholar] [CrossRef]
  39. Da Silva, S.F.; Ribeiro, M.X.; Batista Neto, J.D.E.S.; Traina, C., Jr.; Traina, A.J.M. Improving the Ranking Quality of Medical Image Retrieval Using a Genetic Feature Selection Method. Decis. Support Syst. 2011, 51, 810–820. [Google Scholar] [CrossRef]
  40. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimisation for feature selection in classification: Novel initialisation and updating mechanisms. Appl. Soft Comput. 2014, 18, 261–276. [Google Scholar] [CrossRef]
  41. Ben Chaabane, S.; Belazi, A.; Kharbech, S.; Bouallegue, A.; Clavier, L. Improved Salp Swarm Optimization Algorithm: Application in Feature Weighting for Blind Modulation Identification. Electronics 2021, 10, 2002. [Google Scholar] [CrossRef]
  42. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.; Mirjalili, S. Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Syst. Appl. 2020, 145, 113122. [Google Scholar] [CrossRef]
  43. Hegazy, A.E.; Makhlouf, M.; El-Tawel, G.S. Improved salp swarm algorithm for feature selection. J. King Saud Univ.-Comput. Inf. Sci. 2020, 32, 335–344. [Google Scholar] [CrossRef]
  44. Fathollahi-Fard, A.M.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. A bi-objective green home health care routing problem. J. Clean. Prod. 2018, 200, 423–443.
  45. Fathollahi-Fard, A.M.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R.; Smith, N.R. Bi-level programming for home health care supply chain considering outsourcing. J. Ind. Inf. Integr. 2021, 25, 100246.
  46. Jain, S.; Dharavath, R. Memetic salp swarm optimization algorithm based feature selection approach for crop disease detection system. J. Ambient. Intell. Humaniz. Comput. 2021, 1–19.
  47. Venkatesh Kumar, C.; Ramesh Babu, M. An Exhaustive Solution of Power System Unit Commitment Problem Using Enhanced Binary Salp Swarm Optimization Algorithm. J. Electr. Eng. Technol. 2022, 17, 395–413.
  48. Zivkovic, M.; Bacanin, N.; Tuba, E.; Strumberger, I.; Bezdan, T.; Tuba, M. Wireless Sensor Networks Life Time Optimization Based on the Improved Firefly Algorithm. In Proceedings of the International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020; pp. 1176–1181.
  49. Bacanin, N.; Tuba, E.; Zivkovic, M.; Strumberger, I.; Tuba, M. Whale Optimization Algorithm with Exploratory Move for Wireless Sensor Networks Localization. In Proceedings of the 19th International Conference on Hybrid Intelligent Systems (HIS 2019), Bhopal, India, 10–12 December 2019; Springer: Cham, Switzerland, 2019; pp. 328–338.
  50. Zivkovic, M.; Bacanin, N.; Zivkovic, T.; Strumberger, I.; Tuba, E.; Tuba, M. Enhanced Grey Wolf Algorithm for Energy Efficient Wireless Sensor Networks. In Proceedings of the Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2020; pp. 87–92.
  51. Bacanin, N.; Arnaut, U.; Zivkovic, M.; Bezdan, T.; Rashid, T.A. Energy Efficient Clustering in Wireless Sensor Networks by Opposition-Based Initialization Bat Algorithm. In Computer Networks and Inventive Communication Technologies; Springer: Singapore, 2022; pp. 1–16.
  52. Bacanin, N.; Bezdan, T.; Tuba, E.; Strumberger, I.; Tuba, M.; Zivkovic, M. Task scheduling in cloud computing environment by grey wolf optimizer. In Proceedings of the 27th Telecommunications Forum (TELFOR), Belgrade, Serbia, 26–27 November 2019; pp. 1–4.
  53. Zivkovic, M.; Bacanin, N.; Venkatachalam, K.; Nayyar, A.; Djordjevic, A.; Strumberger, I.; Al-Turjman, F. COVID-19 cases prediction by using hybrid machine learning and beetle antennae search approach. Sustain. Cities Soc. 2021, 66, 102669.
  54. Zivkovic, M.; Venkatachalam, K.; Bacanin, N.; Djordjevic, A.; Antonijevic, M.; Strumberger, I.; Rashid, T.A. Hybrid Genetic Algorithm and Machine Learning Method for COVID-19 Cases Prediction. In Proceedings of International Conference on Sustainable Expert Systems; Springer Nature: Singapore, 2021; Volume 176, p. 169.
  55. Bezdan, T.; Zivkovic, M.; Tuba, E.; Strumberger, I.; Bacanin, N.; Tuba, M. Glioma Brain Tumor Grade Classification from MRI Using Convolutional Neural Networks Designed by Modified FA. In International Conference on Intelligent and Fuzzy Systems, Proceedings of the INFUS 2020 Conference, Istanbul, Turkey, 21–23 July 2020; Springer: Cham, Switzerland, 2020; pp. 955–963.
  56. Basha, J.; Bacanin, N.; Vukobrat, N.; Zivkovic, M.; Venkatachalam, K.; Hubálovskỳ, S.; Trojovskỳ, P. Chaotic Harris Hawks Optimization with Quasi-Reflection-Based Learning: An Application to Enhance CNN Design. Sensors 2021, 21, 6654.
  57. Bezdan, T.; Stoean, C.; Naamany, A.A.; Bacanin, N.; Rashid, T.A.; Zivkovic, M.; Venkatachalam, K. Hybrid Fruit-Fly Optimization Algorithm with K-Means for Text Document Clustering. Mathematics 2021, 9, 1929.
  58. Strumberger, I.; Tuba, E.; Bacanin, N.; Zivkovic, M.; Beko, M.; Tuba, M. Designing convolutional neural network architecture by the firefly algorithm. In Proceedings of the International Young Engineers Forum (YEF-ECE), Costa da Caparica, Portugal, 10 May 2019; pp. 59–65.
  59. Milosevic, S.; Bezdan, T.; Zivkovic, M.; Bacanin, N.; Strumberger, I.; Tuba, M. Feed-Forward Neural Network Training by Hybrid Bat Algorithm. In Modelling and Development of Intelligent Systems, Proceedings of the 7th International Conference (MDIS 2020), Sibiu, Romania, 22–24 October 2020; Revised Selected Papers 7; Springer International Publishing: Cham, Switzerland, 2021; pp. 52–66.
  60. Bacanin, N.; Bezdan, T.; Venkatachalam, K.; Zivkovic, M.; Strumberger, I.; Abouhawwash, M.; Ahmed, A. Artificial Neural Networks Hidden Unit and Weight Connection Optimization by Quasi-Refection-Based Learning Artificial Bee Colony Algorithm. IEEE Access 2021, 9, 169135–169155.
  61. Bacanin, N.; Alhazmi, K.; Zivkovic, M.; Venkatachalam, K.; Bezdan, T.; Nebhen, J. Training Multi-Layer Perceptron with Enhanced Brain Storm Optimization Metaheuristics. Comput. Mater. Contin. 2022, 70, 4199–4215.
  62. Bezdan, T.; Petrovic, A.; Zivkovic, M.; Strumberger, I.; Devi, V.K.; Bacanin, N. Current Best Opposition-Based Learning Salp Swarm Algorithm for Global Numerical Optimization. In Proceedings of the Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2021; pp. 5–10.
  63. Bacanin, N.; Petrovic, A.; Zivkovic, M.; Bezdan, T.; Chhabra, A. Enhanced Salp Swarm Algorithm for Feature Selection. In International Conference on Intelligent and Fuzzy Systems, Proceedings of the INFUS 2021 Conference, Virtual, 24–26 August 2021; Springer: Cham, Switzerland, 2021; pp. 483–491.
  64. Yang, X.S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; Watanabe, O., Zeugmann, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178.
  65. Zandevakili, H.; Rashedi, E.; Mahani, A. Gravitational search algorithm with both attractive and repulsive forces. Soft Comput. 2019, 23, 1–43.
  66. Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; John Wiley and Sons: New York, NY, USA, 1998.
  67. Rashedi, E.; Nezamabadi-pour, H. Improving the precision of CBIR systems by feature selection using binary gravitational search algorithm. In Proceedings of the 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP 2012), Shiraz, Iran, 2–3 May 2012.
  68. Sarafrazi, S.; Nezamabadi-pour, H.; Saryazdi, S. Disruption: A new operator in gravitational search algorithm. Sci. Iran. 2011, 18, 539–548.
  69. Doraghinejad, M.; Nezamabadi-Pour, H. Black hole: A new operator for gravitational search algorithm. Int. J. Comput. Intell. Syst. 2014, 7, 809–826.
  70. Shams, M.; Rashedi, E.; Hakimi, A. Clustered-gravitational search algorithm and its application in parameter optimization of a Low Noise Amplifier. Appl. Math. Comput. 2015, 258, 436–453.
  71. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701.
  72. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92.
  73. Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; Chapman and Hall/CRC: Boca Raton, FL, USA, 2020.
  74. Iman, R.L.; Davenport, J.M. Approximations of the critical region of the fbietkan statistic. Commun. Stat.-Theory Methods 1980, 9, 571–595.
  75. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
Figure 1. Flowchart of the proposed SSARM-SCA algorithm.
Figure 2. Convergence speed comparison on 8 CEC2013 instances: the proposed SSARM-SCA vs. other approaches. (a) CEC2013 F1. (b) CEC2013 F4. (c) CEC2013 F7. (d) CEC2013 F12. (e) CEC2013 F14. (f) CEC2013 F18. (g) CEC2013 F24. (h) CEC2013 F28.
Figure 3. Box-and-whisker plots of the average error rate over all datasets and the three initialization strategies.
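For readers who want to relate the results of Figure 3 to an implementation, the sketch below illustrates one common way of building "small", "large" and "mixed" starting populations for binary feature selection. It is a minimal sketch under stated assumptions: the 10% and 90% selection shares, the function name init_population and the seed parameter are illustrative choices, not the exact settings used in the experiments.

```python
import numpy as np

def init_population(pop_size, n_features, strategy="mixed", seed=None):
    """Illustrative binary initialization: 'small' agents start with few selected
    features, 'large' agents with most features selected, and 'mixed' splits the
    population between the two regimes. The 10%/90% shares are assumptions."""
    rng = np.random.default_rng(seed)
    population = np.zeros((pop_size, n_features), dtype=int)
    for i in range(pop_size):
        small = strategy == "small" or (strategy == "mixed" and i < pop_size // 2)
        k = max(1, int((0.1 if small else 0.9) * n_features))
        chosen = rng.choice(n_features, size=k, replace=False)
        population[i, chosen] = 1
    return population

# e.g., 20 agents over the 30 features of the BreastEW dataset
pop = init_population(20, 30, strategy="mixed", seed=42)
```

Under this reading, the three strategies differ only in how many features are switched on at iteration zero, which is what the error-rate distributions in Figure 3 compare.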
Table 1. CEC2013 benchmark suite details.
No. | Function | Initial Range
Unimodal Functions
1 | Sphere function | [-100, 100]^D
2 | Rotated High Conditioned Elliptic Function | [-100, 100]^D
3 | Rotated Bent Cigar Function | [-100, 100]^D
4 | Rotated Discus Function | [-100, 100]^D
5 | Different Powers Function | [-100, 100]^D
Basic Multimodal Functions
6 | Rotated Rosenbrock’s Function | [-100, 100]^D
7 | Rotated Schaffer’s F7 Function | [-100, 100]^D
8 | Rotated Ackley’s Function | [-100, 100]^D
9 | Rotated Weierstrass Function | [-100, 100]^D
10 | Rotated Griewank’s Function | [-100, 100]^D
11 | Rastrigin’s Function | [-100, 100]^D
12 | Rotated Rastrigin’s Function | [-100, 100]^D
13 | Non-Continuous Rotated Rastrigin’s Function | [-100, 100]^D
14 | Schwefel’s Function | [-100, 100]^D
15 | Rotated Schwefel’s Function | [-100, 100]^D
16 | Rotated Katsuura Function | [-100, 100]^D
17 | Lunacek Bi_Rastrigin Function | [-100, 100]^D
18 | Rotated Lunacek Bi_Rastrigin Function | [-100, 100]^D
19 | Expanded Griewank’s plus Rosenbrock’s Function | [-100, 100]^D
20 | Expanded Schaffer’s F6 Function | [-100, 100]^D
Composition Functions
21 | Composition Function 1 (n = 5, Rotated) | [-100, 100]^D
22 | Composition Function 2 (n = 3, Unrotated) | [-100, 100]^D
23 | Composition Function 3 (n = 3, Rotated) | [-100, 100]^D
24 | Composition Function 4 (n = 3, Rotated) | [-100, 100]^D
25 | Composition Function 5 (n = 3, Rotated) | [-100, 100]^D
26 | Composition Function 6 (n = 5, Rotated) | [-100, 100]^D
27 | Composition Function 7 (n = 5, Rotated) | [-100, 100]^D
28 | Composition Function 8 (n = 5, Rotated) | [-100, 100]^D
Table 2. Comparative analysis between SSARM-SCA and other SOTA methods for CEC2013 unimodal benchmarks.
SSA | RGA | GSA | D-GSA | BH-GSA | C-GSA | AR-GSA | SSARM-SCA
F1
Best 0 × 10 0 1.85 × 10 2 0 × 10 0 6.70 × 10 1 4.55 × 10 13 2.27 × 10 13 0 × 10 0 0 × 10 0
Median 0 × 10 0 2.81 × 10 2 0 × 10 0 9.56 × 10 1 3.64 × 10 12 2.27 × 10 13 0 × 10 0 0 × 10 0
Worst 2.11 × 10 13 3.52 × 10 2 2.27 × 10 13 1.47 × 10 0 5.00 × 10 12 4.55 × 10 13 0 × 10 0 0 × 10 0
Mean 6.94 × 10 14 2.82 × 10 2 7.58 × 10 14 9.75 × 10 1 3.33 × 10 12 2.76 × 10 13 0 × 10 0 0 × 10 0
Std 1.11 × 10 13 3.16 × 10 1 1.08 × 10 13 1.97 × 10 1 1.03 × 10 12 9.44 × 10 14 0 × 10 0 0 × 10 0
F2
Best 9.16 × 10 5 1.09 × 10 7 9.26 × 10 5 7.26 × 10 6 5.25 × 10 5 9.60 × 10 5 1.56 × 10 5 1 . 22 × 10 5
Median 1.69 × 10 6 1.59 × 10 7 1.74 × 10 6 1.12 × 10 7 1.97 E × 10 6 1.75 × 10 6 6.05 × 10 5 5 . 78 × 10 5
Worst 3.51 × 10 6 2.53 × 10 7 3.33 × 10 6 1.80 × 10 7 4.94 × 10 6 3 . 07 × 10 6 6.57 × 10 6 6.48 × 10 6
Mean 1.21 × 10 6 1.69 × 10 7 1.85 × 10 6 1.16 × 10 7 2.01 × 10 6 1.84 × 10 6 1.37 × 10 6 1 . 20 × 10 6
Std 5.22 × 10 5 3.64 × 10 6 5.12 × 10 5 42.17 × 10 6 7.84 × 10 5 4 . 53 × 10 5 1.68 × 10 6 1.45 × 10 6
F3
Best 2.65 × 10 7 3.34 × 10 9 2.78 × 10 7 1.02 × 10 9 4.87 × 10 5 2.86 × 10 7 7.73 × 10 12 7 . 42 × 10 12
Median 7.67 × 10 8 6.28 × 10 9 7.85 × 10 8 2.96 × 10 9 1.62 × 10 6 1.07 × 10 9 1.23 × 10 11 1 . 15 × 10 11
Worst 2.89 × 10 9 2.37 × 10 10 2.95 × 10 9 9.22 × 10 9 2.90 × 10 19 4.38 × 10 9 1.50 × 10 11 1 . 45 × 10 11
Mean 9.71 × 10 8 6.72 × 10 9 9.84 × 10 8 3.55 × 10 9 5.70 × 10 17 1.23 × 10 9 1.19 × 10 11 1 . 07 × 10 11
Std 7.23 × 10 8 2.99 × 10 9 7.14 × 10 8 1.72 × 10 9 4.07 × 10 18 8.40 × 10 8 1.85 × 10 12 1 . 76 × 10 12
F4
Best 5.64 × 10 4 5.16 × 10 4 5.73 × 10 4 5.85 × 10 4 4.99 × 10 4 5.59 × 10 4 4.64 × 10 4 4 . 43 × 10 4
Median 6.57 × 10 4 7.12 × 10 4 6.86 × 10 4 6.87 × 10 4 6.82 × 10 4 6.98 × 10 4 6.49 × 10 4 6 . 23 × 10 4
Worst 7.81 × 10 4 1.03 × 10 5 7.95 × 10 4 7 . 10 × 10 4 9.03 × 10 4 8.64 × 10 4 7.84 × 10 4 7.66 × 10 4
Mean 6.72 × 10 4 7.33 × 10 4 6.85 × 10 4 6.74 × 10 4 6.85 × 10 4 7.06 × 10 4 6.47 × 10 4 6 . 23 × 10 4
Std 5.56 × 10 3 1.20 × 10 4 5.67 × 10 3 3 . 36 × 10 3 8.16 × 10 3 5.19 × 10 3 7.86 × 10 3 7.54 × 10 3
F5
Best 1.56 × 10 12 1.95 × 10 2 1 . 48 × 10 12 2.68 × 10 0 1.92 × 10 11 1.44 × 10 11 2.03 × 10 8 2.36 × 10 8
Median 2.51 × 10 12 3.04 × 10 2 2 . 39 × 10 12 1.49 × 10 1 1.00 × 10 10 2.13 × 10 11 9.96 × 10 8 8.92 × 10 8
Worst 3.88 × 10 12 4.65 × 10 2 3 . 75 × 10 12 6.05 × 10 1 3.37 × 10 10 5.71 × 10 11 1.87 × 10 7 2.94 × 10 7
Mean 2.49 × 10 12 3.05 × 10 2 2 . 40 × 10 12 1.87 × 10 1 1.25 × 10 10 2.36 × 10 11 1.02 × 10 7 1.58 × 10 7
Std 5.67 × 10 13 6.13 × 10 1 5 . 42 × 10 13 1.10 × 10 1 7.15 × 10 11 7.49 × 10 12 3.59 × 10 8 3.61 × 10 8
Table 3. Comparative analysis between SSARM-SCA and other SOTA methods for CEC2013 multimodal benchmarks.
SSA | RGA | GSA | D-GSA | BH-GSA | C-GSA | AR-GSA | SSARM-SCA
F6
Best 2.56 × 10 1 7.81 × 10 1 2.47 × 10 1 5.61 × 10 1 2.23 × 10 1 1 . 46 × 10 1 3.74 × 10 1 2.95 × 10 1
Median 5.54 × 10 1 1.11 × 10 2 5.69 × 10 1 7.19 × 10 1 3 . 35 × 10 0 5.43 × 10 1 1.77 × 10 1 1.56 × 10 1
Worst 9.35 × 10 1 1.39 × 10 2 9.42 × 10 1 1.35 × 10 2 6 . 86 × 10 1 1.04 × 10 2 8.11 × 10 1 7.87 × 10 1
Mean 5.24 × 10 1 1.13 × 10 2 5.18 × 10 1 7.36 × 10 1 2 . 26 × 10 1 5.13 × 10 1 3.37 × 10 1 3.15 × 10 1
Std 2.67 × 10 1 1 . 20 × 10 1 2.51 × 10 1 2.46 × 10 1 2.68 × 10 1 2.50 × 10 1 2.73 × 10 1 2.58 × 10 1
F7
Best 2.69 × 10 1 4.14 × 10 1 2.74 × 10 1 3.61 × 10 1 4.50 × 10 5 3.06 × 10 1 4.32 × 10 9 4 . 19 × 10 9
Median 4.57 × 10 1 5.57 × 10 1 4.45 × 10 1 5.59 × 10 1 5.23 × 10 1 4.35 × 10 1 2.58 × 10 5 2 . 21 × 10 5
Worst 8.56 × 10 1 6.87 × 10 1 8.48 × 10 1 9.11 × 10 1 2.86 × 10 1 7.39 × 10 1 3.63 × 10 3 3 . 15 × 10 3
Mean 4.93 × 10 1 5.58 × 10 1 4.71 × 10 1 5.70 × 10 1 5.59 × 10 1 4.62 × 10 1 1.55 × 10 4 1 . 12 × 10 4
Std 1.35 × 10 1 5.66 × 10 0 1.19 × 10 1 1.25 × 10 1 7.62 × 10 1 1.08 × 10 1 5.20 × 10 4 5 . 01 × 10 4
F8
Best 2.12 × 10 1 2.08 × 10 1 2.08 × 10 1 2.08 × 10 1 2.08 × 10 1 2.09 × 10 1 2.06 × 10 1 1 . 79 × 10 1
Median 2.15 × 10 1 2.10 × 10 1 2.10 × 10 1 2.10 × 10 1 2.10 × 10 1 2.12 × 10 1 2.10 × 10 1 1 . 96 × 10 1
Worst 2.26 × 10 1 2.10 × 10 1 2.10 × 10 1 2.11 × 10 1 2.10 × 10 1 2.16 × 10 1 2.10 × 10 1 2 . 04 × 10 1
Mean 2.17 × 10 1 2.10 × 10 1 2.10 × 10 1 2.10 × 10 1 2.10 × 10 1 2.12 × 10 1 2.09 × 10 1 1 . 85 × 10 1
Std 4.64 × 10 2 4.67 × 10 2 4.79 × 10 2 5.29 × 10 2 5.62 × 10 2 1.59 × 10 1 7.14 × 10 2 4 . 54 × 10 2
F9
Best 2.36 × 10 1 1.60 × 10 1 2.14 × 10 1 2.11 × 10 1 3.24 × 10 0 2.02 × 10 1 2.37 × 10 7 2 . 24 × 10 7
Median 2.97 × 10 1 2.13 × 10 1 2.73 × 10 1 3.01 × 10 1 7.20 × 10 1 2.86 × 10 1 4.99 × 10 0 4 . 67 × 10 0
Worst 3.89 × 10 1 2.67 × 10 1 3.50 × 10 1 3.76 × 10 1 1.50 × 10 1 3.68 × 10 1 8.91 × 10 0 8 . 84 × 10 0
Mean 2.86 × 10 1 2.16 × 10 1 2.77 × 10 1 3.01 × 10 1 7.78 × 10 0 2.83 × 10 1 5.25 × 10 0 5 . 13 × 10 0
Std 3.64 × 10 0 2.35 × 10 0 3.56 × 10 0 3.92 × 10 0 2.44 × 10 0 3.65 × 10 0 1.98 × 10 0 1 . 86 × 10 0
F10
Best 0 × 10 0 3.54 × 10 1 0 × 10 0 1.22 × 10 0 5.68 × 10 13 3.41 × 10 13 0 × 10 0 0 × 10 0
Median 5.44 × 10 14 5.95 × 10 1 5.68 × 10 14 1.49 × 10 0 1.19 × 10 12 7.40 × 10 3 0 × 10 0 0 × 10 0
Worst 2.17 × 10 2 6.98 × 10 1 2.22 × 10 2 2.20 × 10 0 1.72 × 10 2 2.96 × 10 2 1.48 × 10 2 1 . 36 × 10 2
Mean 5.55 × 10 3 5.91 × 10 1 5.61 × 10 3 1.57 × 10 0 2.56 × 10 3 7.39 × 10 3 1.69 × 10 3 1 . 53 × 10 3
Std 6.41 × 10 3 6.75 × 10 0 6.39 × 10 3 2.68 × 10 1 5.05 × 10 3 6.02 × 10 3 3.83 × 10 3 3 . 67 × 10 3
F11
Best 1.28 × 10 2 1.14 × 10 2 1.33 × 10 2 1.30 × 10 2 8.95 × 10 0 1.43 × 10 2 7.96 × 10 0 7 . 76 × 10 0
Median 1.79 × 10 2 1.45 × 10 2 1.83 × 10 2 1.85 × 10 2 1 . 69 × 10 1 1.84 × 10 2 1.79 × 10 1 1.85 × 10 1
Worst 2.46 × 10 2 1.62 × 10 2 2.34 × 10 2 2.31 × 10 2 3.38 × 10 1 2.34 × 10 2 2.98 × 10 1 2 . 75 × 10 1
Mean 1.95 × 10 2 1.44 × 10 2 1.90 × 10 2 1.87 × 10 2 1.79 × 10 1 1.87 × 10 2 1.83 × 10 1 1 . 74 × 10 1
Std 2.41 × 10 1 9.16 × 10 0 2.35 × 10 1 2.18 × 10 1 5.21 × 10 0 2.14 × 10 1 4.47 × 10 0 4 . 32 × 10 0
F12
Best 1.67 × 10 2 1.42 × 10 2 1.60 × 10 2 1.52 × 10 2 7 . 96 × 10 0 1.47 × 10 2 1.29 × 10 1 1.12 × 10 1
Median 2.12 × 10 2 1.58 × 10 2 2.08 × 10 2 2.09 × 10 2 1 . 39 × 10 1 2.05 × 10 2 2.29 × 10 1 2.12 × 10 1
Worst 2.68 × 10 2 1.75 × 10 2 2.59 × 10 2 2.63 × 10 2 2 . 49 × 10 1 2.62 × 10 2 3.88 × 10 1 3.64 × 10 1
Mean 2.18 × 10 2 1.58 × 10 2 2.07 × 10 2 2.08 × 10 2 1 . 42 × 10 1 2.06 × 10 2 2.36 × 10 1 2.25 × 10 1
Std 2.83 × 10 1 8.64 × 10 0 2.75 × 10 1 2.39 × 10 1 3 . 77 × 10 0 2.34 × 10 1 5.42 × 10 0 5.23 × 10 0
F13
Best 2.47 × 10 2 1.31 × 10 2 2.75 × 10 2 2.50 × 10 2 5.16 × 10 0 2.43 × 10 2 1.15 × 10 1 4 . 89 × 10 0
Median 3.48 × 10 2 1.58 × 10 2 3.30 × 10 2 3.24 × 10 2 2.51 × 10 1 3.37 × 10 2 4.13 × 10 1 2 . 15 × 10 0
Worst 4.56 × 10 2 1.69 × 10 2 4.28 × 10 2 4.27 × 10 2 6.17 × 10 1 4.06 × 10 2 8.74 × 10 1 6 . 10 × 10 1
Mean 3.56 × 10 2 1.57 × 10 2 3.34 × 10 2 3.30 × 10 2 2.77 × 10 1 3.32 × 10 2 4.51 × 10 1 2 . 43 × 10 1
Std 3.45 × 10 1 7.02 × 10 0 3.34 × 10 1 3.79 × 10 1 1.31 × 10 1 3.96 × 10 1 1.83 × 10 1 6 . 58 × 10 0
F14
Best 2.15 × 10 3 4.36 × 10 3 2.22 × 10 3 2.50 × 10 3 1.06 × 10 3 2.20 × 10 3 7.80 × 10 2 7 . 21 × 10 2
Median 3.44 × 10 3 5.03 × 10 3 3.26 × 10 3 3.40 × 10 3 1.63 × 10 3 3.33 × 10 3 1.47 × 10 3 1 . 25 × 10 3
Worst 4.37 × 10 3 5.64 × 10 3 4.30 × 10 3 4.31 × 10 3 2.60 × 10 3 4.61 × 10 3 2.45 × 10 3 2 . 33 × 10 3
Mean 3.45 × 10 3 5.06 × 10 3 3.29 × 10 3 3.38 × 10 3 1.63 × 10 3 3.41 × 10 3 1.49 × 10 3 1 . 34 × 10 3
Std 4.88 × 10 2 2.62 × 10 2 4.98 × 10 2 4.19 × 10 2 3.24 × 10 2 4.87 × 10 2 3.76 × 10 2 2 . 59 × 10 2
F15
Best 2.45 × 10 3 4.56 × 10 3 2.39 × 10 3 2.14 × 10 3 5.12 × 10 2 2.28 × 10 3 5.33 × 10 2 5 . 01 × 10 2
Median 3.39 × 10 3 5.30 × 10 3 3.27 × 10 3 3.36 × 10 3 1.19 × 10 3 3.20 × 10 3 1.20 × 10 3 1 . 04 × 10 3
Worst 4.85 × 10 3 5.94 × 10 3 4.68 × 10 3 4.99 × 10 3 2.27 × 10 3 4.10 × 10 3 1 . 82 × 10 3 2.35 × 10 3
Mean 3.48 × 10 3 5.31 × 10 3 3.31 × 10 3 3.42 × 10 3 1.22 × 10 3 3.28 × 10 3 1.21 × 10 3 1 . 15 × 10 3
Std 5.65 × 10 2 2.91 × 10 2 5.43 × 10 2 4.92 × 10 2 3.90 × 10 2 4.56 × 10 2 3.29 × 10 2 2 . 78 × 10 2
F16
Best 4.23 × 10 4 1.93 × 10 0 4 . 07 × 10 4 6.99 × 10 1 6.07 × 10 4 6.07 × 10 4 5.47 × 10 4 5.29 × 10 4
Median 2.22 × 10 3 2.50 × 10 0 2.11 × 10 3 1.13 × 10 0 3.33 × 10 3 2.56 × 10 3 2 . 06 × 10 3 2.25 × 10 3
Worst 9.54 × 10 3 3.02 × 10 0 9.39 × 10 3 1.73 × 10 0 1.16 × 10 2 9 . 32 × 10 3 1.04 × 10 2 1.36 × 10 2
Mean 2.76 × 10 3 2.46 × 10 0 2.87 × 10 3 1.14 × 10 3 4.00 × 10 3 3.43 × 10 3 2.72 × 10 3 2 . 58 × 10 3
Std 2.34 × 10 3 2.75 × 10 1 2.17 × 10 3 2.27 × 10 1 2.29 × 10 3 2.25 × 10 3 1.84 × 10 3 1 . 34 × 10 3
F17
Best 3.66 × 10 1 1.92 × 10 2 3.74 × 10 1 7.46 × 10 1 3.69 × 10 1 3.58 × 10 1 4.10 × 10 1 3 . 46 × 10 1
Median 4.25 × 10 1 2.11 × 10 2 4.43 × 10 1 1.04 × 10 2 4.62 × 10 1 4 . 33 × 10 1 5.04 × 10 1 4.68 × 10 1
Worst 6.41 × 10 1 2.30 × 10 2 6.67 × 10 1 1.25 × 10 2 5 . 63 × 10 1 5.68 × 10 1 6.53 × 10 1 7.83 × 10 1
Mean 4.62 × 10 1 2.11 × 10 2 4.50 × 10 1 1.02 × 10 2 4.61 × 10 1 4 . 41 × 10 1 5.05 × 10 1 4.58 × 10 1
Std 5.23 × 10 0 9.40 × 10 0 5.06 × 10 0 1.08 × 10 1 4.12 × 10 0 4.37 × 10 0 5.31 × 10 0 4 . 06 × 10 0
F18
Best 3.75 × 10 1 1.85 × 10 2 3.67 × 10 1 1.33 × 10 2 3.93 × 10 1 3.76 × 10 1 4.16 × 10 1 3 . 58 × 10 1
Median 4.64 × 10 1 2.09 × 10 2 4.53 × 10 1 1.73 × 10 2 4.69 × 10 1 4 . 45 × 10 1 5.51 × 10 1 4.47 × 10 1
Worst 5.86 × 10 1 2.28 × 10 2 5.35 × 10 1 1.97 × 10 2 5.89 × 10 1 5.85 × 10 1 7.12 × 10 1 5 . 16 × 10 1
Mean 4.76 × 10 1 2.10 × 10 2 4.52 × 10 1 1.73 × 10 2 4.74 × 10 1 4.56 × 10 1 5.61 × 10 1 4 . 48 × 10 1
Std 3.84 × 10 0 8.88 × 10 0 3.77 × 10 0 1.39 × 10 1 4.05 × 10 0 4.25 × 10 0 7.12 × 10 0 3 . 65 × 10 0
F19
Best 1.69 × 10 0 2.16 × 10 1 1.78 × 10 0 4.34 × 10 0 2.76 × 10 0 1 . 76 × 10 0 2.54 × 10 0 2.35 × 10 0
Median 2.61 × 10 0 2.55 × 10 1 2 . 77 × 10 0 6.47 × 10 0 4.58 × 10 0 3.02 × 10 0 3.54 × 10 0 3.67 × 10 0
Worst 4.01 × 10 0 2.91 × 10 1 4 . 40 × 10 0 1.54 × 10 1 6.24 × 10 0 4.41 × 10 0 6.87 × 10 0 6.94 × 10 0
Mean 2.76 × 10 0 2.54 × 10 1 2 . 95 × 10 0 7.24 × 10 0 4.70 × 10 0 3.03 × 10 0 3.83 × 10 0 3.94 × 10 0
Std 6.76 × 10 1 1.60 × 10 0 6.80 × 10 1 2.73 × 10 0 9.52 × 10 1 6 . 28 × 10 1 8.88 × 10 1 8.53 × 10 1
F20
Best 1.61 × 10 1 1.50 × 10 1 1.41 × 10 1 1 . 41 × 10 1 1.50 × 10 1 1.50 × 10 1 1.49 × 10 1 1.46 × 10 1
Median 1.65 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1
Worst 1.65 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1
Mean 1.65 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1 1.50 × 10 1
Std 1.45 × 10 1 9.93 × 10 6 1.33 × 10 1 1.81 × 10 1 6.30 × 10 8 3.09 × 10 6 1.98 × 10 2 6 . 17 × 10 8
Table 4. Comparative analysis between SSARM-SCA and other SOTA methods for CEC2013 composite benchmarks.
SSA | RGA | GSA | D-GSA | BH-GSA | C-GSA | AR-GSA | SSARM-SCA
F21
Best 1.27 × 10 2 4.62 × 10 2 1.00 × 10 2 1.27 × 10 2 2.00 × 10 2 1 . 00 × 10 2 2.00 × 10 2 1.95 × 10 2
Median 3.65 × 10 2 5.62 × 10 2 3.00 × 10 2 3.15 × 10 2 3.00 × 10 2 3.00 × 10 2 3.00 × 10 2 2 . 84 × 10 2
Worst 4.76 × 10 2 6.07 × 10 2 4.44 × 10 2 4.44 × 10 2 4.44 × 10 2 4.44 × 10 2 4.44 × 10 2 4 . 26 × 10 2
Mean 3.36 × 10 2 5.41 × 10 2 3.20 × 10 2 3.40 × 10 2 3.36 × 10 2 3.32 × 10 2 3.26 × 10 2 3 . 12 × 10 2
Std 7.39 × 10 1 4.30 × 10 1 7.28 × 10 1 7.13 × 10 1 9.12 × 10 1 7.97 × 10 1 9.22 × 10 1 4 . 26 × 10 1
F22
Best 3.95 × 10 3 4.31 × 10 3 3.78 × 10 3 4.03 × 10 3 3.28 × 10 2 3.87 × 10 3 3.13 × 10 2 3 . 05 × 10 2
Median 5.39 × 10 3 4.99 × 10 3 5.18 × 10 3 5.39 × 10 3 1.10 × 10 3 5.53 × 10 3 1.11 × 10 3 1 . 05 × 10 3
Worst 7.25 × 10 3 5.75 × 10 3 7.08 × 10 3 7.04 × 10 3 2.18 × 10 3 7.50 × 10 3 2.26 × 10 3 2 . 15 × 10 3
Mean 5.63 × 10 3 5.06 × 10 3 5.35 × 10 3 5.53 × 10 3 1.22 × 10 3 5.51 × 10 3 1.12 × 10 3 1 . 03 × 10 3
Std 8.91 × 10 2 3.43 × 10 2 8.59 × 10 2 7.93 × 10 2 4.09 × 10 2 8.06 × 10 2 3.83 × 10 2 3 . 21 × 10 2
F23
Best 4.32 × 10 3 4.37 × 10 3 4.23 × 10 3 4.86 × 10 3 6 . 01 × 10 2 3.86 × 10 3 1.01 × 10 3 1.28 × 10 3
Median 5.67 × 10 3 5.41 × 10 3 5.50 × 10 3 5.54 × 10 3 1.96 × 10 3 5.49 × 10 3 1 . 84 × 10 3 1.92 × 10 3
Worst 6.89 × 10 3 6.24 × 10 3 6.67 × 10 3 6.38 × 10 3 4.23 × 10 3 6.12 × 10 3 3 . 75 × 10 3 3.85 × 10 3
Mean 5.76 × 10 3 5.40 × 10 3 5.53 × 10 3 5.58 × 10 3 2.10 × 10 3 5.44 × 10 3 1 . 96 × 10 3 2.10 × 10 3
Std 4.59 × 10 2 4.05 × 10 2 4.36 × 10 2 3.22 × 10 2 7.58 × 10 2 4.30 × 10 2 6.01 × 10 2 3 . 13 × 10 2
F24
Best 2.46 × 10 2 2.31 × 10 2 2.20 × 10 2 2.16 × 10 2 2.00 × 10 2 2.29 × 10 2 2.00 × 10 2 1 . 96 × 10 2
Median 2.68 × 10 2 2.37 × 10 2 2.57 × 10 2 2.59 × 10 2 2.00 × 10 2 2.55 × 10 2 2.00 × 10 2 1 . 99 × 10 2
Worst 3.95 × 10 2 2.80 × 10 2 3.90 × 10 2 3.82 × 10 2 2.10 × 10 2 3.87 × 10 2 2 . 00 × 10 2 2.07 × 10 2
Mean 2.81 × 10 2 2.40 × 10 2 2.79 × 10 2 2.71 × 10 2 2.01 × 10 2 2.68 × 10 2 2.00 × 10 2 2.00 × 10 0
Std 4.60 × 10 1 1.12 × 10 1 4.49 × 10 1 3.77 × 10 1 1.18 × 10 1 3.63 × 10 1 2.45 × 10 2 2 . 17 × 10 2
F25
Best 2.22 × 10 2 2.40 × 10 2 2.00 × 10 2 2.09 × 10 2 2.00 × 10 2 2.00 × 10 2 2.00 × 10 2 1 . 89 × 10 2
Median 3.57 × 10 2 2.83 × 10 2 3.43 × 10 2 3.50 × 10 2 2.00 × 10 2 3.41 × 10 2 2.00 × 10 2 1 . 95 × 10 2
Worst 4.11 × 10 2 3.04 × 10 2 3.86 × 10 2 3.86 × 10 2 2.71 × 10 2 3.85 × 10 2 2.00 × 10 2 1 . 99 × 10 2
Mean 3.85 × 10 2 2.72 × 10 2 3.32 × 10 2 3.39 × 10 2 2.12 × 10 2 3.32 × 10 2 2.00 × 10 2 1 . 94 × 10 2
Std 4.55 × 10 1 2.51 × 10 1 4.06 × 10 1 3.71 × 10 1 2.55 × 10 1 4.15 × 10 1 1.86 × 10 5 1 . 82 × 10 5
F26
Best 2.56 × 10 2 2.01 × 10 2 2.34 × 10 2 2.00 × 10 2 1 . 11 × 10 2 2.00 × 10 2 2.27 × 10 2 2.31 × 10 2
Median 3.67 × 10 2 3.40 × 10 2 3.42 × 10 2 3.50 × 10 2 3.00 × 10 2 3.48 × 10 2 2 . 98 × 10 2 3.06 × 10 2
Worst 3.87 × 10 2 3.64 × 10 2 3.78 × 10 2 3.70 × 10 2 3.25 × 10 2 3.71 × 10 2 3 . 19 × 10 2 3.25 × 10 2
Mean 3.43 × 10 2 3.15 × 10 2 3.29 × 10 2 3.33 × 10 2 2 . 85 × 10 2 3.28 × 10 2 2.92 × 10 2 2.99 × 10 2
Std 3.88 × 10 1 6.06 × 10 1 3.74 × 10 1 4.35 × 10 1 4.26 × 10 1 4.76 × 10 1 1.71 × 10 1 1 . 68 × 10 1
F27
Best 5.64 × 10 2 6.15 × 10 2 5.88 × 10 2 6.11 × 10 2 3.00 × 10 2 6.24 × 10 2 3 . 00 × 10 2 3.24 × 10 2
Median 7.51 × 10 2 7.96 × 10 2 7.63 × 10 2 8.43 × 10 2 3.00 × 10 2 7.68 × 10 2 3 . 00 × 10 2 3.27 × 10 2
Worst 9.76 × 10 2 1.02 × 10 3 9.87 × 10 2 1.03 × 10 3 3.04 × 10 2 1.01 × 10 3 3 . 03 × 10 2 3.30 × 10 2
Mean 7.78 × 10 2 7.74 × 10 2 7.84 × 10 2 8.41 × 10 2 3.01 × 10 2 7.84 × 10 2 3 . 00 × 10 2 3.25 × 10 2
Std 1.02 × 10 2 1.33 × 10 2 1.09 × 10 2 1.12 × 10 2 1.14 × 10 0 9.29 × 10 1 4 . 09 × 10 1 4.34 × 10 1
F28
Best 2.24 × 10 3 5.09 × 10 2 2.46 × 10 3 2.83 × 10 3 1 . 00 × 10 2 2.33 × 10 3 3.00 × 10 2 3.23 × 10 2
Median 3.15 × 10 3 8.12 × 10 2 3.13 × 10 3 3.22 × 10 3 3.00 × 10 2 3.17 × 10 3 3.00 × 10 2 3.28 × 10 2
Worst 3.52 × 10 3 1.75 × 10 3 3.68 × 10 3 3.94 × 10 3 1.35 × 10 3 3.93 × 10 3 3 . 00 × 10 2 3.34 × 10 2
Mean 3.05 × 10 3 8.91 × 10 2 3.14 × 10 3 3.25 × 10 3 3.49 × 10 2 3.24 × 10 3 3 . 00 × 10 2 3.27 × 10 2
Std 2.58 × 10 2 3.52 × 10 2 2.71 × 10 2 2.37 × 10 2 2.53 × 10 2 2.91 × 10 2 8 . 77 × 10 9 8.92 × 10 9
Table 5. Friedman test ranks for the compared algorithms over 28 CEC2013 functions.
Function | SSA | RGA | GSA | D-GSA | BH-GSA | C-GSA | AR-GSA | SSARM-SCA
F1 | 3 | 8 | 4 | 7 | 6 | 5 | 1.5 | 1.5
F2 | 1 | 8 | 5 | 7 | 6 | 4 | 3 | 2
F3 | 4 | 7 | 3 | 6 | 8 | 5 | 2 | 1
F4 | 6 | 8 | 4.5 | 3 | 4.5 | 7 | 2 | 1
F5 | 1 | 8 | 2 | 7 | 4 | 3 | 5 | 6
F6 | 4 | 8 | 6 | 7 | 1 | 5 | 3 | 2
F7 | 6 | 7 | 5 | 8 | 3 | 4 | 2 | 1
F8 | 2 | 5.5 | 5.5 | 5.5 | 5.5 | 8 | 3 | 1
F9 | 7 | 4 | 5 | 8 | 3 | 6 | 2 | 1
F10 | 5 | 8 | 4 | 7 | 3 | 6 | 2 | 1
F11 | 8 | 4 | 7 | 5.5 | 2 | 5.5 | 3 | 1
F12 | 5 | 4 | 7 | 8 | 1 | 6 | 3 | 2
F13 | 8 | 4 | 7 | 5 | 2 | 6 | 3 | 1
F14 | 4 | 8 | 5 | 6 | 3 | 7 | 2 | 1
F15 | 4 | 8 | 6 | 7 | 3 | 5 | 2 | 1
F16 | 4 | 8 | 3 | 7 | 6 | 5 | 2 | 1
F17 | 5 | 8 | 2 | 7 | 4 | 1 | 6 | 3
F18 | 5 | 8 | 2 | 7 | 4 | 3 | 6 | 1
F19 | 2 | 8 | 1 | 7 | 6 | 3 | 4 | 5
F20 | 8 | 4 | 4 | 4 | 4 | 4 | 4 | 4
F21 | 1 | 8 | 3 | 7 | 6 | 5 | 4 | 2
F22 | 8 | 4 | 5 | 7 | 3 | 6 | 2 | 1
F23 | 8 | 4 | 6 | 7 | 2.5 | 5 | 1 | 2.5
F24 | 8 | 4 | 7 | 6 | 3 | 5 | 2 | 1
F25 | 8 | 4 | 5.5 | 7 | 3 | 5.5 | 2 | 1
F26 | 4 | 5 | 7 | 8 | 1 | 6 | 2 | 3
F27 | 7 | 4 | 5.5 | 8 | 2 | 5.5 | 1 | 3
F28 | 5 | 4 | 6 | 8 | 3 | 7 | 1 | 2
Average Ranking | 5.035714286 | 6.160714286 | 4.75 | 6.678571429 | 3.660714286 | 5.125 | 2.696428571 | 1.892857143
Rank | 5 | 7 | 4 | 8 | 3 | 6 | 2 | 1
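The per-function ranks in Table 5 can be recomputed from a results matrix of mean errors (functions in rows, algorithms in columns, lower is better), with tied entries receiving average ranks. A minimal sketch, assuming the results are available as a NumPy array; the function name friedman_ranks is an illustrative choice:

```python
import numpy as np
from scipy.stats import rankdata

def friedman_ranks(results):
    """results: (n_functions, n_algorithms) array of mean errors (lower is better).
    Ties receive average ranks, which is why values such as 1.5 appear in Table 5."""
    per_function_ranks = np.apply_along_axis(rankdata, 1, results)  # rank within each function
    return per_function_ranks, per_function_ranks.mean(axis=0)      # ranks and average rank per algorithm
```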
Table 6. Aligned Friedman test ranks for the compared algorithms over 28 CEC2013 functions.
Function | SSA | RGA | GSA | D-GSA | BH-GSA | C-GSA | AR-GSA | SSARM-SCA
F1641926568676662.562.5
F28223122221311109
F3473622451.51.5
F4216221195.532195.52191514
F551194528554535556
F6109167115153731128784
F7148155147157791467170
F8123132.5132.5132.5132.5136130113
F9144139142145951439392
F1010316410210510110410099
F11175156170168.544168.54543
F12171159173174401724241
F1319050182179381803937
F14187218197199302012927
F15193220200202241982322
F16127138126137129128125124
F17821837515877748376
F18691815917761607857
F19107150106135114108110111
F20140119119119119119119119
F2149189729791898158
F22217206212214182131716
F2321520320720820.52041920.5
F24176152166163881628636
F2517894160.516548160.54746
F26981411511548014990
F27188184185.519134185.53335
F2820531209211282102526
Average Ranking | 133.4642857 | 156.0178571 | 133.4285714 | 148.4642857 | 75.625 | 134.875 | 61.28571429 | 56.83928571
Rank | 5 | 8 | 4 | 7 | 3 | 6 | 2 | 1
Table 7. Friedman and Iman–Davenport statistical test results summary ( α = 0.05 ).
Friedman Value | χ² Critical Value | p-Value | Iman–Davenport Value | F-Critical Value
8.866 × 10^1 | 1.407 × 10^1 | 1.110 × 10^-16 | 2.230 × 10^1 | 2.058 × 10^0
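The values in Table 7 follow directly from the average ranks in Table 5 through the standard Friedman statistic and the Iman–Davenport correction (N = 28 functions, k = 8 algorithms). A short sketch of those formulas; the variable names are illustrative:

```python
import numpy as np

# average ranks of the eight algorithms over the 28 CEC2013 functions (Table 5)
R = np.array([5.0357, 6.1607, 4.7500, 6.6786, 3.6607, 5.1250, 2.6964, 1.8929])
N, k = 28, 8                                      # functions, algorithms

chi2_f = 12 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)
f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)   # Iman-Davenport correction

print(round(chi2_f, 2), round(f_f, 2))            # ~88.66 and ~22.30, cf. Table 7
```

Both values exceed their critical thresholds (14.07 and about 2.06), which is why the null hypothesis of equal performance is rejected and the post-hoc analysis in Table 8 is warranted.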
Table 8. Results of the Holm’s step-down procedure.
Comparison | p-Value | Ranking | Alpha = 0.05 | Alpha = 0.1 | H1 | H2
SSARM-SCA vs. D-GSA | 1.33227 × 10^-13 | 0 | 0.007142857 | 0.014285714 | TRUE | TRUE
SSARM-SCA vs. RGA | 3.53276 × 10^-11 | 1 | 0.008333333 | 0.016666667 | TRUE | TRUE
SSARM-SCA vs. C-GSA | 3.96302 × 10^-7 | 2 | 0.01 | 0.02 | TRUE | TRUE
SSARM-SCA vs. SSA | 7.90191 × 10^-7 | 3 | 0.0125 | 0.025 | TRUE | TRUE
SSARM-SCA vs. GSA | 6.37484 × 10^-6 | 4 | 0.016666667 | 0.033333333 | TRUE | TRUE
SSARM-SCA vs. BH-GSA | 0.003462325 | 5 | 0.025 | 0.05 | TRUE | TRUE
SSARM-SCA vs. AR-GSA | 0.109821937 | 6 | 0.05 | 0.1 | FALSE | FALSE
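Table 8 applies Holm's step-down procedure: the unadjusted p-values of the pairwise comparisons against SSARM-SCA are sorted in ascending order and each is compared with α divided by the number of hypotheses still open. A minimal sketch of that logic (the p-values themselves come from the post-hoc tests and are taken here directly from Table 8; the function name holm_step_down is illustrative):

```python
def holm_step_down(p_values, alpha=0.05):
    """p_values: dict comparison -> unadjusted p-value.
    Returns dict comparison -> True if the null hypothesis is rejected."""
    ordered = sorted(p_values.items(), key=lambda kv: kv[1])
    m, stop, rejected = len(ordered), False, {}
    for i, (name, p) in enumerate(ordered):
        if not stop and p <= alpha / (m - i):
            rejected[name] = True
        else:
            stop = True            # once a test fails, all remaining ones are retained
            rejected[name] = False
    return rejected

# example with the p-values listed in Table 8 (alpha = 0.05)
print(holm_step_down({"D-GSA": 1.33e-13, "RGA": 3.53e-11, "C-GSA": 3.96e-7,
                      "SSA": 7.90e-7, "GSA": 6.37e-6, "BH-GSA": 3.46e-3,
                      "AR-GSA": 1.10e-1}))
```

With these inputs the first six comparisons are rejected and only SSARM-SCA vs. AR-GSA is retained, matching the H1 column of Table 8.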
Table 9. Experimental setup datasets.
No. | Name | Features | Samples
1 | Breastcancer | 9 | 699
2 | Tic-tac-toe | 9 | 958
3 | Zoo | 16 | 101
4 | WineEW | 13 | 178
5 | SpectEW | 22 | 267
6 | SonarEW | 60 | 208
7 | IonosphereEW | 34 | 351
8 | HeartEW | 13 | 270
9 | CongressEW | 16 | 435
10 | KrvskpEW | 36 | 3196
11 | WaveformEW | 40 | 5000
12 | Exactly | 13 | 1000
13 | Exactly 2 | 13 | 1000
14 | M-of-N | 13 | 1000
15 | vote | 16 | 300
16 | BreastEW | 30 | 569
17 | Semeion | 265 | 1593
18 | Clean 1 | 166 | 476
19 | Clean 2 | 166 | 6598
20 | Lymphography | 18 | 148
21 | PenghungEW | 325 | 73
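The fitness values reported in the tables that follow combine the classification error of a K-nearest neighbors wrapper on the datasets of Table 9 with the size of the selected feature subset. The sketch below shows that common wrapper formulation; the weighting factor alpha = 0.99, the choice of 5 neighbours and 5-fold cross-validation are assumptions used for illustration, not the exact experimental settings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99, k=5):
    """mask: binary vector marking the selected features of one agent.
    Returns alpha * error_rate + (1 - alpha) * selected_ratio (lower is better)."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:                           # empty subsets are penalized
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(knn, X[:, selected], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * selected.size / mask.size
```

Lower fitness therefore rewards both high accuracy and compact feature subsets, which is how the mean fitness and accuracy tables below should be read together.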
Table 10. Mean fitness statistical metric using small initialization with the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.0630.0470.0520.0780.0960.0890.0590.0330.0330.1530.025
20.3270.2230.3150.3470.3530.3350.3320.2480.2170.2750.224
30.2420.1350.2220.4130.3970.4170.2580.1250.0520.6750.154
40.9390.9070.9360.9540.9620.9530.9270.8840.8790.9150.836
50.3410.2910.3410.3530.3930.3740.3610.2790.2520.1990.634
60.3320.2040.3130.3750.3740.3680.3040.1560.1810.0970.063
70.1360.1210.1340.1720.1760.1850.1430.0970.1270.1730.116
80.2910.2530.2760.2970.3050.2860.2840.1930.1630.1940.129
90.3800.3620.3780.3930.3960.3940.4030.3550.3360.4980.289
100.3950.0880.3760.4230.4160.4180.4220.0780.0570.2940.033
110.4340.1940.4380.4960.4970.5160.4360.1820.1840.1710.149
120.3230.2930.3350.3480.3330.3350.3180.3170.2030.3680.133
130.2440.2420.2370.2390.2660.2410.2450.2460.2380.2570.222
140.2980.1340.2980.3570.3520.3560.2820.1350.0710.2450.033
150.1290.0650.1420.1530.1560.1750.1350.0640.0510.0640.047
160.0520.0460.0570.0840.0820.0840.0540.0360.0320.0260.049
170.0980.0380.0960.0920.0910.0970.0970.0240.0340.1870.715
180.2940.1530.2970.3590.3780.3660.2920.1110.1460.9750.099
190.0830.0460.0860.1270.1330.1350.0850.0360.0470.3860.231
200.2980.2050.2740.3740.3160.3780.3020.1870.1680.2560.135
210.4670.1820.4450.6150.6010.6070.4490.1440.1730.1320.112
Table 11. Classification accuracy using small initialization for the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.8640.6450.7410.8320.8110.8440.8620.9650.7520.8560.974
20.6580.7830.6720.5900.5930.5830.6210.7460.6850.6940.780
30.7440.8420.7770.4580.4740.4490.5850.8680.8140.6210.665
40.0420.0500.0280.0120.0150.0130.0340.0870.0320.3310.343
50.6270.6680.6090.5630.5560.5540.5840.7060.6440.7690.754
60.6390.7140.6570.5430.5470.5460.6050.8350.6970.9030.915
70.8440.8370.8350.7840.7740.7630.8220.8930.8290.8670.874
80.6750.6440.6330.6050.5980.6090.6550.7950.6570.7630.831
90.5820.5810.5860.5580.5420.5720.5660.6230.5810.9100.938
100.5800.9120.6080.5140.5170.5130.5470.9180.7830.9170.916
110.5520.8030.5530.3920.4030.3990.3910.8120.7420.7950.830
120.6360.6690.6120.5890.6210.6180.6550.6540.6450.6780.712
130.7290.7250.7080.7480.6960.7020.7260.7260.7140.7460.764
140.6920.8480.8490.7220.7270.7030.8170.9340.8730.8570.953
150.8610.9170.8340.7230.7230.7030.8180.9330.8790.7450.935
160.8950.6910.7250.8050.8270.8340.8990.9620.7850.9890.977
170.8980.9620.8960.8780.9030.9090.8950.9710.9580.8970.914
180.6800.8180.6770.5900.5820.5870.6440.8720.7950.8740.891
190.9080.9560.9080.8410.8430.8480.8830.9610.9530.8850.904
200.6770.7350.6540.5170.5560.5240.6120.7920.7080.7020.878
210.4960.7420.4910.2840.2970.3010.4170.8020.7220.8250.894
Table 12. Mean fitness statistical metric using large initialization with the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.1350.1280.1670.1850.1480.2260.1630.0310.0390.1590.033
20.2160.2090.2080.2460.2440.2410.2040.2120.2080.2190.191
30.1430.1340.1300.1630.1250.1890.1760.1020.0760.1460.068
40.9260.9270.9230.9370.9390.9230.9290.9050.8860.8560.829
50.3150.3170.3130.3200.3280.3180.3180.3040.2430.3830.214
60.3040.2870.2990.2740.2950.2850.2730.2580.1940.2750.138
70.1680.1640.1840.1660.1780.1640.1630.1550.1240.0940.116
80.3440.3330.3410.3460.3540.3480.3440.2880.1770.2110.133
90.4030.4080.3930.4070.4050.3870.3950.3730.3420.0510.039
100.0650.0770.0740.0710.0760.0760.0680.0620.0530.0710.062
110.1920.1980.1950.1930.1990.1940.1860.1890.1860.1750.159
120.3070.3080.3130.3090.3070.3030.3080.3070.2090.2580.192
130.2560.2500.2610.2670.2610.2620.2570.2550.2430.2590.227
140.1390.1340.1370.1460.1310.1380.1220.1240.0650.1940.054
150.0830.0940.0870.0850.0920.0950.0820.0870.0580.0650.051
160.2180.2230.1530.1040.1540.2080.2030.0460.0310.1050.143
170.0420.0420.0490.0410.0490.0460.0430.0330.0320.1450.075
180.1810.1810.1810.1880.1980.1940.1880.1780.1360.0910.074
190.0530.0500.0570.0540.0550.0550.0540.0430.0470.0360.022
200.2390.2350.2230.2450.2300.2370.2370.2220.1430.2500.161
210.2690.2440.2770.2720.2610.2780.2330.2290.1810.2760.156
Table 13. Classification accuracy using large initialization for the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.6150.6170.6120.6780.6940.6670.7410.9500.7810.9250.948
20.7930.7980.7970.7440.7360.7400.7410.7610.6640.7730.814
30.8340.8300.8350.8130.8430.7930.8150.8970.7850.8710.920
40.0570.0560.0560.0410.0530.0640.0630.0810.0350.0520.094
50.6630.6770.6600.6640.6600.6720.6690.6800.6490.6690.714
60.6910.7020.6900.7150.6950.7010.7250.7480.7030.7260.794
70.8360.8320.8120.8370.8270.8340.8330.8530.8110.8710.856
80.6450.6540.6330.6440.6320.6350.6410.6930.6530.6120.715
90.5990.5870.5940.5830.5850.5980.5890.6290.5860.5700.645
100.9350.9390.9350.9120.9280.9240.9300.9300.7710.8560.918
110.8140.8030.8170.8090.8050.8110.8110.8190.7430.8110.830
120.6920.6830.6880.6850.6840.6720.6830.6850.6490.6510.707
130.7410.7440.7440.7260.7210.7280.7350.7370.7100.7790.786
140.8680.8650.8620.8370.8300.8370.8570.8660.7270.8190.886
150.9060.9060.9030.9080.9090.9090.9080.9180.8840.8950.931
160.6180.6160.6140.7120.6920.6540.7110.9370.7690.7150.898
170.9660.9670.9650.9600.9610.9660.9680.9710.9520.9450.933
180.8140.8100.8160.8230.8050.8130.8150.8390.8010.7960.861
190.9570.9510.9570.9530.9560.9500.9590.9500.9500.9660.978
200.7440.7530.7630.7370.7490.7570.7430.7730.7140.7260.752
210.7430.7570.7300.7280.7430.7310.7610.7710.7350.7590.793
Table 14. Mean fitness statistical metric using mixed initialization with the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.0530.0500.0780.1040.0980.0730.0330.0370.0310.0670.025
20.2210.2060.2120.2480.2530.2440.2070.2160.2070.2190.191
30.1500.1440.1220.1820.1470.1420.0770.0900.0740.1150.059
40.9260.9280.9110.9390.9390.9330.8830.9020.8830.9260.892
50.3170.3030.2870.3130.3250.3150.2400.2810.2560.3020.293
60.3050.2820.2590.2740.2940.2890.1690.2310.1950.2600.154
70.1570.1510.1540.1570.1630.1680.1150.1480.1250.1510.136
80.3220.3040.2500.3130.3260.3080.1560.2360.1680.2560.137
90.3880.3890.3730.3970.3950.3830.3330.3540.3420.3580.337
100.0760.0780.0820.0780.0710.0770.0410.0630.0550.0280.031
110.1910.1950.1940.1940.1970.1930.1840.1850.1830.1710.155
120.3000.3070.3050.3050.3090.3070.1550.2760.2220.2540.176
130.2470.2410.2560.2360.2460.2540.2360.2450.2420.2520.223
140.1350.1340.1570.1530.1560.1390.0270.1140.0710.0950.025
150.0880.0800.0890.0800.0930.0820.0480.0630.0570.0640.056
160.0890.0590.0630.0850.0800.0830.0390.0560.0360.0570.031
170.0430.0470.0340.0470.0450.0490.0310.0350.0310.0430.042
180.1950.1820.1780.1880.1970.1960.1330.1570.1430.1670.149
190.0560.0530.0430.0570.0560.0530.0440.0460.0460.0510.034
200.2350.2340.2250.2540.2450.2350.1350.2170.1630.2120.123
210.2630.2400.2410.2730.2600.2750.1440.2110.1830.2230.128
Table 15. Classification accuracy using mixed initialization for the 21 utilized datasets.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.7840.6120.6240.7490.7260.7230.8030.9690.7880.9880.995
20.7860.7930.7850.6850.6840.6870.7220.7640.6780.7840.826
30.8470.8340.8270.6520.7050.6840.7870.9030.7730.9490.987
40.0680.0560.0570.0370.0340.0350.0380.0880.0310.0520.079
50.6720.6750.6640.6360.6220.6200.6530.7030.6400.6350.694
60.6900.7090.7080.6440.6360.6420.7220.7650.7060.6870.789
70.8330.8380.8360.8120.8070.8030.8340.8690.8280.9180.894
80.6540.6560.6540.6280.6280.6250.6610.7530.6510.6540.795
90.5950.5880.5930.5740.5560.5790.5830.6360.5740.5870.611
100.9360.9320.9130.7680.7640.7560.7950.9430.7520.9760.971
110.8170.8020.8030.6470.6430.6450.7660.8150.7480.8210.851
120.6830.6880.6970.6420.6520.6430.6620.7080.6440.6950.696
130.7370.7470.7380.7310.7140.7020.7270.7390.7160.7500.776
140.8620.8620.8340.7310.7370.7420.7630.8860.7250.7920.910
150.9170.9040.9030.8220.8250.8240.8890.9310.8650.8540.893
160.7680.6180.6100.7350.7430.7260.8120.9470.7610.7890.956
170.9690.9660.9630.9260.9360.9280.9550.9750.9530.9090.934
180.8190.8150.8080.7270.7220.7250.8040.8420.7980.7470.823
190.9580.9540.9550.9070.9130.9170.9560.9630.9510.9670.980
200.7500.7530.7430.6380.6760.6590.7010.7880.7080.7230.799
210.7420.7510.7210.5540.5630.5610.7630.7830.7330.6910.796
Table 16. Average selection size with various datasets for the compared algorithms with the three different initialization methods.
No. | WOA | bWOA-S | bWOA-v | BALO1 | BALO2 | BALO3 | PSO | bGWO | bDA | bSSA | bSSARM-SCA
10.608860.638610.567410.475380.502710.508680.638000.638640.506750.585130.39845
20.775120.970840.755670.618430.637230.620910.529000.791410.804530.718530.41776
30.661360.763850.606820.620820.617760.625370.600000.591960.471860.595210.59631
40.625750.699640.583160.558740.561870.542820.644000.582020.470520.595420.58343
50.646720.739310.591720.544520.598150.569530.565000.628200.459860.582410.43273
60.647450.663470.556840.605810.605520.623860.527000.621400.438320.599840.39566
70.602780.668670.592510.545450.556430.540140.561000.612050.406350.591320.41883
80.555910.545550.541810.517560.457730.478540.613000.575100.417680.495860.40025
90.532550.584470.546740.508050.525830.504980.426000.628240.442910.522300.41532
100.704630.903720.679150.619430.625850.623230.575000.763430.533230.728520.67424
110.733330.905620.707480.626770.631450.630890.753000.799860.586780.735150.70038
120.640960.726820.697830.516430.542040.542910.479000.622480.618360.622530.45673
130.499510.467660.615920.394800.403710.446800.474000.429630.178910.476310.25214
140.724780.878480.691510.622030.608110.621510.696000.764420.634730.741250.59421
150.667520.746910.602940.591620.566810.610870.521000.610760.378890.598470.46316
160.572030.623860.602530.518810.495810.510470.558000.607910.488910.552740.45746
170.667910.799450.597230.621710.625930.623680.859000.641880.500800.658420.52873
180.692740.794300.588560.621310.619270.623940.653000.649420.485310.657520.47841
190.668560.770010.575410.624710.624920.627820.782000.685870.487560.649680.47485
200.662410.727750.600960.605540.589720.590540.497000.625430.504830.624300.47124
210.648480.711640.536850.621250.621820.623230.553000.491620.474850.514290.46991
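Under the usual reading of this metric, the average selection size in Table 16 is the mean fraction of features selected by the best solution of each independent run. A minimal sketch of that computation, assuming the best binary masks per run are collected in an array (the function name is illustrative):

```python
import numpy as np

def average_selection_size(best_masks):
    """best_masks: (n_runs, n_features) binary array of best solutions per run.
    Returns the mean fraction of selected features, as reported in Table 16."""
    best_masks = np.asarray(best_masks)
    return best_masks.mean()   # mean of 0/1 entries = average selected ratio
```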
Table 17. Classification accuracy of the proposed bSSARM-SCA method and three recent ISSA variants for the 21 utilized datasets.
Small Initialization | Large Initialization | Mixed Initialization
No. | bSSARM-SCA | bISSA1 | bISSA2 | bISSA3 | bSSARM-SCA | bISSA1 | bISSA2 | bISSA3 | bSSARM-SCA | bISSA1 | bISSA2 | bISSA3
10.9740.9560.9620.9350.9480.9490.9510.9260.9950.9590.9740.942
20.7800.7690.7710.7620.8140.7960.8010.7850.8260.8020.8060.793
30.6650.6520.6490.6380.9200.8970.8860.8690.9870.9620.9440.926
40.3430.3100.3240.2980.0940.0850.0810.0790.0790.0740.0720.068
50.7540.7280.7390.7200.7140.6960.6980.6530.6940.6800.6820.647
60.9150.8930.9100.8850.7940.7720.7850.7510.7890.7690.7740.748
70.8740.8590.8910.8510.8560.8430.8630.8360.8940.8870.9020.879
80.8310.8170.8200.8030.7150.7020.7050.6980.7950.7810.7860.769
90.9380.9140.9120.9060.6450.6280.6230.6150.6110.6020.6010.594
100.9160.9080.9170.9020.9180.9110.9170.9090.9710.9560.9730.949
110.8300.8140.8110.8090.8300.8130.8090.8060.8510.8370.8350.825
120.7120.7090.7090.7050.7070.7030.7040.6990.6960.6900.6910.682
130.7640.7560.7710.7530.7860.7790.7920.7720.7760.7680.7890.761
140.9530.9420.9370.9330.8860.8630.8660.8570.9100.9020.8980.883
150.9350.9160.9190.9040.9310.9090.9140.8960.8930.8680.8720.859
160.9770.9670.9630.9580.8980.8840.8790.8710.9560.9410.9360.928
170.9140.9180.9150.9040.9330.9360.9260.9170.9340.9380.9280.925
180.8910.8630.8690.8580.8610.8370.8420.8310.8230.8060.8090.793
190.9040.8970.9010.8840.9780.9569.9710.9480.9800.9680.9730.962
200.8780.8820.8660.8590.7520.7610.7350.7280.7990.8060.7830.775
210.8940.8730.8990.8650.7930.7840.8040.7790.7960.7890.8140.787
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
