Comparing SSALEO, a Scalable Large-Scale Global Optimization Algorithm, to High-Performance Algorithms on Real-World Constrained Optimization Benchmarks

The Salp Swarm Algorithm (SSA) outperforms well-known algorithms such as particle swarm optimizers and grey wolf optimizers in complex optimization challenges. However, like most metaheuristic algorithms, SSA suffers from slow convergence and stagnation in local optima. In this study, the Salp Swarm Algorithm is combined with a local escaping operator (LEO) to overcome some inherent limitations of the original SSA. SSALEO is a novel search technique that addresses population diversity, the imbalance between exploitation and exploration, and the SSA's premature convergence. By implementing the LEO in SSALEO, the search slowdown in SSA is eliminated, and the local search efficiency of the swarm agents is improved. The proposed SSALEO method is tested on the CEC 2017 benchmark with 50 and 100 decision variables and on seven CEC 2008 LSGO test functions with 200, 500, and 1000 decision variables, and its performance is compared with other metaheuristic algorithms (MAs) and advanced algorithms, including seven Salp Swarm variants. The comparisons show that SSA greatly benefits from the LEO, which enhances solution quality and accelerates the convergence rate. SSALEO was then assessed on a benchmark set of seven well-known constrained design challenges from various engineering domains defined in the CEC 2020 conference benchmark. Friedman and Wilcoxon rank-sum statistical tests are also used to examine the results. According to the experimental data and statistical tests, the SSALEO algorithm is very competitive and often superior to the algorithms used in the comparisons. Further, the proposed approach can be viewed as a capable LSGO optimizer whose performance exceeds that of specialized state-of-the-art algorithms such as CMA-ES and SHADE.


I. INTRODUCTION
In real-world applications, challenges such as reducing time, energy, costs, and errors or optimizing efficiency, performance, and quality can be classified as optimization problems.

A population-based algorithm can avoid local solutions and exhibit better exploratory behavior. Such algorithms are, however, computationally more expensive and necessitate information sharing between numerous solutions. Population-based algorithms employ a variety of random operations, such as crossover [23], mutation [24], and selection [24], to boost their exploration and exploitation abilities. Because of the benefits outlined above, population-based metaheuristics are quite popular and frequently employed today. Several MAs have thus been created for use in biomedicine [25], bioinformatics [26], cheminformatics [27], feature selection [28], engineering problems [29], [30], pattern recognition, text clustering [31], and wireless sensor networks [32]. On the other hand, all metaheuristic algorithms (MAs) need to strike an equilibrium between the exploration and exploitation stages. If they do not, the solutions either fail to converge or become stuck in local optima [33], [34]. Such issues can arise as a result of randomization during the solution-finding process.

In 2017, Mirjalili et al. [35] proposed a contemporary population-based metaheuristic search algorithm called the Salp Swarm Algorithm (SSA). This metaheuristic attempts to simulate the behavior of deep-sea salps, namely their swarming and foraging patterns. Even though the mathematics that underpins SSA is rather straightforward, it can be more effective than other contemporary algorithms, including GWO, ABC, and CSA [23], at solving challenging engineering optimization problems. The SSA algorithm has few parameters and a simple implementation [23].
SSA has demonstrated the ability to solve both large- and small-scale problems [36]. In addition, SSA is distinguished by its adaptability and stochasticity [23]. However, the SSA, like other optimization methods, has two disadvantages. In the first place, the convergence speed is insufficient to generate accurate solutions. Another drawback is that it lacks the exploratory possibilities of evolutionary algorithms that use crossover operators, which is a significant limitation. These issues frequently arise in most optimization strategies, particularly in complicated and high-dimensional situations [37]. As a result, various attempts have been made to solve the problem [38]. Researchers have devised several modified SSA versions that improve the standard SSA's efficiency, eliminate faults, and expand its possibilities despite any inherent restrictions. This paper presents a modified Salp optimization version (SSALEO) based on a local escaping operator (LEO). Modifying nature-inspired algorithms is a popular way to address such faults by strengthening the original optimizer's exploitation and exploration capabilities. For example, unknown search regions can be visited, and the local optima problem can be avoided, using the new mathematical technique of the ''local escaping operator,'' which is used in local searches to find an effective solution [39]. The suggested method has been validated using a series of benchmark problems.

Many solutions to LSGO problems have been created. Currently, the solutions can be classified into two types [40]: non-decomposition methods and cooperative coevolution (CC) methods based on a dimension-decomposition optimization strategy. Potter and De Jong [41] proposed the cooperative coevolution (CC) approach in 1994.
When applying the CC method, the LSGO problem is broken down into a series of low-dimensional problems, and the solutions to those problems are combined to solve the high-dimensional optimization challenge. Researchers subsequently classified CC approaches into two types based on the variable-grouping strategies used for LSGO problems: static and dynamic grouping methods. First, Potter applied static grouping-based CC to an evolutionary algorithm to produce a decent solution, the first CC algorithm to address LSGO problems; the n-dimensional solution is then created by mixing the solutions from each subcomponent. Next, Yang et al. [42], [43], [44] solved LSGO problems with 500 and 1000 dimensions using a DE-based cooperative coevolution technique dubbed DECC-G, employing random grouping of decision variables. The multilevel CC approach, also known as MLCC, uses decomposers with an adjustable group size depending on their performance. Other similar algorithms in the literature are CCPSO [45] and CC-CMA-ES [46].

Non-decomposition-based algorithms, on the other hand, avoid the divide-and-conquer strategy in favor of a range of other successful methods for improving algorithm performance. The most common categories are local search-based [47], [48], evolutionary computation-based [49], [50], and swarm intelligence-based approaches [51]. For instance, a modified CSO (MCSO) [52] algorithm, in which two-thirds of the search agents are updated by a competitive criterion, has been proposed. The MCSO was designed to address large-scale optimization issues, and the findings revealed that it outperformed state-of-the-art algorithms. A modified PSO based on a population-based approach has also been presented to address the LSGO problem [53].
The whale optimization algorithm (WOA) [54] uses quadratic interpolation to handle large-scale problems, which helps increase the algorithm's exploitation capabilities [55]. Cano and García-Martínez [56] tackle 100-million-dimension problems in large-scale global optimization using an evolutionary computation technique and a modern GPU. Cano, García-Martínez, and Ventura [57] also published a MapReduce implementation of the MA-SW-Chains algorithm as a new version of the method; these approaches solve CEC functions with 10 million dimensions for the first time. In [58], the authors combine the MA-SW-Chains algorithm with a local search strategy to create a high-performance memetic solution for high-dimensional problems. Furthermore, a modified SCA called DSCA has been proposed. Although it can boost optimization on a massive scale, CC comes with several main limitations: its performance is influenced by the decomposition strategy used.

TABLE 1. Modifications and hybridizations of the SSA.

The main contributions of this study are as follows:
• With the use of an efficient operator, this study overcomes shortcomings of the classic SSA: (1) it avoids getting stuck in local optima, (2) it keeps exploration and exploitation in balance, and (3) it increases convergence speed.

• A novel mathematical technique called the local escaping operator (LEO) is introduced as a local search for developing an effective solution; it aims to explore unobserved search regions and escape from local optima.

• High-dimensional and engineering design-constrained test functions were used to evaluate the proposed SSALEO algorithm. The statistical test analysis shows that the proposed method deals with those issues effectively, and the results reveal that the new method is superior in most situations.

The local escaping operator and the fundamental Salp Swarm Algorithm (SSA), as well as their analogies and mathematical models, are discussed in this section.

In Ahmadianfar et al. [39], the LEO is proposed as a local search algorithm utilized to improve the ability of an optimization method, especially the Gradient-Based Optimizer (GBO), to explore new search regions, which is required in difficult real-world challenges. The LEO improves the overall quality of solutions by maintaining their positions according to a set of criteria. The convergence behavior of the algorithm is enhanced as a direct result of this feature, which prevents the algorithm from being trapped in local optima. The LEO creates high-quality alternative solutions (X_LEO) by combining several different solutions: the best position X_best, two randomly created solutions X_r1^m and X_r2^m, two randomly selected solutions X_n1^m and X_n2^m, and a new randomly generated solution X_k^m. This allows the LEO to develop solutions that perform exceptionally well. As a consequence, the value of X_LEO is determined using Equations (1) and (2). Additionally, u_1, u_2, and u_3 are three randomly generated variables whose values depend on a binary switch (1 if µ_1 < 0.5 and 0 otherwise), where µ_1 represents a number in the range [0, 1].

428
In addition, ρ_1 is implemented to maintain a healthy equilibrium between the exploration and exploitation search processes. In its definition, the current iteration is denoted by t, while the maximum number of iterations is denoted by T. The following strategy is suggested as a means of locating the value X_k^m in Equations (1) and (2):
where µ_2 is a number between 0 and 1, X_p^m is a solution chosen randomly from the salp population (p ranges from 1 to N), and X_rand is a new solution found by Equation (11).
Eq. (10) can thus be rewritten as Eq. (11).

Mirjalili et al. [35] developed the Salp Swarm Algorithm (SSA), one of the most recently published swarm optimization methods. The core idea of the SSA algorithm is to emulate the swarming behavior of salps in the water using the salp-chain concept. Salps are barrel-shaped organisms that belong to the Salpidae family, and their tissues and movements are similar to those of jellyfish [84]. During their lives in the water, salps display a peculiar swarming behavior called a ''salp chain,'' which can be exploited in the salps' motions as they search for food and can be observed throughout their lives.

The members of the SSA population can be broken down into two categories: leaders and followers. The leader of the salp chain is responsible for determining movement directions, selecting food locations, leading the chain to the food, and regularly updating the positions. The term ''followers'' refers to the remaining members of the population; each follows the salp ahead of it in turn to establish the chain structure. Each salp's point in the search space is characterized by n dimensions, where n represents the number of variables in the problem. In addition, the food source, denoted by the letter F, is a metaphor for the salps' search aim. In the leader-update equation, x_j^1 represents the position of the chain's leader in the jth dimension, F_j stands for the food position in the jth dimension, and ub_j and lb_j stand for the upper and lower bounds of the salp position components, respectively. r2 and r3 are two scalars chosen at random from the range [0, 1]. During the iteration process, the most important control parameter is r1, which stabilizes the exploration and exploitation phases; it is expressed in terms of t and T, which respectively signify the current iteration number and the maximum number of iterations. The positions of the followers in the salp chain are updated, for i ≥ 2, following Isaac Newton's law of motion, where x_j^i is the position of the ith follower in the jth dimension, t denotes the time, s_0 is the initial speed, and k is a constant derived from the speeds.
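The leader/follower mechanics described above can be sketched in Python. This is a minimal illustration, not the authors' implementation; the coefficient r1 = 2·exp(−(4t/T)²) and the simplified follower rule (averaging each salp with its predecessor) follow the standard SSA formulation in Mirjalili et al. [35].

```python
import math
import random

def ssa_step(X, food, lb, ub, t, T):
    """One SSA iteration. X: list of N positions (each a list of n floats);
    food: best solution found so far (F); lb/ub: per-dimension bounds."""
    N, n = len(X), len(X[0])
    # r1 decays with the iteration count, shifting from exploration to exploitation.
    r1 = 2.0 * math.exp(-((4.0 * t / T) ** 2))
    # Leader update: move around the food source within the bounds.
    for j in range(n):
        r2, r3 = random.random(), random.random()
        step = r1 * ((ub[j] - lb[j]) * r2 + lb[j])
        X[0][j] = food[j] + step if r3 >= 0.5 else food[j] - step
    # Follower update (i >= 2): Newton's law of motion reduces to averaging
    # each salp with the one directly ahead of it in the chain.
    for i in range(1, N):
        X[i] = [0.5 * (X[i][j] + X[i - 1][j]) for j in range(n)]
    # Clamp every salp back into the search space.
    for i in range(N):
        X[i] = [min(max(X[i][j], lb[j]), ub[j]) for j in range(n)]
    return X
```

In SSALEO this plain update supplies the candidate positions that the LEO then refines.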

B. THE DIFFERENT SCENARIOS FOR THE SSALEO UPDATE
Two different sets of conditions determine the technique for updating the salp position. In the first scenario, an agent solution is constructed using Equation (13), based on the food position obtained up to this point, and the results are stored; during this phase, the initial SSA runs as a matter of course. In the second scenario, the solution is upgraded to improve efficiency by applying the LEO technique. The conditional nature of the LEO's differentiation between the two paths is illustrated by Equations (1) and (2), respectively. If rand is less than 0.5, the first path is selected to continue the process of updating the solution, as shown in Eq. (1); otherwise, the second option, Equation (2), is utilized to locate the new solution.
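The two-path branching can be sketched as follows. Since Equations (1) and (2) are not reproduced in this excerpt, the position combination inside `leo_path` is a hedged stand-in (a generic blend of the best, paired-random, and freshly generated solutions), illustrating only the structure of the conditional update, not the authors' exact operator.

```python
import random

def ssaleo_update(x, x_best, x_r1, x_r2, x_k):
    """Structural sketch of the SSALEO/LEO branching: with probability 0.5
    the first LEO path (Eq. (1)) is taken, otherwise the second (Eq. (2)).
    The blend below is illustrative, not the published formulas."""
    n = len(x)
    f1 = random.uniform(-1.0, 1.0)
    f2 = random.uniform(-1.0, 1.0)

    def leo_path(u1, u2, u3):
        # Placeholder combination of best, random-pair, and random-new solutions.
        return [x[j]
                + f1 * (u1 * x_best[j] - u2 * x_k[j])
                + f2 * u3 * (x_r1[j] - x_r2[j]) / 2.0
                for j in range(n)]

    # u-variables: 2*rand or 1, switched by a binary draw as in the LEO.
    u = [2.0 * random.random() if random.random() < 0.5 else 1.0 for _ in range(3)]
    if random.random() < 0.5:              # first scenario -> Eq. (1)
        return leo_path(u[0], u[1], u[2])
    return leo_path(u[2], u[1], u[0])      # second scenario -> Eq. (2)
```

Only the branching probability (rand < 0.5) and the set of inputs (X_best, X_r1^m, X_r2^m, X_k^m) are taken from the text.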

To enhance the overall quality of the succeeding solutions, it is necessary to perform this step at the beginning of each iteration to assess the vector of solutions produced in the preliminary phase. As a direct consequence, within the existing population, SSALEO determines the fitness value, denoted Fitness(P), of each salp position. The best-scoring solution X_best is determined, saved, and extracted at the updating stage.

As seen in Table 2, SSALEO was evaluated against eight other competitors in this subsection to assess their computation time on the CEC 2017 benchmarks. Due to the time-consuming nature of the calculation method, each competitor runs each function a total of thirty times, and the outcomes are reported in Table 2. The data in the table demonstrate that the computation of SSALEO takes a longer time, since the integration method, which requires a greater amount of processing resources, is utilized. On the other hand, SSALEO can beat certain algorithms while requiring less time; these include CSO, BAT, PSO, MFO, SCA, SSA, and GWO. SSALEO thus has substantial advantages over other algorithms, despite being rather time-consuming.

Using benchmark functions with various properties is a common approach when conducting tests on optimization algorithms of a stochastic nature. Benchmark functions have known global optima and mimic real-world optimization problems. All experiments were carried out under comparable circumstances: every algorithm was written in Python 3 and evaluated on a computer equipped with an Intel Core i3-7100 CPU operating at 3.90 GHz and 4 GB of RAM. The evaluations were conducted with the CEC 2017 benchmark functions with 50 and 100 dimensions and the CEC 2018 LSGO benchmark functions with 200, 500, and 1000 dimensions, covering unimodal, multimodal, hybrid, and composite tasks. To guarantee consistency and fairness across all tests, every experiment is performed thirty times, with each function treated independently. To produce statistically sound metrics for each function, the population size (N) and the maximum number of iterations (Max_iter) are set to 30 and 2500, respectively. The outcomes are summarized using the formula below: intuitively, the average can be interpreted as a reflection of the algorithm's effectiveness in optimizing its performance and its ability to avoid computational errors.

680
Standard deviation is a measure of dispersion: the lower it is, the more robust and stable the algorithm.
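The per-function statistics described above, a mean over thirty independent runs plus the standard deviation as a robustness measure, can be computed as follows. The run values here are illustrative numbers, not figures from the paper's tables.

```python
import statistics

def summarize_runs(best_fitness_per_run):
    """Mean and (sample) standard deviation over independent runs,
    as reported per benchmark function in the comparison tables."""
    mean = statistics.mean(best_fitness_per_run)
    std = statistics.stdev(best_fitness_per_run)
    return mean, std

# Illustrative data: best fitness from 5 of the 30 runs of one function.
runs = [1.2e-8, 3.4e-8, 2.2e-8, 1.9e-8, 2.8e-8]
mean, std = summarize_runs(runs)
```

These two numbers are what each "Mean/Std" cell of Tables 6-9 summarizes for a given algorithm-function pair.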
Table 5 contains the parameter settings of SSALEO and the other algorithms.

To demonstrate the proposed SSALEO's exploitation capabilities, the testing results for both SSALEO and the competing algorithms are presented in Table 6. Functions F3-F9 in particular are well suited to evaluating the exploratory capacity of optimization algorithms: they have multiple local optima whose number increases exponentially with the dimension. The proposed SSALEO approach, as indicated in Table 7, can solve these benchmark functions in dimensions 50 and 100. The results in Tables 6 and 7 demonstrate that incorporating the LEO into SSA yields a higher convergence rate, dynamically avoids standstill in local optima, and greatly enhances exploration and exploitation.

To prevent the algorithm from becoming trapped in a locally optimal solution, the hybrid and composite functions are essential in determining how well exploration and exploitation proceed hand in hand. For the hybrid functions F10-F19, the proposed SSALEO outperforms competing techniques, as shown in Table 8. In addition, the SSALEO strategy outperforms the other competing approaches on the composite optimization benchmark functions F20-F29, as seen in Table 9. Combining the SSA with the LEO, as the findings in Tables 8 and 9 show, increases the algorithm's convergence rate and ensures that exploration and exploitation are appropriately balanced.

The Wilcoxon rank-sum test is used to compare the proposed method to its rivals. Each sample is given a rank, and the sum of those ranks is calculated during the rank-sum experiment.
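The rank-sum statistic just described can be computed directly: pool both samples, rank the pooled observations, and sum the ranks that fall on one sample. A pure-Python sketch follows (mid-ranks for ties; the p-value lookup itself is left to a statistics library).

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic W: rank the pooled observations and
    sum the ranks belonging to sample_a (ties receive mid-ranks)."""
    pooled = sorted((v, src) for src, sample in ((0, sample_a), (1, sample_b))
                    for v in sample)
    n = len(pooled)
    ranks = [0.0] * n
    i = 0
    while i < n:
        k = i
        # Extend k over a run of tied values, then assign the mid-rank.
        while k + 1 < n and pooled[k + 1][0] == pooled[i][0]:
            k += 1
        mid = (i + k) / 2.0 + 1.0
        for idx in range(i, k + 1):
            ranks[idx] = mid
        i = k + 1
    return sum(r for r, (_, src) in zip(ranks, pooled) if src == 0)
```

Here each "sample" would be the 30 per-run results of one algorithm on one function.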

The null hypothesis states that there is no significant difference in overall performance among the swarm intelligence algorithms; R_j represents the average rank of algorithm j, and n represents the total number of swarm intelligence algorithms.

Although the computational complexity of the proposed algorithm has already been discussed in the previous sections, this section explains how long it takes to complete the CEC 2017 benchmark routines. The results of SSALEO were evaluated and contrasted with those of its rivals. As part of the labor-intensive computing method, competitors complete each benchmark thirty times and record their outcomes in Table 2. Regarding performance and timing, SSALEO can outperform and surpass SCA, MFO, SSA, BAT, CSO, WOA, and PSO. In addition, when compared to the other algorithms, SSALEO has a higher overall efficiency.

As shown in Figure 3, the proposed SSALEO starts with a high exploration ratio and a low exploitation ratio; however, it quickly transitions into exploitation during the majority of the iterations for most of the selected functions. Consequently, the developed SSALEO achieves a healthy equilibrium between exploitation and exploration.
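The Friedman average ranks R_j can be computed per function and then averaged over all functions, as sketched below in pure Python with illustrative data (tied scores receive mid-ranks).

```python
def average_ranks(scores_per_function):
    """scores_per_function: list of rows, one per benchmark function,
    each row holding one error value per algorithm (lower is better).
    Returns the Friedman average rank R_j of each algorithm."""
    n_algs = len(scores_per_function[0])
    totals = [0.0] * n_algs
    for row in scores_per_function:
        order = sorted(range(n_algs), key=lambda j: row[j])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            k = i
            # Group tied values and give each the mid-rank of the group.
            while k + 1 < n_algs and row[order[k + 1]] == row[order[i]]:
                k += 1
            mid = (i + k) / 2.0 + 1.0
            for idx in order[i:k + 1]:
                ranks[idx] = mid
            i = k + 1
        for j in range(n_algs):
            totals[j] += ranks[j]
    return [t / len(scores_per_function) for t in totals]
```

The algorithm with the smallest R_j is reported as ranking first in Tables 18, 20, and 22.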

The average global fitness curve in Fig. 3(d) depicts the variation tendency of SSALEO's fitness during the iterative process. A close look at SSALEO's average fitness curve shows that it oscillates considerably. This is because the average fitness value decreases over time, and the frequency of the oscillation is inversely proportional to the number of iterations. This ensures that SSALEO reaches a conclusion quickly and conducts an exact search in the later phase.

A comparison is made in this subsection between the efficiency of the proposed SSALEO and HIWOA [94], LJA [95], WFOA [97], RW-GWO [93], QSSALEO, and LNMRA [98]. Additionally, SSALEO is evaluated against other optimization methods, such as CPSO [96], PPSO [100], and PPSO_W [100]. In the tests detailed in this subsection, the CEC 2017 test functions were used. For accurate comparisons, the population size (N) and the maximum number of iterations (Max_iter) have been set to 30 and 2500, respectively, and each method was run 30 times.

It is important to note that the p-value of the Wilcoxon rank-sum test is used in this subsection to compare the effectiveness of the suggested approach with that of the other strategies, in order to determine which technique is the most efficient; Table 14 presents these results.

SSALEO is also compared with enhanced SSA variants [80]. These enhanced variations of the SSA highlight their major advantages by providing novel approaches that improve the standard version of the SSA. The population size of each algorithm is set to 30, and the maximum number of iterations is 25000. After 30 runs, the performance of each algorithm is analyzed by comparing the mean optimization result, the standard deviation, and the median of the results. The statistics are compiled and presented in Table 16, and Table 5 provides a review of the important parameters involved in each methodology. The outcomes of the Wilcoxon signed-rank test and the Friedman test are presented in Tables 17 and 22, respectively. The convergence graphs of the techniques under consideration are shown in Figure 4. Table 16 demonstrates that, when dealing with the unimodal functions F1 and F2, SSALEO performs better than the other alternatives.
The proposed strategy raises the basic SSA's exploitation potential in comparison with the other SSA variants. SSALEO provides the lowest (best) solutions on the multimodal functions F3, F4, F5, F6, F7, F8, and F9. The results in Table 16 demonstrate that the proposed SSALEO algorithm can solve the benchmark functions and that implementing the suggested method increases the capabilities of the initial SSA in terms of exploration and exploitation. The search agents that SSALEO builds are also competitive on the hybrid functions F10-F19 and the composite functions F20-F29. According to Table 16, the new search technique has also significantly improved the ability of the original SSA to find an appropriate mix of exploration and exploitation, allowing the algorithm to avoid becoming trapped in local optima.

When the overall efficacy of each improved SSA is compared, Table 16 shows that SSALEO performs better than the other enhanced SSAs in more than half of the functions. SSALEO is the most efficient algorithm across all test functions, with an overall effectiveness (OE) score of 62.06 percent. Figure 4 presents a visual representation of the convergence behavior exhibited by the proposed SSALEO and the other approaches on the CEC 2017 functions after 2500 iterations. In addition, as seen in Figure 5, the SSALEO strategy converges on the same solution nearly as quickly as the other methods. In a similar vein, the convergence curves for the hybrid and composite functions demonstrate SSALEO's strong convergence behavior, as confirmed in Table 18, which shows that SSALEO is superior to its rivals and ranks first compared to the other algorithms.

Comparative studies may also use algorithms other than PSO and SSA. Table 19 shows the results of the calculations and the parameter values for the methods, which comply with Section 6.2. According to the results presented in Table 19, SSALEO is the most efficient approach for the majority of functions when compared with other state-of-the-art algorithms. According to Table 23, the new integration technique boosted the effectiveness of the previous method in selecting the optimum combination of exploration and exploitation, avoiding entrapment in local optima. When the overall performance of the algorithms is compared, Table 19 shows that SSALEO performs better than the other algorithms in more than half of the functions. With an overall efficiency (OE) of 66.66 percent, the SSALEO methodology is the most successful strategy across all test functions with 200, 500, and 1000 dimensions.

The findings of the Friedman test are presented in Table 20, and SSALEO ranks first for most of the functions. Table 21 contains the Wilcoxon rank-sum p-values, with p-values greater than 0.05 emphasized through underlining. As a consequence, the null hypothesis is rejected for every function, and in comparison with the other methodologies, the results that SSALEO produces are statistically significant. In addition, Figure 6 shows that the SSALEO algorithm converges almost as quickly as the other algorithms.

The results of the Friedman test are presented in Table 22, and the comparison results in Table 23. Table 23 contains a collection of statistics demonstrating that, in comparison with other advanced algorithms, SSALEO is the method that performs the majority of functions most effectively. In addition, the Wilcoxon rank-sum p-values are presented in Table 24, with p-values greater than 0.05 highlighted.

Consequently, the null hypothesis cannot be accepted for any function. Compared with the other research approaches, the outcomes produced by SSALEO are statistically significant. One can therefore conclude that the proposed method maintains an excellent level of optimization accuracy and robustness even when dealing with large-scale problems. The experiments' findings indicate that SSALEO avoids the curse of dimensionality and possesses high optimization efficiency when solving high-dimensional function problems.

Real-world engineering design problems [114] are frequently solved using optimization approaches. This section first describes the benchmark set of seven well-known constrained design challenges in various engineering domains defined in the CEC 2020 conference benchmark set of real-world problems (CEC2020) [115].

The tension/compression spring design problem aims to minimize the spring's weight f(x) while respecting limitations on minimum deflection, shear stress, surge frequency, and outside diameter. The design variables are the wire diameter d (x1), the mean coil diameter D (x2), and the number of active coils P (x3) (see Fig. 7d). This problem's mathematical formulation is as follows:
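Since the paper's own equation block is not reproduced in this excerpt, the sketch below uses the standard literature formulation of the spring problem (an assumption: the weight objective f(x) = (x3 + 2)·x2·x1² and the four textbook inequality constraints), just to make the objective-plus-constraints structure concrete.

```python
def spring_design(x):
    """Weight and constraint values for the tension/compression spring
    problem, x = (d, D, P) = (wire diameter, mean coil diameter, coils).
    Standard textbook formulation; a constraint is satisfied when g <= 0."""
    d, D, P = x
    weight = (P + 2.0) * D * d ** 2
    g = [
        1.0 - (D ** 3 * P) / (71785.0 * d ** 4),                  # min. deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                          # shear stress
        1.0 - 140.45 * d / (D ** 2 * P),                          # surge frequency
        (d + D) / 1.5 - 1.0,                                      # outside diameter
    ]
    return weight, g
```

An optimizer such as SSALEO would minimize `weight` while driving every entry of `g` to a non-positive value, typically via a penalty term.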

VII. SSALEO FOR ENGINEERING DESIGN PROBLEMS
A cost-optimal pressure vessel design problem was solved using SSALEO and other state-of-the-art algorithms in Table 30. Based on the findings, the proposed algorithm is more efficient and handles the hybrid decision variables faster than the others.

The three-bar truss design problem is a well-known constrained engineering benchmark. The SSALEO algorithm outperformed most other state-of-the-art algorithms in solving the three-bar truss problem, using the optimal decision variables to obtain the optimal truss weight, as shown in Table 27.

A cost-optimal welded beam structure design problem was solved using SSALEO and other state-of-the-art algorithms in Table 27. Based on the findings, the proposed algorithm is more efficient and handles the hybrid decision variables faster than the others.

The multiple-disk clutch brake problem is a constrained mechanical design problem (see Fig. 7e); this problem's mathematical formulation is as follows, subject to:

The SSALEO algorithm outperformed most of the other state-of-the-art algorithms in solving the multiple-disk clutch brake problem, using the optimal decision variables to obtain the optimal design weight, as shown in Table 28.

The SSALEO algorithm outperformed most of the other state-of-the-art algorithms in solving the speed reducer design problem, using the optimal decision variables to obtain the optimal minimized weight of the speed reducer, as shown in Table 29. The constants a_1 to a_12 and their values are listed in Table 30.

The SSALEO algorithm outperformed most of the other state-of-the-art algorithms in solving the process design problem, using the optimal decision variables to obtain the optimal minimized weight of the process design, as shown in Table 31.

Data mining and machine learning work more efficiently and effectively when dimensionality is reduced [123]. Dimensionality reduction includes feature extraction and feature selection. Feature extraction creates a new set of attributes, while a feature selection process eliminates superfluous or useless attributes to enhance learning efficiency [123]. FE approaches are less popular in machine learning than FS. Dimensionality reduction is crucial when learning from high-dimensional data. In addition, various benchmark high-dimensional datasets are used to illustrate the effectiveness of the algorithms in large feature spaces, including central nervous system (CNS) [129], ovarian cancer [130], and colon cancer [131]. A summary of the datasets examined is compiled in Table 32. For a fair comparison, all procedures use the same seed data; Table 5 contains the rest of the parameters for each method. A total of twenty repetitions of each procedure were performed on a machine equipped with 4 GB of RAM and an Intel Core i3 processor to eliminate the influence of chance. We start with a population of 10 and run for 50 iterations.

The specifics of the seven real-world engineering design problems: D represents the total number of decision variables in the problem, g the number of inequality constraints, h the number of equality constraints, and f(x*) the best-known feasible objective function value.
The variable X_binary denotes the solution to the feature selection problem, while the random number N_random denotes the threshold.

The objective function must be well thought out before constructing the optimization problem. Wrapper feature selection approaches, for instance, can be used to reduce the total number of features while simultaneously increasing the precision of the learning process. These two competing objectives need to be factored into the objective function, where ρ and ϕ are constants that weight the classification precision and the level of feature reduction, respectively. The results were generated using the k-Nearest Neighbor (k-NN) classifier with K = 5. The error rate, denoted Err(D), represents the percentage of incorrect identifications on the recognized subset; |T| is the total number of features, and |F| is the size of the selected feature subset. In this investigation, ρ is set to 0.99 [133] and ϕ = 1 − ρ. Furthermore, three criteria are utilized to evaluate the suggested strategy against existing ones: the classification accuracy, determined by averaging the results of twenty runs on the test dataset using the same set of features, the fitness values, and the average number of selected features for each method.

Compared to alternative methods, SSALEO yields the highest feature selection accuracy for these datasets. This may be because the new approach strikes a better balance between exploration and exploitation.
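The wrapper objective described above combines the classifier error rate with the feature-subset ratio. A minimal sketch follows, with Err(D) supplied by the caller (the paper uses a 5-NN error rate; any classifier's error would slot in the same way) and ρ = 0.99, ϕ = 1 − ρ as in the text; the threshold binarization of a salp position into X_binary is included as well.

```python
RHO = 0.99          # weight on the classification error [133]
PHI = 1.0 - RHO     # weight on the feature-reduction term

def fs_fitness(error_rate, n_selected, n_total):
    """Fitness of a candidate feature subset: rho * Err(D) + phi * |F| / |T|.
    Lower is better: few features AND a low classification error."""
    return RHO * error_rate + PHI * n_selected / n_total

def binarize(position, threshold=0.5):
    """Map a continuous salp position to a binary mask X_binary,
    selecting feature j when its component exceeds the threshold."""
    return [1 if v > threshold else 0 for v in position]
```

With ρ = 0.99 the error term dominates, so a subset is only rewarded for being small when its accuracy is essentially unchanged.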

The effectiveness of the suggested method is compared with that of the other preceding algorithms using the Wilcoxon rank-sum test.