An Opposition-Based Chaotic Salp Swarm Algorithm for Global Optimization

The salp swarm algorithm (SSA) is a bio-heuristic optimization algorithm proposed in 2017. It has been shown that SSA achieves competitive results compared with several other well-known meta-heuristic algorithms on various optimization problems. However, like most meta-heuristic algorithms, SSA is prone to problems such as entrapment in local optima and a slow convergence rate. To address these problems, a chaotic salp swarm algorithm based on opposition-based learning (OCSSA) is proposed. The application of opposition-based learning (OBL) improves the convergence speed and better explores the search space. The chaotic local search (CLS) method is also introduced, which improves the ability of the algorithm to obtain the global optimal solution. The performance of OCSSA is compared with that of the original SSA and some other meta-heuristic algorithms on 28 benchmark functions with unimodal or multimodal characteristics. The experimental results show that the performance of OCSSA, with an appropriate chaotic map, is better than or comparable with that of SSA and the other meta-heuristic algorithms.


I. INTRODUCTION
Meta-heuristic algorithms have become popular due to their advantages of simple and easy implementation, effective avoidance of local optimization, and good scalability. Many meta-heuristic algorithms have shown efficient and powerful performance in solving high-dimensional and nonlinear optimization problems [1].
Without considering its structure, a meta-heuristic algorithm can to some extent be divided into two main phases: exploration and exploitation [2]. In the exploration phase, algorithms conduct random, expansive exploration of the whole search space to increase the diversity of solutions. Following this, the exploitation phase aims to improve the quality of the solution by performing local searches around promising areas identified during the exploration phase. It is important to maintain a good balance between exploration and exploitation to avoid converging to a solution that is only locally optimal. (The associate editor coordinating the review of this manuscript and approving it for publication was Huiping Li.)
Meta-heuristic algorithms can be divided into evolutionary and swarm intelligence algorithms. They are designed in accordance with the collective and intelligent behavior of insects, animals, humans, and other social creatures. Among the most prominent are particle swarm optimization (PSO) [3], the whale optimization algorithm (WOA) [4], the artificial bee colony algorithm (ABC) [5], and the grey wolf optimizer (GWO) [6], [7]. Meta-heuristic algorithms that have emerged in recent years include the butterfly optimization algorithm (BOA) [8], the vampire bat optimizer (VBO) [9], and the salp swarm algorithm (SSA) [10].
Mirjalili et al. recently proposed a bio-inspired meta-heuristic algorithm, SSA, that mimics salp swarm mechanisms. It has the advantages of simple implementation, few parameters, and low computational cost. In terms of optimal-solution accuracy and convergence rate, SSA provides better results than common methods such as PSO, the genetic algorithm (GA) [11], the firefly algorithm (FA) [12], and the bat algorithm (BA) [13]. One of the key works on SSA was conducted by Sayed et al. [14], who proposed a new chaotic SSA (CSSA) to deal with feature selection tasks; the simulation results demonstrated that CSSA can be regarded as a good optimizer compared to some previous methods. Asaithambi and Rajappa [15] integrated SSA with the sine cosine algorithm (called HSSASCA) to improve convergence performance in the exploration and exploitation stages.
(VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/)
The simulation results revealed that the proposed algorithm achieves the best accuracy with the least runtime in comparison with other meta-heuristics. SSA has also been applied in electrical engineering: Singh et al. [16] proposed a hybrid SSA to optimize the sizing of a CMOS differential amplifier and a comparator circuit, and the experiments showed that the proposed SSA outperformed other existing methods on CMOS analog IC designs. However, SSA still has limitations: it is biased toward the exploitation phase, so it cannot always conduct a global search well and, in some cases, cannot find the global optimal solution [14]. Furthermore, although SSA has competitive performance on single-objective problems, it still has room for improvement in dealing with multi-objective problems.
Chaotic local search is one of the most common methods employed to boost the performance of meta-heuristic algorithms [17]. Many researchers have applied chaotic local search techniques to different optimization algorithms [18]–[20]. Arora and Anand [18] introduced chaotic local search into the grasshopper optimization algorithm, which effectively balances exploitation and exploration and regulates the repulsion and attraction between grasshoppers during the optimization process. Similarly, Kohli and Arora [19] added several chaotic maps to the grey wolf optimizer, adjusting key parameters that control the exploitation and exploration phases. Jordehi [20] combined chaos theory with the bat optimization algorithm, using the ergodicity and non-repetition of chaotic functions to diversify the bat population and mitigate premature convergence. Additionally, Tizhoosh [21] established for opposition-based learning that an opposite number is, on average, closer than a random number to the optimal value and can enhance search ability and accelerate convergence. This mathematical method has been widely used in different meta-heuristic algorithms [22]–[25]. Kang et al. [22] introduced opposition-based learning to address premature convergence and low population diversity in traditional PSO. Zhang et al. [23] used elite opposition-based learning to improve the original grey wolf optimizer; their experimental results show the efficiency of the proposed algorithm compared with the original GWO and other meta-heuristic algorithms in terms of convergence rate and search ability.
The major contributions of this work are as follows. 1) A hybridization approach based on SSA, chaotic local search, and opposition-based learning is proposed.
2) Ten widely used chaotic maps are integrated into SSA, and their performance is compared with the original SSA.
3) The performance of the best opposition-based learning chaotic SSA is compared with various meta-heuristics that have shown excellent performance on various benchmark functions.
The rest of the paper is arranged as follows. Section 2 presents the original SSA, mainly introducing its mathematical model. Section 3 describes the proposed OCSSA in detail; the concepts of opposition-based learning and chaotic local search are also illustrated in this section. Section 4 discusses the experimental results on global benchmark problems. Finally, a brief conclusion and recommendations for future work are offered in Section 5.

II. AN OVERVIEW OF SSA
SSA is one of several random population-based algorithms proposed in 2017, based on the population mechanism of a salp swarm foraging in the ocean. In the deep sea, a salp swarm usually forms a long chain. At the front of the chain is the leader, and the remaining salps are considered followers.

A. MATHEMATICAL MODEL OF SSA
The optimization process of SSA consists of three steps, population initialization, leader position updating, and follower position updating, which mimic the real clustering behavior of a salp swarm in the ocean. The operation of SSA is discussed in the next three subsections.

1) POPULATION INITIALIZATION
Let the predation space be an N × D Euclidean space, where N is the size of the salp swarm and D is the spatial dimension. There is a food source F = [F_1, F_2, ..., F_D]^T in the space, and the position of the n-th salp can be expressed as x_n = [x_{n,1}, x_{n,2}, ..., x_{n,D}]^T, n = 1, 2, ..., N. The upper bound of the search space is ub = [ub_1, ub_2, ..., ub_D]^T, and the lower bound is lb = [lb_1, lb_2, ..., lb_D]^T. The population is initialized randomly within these bounds:

x_{n,d} = lb_d + rand(0, 1) · (ub_d − lb_d)    (1)

In the population, the states of the leader and the followers in the d-th dimension are x_{1,d} and x_{m,d}, respectively, where m = 2, 3, ..., N.
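Under the notation above, the random-initialization step can be sketched in Python (an illustrative NumPy rendition, not the authors' MATLAB code; the function name is hypothetical):

```python
import numpy as np

def init_population(n, dim, lb, ub, rng=None):
    """Randomly initialize n salps uniformly within [lb, ub]^D."""
    if rng is None:
        rng = np.random.default_rng(0)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((n, dim)) * (ub - lb)   # one row per salp

X = init_population(30, 20, -100.0, 100.0)
print(X.shape)  # (30, 20)
```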

2) LEADER POSITION UPDATE
The leader of the salp swarm is responsible for searching for food in the environment and guiding the movement of the whole group. Its position is updated randomly by

x_{1,d} = F_d + c_1((ub_d − lb_d)c_2 + lb_d),  c_3 ≥ 0.5
x_{1,d} = F_d − c_1((ub_d − lb_d)c_2 + lb_d),  c_3 < 0.5    (2)

where c_2 and c_3 are random numbers in the interval [0, 1]. These parameters enhance the randomness of the leader's movement, the global search ability, and individual diversity. The main parameter in (2) is c_1, which exists in many meta-heuristic algorithms and is often called the convergence factor. It balances the exploration and exploitation abilities of the algorithm during the iterative process. When the convergence factor is greater than 1, the algorithm performs global exploration; when it is less than 1, the algorithm exploits locally to obtain an accurate estimate. To make the algorithm search globally in the first half of the iterations and exploit accurately in the second half, the convergence factor usually decreases from 2 to 0. The convergence factor c_1 in SSA is

c_1 = 2e^{−(4l / l_max)^2}    (3)

where l is the current iteration number and l_max is the maximum number of iterations.
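A minimal Python sketch of the leader update of Eq. (2) together with the convergence factor, assuming the standard SSA formulation (function names are illustrative, not the authors' code):

```python
import numpy as np

def c1(l, l_max):
    """Convergence factor: decreases from about 2 toward 0 over iterations."""
    return 2.0 * np.exp(-(4.0 * l / l_max) ** 2)

def update_leader(F, lb, ub, c1_val, rng):
    """Leader update of Eq. (2): step around the food source F,
    with a per-dimension random scale c2 and direction switch c3."""
    dim = F.shape[0]
    c2 = rng.random(dim)
    c3 = rng.random(dim)
    step = c1_val * ((ub - lb) * c2 + lb)
    return np.where(c3 >= 0.5, F + step, F - step)

rng = np.random.default_rng(1)
leader = update_leader(np.zeros(5), -100.0, 100.0, c1(250, 500), rng)
print(leader.shape)  # (5,)
```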

3) FOLLOWER POSITION UPDATE
In SSA, followers do not move randomly; they follow one another in a chain. Therefore, the position of a follower depends only on its initial position, motion speed, and acceleration. The motion conforms to Newton's laws of motion, so the distance R traveled by a follower can be expressed as

R = (1/2) a t^2 + v_0 t    (4)

Because time t corresponds to an iteration of the optimization process, the discrepancy between iterations is 1, i.e., t = 1. Here v_0 is the follower's speed, which is 0 at the beginning of each iteration, and a is the follower's acceleration between the beginning and end of an iteration, i.e., a = (v_final − v_0)/t. Since a follower only follows the salp immediately in front of it,

v_final = (x^l_{m−1,d} − x^l_{m,d}) / t    (5)

so the follower location update simplifies to

x^{l+1}_{m,d} = (x^l_{m,d} + x^l_{m−1,d}) / 2    (6)

where x^l_{m,d} is the d-th dimensional position of the m-th follower in the l-th iteration, and x^{l+1}_{m,d} is the follower's location in the (l + 1)-th iteration. Algorithm 1 provides the pseudo-code of the standard SSA.
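The follower rule of Eq. (6) reduces to a midpoint update, sketched here in Python (a vectorized illustration using the pre-update positions; not the authors' code):

```python
import numpy as np

def update_followers(X):
    """Follower update of Eq. (6): each follower (rows 1..N-1) moves to
    the midpoint between its old position and the old position of the
    salp immediately ahead. The leader (row 0) is left unchanged here."""
    X = X.copy()
    X[1:] = 0.5 * (X[1:] + X[:-1])   # RHS uses the pre-update positions
    return X

X = np.array([[0.0], [2.0], [4.0]])
print(update_followers(X))  # leader unchanged; followers: 1.0 and 3.0
```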

III. THE PROPOSED OCSSA
This section introduces opposition-based learning (OBL) and chaotic local search (CLS), which are used to enhance SSA algorithm performance, and then the improved OCSSA is described.
Algorithm 1 Pseudo-Code of the Standard SSA
1. Initialize the salp population X within [lb, ub]
2. Compute the fitness value of every salp
3. X* = the best search agent (food source)
4. while stopping criterion not reached do
5.    Update c1
6.    for each salp n = 1, 2, ..., N do
7.       if n == 1 then
8.          Update the position of the leader salp by (2)
9.       else
10.         Update the position of the follower salp by (6)
11.      end if
12.   end for
13.   Compute the fitness value of every salp
14.   Update X* if there is a better solution
15. end while
16. return the best solution X* and its fitness value

A. OPPOSITION-BASED LEARNING
Traditional meta-heuristic algorithms start the search process from a set of randomly generated numbers as the initial solution. The convergence rate of the algorithm is then unstable and slow in most cases. To avoid these problems, opposition-based learning is introduced, and both the randomly generated and the opposite solutions are considered. The OBL properties are defined as follows. Let P = (y_1, y_2, ..., y_D) be a point in D-dimensional space, where y_i ∈ [a_i, b_i] for all i ∈ {1, 2, ..., D}. Then the opposite point of P is OP = (oy_1, oy_2, ..., oy_D), where oy_i = a_i + b_i − y_i.
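The opposite-point definition is a one-liner; the following illustrative sketch assumes box bounds a and b (names are hypothetical):

```python
import numpy as np

def opposite(P, a, b):
    """Opposite point of P in the box [a, b]: oy_i = a_i + b_i - y_i."""
    return np.asarray(a) + np.asarray(b) - np.asarray(P)

print(opposite([1.0, 4.0], [0.0, 0.0], [10.0, 10.0]))  # [9. 6.]
```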

B. CHAOTIC LOCAL SEARCH
To improve the ability of SSA to obtain the global optimal solution, a chaotic local search method is introduced, which accelerates the search process and steers it toward regions where the optimal solution is more likely to be found, enhancing the exploitation ability of the algorithm [26]. CLS ends when a better solution is found or the local search termination condition is reached.
Chaos is a common phenomenon in nonlinear systems in nature, and its ergodic property, namely traversing all states within a certain range without repetition, is frequently used as an optimization mechanism to escape from local optima. In this paper, 10 chaotic maps, as shown in Table 1, are used to generate the corresponding chaotic sets. The initial point of these chaotic maps can be any number between 0 and 1; here it is set to 0.7, adopting the initial values of [27].
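As an illustration, two maps commonly used in such tables can be iterated as follows (the sinusoidal coefficient a = 2.3 is a common choice in the literature and is an assumption here, not taken from Table 1):

```python
import math

def logistic_seq(x0=0.7, n=5):
    """Logistic map x_{k+1} = 4 x_k (1 - x_k), one common chaotic map."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

def sinusoidal_seq(x0=0.7, n=5, a=2.3):
    """Sinusoidal map x_{k+1} = a x_k^2 sin(pi x_k); a = 2.3 assumed."""
    xs, x = [], x0
    for _ in range(n):
        x = a * x * x * math.sin(math.pi * x)
        xs.append(x)
    return xs

print(logistic_seq()[0])  # ~0.84 (= 4 * 0.7 * 0.3)
```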

C. PROPOSED OCSSA
OCSSA adds chaotic local search and opposition-based learning to address entrapment in local optima and slow convergence. Because chaotic local search traverses the search area without repetition and opposition-based learning can bring the algorithm closer to the global optimal solution, the performance and convergence speed of OCSSA are improved to varying degrees compared with SSA. The next two subsections describe the improved population initialization and the improved leader position update stage.

1) IMPROVED POPULATION INITIALIZATION
SSA initializes the population by randomly generating the salp positions; hence, its performance is unstable. If the generated initial positions are close to the global optimal solution, the convergence speed of the algorithm and its ability to obtain the global optimum will be good, but randomly generated initial positions are rarely this ideal. Therefore, OCSSA adds chaotic local search, which provides a more reliable initial population when the salp population is initialized, ensuring that the convergence speed does not fluctuate greatly and improving the performance of the algorithm to a certain extent. The salp population X is built by the chaotic local search using the following equation:

x_{ij} = l_j + ch_{ij}(u_j − l_j)    (7)

where x_i ∈ X, i = 1, 2, ..., N, j = 1, 2, ..., d; l_j and u_j represent the lower and upper boundaries of the j-th dimension, respectively; and ch_{ij} is the chaotic map value constructed using the equations listed in Table 1.
In addition, adding opposition-based learning to the population initialization makes the chaotic mechanism more powerful: by comparing the fitness values of the initial positions before and after the chaotic local search change, the better individual positions can be selected, which improves the convergence speed of the algorithm.
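A hypothetical sketch of this chaotic-plus-OBL initialization (the logistic chaos source, the keep-the-fitter-half selection, and all names are illustrative assumptions, not the authors' exact scheme):

```python
import numpy as np

def chaotic_obl_init(n, dim, lb, ub, fitness, x0=0.7):
    """Build a chaotic candidate population via x_ij = lb + ch_ij*(ub - lb),
    build its OBL opposite population, and keep the n fitter of 2n points."""
    ch = np.empty(n * dim)
    x = x0
    for k in range(n * dim):          # logistic map as the chaos source
        x = 4.0 * x * (1.0 - x)
        ch[k] = x
    X = lb + ch.reshape(n, dim) * (ub - lb)   # chaotic candidates
    OX = lb + ub - X                          # opposite candidates (OBL)
    union = np.vstack([X, OX])
    f = np.apply_along_axis(fitness, 1, union)
    return union[np.argsort(f)[:n]]           # n best, fittest first

sphere = lambda x: float(np.sum(x * x))
P = chaotic_obl_init(10, 5, -100.0, 100.0, sphere)
print(P.shape)  # (10, 5)
```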
Algorithm 2 Pseudo-Code of the Proposed OCSSA
1. Generate the chaotic population X by (7)
2. Generate the opposite population by opposition-based learning
3. Select the N fittest individuals of the two populations as the initial population
4. Calculate the fitness value of each X_i
5. X* = the best search agent
6. while stopping criteria not reached do
7.    Update c1
8.    for each salp do
9.       if leader then
10.         Update the position of the leader salp by (2)
11.      else
12.         Update the position of the follower salp by (6)
13.      end if
14.   end for
15.   Apply chaotic local search to X* until a better solution is found or maxC is reached
16.   Apply opposition-based learning to X* and keep the better of X* and its opposite
17.   Compute the fitness value of every salp and update X*
18. end while
19. Return the best solution X* and its fitness value f(X*)

2) IMPROVED LEADER POSITION UPDATE
Equation (2) shows that the positions of the leaders change according to the position of the food, while the positions of the followers are adjusted according to the leaders. The leader's position is therefore critical to the algorithm: if the algorithm has ideal exploration ability, the leader's position coincides with the food position, i.e., the algorithm reaches the global optimal solution. The handling of the food position also differs significantly between OCSSA and SSA.
To optimize the position of the leaders, OCSSA introduces chaotic local search and opposition-based learning. If the leader position remains the same or changes only slightly, the algorithm falls into a local optimum; OCSSA uses chaotic local search to escape this situation. At this stage a threshold is set to control the number of CLS iterations, and equation (7) is used to change the position of the food. When a better position is found or the search limit is reached, the algorithm ends the chaotic search and performs the OBL phase, in which the food position is replaced by its opposite point whenever the opposite point has a better fitness value. If the algorithm is in a non-convergent state, this allows it to reach the neighborhood of the global optimum more quickly, improving convergence speed and accuracy. Algorithm 2 gives the pseudo-code of the improved salp swarm algorithm.
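The stage described above might be sketched as follows; the chaotic step size, the logistic map, and all names are assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def cls_obl_refine(F, f_F, fitness, lb, ub, max_c=20, x0=0.7):
    """Perturb the food position F with a chaotic sequence (CLS) until a
    better point is found or max_c trials are spent, then also test the
    OBL opposite point; return the best food found and its fitness."""
    x, best, f_best = x0, F.copy(), f_F
    for _ in range(max_c):
        x = 4.0 * x * (1.0 - x)                    # logistic chaos value
        cand = np.clip(best + (x - 0.5) * (ub - lb) * 0.1, lb, ub)
        fc = fitness(cand)
        if fc < f_best:                            # better food found
            best, f_best = cand, fc
            break
    opp = np.clip(lb + ub - best, lb, ub)          # OBL candidate
    fo = fitness(opp)
    if fo < f_best:
        best, f_best = opp, fo
    return best, f_best

sphere = lambda x: float(np.sum(x * x))
F = np.array([50.0, 50.0])
best, f_best = cls_obl_refine(F, sphere(F), sphere, -100.0, 100.0)
print(f_best <= sphere(F))  # True: the refined food is never worse
```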

IV. EXPERIMENTAL RESULTS AND ANALYSIS
A. EXPERIMENT PLATFORM
All the algorithms compared in this section were run in MATLAB 2017 on a Windows 10 (64-bit) operating system on a Core i5 personal computer with 8 GB RAM.

B. INTRODUCTION OF BENCHMARK FUNCTIONS
1) THE CLASSICAL CEC2005 BENCHMARK FUNCTIONS
We selected a set of 13 functions from the CEC2005 benchmark to compare the performance of the proposed algorithm with the original SSA and other well-known algorithms. These benchmark functions can be divided into two types. The unimodal functions (F1–F7), which have only one extremum in the search area, are used to examine the optimization accuracy and convergence rate of an algorithm. The multimodal functions (F8–F13), which have more than one extremum in the given domain, are used to evaluate an algorithm's ability to avoid local optima. The mathematical formulas and related properties of these functions are listed in Table 2. The dimension of each function is set to 20.

2) THE CEC2015 BENCHMARK FUNCTIONS
From the CEC2015 suite, we selected a set of 15 functions as a second benchmark to test the performance of the algorithms used in this paper. These functions were designed for the competition on single-objective optimization problems. Compared with the 2005 test functions, they include new features, such as new basic problems and shifted and rotated problems. A brief description of these benchmark problems is listed in Table 3.

C. PARAMETER SETTING
All algorithms used in the paper have a population size of 30, and the maximum number of iterations is 500. For the statistical analysis, each benchmark function is run for 50 independent runs to minimize the statistical error of the results.
The relevant parameters of the WOA, GWO, and ABC algorithms adopt the values set in the original papers. The mutation probability in DE is 0.5, and the weight factor is 0.9. The learning factors of PSO are 2, the inertia factor is 0.6, and the maximum particle velocity is 10, which is the same as OCSSA's step distance.

D. COMPARISON OF OCSSA AND ORIGINAL SSA
In this part, we compare the SSA and the proposed OCSSA algorithm on 13 classical benchmark functions in terms of numerical characteristics (mean, standard deviation, and statistical best), algorithm diversity and algorithm computational complexity and runtime. The specific information is described in the following sections.

1) NUMERICAL CHARACTERISTICS
A set of 13 functions from the CEC2005 benchmark is used in this section to evaluate the performance of the original SSA and the proposed OCSSA. OCSSA1, OCSSA2, ..., OCSSA10 correspond to the 10 chaotic maps listed in Table 1. The comparison results for the original SSA and the ten versions of OCSSA on the unimodal and multimodal functions are shown in Tables 4 and 5. On the unimodal functions, OCSSA4 achieves the best mean values on six functions (F1–F6), while OCSSA5 has the best mean value on F7. From the perspective of standard deviation, OCSSA4 achieved the 5 best results, and OCSSA3 and OCSSA5 each also obtain optimal standard deviations among the 7 test functions. It is worth mentioning that OCSSA4 achieved the theoretical optimal values on the F1 and F3 test functions. Table 5 shows the performance of the original SSA and the proposed algorithm on the multimodal functions. In terms of mean value, OCSSA4 performed best, achieving 4 statistical bests on the 6 test functions (F9–F12); OCSSA9 ranked second with 3 optimal values (F9, F11, F13); OCSSA2 (F9, F11) and OCSSA7 (F8, F11) each have two optimal mean values, while OCSSA5 and OCSSA8 both obtain the best average value on F8. From the perspective of standard deviation and statistical optimality, OCSSA4 is still the best-performing improved algorithm.
It can be seen from the results of Tables 4 and 5 that OCSSA4, which incorporates the sinusoidal chaotic map, performs best among the different improved versions of the OCSSA algorithm. The improved algorithm has a better optimization effect than the original SSA on both unimodal and multimodal functions.

2) DIVERSITY OF THE ALGORITHMS
In order to evaluate the effect of chaotic local search and opposition-based learning on the exploration and exploitation of OCSSA, diversity plots are presented in Fig. 1. A diversity plot shows the average distance between search agents during the optimization process; a large average distance indicates high population diversity, and vice versa [28]. As Fig. 1 shows, OCSSA4 maintains high population diversity in the initial phases of the optimization process. This allows OCSSA4 to avoid local optima and to converge toward the global optimum to obtain a more accurate solution. At the same time, OCSSA4 has a smaller population diversity during the exploitation stage, which means that OCSSA4 can achieve higher accuracy than the original SSA; this conclusion is confirmed in the previous section. It should be noted that during the exploration phase, the diversity of OCSSA is lower than that of the original SSA. This can be explained: OCSSA introduces chaotic local search and opposition-based learning during population initialization, which yields a population that is already closer to the optimum and therefore less diverse. Although Fig. 1 shows little difference in convergence speed between the two, from the perspective of convergence accuracy the proposed algorithm performs better, which means that it has a stronger ability to find the global optimal solution.

3) COMPUTATIONAL COMPLEXITY AND RUNTIME
The computational complexity of an optimization algorithm is a key metric for evaluating its runtime and can be derived from the structure of the algorithm. The computational complexity of SSA depends on the number of salps, the dimension of the problem, and the maximum number of iterations. By analyzing the steps of the algorithm, the computational complexity of the original SSA is O(t(d·n + Cof·n)). OCSSA adds CLS and OBL in the population initialization phase and the iterative optimization process, respectively. In the population initialization phase, the time complexity of OCSSA is O(d·n + Cof·n), and the time complexity of the optimization iterations is O(t(d·(n + maxC) + Cof·n)). Combining the two parts, the overall time complexity of OCSSA is O(t(d·(n + maxC) + Cof·n)), where t is the number of iterations, d is the number of dimensions, n is the number of search agents, Cof is the cost of the objective function, and maxC is the maximum iteration number of CLS.
It can be concluded that the main limitation of the proposed algorithm is its computational complexity, which needs to be reduced. This complexity results from two components: 1) the OBL strategy and 2) the CLS method, since both are applied to the whole population.
In addition, we analyzed the two algorithms in terms of execution time; the detailed results are shown in Table 6. In this table, the runtime of each algorithm is the average over the 50 runs, and SR represents the probability that the algorithm reaches the ideal global optimum. From the runtimes on the 13 test functions, it can be seen that the execution time of OCSSA4 is longer than that of the original SSA. The main reason is that OCSSA adds two methods to the original algorithm, so the longer runtime is expected. However, in terms of the optimal values obtained, OCSSA performs much better than the original SSA; we trade some execution time for improved optimization accuracy.

E. COMPARISON OF OCSSA WITH OTHER META-HEURISTIC ALGORITHMS
In this part, we use the two benchmark suites (the CEC2005 and CEC2015 functions) to evaluate the performance of the proposed algorithm and compare it with several other algorithms, including ABC, DE, GWO, PSO, and WOA. Among the 10 versions of OCSSA, we choose the fourth, OCSSA4, as the representative algorithm of this paper. Meanwhile, to further analyze our results, a nonparametric Wilcoxon's rank-sum (WRS) test is used to determine statistically whether the two algorithms being compared are significantly different. Experimental data and discussion are presented in detail in the following sections.

1) NUMERICAL CHARACTERISTICS
The comparison results of OCSSA4 and the other meta-heuristic algorithms on the two benchmark suites are shown in Tables 7 and 8; Figures 2 and 3 show the corresponding convergence curves. Table 7 shows the results of OCSSA4 and the other optimization algorithms on the classical CEC2005 functions, with the best value in each evaluation criterion in bold. The simulation results in Table 7 show that OCSSA4 has the best mean value on 11 benchmark functions (F1–F7, F9–F12), while the WOA and ABC algorithms rank second, achieving the ideal mean value on F8 and F12, respectively. From the perspective of the best value, OCSSA4 obtained the optimal value on 11 test functions (F1–F7, F9–F12), while WOA and GWO obtained the optimal value on four (F8–F11) and two (F9, F11) benchmark functions, respectively. Comparing the standard deviations, OCSSA4 still has a large advantage, while DE, ABC, and WOA each obtain the optimal standard deviation on one test function. Table 8 displays the performance of OCSSA4 and the other optimization algorithms on the CEC2015 benchmark functions. In terms of mean value, OCSSA4 performs best, achieving the best result on 14 functions (F1–F13, F15); second is the ABC algorithm, which obtains the best average value on F14. The table also shows that OCSSA4 obtained the optimal standard deviation on 10 test functions (F1, F2, F4, F6–F10, F12, F15). Second was ABC, which performed best on three test functions (F5, F13, F14), while WOA and DE obtained the best standard deviation on F11 and F3, respectively. In terms of the statistically optimal values the algorithms can obtain, OCSSA4 performs best on the majority of test functions compared with the other algorithms.
The experimental results also indirectly confirm the No Free Lunch (NFL) theorem [29]: no algorithm can perform optimally on all optimization problems.
To clearly observe and analyze the convergence curves of OCSSA and the other algorithms, each algorithm was run 50 times independently, and convergence performance was tested on the two sets of benchmark functions. Fig. 2 shows the convergence performance on the CEC2005 functions. On the unimodal functions (F1, F7), OCSSA converges faster than the other algorithms, and on the multimodal functions (F9, F11), although WOA and GWO eventually reach the ideal optimal value, the convergence speed of OCSSA is still the fastest. Fig. 3 displays the convergence curves on the CEC2015 functions; the convergence speed of OCSSA remains comparatively good.
Synthesizing the performance of OCSSA and the other algorithms, it can be concluded that OCSSA performs very well in terms of both convergence accuracy and convergence speed. The main reason lies in the two introduced methods. During the initialization stage, the chaotic transformation of the population positions, combined with OBL, makes the initial population better than randomly generated positions, which accelerates convergence to a certain extent. During the optimization process, CLS prevents the algorithm from falling into local optima, and OBL speeds up convergence.

2) STATISTICAL TEST
To show that the performance of the OCSSA4 algorithm is significantly different from that of the other algorithms on the test functions, this section applies Wilcoxon's rank-sum test. The detailed results are shown in Tables 9 and 10 for the CEC2005 and CEC2015 benchmark functions, respectively. These two tables show the p-values and H-values for the compared algorithms. When the p-value is less than 0.05, H = 1, meaning the null hypothesis is rejected and the two algorithms differ significantly. Conversely, when the p-value is greater than 0.05, the null hypothesis cannot be rejected, H = 0, and the difference between the two is not significant. From Table 9, it can be concluded that, except on F5, OCSSA4 differs significantly from the other algorithms on most test functions. On the CEC2015 test functions in Table 10, OCSSA4 also shows significantly different performance from the other algorithms.
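For reference, the rank-sum decision can be reproduced with a small stdlib implementation (a normal-approximation version, adequate for 50-run samples; MATLAB's ranksum or SciPy's ranksums would be the usual tools, and this sketch is an assumption-laden stand-in):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (ties receive average ranks). Returns (p-value, H decision)."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):      # average rank for the tie group
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return p, int(p < 0.05)

runs_a = [float(i) for i in range(50)]          # e.g. one algorithm's results
runs_b = [float(i) + 100.0 for i in range(50)]  # e.g. a rival's results
print(rank_sum_p(runs_a, runs_b)[1])  # 1: significantly different
```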

F. REAL WORLD APPLICATIONS
This section applies the proposed OCSSA to a real-world problem, the tension/compression spring design problem. The optimal solution obtained by the algorithm must not violate the problem's constraints.
The tension/compression problem consists of minimizing the weight f(x) of a tension/compression spring (Fig. 4) subject to constraints on minimum deflection, shear stress, surge frequency, and the outside diameter, as well as on the design variables. The problem has three decision variables: the wire diameter d, the mean coil diameter D, and the number of active coils N.
Formally, with x = (x_1, x_2, x_3) = (d, D, N), the problem is commonly expressed as

minimize f(x) = (x_3 + 2) x_2 x_1^2
subject to
g_1(x) = 1 − x_2^3 x_3 / (71785 x_1^4) ≤ 0
g_2(x) = (4x_2^2 − x_1 x_2) / (12566(x_2 x_1^3 − x_1^4)) + 1/(5108 x_1^2) − 1 ≤ 0
g_3(x) = 1 − 140.45 x_1 / (x_2^2 x_3) ≤ 0
g_4(x) = (x_1 + x_2)/1.5 − 1 ≤ 0

with variable ranges 0.05 ≤ x_1 ≤ 2.00, 0.25 ≤ x_2 ≤ 1.30, and 2.00 ≤ x_3 ≤ 15.0. This test case has been solved with mathematical techniques (for example, constraints correction at constant cost [30] and penalty functions [31]) and with meta-heuristic techniques such as PSO [32], WOA, DE [33], GWO, and ABC [34]. The comparison of the results of these techniques and OCSSA is provided in Table 11. A penalty-function constraint-handling strategy was applied in order to perform a fair comparison with the literature [35]. It can be seen from Table 11 that OCSSA finds a design with a low weight for this problem, performing better than most algorithms except ABC and GWO.
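A sketch of the objective and penalty handling for this benchmark (the constraint forms are as commonly stated in the literature, the penalty weight rho is an assumed illustrative value, and the names are hypothetical):

```python
def spring_weight(x):
    """Objective: spring weight f(x) = (N + 2) * D * d^2 with
    x = (d, D, N) = (wire diameter, coil diameter, active coils)."""
    d, D, N = x
    return (N + 2.0) * D * d * d

def spring_constraints(x):
    """The four constraints g_i(x) <= 0 in their commonly stated form."""
    d, D, N = x
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return [g1, g2, g3, g4]

def penalized(x, rho=1e6):
    """Static penalty handling: weight plus rho * sum(max(0, g_i)^2)."""
    viol = sum(max(0.0, g) ** 2 for g in spring_constraints(x))
    return spring_weight(x) + rho * viol

x = (0.052, 0.36, 11.5)                # a feasible design near the optimum
print(all(g <= 0.0 for g in spring_constraints(x)))  # True
```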

V. CONCLUSION
In this article, a novel OCSSA algorithm combining OBL and CLS strategies is proposed to solve global optimization problems. OBL is introduced in the proposed algorithm to move candidate solutions closer to the global optimum, and CLS is employed to exploit promising regions of the search space. The simulation results show that OCSSA performs better than SSA on 28 benchmark functions (from the CEC2005 and CEC2015 suites) and maintains a fair balance between exploration and exploitation, which makes it robust. It can also be observed that the OCSSA variant combined with the sinusoidal chaotic map is the most competitive. Moreover, the best-performing version of OCSSA was compared with other meta-heuristics and shows superiority in terms of optimization accuracy and convergence speed. In addition, we used Wilcoxon's rank-sum test for statistical analysis, and the results indicate that the proposed algorithm is significantly different from the other algorithms on most benchmark functions. Finally, we applied OCSSA to a classic engineering problem, and the results show that the algorithm can solve real-world problems well. Future work may include adjusting the SSA control parameters to further optimize performance; additional chaotic maps are also worth applying to OCSSA.

YANPENG CUI is currently pursuing the master's degree with the Xi'an University of Posts and Telecommunications, Xi'an, China. His research interests include the technology and application of the Internet of Things.