A dual biogeography-based optimization algorithm for solving high-dimensional global optimization problems

Biogeography-based optimization (BBO) cannot effectively solve high-dimensional global optimization problems due to its single migration mechanism and random mutation operator. To overcome these limitations, this paper proposes a dual BBO based on the sine cosine algorithm (SCA) and dynamic hybrid mutation, named SCBBO. Firstly, the Latin hypercube sampling method is used to improve the ergodicity of the initial population. Secondly, a nonlinear transformation parameter and an inertia weight adjustment factor are designed into the position update formula of SCA to make SCBBO suitable for high-dimensional environments. Then, a dynamic hybrid mutation operator is designed by combining Laplacian and Gaussian mutation, which helps the algorithm escape from local optima and balance exploration and exploitation. Finally, a dual learning strategy is integrated, so that convergence accuracy is further improved by generating dual individuals. Meanwhile, a sequence convergence model is established to prove that the algorithm converges to the global optimal solution with probability 1. Compared with other state-of-the-art evolutionary algorithms, SCBBO effectively improves optimization accuracy and convergence speed on high-dimensional optimization problems. To further show the superiority of SCBBO, its performance is compared on 1000, 2000, 5000 and 10000 dimensions, respectively. The comparisons show that SCBBO's optimization results on these dimensions are essentially the same. SCBBO is also applied to engineering design problems, and the simulation results demonstrate that the proposed method is effective on constrained optimization problems as well.


I. INTRODUCTION
Optimization problems are ubiquitous in engineering, science, economics and society, arising in system control, mechanical design, network design, large-scale integrated circuit design, biopharmaceuticals and economic modeling. The application of optimization technology in these domains has produced enormous economic and social benefits. Practice shows that, under the same conditions, system efficiency, resource usage, economic benefits and other aspects are significantly improved through optimization techniques, and the larger the scale of the problem, the more obvious the effect. With the development of engineering technology and science, optimization problems tend to be large-scale, multimodal, nonlinear and strongly constrained. The objective function may be discontinuous and non-differentiable, and some problems have no explicit functional form at all. Traditional optimization methods, such as the Newton method, quasi-Newton method, conjugate gradient method, variable metric method and tunneling method, are no longer applicable due to the following problems: a) they require a continuous and differentiable objective function; b) before the algorithm can run, first-order or even higher-order derivatives and inverse matrices of the function are required, and the more complex the objective function, the more computation this demands; c) the results are closely tied to the choice of initial values; d) the algorithms lack generality, so users must select the most appropriate method for each specific problem. To address these issues, many scholars have drawn inspiration from nature and designed population-based algorithms that imitate biological mechanisms or natural phenomena, known as swarm intelligence evolutionary algorithms (EAs). EAs are not bound by restrictions such as differentiability or continuity
and do not need derivatives or other auxiliary information. They are efficient, simple to operate and broadly applicable, meeting the objective requirements of current optimization problems. Therefore, EAs have become the mainstream optimization approach.
In the past 40 years, swarm intelligence EAs have developed rapidly and have been widely used in communication [1], finance [2], power grids [3], the military [4], control systems [5], [6] and other fields. Among them, biogeography-based optimization (BBO), proposed by Dr. Simon of the United States in 2008, is a relatively new swarm heuristic algorithm [7]. Simon used the mechanisms of biological migration and information interaction between habitats to build a mathematical model that realizes the optimization process. BBO has the advantages of few parameters, easy implementation and good stability when searching for the global optimal solution, so it has been widely favored by scholars all over the world since it was proposed. To this day, many scholars continue in-depth research on the algorithm, improving it or applying it to practical problems and achieving notable results.
In the field of algorithm improvement, work has mainly proceeded in three directions. The first direction is to adjust the migration rate model of BBO. Ma [8] designed six migration rate models, found that the cosine migration model has the best optimization performance, and concluded that nonlinear migration models far outperform linear ones. Inspired by the cosine migration model, Wei et al. [8] designed a more complex hyperbolic sine cosine migration model, which further improves the optimization performance of BBO. The second direction is to improve the operators of BBO, whose framework is mainly composed of three operators: selection, migration and mutation. For the selection operator, Feng et al. [9] designed a selection operator with a random ring topology in 2017, which reduces the possibility of better solutions being destroyed by inferior solutions. Later, Zhang et al. [10] deleted the original roulette selection operator of BBO in 2019 and adopted an example learning method to select better habitats for migration, thereby improving the convergence accuracy of the algorithm. An et al. [11] put forward a probabilistic selection operator based on non-dominated sorting in 2021, so that BBO can effectively solve the multi-objective flexible job-shop scheduling problem. For the migration operator, Literature [12] designed an enhanced biogeography-based optimization referred to as POLBBO. In POLBBO, an efficient polyphyletic migration operator is proposed, which can not only generate new features from more promising areas of the search space but also effectively increase population diversity. Then, Bansal et al. [13] proposed a new disruption operator to improve the exploration and exploitation capability of BBO; the resulting algorithm is called DisruptBBO (DBBO). Literature [14] designed a disturbed migration operator and obtained PBBO.
PBBO increases the local exploitation capability of BBO. Literature [15] designed a novel BBO integrating an opposition-based learning mechanism (OBBO). In OBBO, opposite individuals are merged into the BBO population to improve diversity, and its optimization performance is clearly better than that of standard BBO. Recently, Reihanian et al. [16] introduced a new two-stage migration operator into the framework of BBO to enable the algorithm to search the problem space effectively. For the mutation operator, the harmony search (HS) [18] process was added to the mutation operator of BBO in literature [17], yielding HSBBO, which not only effectively increases population diversity but also improves convergence accuracy. Zheng et al. [19] deleted the random mutation operator of BBO outright and adopted a differential mutation mechanism to search effectively, thus improving the algorithm's ability to develop new solutions. The last direction of improvement is to integrate BBO with other EAs. Literature [20] presented a biogeography-based krill herd (BBKH) algorithm to solve complex optimization problems. Literature [21] proposed BBOTS, based on the tabu search (TS) algorithm [22]; it stores performed migrations in a tabu table and prohibits reverse migration of populations to previous habitats. Yogesh et al. [23] integrated particle swarm optimization (PSO) [24] into BBO and applied it to speech signal and emotional stress recognition. Zhang et al. [25] presented a novel hybrid algorithm based on BBO and the grey wolf optimizer (GWO) [26], named HBBOG. In the past two years, with the continuous emergence of new EAs, many scholars have integrated them into BBO. Farswan et al. [27] fused the fireworks algorithm (FWA) [28] with BBO to obtain FBBO, which has two search mechanisms. Then, Hamid [29] merged the firefly algorithm (FA) [30] with BBO in 2021, obtaining the hybrid algorithm FABBO.
FABBO is essentially a two-stage method. In the first stage, FA is used for preliminary optimization, and some better solutions are found by searching the problem space through a limited iteration. In the second stage, BBO is used to conduct more refined search for these better solutions to obtain the optimal solution with higher accuracy.
In the field of applications, BBO has already been used in many areas. For example, literature [31] solved function optimization problems with discrete variables by improving BBO. Literature [32] proposed a novel BBO based on a population competition strategy to solve the substation location problem. Then, Pal and Saraswat [33] introduced an innovative method for the categorization of histopathological images using an enhanced bag-of-features framework. To obtain the optimal visual words in the bag-of-features, they proposed a new spiral BBO variant which introduces a spiral search and a random search into the mutation operator to generate the suitability index variables. By 2021, BBO had been applied to fields such as industrial production and financial optimization. For instance, Rostami et al. [34] designed an optimal feature selection method for SAR image classification based on BBO, the artificial bee colony (ABC) and support vector machines (SVM). Harrabi et al. [35] designed a hybrid BBO algorithm to solve job-shop scheduling problems with general time delays. Literature [36] also proposed a hybrid meta-heuristic method based on BBO and PSO to estimate the currency demand in Iran. Recently, Taghizadeh et al. [37] proposed a metaheuristic-based data replica placement mechanism using BBO for data-intensive IoT applications in the fog ecosystem.
Although BBO is easy to implement and has few parameters, it easily falls into local optima and has difficulty escaping them [38]. Especially in the late stage of evolution, the convergence rate of BBO is very slow. Improvements by scholars worldwide have reduced the chance of BBO stalling in a local optimum and improved the algorithm to some extent [39]. However, the convergence speed and optimization accuracy of BBO still need improvement; in particular, the search speed in late evolution has not been effectively increased. Moreover, according to our extensive investigation, present BBO variants are not effective at solving high-dimensional optimization problems. We reviewed hundreds of studies on BBO from the last decade and found that none of them solved problems with more than 100 dimensions; the vast majority of variants have only been tested on 30, 50 or 100 dimensions. With the progress of society, practical problems place ever higher requirements on algorithms. To achieve a breakthrough in this field, and aiming at the shortcomings of BBO, this paper proposes a dual BBO with the sine cosine algorithm (SCA) [64] and a dynamic hybrid mutation mechanism. As it integrates another evolutionary algorithm, it mainly belongs to the third improvement direction above, so we name it SCBBO. SCBBO improves the original migration and mutation operators, and integrates SCA and a dual learning strategy. These improvements adapt BBO to high-dimensional optimization environments and greatly improve its convergence performance. The main contributions of this paper are as follows: (1). SCA and BBO are combined into a hybrid migration algorithm. At the same time, a dynamic hybrid mutation operator is designed to effectively balance the exploration and exploitation of the algorithm.
In addition, this paper designs a dual learning strategy and combines it into BBO for the first time. A sequence convergence model is established to prove the convergence of SCBBO; to the best of our knowledge, this proof is new.
(2). SCBBO can solve global optimization problems of up to 10000 dimensions. To further show its superiority on high dimensions, we test SCBBO's optimization ability on 1000, 2000, 5000 and 10000 dimensions. The results show that the proposed algorithm has good stability and excellent optimization ability in high-dimensional environments.
(3). SCBBO is applied to engineering design optimization, solving the pressure vessel design, tension/compression spring design and welded beam design problems. Comparing the results with those of other literature and algorithms, we conclude that SCBBO has better applicability and optimization capability on engineering design problems.
The remaining sections of this paper are organized as follows: Section II introduces the standard BBO and its calculation process in detail. In Section III, the algorithm SCBBO is proposed. In Section IV, the global convergence of SCBBO is proved. In Section V, the computational complexity of SCBBO is analyzed. Section VI contains the numerical experiments and analysis of results. Section VII applies SCBBO to engineering design optimization problems. Finally, Section VIII summarizes the work and points out directions for future research. The graphical abstract of this paper is shown in FIGURE 1.

II. STANDARD BBO
Simon proposed the BBO in 2008. The basic idea is that biological populations live in different habitats and are affected by rainfall, vegetation diversity, geological diversity, climate and so on. The suitability of each habitat is different, and biological populations are distributed and migrated accordingly. In an optimization problem, a habitat corresponds to a candidate solution, and the habitat suitability index (HSI) corresponds to the objective function value of the candidate solution. The aforementioned factors affecting HSI are called suitability index variables (SIVs), which correspond to independent variables of candidate solutions. If the candidate solutions are considered as individuals in the population, the good individuals are like the habitats with high HSI, and the bad individuals are like the habitats with low HSI. Good individuals are more likely to share their independent variables with poor individuals, and poor individuals are more likely to accept the characteristics of good individuals. The addition of new features may improve the quality of individuals, and obtain a better function target value, which is the mathematical idea of BBO [38], [39]. BBO is an EA, which is mainly accomplished by the following three steps.

A. INITIALIZATION
BBO uses Eq. (1) to randomly generate NP habitats as the initial population, each habitat containing D variables:

x_ij = x_jmin + rand(0,1) · (x_jmax − x_jmin)    (1)

where i = 1, 2, ..., NP; j = 1, 2, ..., D; x_ij is the j-th variable of habitat x_i, so x_i = (x_i1, x_i2, ..., x_iD); and x_jmax and x_jmin are the upper and lower limits of the j-th variable, respectively. After population initialization, the HSI of each habitat can be calculated from the fitness function of the actual problem.

FIGURE 1: Graphical abstract of this paper
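As a concrete illustration, the initialization step can be sketched in a few lines of NumPy; the function name and the population sizes below are illustrative, not taken from the paper.

```python
import numpy as np

def init_population(NP, D, x_min, x_max, rng=None):
    """Generate NP habitats with D variables each, drawing every
    variable uniformly between its lower and upper limits as in Eq. (1)."""
    rng = rng or np.random.default_rng()
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.random((NP, D)) * (x_max - x_min)

# Example: 50 habitats in a 10-dimensional search space over [-5, 5]^10.
pop = init_population(50, 10, [-5.0] * 10, [5.0] * 10)
```

Each row of `pop` is one habitat x_i; its HSI would then be obtained by evaluating the problem's fitness function on that row.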

B. MIGRATION
The population is sorted in descending order by HSI, so each original habitat x_i is assigned a new index i. The species number P_i of the sorted habitat x_i can be calculated using Eq. (2):

P_i = S_max − i    (2)

where S_max is the maximum number of species and is given as an initial value. The immigration and emigration rates are calculated from the species number of each habitat. Habitats with high HSI have a high probability of sharing features with habitats with low HSI to improve the quality of the latter. At the same time, habitats with high HSI resist change, so their immigration rate is low [7]. Therefore, habitats with high HSI have higher emigration rates and lower immigration rates than habitats with low HSI, and vice versa. In general, the migration process follows a migration rate model. The original BBO used a linear migration model to calculate the immigration and emigration rates, but literature [8] proved that complex migration models have better optimization performance than the linear model. This paper therefore adopts the cosine migration model, which is more consistent with natural law: compared with the linear model, it better reflects the nature of ecosystem migration and increases species diversity. The immigration rate λ_i and emigration rate µ_i of habitat x_i are calculated by Eq. (3):

λ_i = (I/2) · (cos(π·P_i/S_max) + 1)
µ_i = (E/2) · (−cos(π·P_i/S_max) + 1)    (3)

where I is the maximum immigration rate and E is the maximum emigration rate, both of which are given as initial values.
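Assuming species counts of the form P_i = S_max − i for the sorted population (with S_max taken as NP), the cosine migration rates could be computed as follows. This is a sketch of the cosine model described above, not the paper's code.

```python
import numpy as np

def cosine_migration_rates(NP, I=1.0, E=1.0):
    """Cosine migration model: habitats are sorted by HSI in descending
    order, so rank 1 is the best habitat and holds the most species."""
    rank = np.arange(1, NP + 1)
    P = NP - rank                                       # species counts (S_max taken as NP)
    lam = (I / 2.0) * (np.cos(np.pi * P / NP) + 1.0)    # immigration rates
    mu = (E / 2.0) * (-np.cos(np.pi * P / NP) + 1.0)    # emigration rates
    return lam, mu
```

With I = E, the two rates sum to I for every habitat, and the best habitat gets the lowest immigration rate and the highest emigration rate, as the model requires.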
For each habitat x i , the characteristic variables to be immigrated should be determined according to λ i during the migration process. The specific operation is to generate a random number between (0,1) for each variable of habitat x i . If it is smaller than λ i , this variable needs to be replaced. Then, in the remaining NP-1 habitats, the habitat x k to be emigrated is determined by roulette according to µ k . Finally, the variable of x k is used to replace the corresponding variable of x i . Algorithm 1 shows the BBO migration process.

Algorithm 1 The migration operator of BBO
for i = 1 to NP
    for j = 1 to D
        if rand(0,1) < λ_i
            select habitat x_k by roulette according to µ_k
            x_ij = x_kj
        end if
    end for
end for
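A minimal Python sketch of the migration step described above (names are illustrative; a production version would vectorize the roulette selection):

```python
import numpy as np

def migrate(pop, lam, mu, rng=None):
    """BBO migration: each variable of habitat i whose random draw falls
    below lam[i] is replaced by the same variable of a habitat k chosen
    by roulette over the emigration rates mu (excluding i itself)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        for j in range(D):
            if rng.random() < lam[i]:
                weights = np.asarray(mu, dtype=float).copy()
                weights[i] = 0.0                      # never emigrate from itself
                k = rng.choice(NP, p=weights / weights.sum())
                new_pop[i, j] = pop[k, j]
    return new_pop
```

Note that every new variable value is copied from the same column of some other habitat, so migration recombines existing features rather than creating new ones.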

C. MUTATION
Catastrophic events (e.g. famine, natural disasters) that suddenly change the HSI of a habitat, outbreaks that cause a species to move to another habitat, or genetic mutations that directly create a new species are all referred to as mutation. Firstly, the species probability P_i of each habitat is calculated from the immigration rate λ_i and emigration rate µ_i through Eq. (4).
The mutation rate of a habitat is inversely proportional to its species probability [38]. Therefore, the relationship between the mutation rate m_i and the species probability P_i of each habitat is as follows:

m_i = m_max · (1 − P_i / P_max)    (5)

where m_max is the maximum mutation rate, given as an initial value, and P_max is the largest species probability in the population. For each habitat x_i, a number in (0,1) is randomly generated; if it is smaller than the mutation rate m_i, x_i is mutated. Then, for each independent variable of x_i, a random number within its range of values is generated to replace the original value. Algorithm 2 gives the mutation process of BBO.

Algorithm 2 The mutation operator of BBO
for i = 1 to NP
    if rand(0,1) < m_i
        for j = 1 to D
            update x_ij by Eq. (1)
        end for
    end if
end for
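The mutation step can be sketched as follows, assuming the common BBO mutation-rate form m_i = m_max(1 − P_i/P_max); the function names are illustrative.

```python
import numpy as np

def mutation_rates(P, m_max=0.01):
    """Mutation rate inversely proportional to species probability, Eq. (5)."""
    P = np.asarray(P, dtype=float)
    return m_max * (1.0 - P / P.max())

def mutate(pop, m, x_min, x_max, rng=None):
    """Random mutation (Algorithm 2): a habitat selected with probability
    m[i] has all of its variables redrawn uniformly within the bounds,
    exactly as in the initialization formula Eq. (1)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        if rng.random() < m[i]:
            new_pop[i] = x_min + rng.random(D) * (x_max - x_min)
    return new_pop
```

Because mutated habitats are redrawn uniformly, mutation injects diversity but is blind to fitness, which is the defect the paper's dynamic hybrid mutation later addresses.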

D. THE CALCULATION PROCESS OF BBO
Algorithm 3 presents the detailed calculation process of BBO.

Algorithm 3 The calculation process of BBO
initialize the population by Eq. (1) and calculate the HSI of each habitat
while the stopping criterion is not met
    sort the population by HSI and calculate λ_i and µ_i by Eqs. (2)-(3)
    for each habitat, calculate the P_i by Eq. (4), the m_i by Eq. (5)
    perform the migration operator by Algorithm 1
    perform the mutation operator by Algorithm 2
end while
output the optimal solution

E. DISCUSSION OF BBO
Since the main work of this paper is to use BBO to solve high-dimensional global optimization problems, we mainly discuss the behavior of BBO on high-dimensional numerical problems. Analyzing BBO in this setting explains our motivation for improving it. We then analyze why the present BBO is unsuited to high-dimensional optimization environments, which points the direction for the improvements that follow.
The advantages of BBO: (1). Unlike other evolutionary algorithms (e.g. the genetic algorithm, differential evolution or the immune algorithm), BBO does not need breeding or crossover to produce the next generation, which greatly reduces the complexity of the algorithm. Even when solving high-dimensional numerical problems, it does not consume much memory.
(2). BBO does not require complex parameter tuning like particle swarm optimization, ant colony optimization or artificial bee colony algorithm. BBO's parameters are basically fixed and do not need to be reset according to the nature of the problem. Therefore, the parameters do not affect the convergence performance of the algorithm when solving high-dimensional optimization problems.
(3). BBO has good utilization ability of population information. It uses information from current population to migrate species and evolve. Therefore, on the high-dimensional environment, the population can still complete feature sharing in all dimensions, so as to evolve towards the optimal solution.
The defects of BBO: (1). According to Eq. (1), BBO uses random initialization to generate initial population. This method makes it difficult to disperse the population in high dimensional space. The population has no ergodicity, so the algorithm converges slowly on solving high dimensional problems.
(2). BBO uses roulette selection to choose habitats for emigration, and so cannot prevent inferior individuals from immigrating into superior ones. If habitat x_i is selected for immigration, it may well receive features from a habitat x_j with j > i, meaning that a habitat with lower HSI migrates its features into a habitat with higher HSI; the features of poorer individuals then replace those of better individuals, reducing the fitness of the superior ones. Therefore, BBO's random selection reduces population diversity and is not suitable for high-dimensional environments.
(3). BBO's search capability is weak. BBO searches the problem space using information sharing between species. This mechanism works well in low-dimensional environments, but when solving high-dimensional problems, a large number of new individuals is needed to search the space, and the new solutions generated by the migration operator alone are far from enough. (4). BBO uses random mutation to escape local optima. However, for individuals with high fitness, random mutation can easily destroy them, producing worse individuals and lower population diversity. This mutation method is blind and cannot guarantee movement toward the optimal solution; in high-dimensional optimization problems, blind search makes it difficult to find the global optimum. (5). BBO cannot balance exploration and exploitation effectively. It relies only on the substitution of a few variables to search the problem space, so the algorithm cannot effectively switch between local and global search in high-dimensional space.

III. PROPOSED ALGORITHM (SCBBO)
Although many variants of BBO have been put forward by scholars from various countries, these variants still have many defects when solving complex problems. In particular, for high-dimensional global optimization problems, no existing variant can solve them effectively. In this section, some existing BBO variants are studied in depth and their defects are analyzed. Based on the defects of BBO and its variants, a dual BBO based on the sine cosine algorithm and a dynamic hybrid mutation mechanism is proposed and named SCBBO. We introduce the design principle and calculation process of SCBBO in detail.

A. LATIN HYPERCUBE SAMPLING METHOD
The convergence speed and accuracy of the algorithm are affected by the quality of the initial population. As discussed in subsection II-E, the initial population of standard BBO is randomly generated, so neither the diversity nor the rationality of its distribution in the search space can be guaranteed. Most BBO variants also use random initialization to generate initial populations, which is a main reason why they are not suitable for high-dimensional environments. At present, improvements to population initialization mainly use chaotic mapping [40], [41]. However, chaotic mapping can only reduce the number of individuals distributed in the edge region of the search space; it cannot effectively improve ergodicity and increases the amount of computation. In this paper, the Latin hypercube sampling method is introduced to generate a more uniform distribution of initial points without additional computation.
Latin hypercube sampling is a multi-dimensional stratification technique that can efficiently sample across the distribution interval of the variables [42]-[44]. The essence of this method is to divide the interval into N equally spaced, non-overlapping sub-intervals and to draw one independent, equal-probability sample from each sub-interval, ensuring that the sampling points are evenly distributed over the whole interval. Taking the interval [0,1] as an example, random sampling and Latin hypercube sampling are carried out respectively. With a small number of samples, random sampling does not disperse the population well over the whole interval. FIGURE 2 compares the distributions of Latin hypercube sampling and random sampling, with 10 points extracted from [0,1] by each method. It can be observed that Latin hypercube sampling spreads over the entire space even for a small number of samples. Therefore, this paper uses the Latin hypercube sampling method to generate the initial population and improve its ergodicity.
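A compact sketch of Latin hypercube initialization (illustrative code; libraries such as SciPy also provide `scipy.stats.qmc.LatinHypercube` for the same purpose):

```python
import numpy as np

def latin_hypercube(NP, D, x_min, x_max, rng=None):
    """Split each dimension into NP equal strata, draw one uniform sample
    per stratum, and permute the strata independently per dimension so
    the dimensions are decoupled."""
    rng = rng or np.random.default_rng()
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # One point per stratum: u[i, j] lies in [i/NP, (i+1)/NP).
    u = (rng.random((NP, D)) + np.arange(NP)[:, None]) / NP
    for j in range(D):
        u[:, j] = u[rng.permutation(NP), j]   # shuffle strata per dimension
    return x_min + u * (x_max - x_min)
```

Unlike plain random sampling, every one of the NP sub-intervals of each dimension contains exactly one point, which is precisely the ergodicity property the paper exploits.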

B. HYBRID MIGRATION OPERATOR BASED ON SCA
The migration strategy has a great influence on the search performance of BBO. Although the discrete migration mechanism of BBO can effectively utilize current population information, direct substitution of solution variables makes migration blind: a variable of a better solution is likely to be replaced by one from an inferior solution, reducing the quality of the population and weakening the algorithm's ability to mine new solutions. Moreover, the BBO variants proposed by other scholars still have drawbacks. For example, although PRBBO [9] reduces the possibility of better solutions being destroyed by inferior ones, its random ring topology selection operator allows migration only between adjacent habitats, which greatly reduces the ability to utilize population information. EMBBO [10] adopts example learning to select a habitat better than the current one for migration, improving convergence accuracy; however, the whole population then moves toward the current best solution and is prone to falling into local optima. TDBBO [45] designs a two-stage differential migration operator that effectively balances the exploration and exploitation capabilities of the algorithm, but suffers from high computational complexity and slow convergence. Therefore, to enhance the search ability, a hybrid migration operator based on the sine cosine algorithm is proposed in this paper.

1) immigration refusal mechanism
From subsection II-E, standard BBO readily migrates features of inferior solutions into superior ones, destroying the superior habitats. To avoid this, we design an immigration refusal mechanism. The specific operation is to set a threshold τ on the emigration rate µ_k of habitat x_k: when µ_k is less than τ, habitat x_i rejects the variables from habitat x_k. The emigration rate of each habitat is proportional to its HSI, so the higher the emigration rate, the higher the HSI, that is, the better the objective function value. Setting the threshold τ ensures that all habitats used for emigration have high HSI, thereby protecting superior habitats. Taking a population of six habitats as an example, as shown in FIGURE 3, the threshold τ acts as a dividing line: habitats x_1, x_2, x_3, whose emigration rates exceed τ, can share variables with other habitats, while habitats x_4, x_5, x_6, whose emigration rates fall below τ, can only accept variables from others and cannot share information.
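The refusal check can be sketched as a small wrapper around roulette selection; the function name and the None-return convention are assumptions of this sketch, not the paper's interface.

```python
import numpy as np

def roulette_with_refusal(mu, i, tau, rng=None):
    """Roulette-select an emigrating habitat k != i over the emigration
    rates mu; habitat i refuses the immigration when mu[k] < tau, in
    which case None is returned and the caller can fall back to the
    convex migration operator instead."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(mu, dtype=float).copy()
    weights[i] = 0.0                          # a habitat cannot emigrate to itself
    k = int(rng.choice(len(weights), p=weights / weights.sum()))
    return k if mu[k] >= tau else None
```

In the six-habitat example above, only x_1, x_2, x_3 would ever be returned with an appropriately chosen τ; selections of x_4, x_5, x_6 are refused.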

2) Convex migration operator
The immigration refusal mechanism effectively prevents inferior solutions from damaging better ones, but it does not greatly improve the convergence speed and accuracy of BBO. Therefore, when the emigration rate µ_k of the habitat x_k selected by roulette is less than the threshold τ, a convex migration operator is adopted for the migration of habitat x_i. The features of x_i no longer simply copy those of x_k, but are replaced by a "convex combination" of x_k and the optimal solution of the current population x_best:

x_i = θ · x_k + (1 − θ) · x_best, θ ∈ (0, 1)    (6)

There are three reasons for adopting convex migration. Firstly, good individuals are less likely to degenerate as a result of migration, because some of their original characteristics are retained. Secondly, poor individuals accept at least part of the characteristics of good individuals during migration. Finally, such migration ensures that the population evolves toward the best value of each generation rather than searching blindly, and can therefore converge quickly. The parameter θ can be either deterministic or dynamically changing. Based on extensive experiments, this paper adopts a dynamically and randomly changing θ: because the current optimal solution x_best may be a local optimum, dynamic random adjustment of θ improves the probability of escaping local optima.
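One plausible reading of the convex migration step can be sketched as follows; the exact functional form of Eq. (6) and the function name are assumptions of this sketch.

```python
import numpy as np

def convex_migration(x_k, x_best, rng=None):
    """Replace habitat i's features by a convex combination of the
    roulette-selected habitat x_k and the current best habitat x_best,
    with theta redrawn at random on every call (the dynamic random
    strategy described in the text)."""
    rng = rng or np.random.default_rng()
    theta = rng.random()
    return (theta * np.asarray(x_k, dtype=float)
            + (1.0 - theta) * np.asarray(x_best, dtype=float))
```

By construction the result lies on the segment between x_k and x_best, so each migrated habitat moves toward the current best solution rather than copying an arbitrary donor.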

3) Sine-cosine migration operator
From subsection II-E, BBO migrates only the features of habitats whose random number is less than the immigration rate, so its search capability is weak. The features of most habitats remain unchanged, so population diversity does not increase significantly, which is the main reason the algorithm is not suited to high-dimensional optimization environments. Therefore, this paper proposes a migration method in the form of "sine and cosine function waves", which together with the immigration refusal mechanism and the convex migration operator constitutes the hybrid migration operator.
The sine cosine algorithm (SCA) [46] has a simple structure and is easy to implement. Its most significant feature is that it performs optimization based on the changing values of the sine and cosine functions. Each individual in SCA is updated through Eq. (7):

x_i(t+1) = x_i(t) + r_1 · sin(r_2) · |r_3 · P_i(t) − x_i(t)|,  if r_4 < 0.5
x_i(t+1) = x_i(t) + r_1 · cos(r_2) · |r_3 · P_i(t) − x_i(t)|,  if r_4 ≥ 0.5    (7)

where P_i(t) is the destination point of individual x_i, r_1 = a − at/MaxIt, r_2 is a random number in [0, 2π], r_3 is a random number in [0, 2], r_4 is a random number in [0, 1], and MaxIt is the maximum iteration number. To take full advantage of SCA's search capabilities, we analyzed SCA in depth. SCA mainly has four parameters (r_1, r_2, r_3, r_4). Among them, the most critical is the adaptive parameter r_1, which controls the transition of the algorithm from global search to local development. When r_1 is large, the algorithm tends toward global search; when r_1 is small, it tends toward local development. Thus, SCA uses the periodicity of sine and cosine for global search and local development. FIGURE 4 shows the graphs of r_1·sin(r_2) and r_1·cos(r_2) for a=2 with MaxIt=500 and MaxIt=1000.
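The SCA position update of Eq. (7) can be sketched as follows; this is a minimal NumPy version in which the whole population moves around a single destination point, with illustrative names.

```python
import numpy as np

def sca_update(x, dest, t, max_it, a=2.0, rng=None):
    """One SCA iteration: each individual takes a sine or cosine step
    (chosen by r4) around the destination point, with the amplitude r1
    shrinking linearly from a to 0 over the run."""
    rng = rng or np.random.default_rng()
    NP, D = x.shape
    r1 = a - a * t / max_it
    r2 = rng.uniform(0.0, 2.0 * np.pi, size=(NP, D))
    r3 = rng.uniform(0.0, 2.0, size=(NP, D))
    r4 = rng.random((NP, D))
    sin_step = r1 * np.sin(r2) * np.abs(r3 * dest - x)
    cos_step = r1 * np.cos(r2) * np.abs(r3 * dest - x)
    return x + np.where(r4 < 0.5, sin_step, cos_step)
```

At t = max_it the amplitude r1 reaches zero and the population stops moving, which illustrates why the shape of r1's decay schedule governs the exploration/exploitation balance discussed next.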
It can be seen from FIGURE 4 that when r_1 > 1, the values of r_1·sin(r_2) and r_1·cos(r_2) may be greater than 1 or smaller than −1, while when r_1 ≤ 1 they must lie between −1 and 1. According to the SCA design principle, the algorithm performs global search first and local search afterward. The volatility of r_1·sin(r_2) and r_1·cos(r_2) and their correspondence with the search strategy are shown in FIGURE 5: when |r_1·sin(r_2)| > 1 or |r_1·cos(r_2)| > 1, the algorithm performs global search; when |r_1·sin(r_2)| ≤ 1 or |r_1·cos(r_2)| ≤ 1, it performs local search. Since r_1 = a − at/MaxIt, once the iteration count t > (1 − 1/a)·MaxIt we have r_1 < 1 and the algorithm no longer performs global search. The original r_1 is therefore a monotonically decreasing function that does not balance the global and local search abilities well: in the middle and late stages, the algorithm mainly develops locally in a small area and easily falls into a local optimum. To overcome this, many scholars have studied modifications of r_1 [47]-[50]. Inspired by the waveform of the sine function, we propose a nonlinear amplitude regulating factor r_1*, calculated by Eq. (8).
It can be seen from Eq. (8) that r1* and the original r1 are both decreasing functions. However, owing to the fluctuation of the sine function, r1* improves the convergence accuracy of the algorithm on multi-modal and irregular problems while still satisfying the requirement of large values in the early iterations and small values in the late iterations. At the initial stage, r1* decreases slowly, which helps the population search for the optimal solution with a large step size and accelerates convergence. At the end of the iterations, r1* declines faster, which allows a more accurate search in the neighborhood of the optimum and improves the convergence accuracy. To fully demonstrate the effectiveness and superiority of the parameter r1*, we compare r1* with four other expressions of r1, as shown in TABLE 1. FIGURE 6 plots these parameters.
As can be seen from TABLE 1, under the same conditions, the values of the compared expressions at the halfway point of the iterations show that the global search ability increases successively while the local development ability weakens. From FIGURE 6, some expressions pay more attention to local search: the algorithm then searches for the optimal solution within a small range most of the time, so it easily falls into a local optimum. On the contrary, only a small portion of the values of other expressions (such as r1^(4)) lie within [0, 1], which indicates that the algorithm conducts global search most of the time; although this improves the search speed, the algorithm cannot search accurately in the neighborhood of the optimum, reducing the convergence accuracy. By contrast, the r1* proposed in this paper better balances the global search and local development capabilities: while ensuring the convergence speed, the algorithm can still search accurately within a small region.
According to Eq. (7), the original SCA uses the individual xi to guide itself, which leads to slow convergence. In addition, xi may have a low HSI, reducing the population quality and hampering the search. Therefore, this paper uses an elite guidance approach to update positions and thereby speed up convergence. Specifically, the guide is replaced with the optimal individual xbest of the current population, which accelerates the search through xbest. Furthermore, so that the position information of the current optimal individual xbest is gradually and fully utilized as the iterations proceed, we design a dynamic inertia weight ω inspired by the waveform of the cosine function, which keeps the algorithm from being limited to learning from the global optimum and improves the convergence accuracy. Eq. (9) defines the expression of the inertia weight ω.
where γ is the weight adjustment factor and its value is 0.5. FIGURE 7 shows how the inertia weight ω varies with the iterations. In the early stage of evolution, a larger weight is needed to move the individuals toward the optimal direction. In the middle of evolution, the inertia weight ω becomes smaller, which prevents the algorithm from falling into a local optimum prematurely and preserves the survival and development ability of individuals with low HSI. At the end of evolution, the population gradually no longer needs elite guidance, so as the inertia weight ω decreases, the position update mode gradually degenerates to the unguided update mode.
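Eq. (9) itself is not reproduced in this excerpt. Purely as an illustration of a cosine-shaped schedule that matches the description and FIGURE 7 (large early, declining mid-run, approaching 0 at the end), one could write the following; the exact expression here is an assumption, not the paper's formula:

```python
import math

GAMMA = 0.5  # weight adjustment factor gamma from the text

def inertia_weight(t, max_it, gamma=GAMMA):
    """A cosine-shaped inertia weight: starts at 1, decays smoothly to 0.
    NOTE: this particular expression is only an illustrative guess
    consistent with the description of Eq. (9) and FIGURE 7."""
    return gamma * (1.0 + math.cos(math.pi * t / max_it))
```

With γ = 0.5 the weight starts at 1 (full elite guidance) and ends at 0, where the update degenerates to the unguided mode described above.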
To sum up, for habitat xi, when the random number generated for a variable is greater than the immigration rate, that variable still needs to be migrated. The new position update mode is implemented through Eq. (10).
The migration operator of SCBBO is obtained by combining the immigration refusal mechanism, convex migration operator and sine-cosine migration operator. Algorithm 4 presents the migration process of SCBBO.

C. DYNAMIC HYBRID MUTATION OPERATOR
From subsection II-E, BBO cannot balance exploration and exploitation effectively. The standard BBO uses random mutation to generate new individuals, which is weak at producing new solutions with high HSI. Especially in the late stage of evolution, when the solution set is close to the theoretical optimum, random mutation not only struggles to discover better solutions but also easily produces more poor ones. This is one of the main reasons why BBO cannot effectively solve high-dimensional optimization problems. In recent years, many scholars have improved the BBO mutation operator, but the improvements all have different defects. For instance, MTBBO [51], proposed in 2020, divides the population into three grades and applies a different mutation to each grade. Although this effectively improves convergence accuracy, it also increases the computational complexity: the algorithm must perform three mutation operators on the population, which consumes a lot of time. Both PRBBO [9] and HGBBO [52] adopted Gaussian mutation to help BBO improve population diversity. However, the step size of Gaussian-distributed random numbers is short, which does little to help the algorithm escape from local optima. In addition, NBBO [16], EMBBO [10] and WRBBO [53] delete the mutation operator entirely to prevent random mutation from generating inferior solutions. Although this reduces computation, the algorithm then relies solely on the migration operator to search for new solutions, leading to slow convergence, rapidly decreasing population diversity, and a tendency to fall into local optima. Therefore, this paper proposes a Laplace-Gauss hybrid mutation strategy that adapts dynamically with the iterations, which balances the exploration and exploitation of the algorithm and helps it escape from local extrema.
The probability density functions of Laplacian distribution [54] and Gaussian distribution are defined as Eq. (11) and Eq. (12) respectively.
where α ∈ (−∞, ∞) and β > 0 are the location parameter and scale parameter respectively, µ is the mean, and σ is the standard deviation. The Laplacian distribution Lap(α, β), whose distribution function is given in Eq. (13), is always symmetric about α.
In order to effectively utilize more random numbers in the search space, we set α = 1, β = 2 in the Laplacian distribution Lap(α, β) and µ = 0, σ = 1 in the Gaussian distribution G(µ, σ). The mutation formula based on the dynamic hybrid strategy is then defined as Eq. (14).
As shown in FIGURE 8, the Laplacian random number Lap(1, 2) has a larger fluctuation range than the Gaussian random number G(0, 1). Lap(1, 2) searches in a larger region around the current optimum, which helps maintain population diversity and escape from local optima; G(0, 1) searches more precisely within a small region around the current optimum, which improves convergence accuracy. Meanwhile, w1 and w2 adjust the weights of the Laplacian and Gaussian random numbers; FIGURE 9 shows how these weight coefficients change with the iterations. w1 and w2 are responsible for exploration and exploitation, respectively: w1 drives the global search, and w2 drives the local search. In Eq. (14), the weight coefficient w1 of Lap(1, 2) is large in the early stage of evolution, so the algorithm can utilize more random numbers and explore better solutions near the current optimum with a large step. In the late stage, the population converges toward the theoretical optimum region; as the iterations increase, w1 gradually decreases while the weight coefficient w2 of G(0, 1) keeps increasing. The small mutation step of G(0, 1) lets the algorithm search precisely in the neighborhood of the optimum, which enhances the local development ability while barely affecting the convergence speed in the later period. Therefore, by dynamically adjusting the weight coefficients, the hybrid mutation strategy avoids falling into local optima and improves the search efficiency. Algorithm 5 gives the calculation process of the dynamic hybrid mutation operator.
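The core step of the hybrid mutation can be sketched as follows. The linear weight schedule w1 = 1 − t/MaxIt, w2 = t/MaxIt is an assumption for illustration; the paper's Eq. (14) and FIGURE 9 may use a different schedule:

```python
import math
import random

def laplace(alpha=1.0, beta=2.0):
    """Draw one Lap(alpha, beta) random number via inverse-CDF sampling."""
    u = random.random() - 0.5
    return alpha - beta * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def hybrid_mutate(x_best, t, max_it):
    """Dynamic Laplace-Gauss hybrid mutation around the current best.
    Early on (w1 large) the long-tailed Lap(1, 2) term dominates and
    explores widely; late (w2 large) the short-step G(0, 1) term dominates
    and refines locally. The linear schedule is an assumed form."""
    w1 = 1.0 - t / max_it   # exploration weight, large early
    w2 = t / max_it         # exploitation weight, large late
    return [xj + w1 * laplace() + w2 * random.gauss(0.0, 1.0)
            for xj in x_best]
```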

Algorithm 5 The mutation operator of SCBBO
Ergezer et al. [15] proposed oppositional biogeography-based optimization (OBBO) in 2014, which applied the opposition-based learning strategy to BBO: OBBO merges the opposite individuals of the population into BBO to improve its optimization ability. Opposition-based learning is similar to dual learning, which was first proposed by Collard and Aurand [55]. They designed a genetic algorithm based on dual learning (DGA) to generate dual individuals. Later work [56] suggested that only the poorer part of the individuals in the population should be selected. In this paper, the dual learning strategy is integrated into BBO for the first time.
In SCBBO, when the dual learning strategy is applied, only the Nd worst individuals of the population are selected to generate dual individuals, because a good individual is unlikely to produce a dual better than itself. In other words, an individual close to the optimum is not worth dualizing: generating dual individuals indiscriminately would not only waste function evaluations but also reduce the population quality. Therefore, only the duals of inferior individuals are generated. The specific operation is to take a random point xo on the line segment from Wk to the current optimal individual xbest; the point symmetric to Wk about xo is then the dual individual W′k. This ensures that the dual of a poor individual moves in the direction of the optimum, so the population evolves in a good direction, improving the convergence speed of the algorithm. Algorithm 6 gives the calculation flow of the dual learning operator.

Algorithm 6 The dual learning strategy of SCBBO
{Wk} ← the worst Nd habitats in the population (Nd = NP/2)
for each habitat Wk
    r = rand(0, 1)
    xo = Wk + r · (xbest − Wk)
    W′k = 2 · xo − Wk
end for
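The reflection step of the dual learning operator can be sketched in Python; algebraically the dual is Wk + 2r(xbest − Wk) with r in (0, 1), and the bound clipping here is an assumed safeguard not detailed in the excerpt:

```python
import random

def dual_individual(w_k, x_best, lb, ub):
    """Generate the dual of a poor habitat w_k: pick a random point x_o on
    the segment from w_k to x_best, then reflect w_k through x_o, so the
    dual always moves toward the current best individual."""
    r = random.random()
    dual = []
    for j in range(len(w_k)):
        x_o = w_k[j] + r * (x_best[j] - w_k[j])   # point on the segment
        d = 2.0 * x_o - w_k[j]                    # reflection of w_k about x_o
        dual.append(min(max(d, lb), ub))          # keep inside the search range
    return dual
```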

E. GREEDY SELECTION FOR THE BEST SOLUTION
The greedy selection strategy is not an innovation of this paper, but it is essential; Algorithm 7 gives the specific operation. The reason for designing Algorithm 7 is that the optimal individual of the current population is used by the hybrid migration operator, the dynamic hybrid mutation operator and the dual learning strategy. Only if the optimal individual of each generation does not degenerate can we ensure that the population does not degenerate during evolution. Therefore, to achieve this goal without increasing the computation, we perform greedy selection only on the current optimal individual.
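Under a minimization convention, the greedy retention described above can be sketched as:

```python
def greedy_select_best(prev_best, prev_best_hsi, cand, cand_hsi):
    """Greedy selection applied only to the best individual: keep the new
    candidate best only if it does not degrade the HSI (minimization
    assumed). This guarantees the per-generation best never worsens."""
    if cand_hsi <= prev_best_hsi:
        return cand, cand_hsi
    return prev_best, prev_best_hsi
```

This monotone non-worsening of the best value is exactly the property the convergence proof in Section IV relies on.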
Algorithm 7 Greedy selection for the best individual

In summary, this paper proposes a dual BBO based on the sine cosine algorithm and a dynamic hybrid mutation mechanism. SCBBO improves the standard BBO through the contents of the four subsections of this section. Algorithm 8 shows the calculation flow of SCBBO.

IV. CONVERGENCE PROOF OF SCBBO
At present, the convergence of most evolutionary algorithms is proved with a Markov model or a dynamic model. In this section, we prove the global convergence of SCBBO with a new method: establishing a sequence convergence model, which is also a major innovation of this paper. For a global optimization problem, assume that its optimal solution is x*, so that f(x*) is the global optimal value. The optimal solution of SCBBO in the t-th iteration is x_best^t, and f(x_best^t) is the current optimal value. According to the sequence convergence theorem, the equivalent condition for SCBBO to find the global optimal value f(x*) is that some f(x_best^t) lies in the δ-neighborhood of f(x*), that is, |f(x_best^t) − f(x*)| ≤ δ. During the evolution of SCBBO, each iteration has a best individual, and the set formed by these individuals is

B = {x_best^1, x_best^2, ..., x_best^MaxIt},    (15)

where MaxIt is the maximum iteration number. Thus, a sequence A can be constructed according to Eq. (16):

A = {f(x_best^1), f(x_best^2), ..., f(x_best^MaxIt)}.    (16)

As can be seen from subsection III-E, in SCBBO the optimal value of each generation must be better than or equal to that of the previous generation. Therefore, the following must hold:

f(x_best^(t+1)) ≤ f(x_best^t), t = 1, 2, ..., MaxIt − 1.    (17)

As the evolution proceeds, the population gradually moves closer to the region where the optimal solution lies; that is, the probability that the best individual of the population enters the δ-neighborhood of the global optimum increases gradually. Eq. (18) expresses the probability that the current optimal value f(x_best^t) converges to the global optimal value f(x*):

p_t = P{|f(x_best^t) − f(x*)| ≤ δ}, t = 1, 2, ..., MaxIt.    (18)

According to Eqs. (17) and (18), the following relationship must exist:

p_1 ≤ p_2 ≤ ... ≤ p_MaxIt.    (19)

Therefore, after t iterations, the probability that the current optimal value has not converged to the global optimal value is

q_t = ∏_(k=1)^(t) (1 − p_k).    (20)

From Eq. (19), it can be seen that p_t is monotone non-decreasing, so the following holds:

q_t = ∏_(k=1)^(t) (1 − p_k) ≤ (1 − p_1)^t.    (21)

Since p_1 is a probability, 0 ≤ p_1 ≤ 1; provided p_1 > 0 (the population has a nonzero chance of entering the δ-neighborhood), we have 0 ≤ 1 − p_1 < 1, and after many iterations Eq. (22) can be set up:

lim_(t→∞) q_t ≤ lim_(t→∞) (1 − p_1)^t = 0.    (22)

According to Eq. (22), after a large number of iterations the probability that the algorithm has not converged to the optimal value is 0. Therefore, as the iteration number t increases, SCBBO eventually converges to the global optimal value f(x*) with probability 1. The proof is completed.
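The bound (1 − p1)^t that drives Eq. (22) can be checked numerically; the per-iteration success probability p1 = 0.01 below is an arbitrary illustrative value:

```python
def non_convergence_bound(p1, t):
    """Upper bound (1 - p1)**t on the probability that the best-so-far
    value has not yet entered the delta-neighborhood after t iterations,
    assuming each iteration succeeds with probability at least p1 > 0."""
    return (1.0 - p1) ** t

# Even a small per-iteration success probability makes the bound vanish.
bounds = [non_convergence_bound(0.01, t) for t in (10, 100, 1000)]
```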

V. SCBBO COMPLEXITY DISCUSSION
With reference to BBO, the computational complexity of SCBBO is analyzed by comparison. Regarding the calculation process of the algorithms, comparing Algorithm 3 and Algorithm 8 shows that SCBBO moves the calculation of the habitat immigration rate, emigration rate, species probability and mutation rate out of the iteration loop. Because these quantities are all based on rankings, there is no need to recompute them. The original BBO, by contrast, does not avoid repeated calculation: in each iteration it recalculates the immigration rate, emigration rate, species probability and mutation rate of every habitat, for a total of 4·MaxIt·NP calculations. SCBBO computes these quantities only once in the whole evolutionary process, for a total of 4·NP calculations. Therefore, SCBBO greatly reduces the computational complexity of the migration and mutation operators, saving at least 4·(MaxIt − 1)·NP calculations. Although the sine-cosine migration operator of SCBBO adds some judgment steps, it introduces no additional loops. Finally, the dual learning strategy is added to SCBBO: in each iteration, Nd individuals with low HSI are selected to generate their dual individuals.

Algorithm 8 The calculation procedure of SCBBO
initialize the SCBBO parameters, which involve Smax, I, E, NP, mmax, τ and Nd
initialize the population of SCBBO by Latin hypercube sampling
for each habitat, calculate the HSI and sort from best to worst
for each habitat, calculate Si by Eq. (2), and λi and µi by Eq. (3)
for each habitat, calculate Pi by Eq. (4), and mi by Eq. (5)
while (the termination condition is not met)
    perform the hybrid migration operator by Algorithm 4
    perform the dynamic hybrid mutation operator by Algorithm 5
    perform the dual learning strategy by Algorithm 6
    perform the greedy selection by Algorithm 7
    for each habitat, calculate the HSI and sort from best to worst
end while
output the optimal solution

For every dual individual generated, two additional calculations are performed, so the number of computations increases by 2·Nd ≤ 2·NP in each generation, and the total increase does not exceed 2·MaxIt·NP. Although SCBBO adds computation in the dual learning strategy, this is compensated by the savings in the migration and mutation operators.
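The Latin hypercube initialization step of Algorithm 8 is not detailed in this excerpt; a common stratified implementation, sketched here as an illustration, splits each dimension into NP equal strata and assigns each habitat one randomly shuffled stratum per dimension:

```python
import random

def latin_hypercube(np_, dim, lb, ub):
    """Latin hypercube initialization: for every dimension, split [lb, ub]
    into np_ equal strata, draw one sample per stratum, then shuffle the
    strata independently so each habitat gets a random stratum per
    dimension. Every stratum is covered exactly once per dimension."""
    pop = [[0.0] * dim for _ in range(np_)]
    width = (ub - lb) / np_
    for j in range(dim):
        strata = list(range(np_))
        random.shuffle(strata)
        for i in range(np_):
            s = strata[i]
            pop[i][j] = lb + (s + random.random()) * width
    return pop
```

Compared with uniform random initialization, this guarantees the population covers the whole range of every variable, which is the ergodicity property the paper attributes to the method.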
To sum up, SCBBO substantially reduces the amount of computation in the migration and mutation operators, thereby reducing the running time. Although generating dual individuals adds some computation, it is fully compensated by the migration and mutation savings: overall, SCBBO saves at least (2·MaxIt − 4)·NP calculations compared with BBO.
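The operation counts above can be verified with a little arithmetic (a sanity check of the algebra, not part of the algorithm):

```python
def rate_calcs_bbo(max_it, np_):
    """BBO recomputes the four rate quantities for every habitat in
    every generation: 4 * MaxIt * NP calculations."""
    return 4 * max_it * np_

def rate_calcs_scbbo(np_):
    """SCBBO computes the rank-based rates once, before the loop."""
    return 4 * np_

def net_savings(max_it, np_):
    """Savings on rate calculations minus the worst-case dual-learning
    overhead (2 extra calculations per dual, Nd <= NP duals per
    generation), which simplifies to (2*MaxIt - 4) * NP."""
    return rate_calcs_bbo(max_it, np_) - rate_calcs_scbbo(np_) - 2 * max_it * np_
```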

VI. EXPERIMENT AND ANALYSIS
To verify the effectiveness and superiority of SCBBO in solving high-dimensional global optimization problems, we carry out a series of simulation experiments. As shown in TABLE 2, this paper selects 24 classical benchmark functions. They include unimodal, multimodal, irregular, rotated and noisy functions, which test the comprehensive ability of the algorithms. The experimental environment for all numerical experiments in this paper is MATLAB 2020a.

A. COMPARISON BETWEEN SCBBO AND TWO CONSTITUENT ALGORITHMS
This subsection compares SCBBO with its two constituent algorithms to verify that SCBBO improves on the performance of BBO and SCA. SCBBO does not require additional function evaluations compared with BBO, so the same number of iterations implies the same number of function evaluations. We therefore set MaxIt = 1000, and the maximum mutation rate mmax of BBO is 0.05 [57]. To avoid chance results, each algorithm is run 50 times independently on the 24 benchmark functions on 30, 50 and 100 dimensions, respectively. Finally, the mean (Mean) and standard deviation (Std) of the 50 errors are calculated, and the rank-sum test is performed at a significance level of 0.05. TABLEs 3 and 4 show the rank-sum test results, and the last line summarizes the outcome of the comparison. The test results are reported as "(w/t/l)", meaning w (+: win) / t (≈: tie) / l (−: lose), where "−" means the contestant algorithm performs worse than SCBBO, "+" means it performs better, and "≈" means their performance is similar.
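The w/t/l counts come from a two-sided rank-sum (Wilcoxon) test at α = 0.05. A self-contained normal-approximation version is sketched below as a simplified stand-in for the standard test (ties get average ranks; no tie correction or continuity correction in the variance):

```python
import math

def rank_sum_test(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns True if the two samples differ significantly at level alpha."""
    n1, n2 = len(a), len(b)
    values = [(v, 0) for v in a] + [(v, 1) for v in b]
    values.sort(key=lambda p: p[0])
    # assign average ranks to tied values (ranks are 1-based)
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1][0] == values[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (v, g) in zip(ranks, values) if g == 0)  # rank sum of a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return p < alpha
```

In practice one would use a library routine such as scipy.stats.ranksums; the hand-rolled version above only illustrates how the "+/≈/−" entries are derived from the 50 error samples.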
As can be seen from TABLEs 3 and 4, the overall performance of SCBBO is significantly better than that of BBO and SCA on 30, 50 and 100 dimensions, so the performance of the two original algorithms is greatly improved. BBO performs worse than SCBBO on all benchmark functions. SCA has performance similar to SCBBO on two benchmark functions (f9 with D = 30, f10 with D = 30 and 50), but a larger error mean on the remaining 22 functions. Horizontal comparison: SCBBO's convergence accuracy is much higher than that of BBO and SCA, showing that the improvement strategies in this paper effectively enhance the search ability of the algorithm: the hybrid migration operator drives the population toward the optimal solution quickly, and the dynamic mutation operator preserves population diversity and effectively balances exploration and exploitation. Longitudinal comparison: the convergence accuracy of SCBBO essentially does not decrease as the dimension increases. This shows that the improvement strategies make BBO suitable for high-dimensional optimization environments, with good scalability. The Latin hypercube sampling method gives SCBBO populations good ergodicity across dimensions, while the dual learning strategy moves the poor individuals of the population toward the optimal solution. At the same time, the hybrid migration operator and dynamic mutation operator let SCBBO switch freely between global and local search, so it does not easily fall into local optima and can still converge quickly in high-dimensional environments.
Next, we compare the convergence speed and stability of the three algorithms. Based on the 50 optimization runs of the three algorithms on the 24 benchmark functions on 30, 50 and 100 dimensions, boxplots are drawn to compare stability. At the same time, the run with the smallest error, together with its per-generation errors, is used to plot the best convergence curves of the three algorithms, as shown in FIGURE 3. From FIGURE 3, SCBBO converges faster than BBO and SCA on the different benchmark functions. Although SCA behaves similarly to SCBBO on some benchmark functions, its convergence curve fluctuates greatly (e.g. f7, f9, f14, f17, f18 and f24) and the algorithm is unstable. On functions f7, f9, f11 and f24, SCA does not converge, showing that it easily falls into local optima. On the contrary, the convergence curve of SCBBO barely fluctuates, and the algorithm successfully escapes from local optima. This is because the hybrid migration operator accelerates the convergence of SCBBO, while the dynamic mutation operator helps it escape from local optima. Meanwhile, greedy selection ensures that the population does not degenerate, so the convergence curve of SCBBO always decreases; this also corroborates the convergence proof in Section IV. Furthermore, the 50 error values obtained by SCBBO on the different benchmark functions are very close, so the boxes of its boxplots are barely visible, whereas BBO and SCA both have multiple outliers and unstable search performance. Therefore, careful observation of the convergence curves and boxplots shows that SCBBO has better convergence and robustness than BBO and SCA, which again verifies the effectiveness of the improvement strategies.

B. COMPARISON BETWEEN SCBBO AND OTHER BBO VARIANTS
This section compares SCBBO with seven excellent BBO variants, all proposed in the past five years; TABLE 5 shows their detailed information. Consistent with subsection VI-A, the eight algorithms search for the optimal values of the 24 benchmark functions. The performance of these BBO variants on low-dimensional optimization problems is fully verified in their original articles, so we mainly compare SCBBO with them on high dimensions. Similarly, to avoid chance results, each algorithm is run 50 times independently, and the mean and standard deviation of the 50 errors are used as evaluation indexes. The mean reflects the searching ability and the standard deviation reflects the stability of the algorithm, so the mean is the focus of the comparison. TABLE 6 shows the optimization results of SCBBO and the seven BBO variants on the 24 benchmark functions on 200 dimensions, where boldface marks the best result among the eight algorithms and the last column gives the rank-sum test results, with the same meaning as in TABLEs 3 and 4.
As can be seen from TABLE 6, SCBBO clearly has the best overall performance among all the BBO variants. The error mean of BBOSB is better than that of SCBBO on only one benchmark function (f21), and its results are worse than SCBBO's on the remaining 23 functions. FBBO obtains the same result as SCBBO on f12 and worse results on all remaining functions, while TDBBO, WRBBO, DCGBBO and FABBO have larger means and standard deviations than SCBBO on all benchmark functions. In contrast, HGBBO is more competitive: it has a better mean and standard deviation than SCBBO on three benchmark functions (f20, f21 and f24), and the same result as SCBBO on f12. It can be seen that although these BBO variants perform excellently in low-dimensional environments, they cannot effectively solve high-dimensional global optimization problems and are not suitable for high-dimensional environments. SCBBO, by contrast, has good scalability and suits high-dimensional optimization environments. The Latin hypercube sampling method gives the initial population of SCBBO good ergodicity even in high-dimensional space; the hybrid migration operator enables effective search and speeds up convergence; the dynamic hybrid mutation operator keeps SCBBO from easily falling into local optima; and the dual learning strategy moves the poor individuals of the population quickly toward the optimal solution. Similarly, to clearly compare the convergence processes of SCBBO and the seven BBO variants, we plot their convergence curves and boxplots on different benchmark functions. As shown in FIGURE 11, SCBBO has the fastest convergence speed on all functions and does not easily fall into local optima. On f1, f2, f4, f6, f11 and f22, SCBBO converges faster than the other algorithms from the beginning of the iterations.
When the iterations reach the middle stage, SCBBO rapidly converges to the optimal value, requiring at least 1000 fewer iterations than the other algorithms. Especially for functions f13, f14, f15, f18 and f19, the convergence curves of SCBBO are almost invisible. From the boxplots, SCBBO shows excellent robustness and stable search performance on the different high-dimensional benchmark functions.
In summary, the overall performance of SCBBO is better than that of BBOSB, TDBBO, WRBBO, FBBO, HGBBO, DCGBBO and FABBO. SCBBO has higher convergence accuracy, faster convergence speed and better stability on high-dimensional global optimization problems.

C. COMPARISON BETWEEN SCBBO AND OTHER EAS
To further verify the superiority of SCBBO for solving high-dimensional global optimization problems, we compare it with seven state-of-the-art EAs proposed in the past few years: WOA [64], SSA [65], I-ABC [66], RCGA-rdn [67], DOBL [68], HEA [69] and SDSA [70]. Among them, WOA and SSA are highly cited advanced algorithms in the Web of Science. I-ABC, RCGA-rdn and DOBL are improved versions of ABC, GA and DE proposed in the past three years, which are highly competitive and representative among algorithms of their kind. HEA and SDSA are advanced algorithms published in IEEE TCYB in the past two years and are recommended for solving high-dimensional global optimization problems. Comparing with these outstanding algorithms therefore further verifies the superiority of SCBBO.
Then, the eight algorithms are run on the 24 benchmark functions on 500 dimensions, and the optimal value found when the MFEs limit is reached is recorded. Similarly, to avoid chance results, each algorithm is run 50 times independently, and the mean and standard deviation of the 50 errors are used as evaluation indexes. Only on a few functions (such as f19 and f23) do the compared EAs achieve results comparable to SCBBO. So, although these advanced algorithms perform excellently on low-dimensional problems, their performance decreases significantly on high-dimensional optimization problems. On the contrary, even when D = 500, SCBBO converges precisely to the theoretical optimal value on 19 functions. A careful comparison of TABLE 6 and TABLE 7 shows that the convergence results of SCBBO on 200 dimensions are almost the same as on 500 dimensions. Therefore, the improvement strategies in this paper make BBO suitable for high-dimensional optimization environments, and the algorithm's performance scales well.
For a better evaluation of SCBBO and the compared algorithms, FIGURE 12 shows the convergence curves of the eight algorithms on different functions. It can be observed that SCBBO converges much faster than the other EAs on the different benchmark functions, saving at least 800 iterations and not falling into local optima. Especially for functions f15, f18 and f23, SCBBO converges rapidly and its convergence curve is almost invisible. The convergence curve of WOA on f7 is unstable, while SCBBO maintains a smooth convergence curve, because greedy selection ensures that the population does not degenerate but always converges toward the optimal solution. Therefore, even compared with advanced evolutionary algorithms, the proposed algorithm shows the best search performance and stability.
In a word, on high-dimensional global optimization problems, the performance of SCBBO is significantly better than that of WOA, SSA, I-ABC, RCGA-rdn, DOBL, HEA and SDSA in both solution quality and convergence speed.

D. PERFORMANCE COMPARISON OF SCBBO ON DIFFERENT HIGH DIMENSIONS
With the rapid development of modern society, practical problems place ever higher requirements on algorithms: an algorithm should converge quickly and be able to solve high-dimensional problems. To further analyze the performance of the proposed SCBBO on high-dimensional global optimization problems, SCBBO is run in high-dimensional environments of D = 1000, D = 2000, D = 5000 and D = 10000, respectively. Similarly, to avoid chance results, the algorithm is run 50 times on each benchmark function, and the mean and standard deviation of the 50 errors are recorded. As shown in TABLE 8, to facilitate comparison, the results obtained by SCBBO on 500 dimensions are also included.
From TABLE 8, SCBBO still converges precisely on 10000 dimensions, and the errors obtained on 19 benchmark functions are 0. Except for the multi-modal function f24, the solution accuracy of SCBBO is essentially unchanged across the different high dimensions. Furthermore, FIGURE 13 shows the convergence curves of SCBBO on some benchmark functions with different dimensions; as the dimension increases, the convergence curves remain essentially the same. Therefore, the algorithm proposed in this paper has a powerful searching ability whose performance is essentially unaffected by dimension, and it can effectively solve high-dimensional global optimization problems.

VII. APPLICATION OF SCBBO ON ENGINEERING DESIGN PROBLEMS
This section further verifies the effectiveness and advancement of SCBBO by solving three constrained real-world optimization problems in engineering design: pressure vessel design, tension/compression spring design and welded beam design.
These engineering design problems have been extensively studied in the literature, so we select methods and results from the past few years for comparison to better clarify the performance of SCBBO. In SCBBO, the population size NP is 50, and the maximum number of iterations MaxIt only needs to be 100.

A. PRESSURE VESSEL DESIGN
The goal of the pressure vessel design problem is to minimize the fabrication cost (material, forming and welding). The design of the pressure vessel is shown in FIGURE 14. Both ends of the vessel are capped, and the head-end cap is hemispherical. L is the length of the cylindrical section excluding the head, R is the inner radius of the cylindrical section, and Ts and Th are the wall thicknesses of the cylindrical section and the head, respectively. Therefore, L, R, Ts and Th are the four optimization variables of the pressure vessel design problem. The objective function and the four optimization constraints of the problem are expressed as follows. We apply SCBBO to the pressure vessel design problem and compare its results with those of 16 excellent algorithms proposed in the past decade, as shown in TABLE 9. As can be seen from TABLE 9, the result obtained by SCBBO for the pressure vessel design is superior to those of the other algorithms, and its cost is the lowest.
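Since the objective and constraints are not reproduced in this excerpt, the sketch below follows the widely used literature formulation of the pressure vessel benchmark (coefficients may differ slightly from the paper's version):

```python
import math

def pressure_vessel_cost(ts, th, r, l):
    """Fabrication cost in the standard literature formulation:
    ts, th = shell and head thickness, r = inner radius, l = length."""
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(ts, th, r, l):
    """Constraint values; g_i(x) <= 0 means feasible."""
    g1 = -ts + 0.0193 * r                  # shell thickness vs. radius
    g2 = -th + 0.00954 * r                 # head thickness vs. radius
    g3 = (-math.pi * r ** 2 * l
          - (4.0 / 3.0) * math.pi * r ** 3
          + 1296000.0)                     # minimum enclosed volume
    g4 = l - 240.0                         # length limit
    return [g1, g2, g3, g4]
```

For orientation, the design (Ts, Th, R, L) ≈ (0.8125, 0.4375, 42.0984, 176.6366) often reported in the literature gives a cost of roughly 6059.7 under this formulation.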

B. TENSION/COMPRESSION SPRING DESIGN
The spring is an important part in industrial production, and many factors affect its structural performance. As shown in FIGURE 15, the tension/compression spring design problem is to minimize the weight of the spring while satisfying constraints on minimum deflection, vibration frequency and shear stress. It is a three-variable constrained optimization problem. In order to obtain better spring design parameters, many scholars have applied improved EAs to this engineering optimization problem.
To illustrate the design effect of SCBBO, 16 outstanding algorithms from the past decade are also applied to the objective function and the optimization results are compared. The spring design parameters and objective function values obtained by each algorithm are shown in TABLE 10. As can be seen, the results obtained by SCBBO on the tension/compression spring design are better than those of the other EAs, and the spring weight is the minimum.

VIII. CONCLUSIONS
In order to improve the performance of BBO on high-dimensional global optimization functions, this paper proposes a new BBO variant based on the sine cosine algorithm and a dual learning strategy, named SCBBO. The uniqueness and innovation of this paper can be summarized as follows: (a). This paper uses the Latin hypercube sampling method to generate the initial population, which improves the ergodicity of the population distribution. At the same time, the shortcomings of the position update formula of SCA are analyzed. Then a nonlinear transformation parameter and an inertia weight adjustment factor are designed and combined with the original BBO's migration operator to obtain a hybrid migration operator that adjusts the search state over the iterations. (b). By combining Laplacian and Gaussian random numbers, a dynamic hybrid mutation operator is obtained, and the dual learning strategy is integrated into BBO, which effectively balances the exploration and exploitation of the algorithm and improves its convergence speed and accuracy. (c). A sequence convergence model is established to prove that SCBBO has global convergence, and the computational complexity of SCBBO is analyzed in comparison with the original BBO. (d). 24 benchmark functions are used in comparative simulation experiments, and the results show that SCBBO can solve global optimization problems of up to 10000 dimensions. On engineering design optimization problems, SCBBO obtains better design parameters, which shows that SCBBO has high practical application value.
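As a concrete illustration of point (a), the stratified initialization can be sketched as follows. This is a minimal NumPy-based sketch of standard Latin hypercube sampling, not the paper's code; the function name and signature are illustrative:

```python
import numpy as np

def latin_hypercube_init(pop_size, dim, lower, upper, seed=None):
    """Latin hypercube sampling: split each dimension into pop_size
    equal strata, draw one point per stratum, then permute the strata
    independently per dimension so rows are decorrelated."""
    rng = np.random.default_rng(seed)
    # one uniform sample inside each of the pop_size strata, per dimension
    u = (rng.random((pop_size, dim)) + np.arange(pop_size)[:, None]) / pop_size
    # shuffle stratum order independently in every dimension
    for d in range(dim):
        u[:, d] = u[rng.permutation(pop_size), d]
    # scale from the unit cube to the search bounds
    return lower + u * (upper - lower)

pop = latin_hypercube_init(50, 10, -100.0, 100.0, seed=0)
print(pop.shape)  # (50, 10)
```

Compared with purely random initialization, every dimension is guaranteed to contain exactly one individual per stratum, which is what improves the ergodicity of the initial population.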
In future work, SCBBO can be applied to more complex optimization problems in other fields, such as image processing, neural networks and support vector machines.