TDSD: A New Evolutionary Algorithm Based on Triple Distinct Search Dynamics

Spherical evolution is a recently proposed nature-inspired meta-heuristic algorithm that has demonstrated nontrivial efficiency and effectiveness in solving complex optimization problems. However, it has limitations caused by its inherent scale factor and dimension factor. Hypercube search and chaotic local search are two other effective search mechanisms. To construct an algorithm with better exploration and exploitation abilities, we propose a novel algorithm that combines triple distinct search dynamics (TDSD), i.e., spherical search, hypercube search, and chaotic local search. Effective control among them enhances the search performance of TDSD, which is verified on thirty CEC2017 benchmark functions and three real-world optimization problems.


I. INTRODUCTION
More and more optimization algorithms have been proposed in recent years to solve complex optimization problems [1]-[3]. Many such problems involve complex dynamic systems with numerous variable parameters and features such as rotation characteristics, high dimensionality, and dynamics that are hard to measure or even to model mathematically. Designing more effective and efficient optimization algorithms therefore remains a challenging task for researchers.
Nature-inspired meta-heuristic (NMH) algorithms are well known for their promising results in combinatorial optimization. Evolutionary algorithms (EAs), as population-based computational intelligence, are widely applied to various optimization problems and show tremendous potential in the field of optimization [4], [5]. However, optimization problems are becoming more and more complex, so a traditional algorithm with a single search mechanism generally suffers from weak search ability or slow convergence speed, which leads to unsatisfying results. In these scenarios, hybridization of algorithms is a powerful method. Every algorithm has its inherent pros and cons. Researchers have been trying to improve search patterns and search styles in order to overcome their weaknesses or limitations and then combine them with other effective methods [6]-[8]. (The associate editor coordinating the review of this manuscript and approving it for publication was Chao Shen.)
Differential evolution (DE) [9] is a heuristic random search algorithm. Wang et al. embedded DE with multiobjective sorting-based mutation operators [10], in which fitness and diversity information are simultaneously considered for selecting parents. As a result, a good balance between exploration and exploitation is achieved, greatly enhancing the performance of DE. The gravitational search algorithm (GSA) [11] is an adaptive search technique based on Newtonian gravity. In GSA, candidate solutions of a population are modeled as swarm objects that tend to move towards the heaviest object. Yin et al. introduced a hybridization of the K-harmonic means method into GSA to solve clustering problems [12]. Particle swarm optimization (PSO) [13] is well known for simulating social behaviors of wild animals such as birds' flocking and fishes' schooling. Tian et al. successfully combined PSO with DE to arrange rescue vehicles for a forest fire extinguishing problem [14]. Brain storm optimization (BSO) [15] is inspired by the human brainstorming process; CBSO [16] and ASBSO [17] hybridize BSO with chaotic local search and memory-based selection, respectively. At present, plenty of research demonstrates that hybrid methods achieve great success in solving complex optimization problems. Some examples are listed in Table 1, which shows the diversity, flexibility, and effectiveness of meta-heuristic hybridization in various applications. However, looking into the mechanism of hybridization, there is no quintessential difference among these methods; the only difference lies in their search operators. Two main factors can be considered: search pattern and search style. Researchers study the advantages and disadvantages of a given algorithm to reinforce its search ability and obtain an excellent hybrid algorithm.
From state-of-the-art hybrid algorithms, a spiral structure can help to realize an effective strategy in some scenarios, such as sine cosine algorithm (SCA) [29], artificial algae algorithm (AAA) [30], and so on [31].
Recently, a novel NMH algorithm named spherical evolution (SE) [32] has been proposed for solving continuous optimization problems. It adopts a spherical search style, and experiments have demonstrated that it is a powerful optimization tool for function optimization and real-world problems. However, SE has some limitations: it is sensitive to certain initial parameters, and the angle of its spherical search style influences its search behavior. Thus, although it is a promising algorithm, its performance is limited by its single spherical search.
The hypercube search (HS) style has been widely used in various NMH algorithms [9], [13], [15]. Through first-order differences, individuals gradually search towards the global optimum. In addition, chaos is a common natural phenomenon, and chaotic local search (CLS) makes use of its randomicity and ergodicity. By incorporating CLS, many traditional NMH algorithms achieve significant performance improvements, such as chaotic BSO [16], DE combined with CLS [33], [34], and chaotic GSA [35]. These works demonstrate that CLS can greatly enhance search ability and help avoid premature convergence.
In this paper, we propose for the first time a novel algorithm that contains triple distinct search dynamics (TDSD), i.e., SE, HS and CLS. We take advantage of all three to implement a more thorough and effective search. Control strategies among the three search styles provide a good balance between the exploration and exploitation phases, so the performance of TDSD is satisfying. Thirty CEC2017 benchmark functions and three real-world optimization problems are used to evaluate TDSD. Experimental results demonstrate that TDSD has strong optimization potential.
The main contributions of this paper can be summarized as follows: (1) We make the first attempt to organize SE, HS and CLS together; the triple distinct search dynamics greatly improve the search ability of the algorithm. (2) We implement effective control strategies for balancing these three search mechanisms. (3) Sufficient experiments are conducted to verify the performance of TDSD.
The remainder of this paper is organized as follows. Section II gives a brief introduction to SE. Section III proposes TDSD in detail. Section IV presents simulations and analyses of the results. Section V summarizes the research and outlines future work.

II. SPHERICAL EVOLUTION
Search operators play important roles in NMH algorithms. Research has pointed out that NMH algorithms are very different from one another [32]; however, their mechanisms can be summarized by two main characteristics: search pattern and search style. The search pattern indicates the individuals' search mechanism, guiding how individuals search. The search style denotes the individuals' search operator, implying their evolutionary method. To be specific, the search pattern defines which search style is used, and it can contain various search styles. Thus, the search pattern can be represented as Eq. (1).
where X^new_{i,d} indicates the new ith solution in the dth dimension, X_α, X_β and X_γ are three solutions selected by a certain strategy, S() represents an updating unit in the search operator, and n is the number of updating units.
Hypercube search is popularly used in many well-known NMH algorithms owing to its simple description and multidimension extension, described as Eq. (2).
where SF_1(), SF_2() and SF_3() denote three scaling factor functions that tune the scale of the difference between X^k_{α,d} and X^k_{β,d}. SE is one of the recently proposed NMH algorithms [32]. The essential difference in search operators between SE and other algorithms is that SE adopts a spherical search style whereas the others use a hypercube search style. When limited to two dimensions, SE presents a circular search style whereas the others conduct a rectangular one. Fig. 1 shows their search styles.
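The hypercube style of Eq. (2) is essentially a DE-like difference update. The following is only a hedged sketch of that style: the function name and the single constant scaling factor are assumptions for illustration, not the paper's exact SF_1()-SF_3().

```python
def hypercube_step(x_alpha, x_beta, x_gamma, f=0.5):
    """One DE/rand/1-style hypercube update: each dimension moves
    independently along the axis-aligned difference vector, so the
    reachable region is a hyper-rectangle around the base vector."""
    return [xg + f * (xa - xb)
            for xa, xb, xg in zip(x_alpha, x_beta, x_gamma)]

# Toy usage with 3-dimensional vectors.
child = hypercube_step([1.0, 2.0, 3.0], [0.0, 1.0, 1.0], [0.5, 0.5, 0.5])
```

Because every coordinate is updated independently, the candidate always stays inside an axis-aligned box, which is exactly the rectangular search region contrasted with the spherical one in Fig. 1.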
As shown in Fig. 1, in two dimensions, a hypercube search trajectory is a rectangle whereas a spherical search explores a circle with center O and radius OF. Dotted lines with black arrows SD, SC and SE, SG denote the two solution vectors for hypercube search and spherical search, respectively. In hypercube search, the search area is determined by DA and DB. In spherical search, the angle changes from 0 to 2π. When the radius OF equals the rectangular diagonal DC, the spherical search obviously has a larger search space than the hypercube search, which means it has better exploration ability. This advantage gives spherical search a greater chance of avoiding local optima. As the dimensionality increases, spherical search works according to the Euclidean distance. Its principle in one, two and higher dimensions is described by Eqs. (3)-(8).
where |X_{α,d} − X_{β,d}| represents the absolute distance between X_{α,d} and X_{β,d} in one dimension, ||X_{α,*} − X_{β,*}||_2 indicates the Euclidean distance between X_{α,*} and X_{β,*} in higher dimensions, and θ is a random number in [0, 2π] denoting the angle between X_{α,*} and X_{β,*}.
According to Eqs. (3)-(8), seven search operators are proposed, where m ∈ {1, 2, 3}, X_g indicates the global best individual, and X_{ri}, i ∈ {1, . . . , 5}, denote randomly selected individuals. According to Tang's research [32], among these search operators, Eq. (12) has the best search behavior. Since SE has a simple mechanism and a large search range, it is an efficient heuristic approach. Its search angle is distributed in [0, 2π], so its search trajectory is directionless, which provides more diverse evolutionary paths. However, it may also generate steps in opposite directions, resulting in slow convergence.
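Since the operator equations are not reproduced here, the following is only a rough two-dimensional sketch of the spherical style: the step length is the Euclidean distance between two solutions and the direction is a uniformly random angle θ. The base vector and function name are illustrative assumptions, not the exact operators of Eqs. (9)-(15).

```python
import math
import random

def spherical_step_2d(x_alpha, x_beta, x_base):
    """Illustrative 2-D spherical search: the step length is the
    Euclidean distance between x_alpha and x_beta, and the step
    direction is a uniformly random angle, so the candidate lies
    on a circle centred at x_base (cf. circle O-F in Fig. 1)."""
    radius = math.dist(x_alpha, x_beta)          # ||X_alpha - X_beta||_2
    theta = random.uniform(0.0, 2.0 * math.pi)   # undirected search angle
    return [x_base[0] + radius * math.cos(theta),
            x_base[1] + radius * math.sin(theta)]
```

The candidate always lies exactly on the circle of radius ||X_α − X_β||_2, which illustrates both the larger, directionless search region and the possibility of stepping in an unhelpful direction noted above.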

III. NEW ALGORITHM TDSD
A. BRIEF INTRODUCTION OF CLS
CLS plays an important role and has achieved great success in memetic algorithms, which combine evolutionary algorithms with local search [36]. Owing to its ergodicity and randomicity, its search behavior can improve the quality of individuals and accelerate convergence. The dynamic property of chaos ensures that algorithms can avoid stagnation in the exploitation phase. At present, CLS has been widely used to improve various algorithms, such as PSO [37], DE [38], the krill herd algorithm [39] and others [40], [41].
For CLS, chaotic maps are used to generate chaotic sequences. Gao et al. apply CLS to GSA [35]. Mirjalili et al. embed ten chaotic maps to tune the gravitational constant [42]. Yang et al. employ twelve chaotic maps to greatly increase the diversity of the search mechanism in BSO [16]. In this paper, we adopt one popular and traditional chaotic map, namely the Logistic map, Z_{k+1} = µZ_k(1 − Z_k), where Z_k is the kth chaotic number, µ = 4 and Z_0 = 0.152. Using the chaotic sequence of the Logistic map, CLS searches all dimensions of each individual X_i to form a new individual X^new_i, where r ∈ (0, 1) is the chaotic search radius, U_b and L_b indicate the upper and lower bounds of X_i, and ρ = 0.988 is a shrinking parameter applied to r.
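As a concrete sketch, the Logistic map above and a typical CLS perturbation of the kind described can be written as follows. The `cls_step` update form is an assumption in the spirit of the description (chaotic offset scaled by the radius and the search range), not the paper's exact equation.

```python
def logistic_map(z0=0.152, mu=4.0, n=5):
    """Generate n chaotic numbers with the Logistic map
    Z_{k+1} = mu * Z_k * (1 - Z_k)."""
    seq, z = [], z0
    for _ in range(n):
        z = mu * z * (1.0 - z)
        seq.append(z)
    return seq

def cls_step(x, z, lb, ub, r=0.1):
    """One illustrative chaotic-local-search move: perturb each
    dimension by a chaotic offset scaled to the search range and
    clip to the bounds; the radius r would then be shrunk by
    rho = 0.988 after each use."""
    return [min(ub, max(lb, xi + r * (ub - lb) * (zi - 0.5)))
            for xi, zi in zip(x, z)]
```

With µ = 4 the Logistic map is fully chaotic on (0, 1), so successive offsets visit the neighbourhood of x ergodically rather than repeating a fixed pattern.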

B. PRINCIPLE OF TDSD
SE is a simple and efficient global optimization algorithm with a large search space and an undirected search trajectory; however, its convergence speed is unsatisfying. HS is popularly adopted because of its high search efficiency. CLS is utilized to help individuals escape from local optima. These three search styles have their own characteristics and advantages, so we combine them to propose the new algorithm TDSD, which enables individuals to achieve better performance.
In TDSD, we adopt Eq. (14) as the SE operator. Although SE with Eq. (12) has the best performance in [32], we aim to precisely control each individual's search; thus, we select Eq. (14), which is similar to Eq. (12), to reduce the randomness of individuals. HS is used to generate new individuals, where F_i is a scale factor and EP_i stands for an effective path, i.e., a direction that previously made X_i better than its parent. CLS adopts Eq. (17) to enhance individuals' exploitation abilities.
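Under this reading, an effective-path HS step simply reuses a previously improving direction. The following is a minimal sketch under our own interpretation, assuming EP_i is stored as the last improving difference vector; the function name and this interpretation are assumptions, not the paper's exact HS equation.

```python
def hs_effective_path(x, ep, f):
    """Hypercube search along an 'effective path' ep, i.e. a
    direction that previously improved x; f scales the step so
    the move stays axis-aligned (hypercube style)."""
    return [xi + f * ei for xi, ei in zip(x, ep)]

# Toy usage: ep could be recorded as (child - parent) after an
# improving step, then reused here with scale factor f.
child = hs_effective_path([1.0, 2.0], [0.5, -0.5], f=2.0)
```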
Since there are three search styles in TDSD, control strategies among them are devised to help individuals search continuously and effectively. They are described as follows: (1) Control strategy of SE: If the offspring of SE is worse than its parent, SE continues to search other directions by itself. Otherwise, the current direction deserves further exploration, so HS is used in the next iteration.
(2) Control strategy of HS: If the offspring of HS is better than its parent, HS continues to search the current direction. Otherwise, the offspring should be further exploited, so CLS is used in the next iteration.
(3) Control strategy of CLS: When HS cannot find a better offspring, CLS is carried out up to 50 times to search the local area of the offspring. If CLS finds a better offspring within these 50 trials, HS is used to explore the direction of this better offspring in the next iteration. Otherwise, SE is applied to expand the search direction of the offspring of HS in the next iteration.
The main procedures of TDSD are given as follows:
1) TDSD randomly generates a population and starts from SE.
2) When the offspring of SE is worse than its parent, SE continues to search other directions.
3) When the offspring of SE is better than its parent, HS is used.
4) When the offspring of HS is better than its parent, the same direction continues to be searched by HS.
5) When the offspring of HS is worse than its parent, CLS is executed.
6) Within 50 trials, if the offspring of CLS is better, switch to HS; otherwise, switch to SE.
7) Repeat 2)-6) until TDSD is terminated.
The whole procedure is depicted in Fig. 2. Algorithm 1 (with input parameters F_0, σ_0, µ, Z_0, r, ρ and the optimal solution as output) randomly initializes N individuals and evaluates f(x_i); while the termination criterion is not satisfied, it tunes the scale factor (SF) with N(0, σ) and the dimension selection factor (DSF), then for each individual i randomly selects two individuals x_r1 and x_r2 before applying the selected search style.
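The switching rules in steps 1)-7) can be sketched as a small state machine. The search operators themselves are stubbed out behind an `evolve(style)` callback that reports whether the offspring improved; the names and structure are illustrative, not the paper's exact Algorithm 1.

```python
def tdsd_control(evolve, evals=1000, n_cls=50):
    """Control flow of TDSD as described in the text: start from SE;
    an improving SE child hands over to HS; a failing HS child hands
    over to CLS; CLS gets up to n_cls trials and returns control to
    HS on success, otherwise back to SE.
    `evolve(style)` must return True iff the offspring of the given
    style ('SE', 'HS' or 'CLS') improves on its parent."""
    style, cls_trials = 'SE', 0
    for _ in range(evals):
        improved = evolve(style)
        if style == 'SE':
            style = 'HS' if improved else 'SE'
        elif style == 'HS':
            style = 'HS' if improved else 'CLS'
            cls_trials = 0
        else:  # CLS
            cls_trials += 1
            if improved:
                style = 'HS'
            elif cls_trials >= n_cls:
                style = 'SE'
    return style
```

For example, an individual whose SE step succeeds, whose HS step then fails, and whose CLS trials all fail will exhaust its `n_cls` budget and fall back to SE, matching control strategy (3).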
Pseudocode of TDSD is shown in Algorithm 1, where U(0, 0.1) indicates a uniform distribution and σ is the variance of the Gaussian distribution; a variable σ adjusts the Gaussian distribution in order to tune the scale factor. In TDSD, SE, HS and CLS are executed in lines 8, 10 and 27, respectively. In each iteration, one search style is selected for each individual according to whether its offspring improves on its parent. Based on this principle, TDSD effectively integrates the triple distinct search dynamics.
To illustrate the efficiency of TDSD, its time complexity is analyzed, where N is the population size.

IV. EXPERIMENTS
A. EXPERIMENTAL SETUP
We adopt thirty CEC2017 benchmark functions to test the performance of TDSD. F1-F3 are unimodal functions, F4-F10 are multimodal functions, F11-F20 are hybrid functions, and F21-F30 are composition functions. Six state-of-the-art NMH algorithms, including SE [32], HGSA [43], CLPSO [44], NCS [45], BSO [15] and DE [46], are compared with TDSD. The population size N is 100 and the function dimension is 30. Each algorithm is run 30 times to reduce random error, and the maximum number of iterations is 3000. The testing environment is a PC with a 3.10GHz Intel(R) Core(TM) i5-4440 CPU and 8GB RAM, using Matlab R2013b. Parameters follow the related literature, as shown in Table 2. The mean and standard deviation (Std Dev) of the optimization error between the obtained solution and the global optimum for the seven algorithms are listed in Tables 3 and 4, with the best result on each function highlighted in boldface. The Wilcoxon rank-sum test at a significance level of α = 0.05 is conducted for statistical analysis, where the signs +, ≈ and − indicate that TDSD is better than, equal to and worse than a comparative algorithm on each function, respectively.

B. EXPERIMENTAL RESULTS
From Tables 3 and 4, we can see that TDSD obtains better results than its peers. To be specific, TDSD is superior to SE, HGSA, CLPSO, NCS, BSO and DE on 20, 17, 18, 28, 23 and 26 functions, respectively. These results demonstrate the good search performance of TDSD. Among the six NMH algorithms, SE uses spherical search and the others adopt HS according to their principles. The comparison between TDSD and SE illustrates that the three combined search styles are better than a single spherical search; HS and CLS further improve SE. Likewise, the comparison between TDSD and the others indicates that the triple distinct search dynamics achieve better performance than a single HS.
TDSD is a self-adaptive search algorithm that adjusts its search strategies according to the current search situation. SE has a large search range, hence it can search the space extensively and avoid becoming trapped in local optima. HS is a directional search; based on newly found solutions, it can guide individuals towards more promising areas, thus improving the exploration ability. CLS implements local search to enhance the exploitation ability. TDSD adaptively switches among these three search strategies to reinforce its overall search ability. In the six NMH algorithms, individuals have only one search strategy; in other words, all individuals in each algorithm have the same search behavior. In TDSD, however, each individual executes a different search strategy according to its own search situation, so the search diversity is remarkably improved. Consequently, TDSD can find better solutions than the six NMH algorithms on numerous functions.
We plot box-and-whisker diagrams and convergence graphs on F1, F4, F17 and F26 to show the characteristics of the seven algorithms. In Fig. 3, box-and-whisker diagrams of the optimal solutions obtained by the seven algorithms are shown. We can observe that TDSD has a smaller distribution of optimal solutions than the others. On these four functions, the optimal solutions of the six NMH algorithms vary greatly whereas those of TDSD remain relatively steady, indicating the effectiveness and stability of TDSD. Fig. 4 displays convergence graphs of the average best-so-far solutions of the seven algorithms, where the horizontal axis indicates the number of iterations and the vertical axis denotes the log value of the average best-so-far solution. According to Fig. 4, TDSD shows gradual convergence on these four functions. The six NMH algorithms either converge prematurely or fail to find superior solutions. Compared with them, TDSD shows more effective convergence characteristics and maintains a good convergence speed. In Fig. 4, TDSD does not converge quickly in the early search process, hence premature convergence does not occur; in the late search process, TDSD still finds better solutions, which shows its good exploitation ability. In other words, the exploration and exploitation abilities of TDSD are balanced such that its convergence is effective and continuous. To show the convergence of individuals, we plot their search graphs on F10 with 2 dimensions in Fig. 5. The population size is 100 and the number of iterations is 3000. F10 is a multi-modal function with many local optima. Fig. 5(a) shows one hundred individuals randomly distributed at the first iteration. Fig. 5(b) displays individuals converging towards some areas at the 200th iteration. Fig. 5(c) exhibits individuals further reducing the convergence areas. Fig. 5(d) reveals the final convergence of individuals at the 3000th iteration. From Fig. 5, we can see that individuals gradually converge into different areas, suggesting high search diversity. This is because each individual depends only on its own search situation and strategy, and the individuals do not all converge to a single point. Thus, TDSD maintains population diversity to improve its search performance.

C. ANALYSIS OF PARAMETERS
TDSD uses CLS to further exploit local areas of solutions found by HS. In TDSD, the number of times of CLS (nCLS) may influence the exploitation ability. To analyze it, we set nCLS to 25, 50, 75 and 100. Experimental results are listed in Table 5.
According to Table 5, we can find that nCLS = 50 is slightly better than the other values. Compared with nCLS = 25, executing CLS 50 times obtains better solutions on four functions, indicating that 50 trials of CLS are sufficient. However, the results of nCLS = 75 and nCLS = 100 do not show that more CLS trials are more effective; they are each worse than nCLS = 50 on one function. Thus, we conclude that 50 trials are suitable for CLS.
In addition to CLS, SE and HS have two parameters F 0 and σ 0 . Although these two parameters adaptively change during the execution of TDSD, their initial values may also influence the search performance. To determine them, we set F 0 to 1, 2.5 and 5, and σ 0 to 0.1, 0.5 and 0.9. Thus, nine groups of parameter settings are conducted. Their experimental and statistical results are shown in Tables 6 and 7.
From Tables 6 and 7, we can observe that the group F_0 = 2.5 and σ_0 = 0.5 is relatively superior to the other groups. To be specific, when F_0 = 2.5, σ_0 = 0.5 is slightly better than σ_0 = 0.1 but statistically equivalent to σ_0 = 0.9, indicating that a larger σ_0 value is slightly beneficial. Compared with the groups with F_0 = 1, the group F_0 = 2.5 and σ_0 = 0.5 is better; a small F_0 value may decrease the exploration ranges of individuals. The groups with F_0 = 5 are inferior because they may excessively increase the search areas of individuals, making boundary violations more likely. Thus, based on these results, we regard F_0 = 2.5 and σ_0 = 0.5 as a suitable parameter setting.

D. THREE REAL-WORLD OPTIMIZATION PROBLEMS
To evaluate the practicality of TDSD, we test it on three real-world optimization problems from CEC2011: parameter estimation for frequency-modulated sound waves (FMSWP), spread spectrum radar polyphase code design (SSRPPCD), and optimal control of a non-linear stirred tank reactor (NLSTRP). Their specific details can be found in [47]. Seven comparative algorithms, including WOA [48], DE [46], CGSA-P [49], BSO [15], EPSDE [50], SaDE [51] and jDE [52], are used, with parameter settings adopted from the corresponding literature. The maximum numbers of iterations for the three problems are 600, 2000 and 100, respectively. Experimental results are listed in Tables 8, 9 and 10, where Mean, Std, Best and Worst indicate the mean, standard deviation, best and worst solutions, respectively.
From Tables 8, 9 and 10, we can find that TDSD obtains the smallest means on all three problems. In addition, TDSD finds the best solutions on FMSWP and NLSTRP. Thus, the experimental results demonstrate the practicality of TDSD for real-world optimization problems. Compared with the various NMH algorithms, TDSD uses its triple distinct search dynamics to optimize the problems and find better solutions.

E. COMPARISON WITH CHAMPION ALGORITHMS
Four champion algorithms, including EBOwithCMAR [53], LSHADE-cnEpSin [54], LSHADE-SPACMA [55] and SHADE [56], are compared with TDSD on the thirty CEC2017 benchmark functions with 30 dimensions and the three real-world optimization problems. EBOwithCMAR uses a covariance matrix to improve the local search ability of the effective butterfly optimizer. LSHADE-cnEpSin combines ensemble sinusoidal differential covariance matrix adaptation with a Euclidean neighborhood. LSHADE-SPACMA combines a semi-parameter adaptation method with a modified CMA-ES. SHADE is an adaptive DE with history-based parameter adaptation. The experimental results are listed in Tables 11, 12, 13 and 14. From Table 11, we can see that TDSD is better than the four champion algorithms on 3 functions, i.e., F4, F25 and F26. Although TDSD does not outperform the four champion algorithms overall, it is still promising. Compared with those complex methods, TDSD only executes a simple control strategy to adaptively adjust the search style; thus, its performance could be further improved by diverse strategies in the future. Besides, the fact that TDSD finds better solutions than the four champion algorithms on 3 functions indicates that its search is the best on a subset of functions.
According to Tables 12, 13 and 14, we can observe that TDSD obtains the smallest mean on NLSTRP whereas it does not perform the best on FMSWP and SSRPPCD. The four champion algorithms show different optimization abilities on these three problems: for FMSWP and SSRPPCD, LSHADE-cnEpSin and EBOwithCMAR have the smallest means, respectively. However, TDSD obtains the best solution on FMSWP and its best solution on SSRPPCD ranks in the middle, suggesting that it is capable of optimizing these two problems.

V. CONCLUSION
In this paper we propose a novel algorithm called TDSD for function optimization and real-world optimization problems. Three distinct search styles are integrated to enhance the search performance, and according to their characteristics we design effective control strategies to switch the search style of TDSD. Experimental results indicate that TDSD outperforms other state-of-the-art algorithms in terms of effectiveness and robustness. In the future, we plan to study more search styles and incorporate them to further improve the algorithm.