A Twinning Memory Bare-Bones Particle Swarm Optimization Algorithm for Nonlinear Functions

Being trapped in local minima is an important problem in nonlinear optimization, as it blocks evolutionary algorithms from finding the global optimum. Normally, to increase optimization accuracy, evolutionary algorithms search around the best individual. However, overusing information from a single individual can cause a rapid loss of population diversity and thus reduce the search ability. To overcome this problem, a twinning memory bare-bones particle swarm optimization (TMBPSO) algorithm is presented in this work. The TMBPSO contains a twinning memory storage mechanism (TMSM) and a multiple memory retrieval strategy (MMRS). The TMSM provides an extra storage space that extends the search ability of the particle swarm, and the MMRS enhances the swarm's ability to escape from local minima. Through the cooperation of the TMSM and the MMRS, the particle swarm is endowed with the ability of self-rectification. To verify the search ability of the TMBPSO, the CEC2017 benchmark functions and five state-of-the-art population-based optimization algorithms are selected for the experiments. The experimental results confirm that the TMBPSO can obtain highly accurate results for nonlinear functions.


I. INTRODUCTION
As an efficient method for single-objective problems, particle swarm optimization (PSO) has attracted much attention in the academic world since its introduction by Kennedy and Eberhart in 1995 [1]. The PSO, which belongs to the classical evolutionary algorithms, is inspired by group behaviors. In PSO, the concepts of velocity and memory are introduced to the particles, which enable each particle to remember valuable information: its personal best position. This information can be used by the next generation. Meanwhile, the global best position is also recorded by the swarm.
The associate editor coordinating the review of this manuscript and approving it for publication was Sotirios Goudos .
Due to its high efficiency, the algorithm draws a great deal of attention and interest from scholars around the world. To date, these scholars have presented large quantities of new derivative versions, developed new applications, and published theoretical studies on the impact of various parameters and aspects of the algorithm [2]. Pornsing [3] combined an adaptive strategy with PSO. Wang [4] proposed a new PSO-based method for mixed-variable problems. Wang [5] proposed a novel learning strategy for large-scale optimization problems. Tseng [6] proposed an easy PSO method for nonlinear problems.
The PSO has remarkable advantages in solving nonlinear transient problems, and there are several important areas where these derivative methods have made major contributions, such as permanent magnets [7], [8], parameter identification [9], path planning [10], [11], large-scale optimization [12], process synthesis [13], community detection [14], feature selection [15], risk prediction [16], biomass power plants [17], and financial management [18]. In 2015, Li [19] proposed the HMPSO based on the historical memory of the particles, which uses a distribution estimation algorithm to estimate and preserve information about the distribution of the historically promising personal best positions of the particles. The best position of each particle is selected from three candidate positions, generated from the historical memory, the particle's current personal best position, and the swarm's global best position. The introduction of historical memory allows the HMPSO to achieve the best results on all unimodal functions and equips the HMPSO with the ability to effectively prevent premature convergence through the current personal best positions of the particles and the historical memory.
All the above algorithms are inspired by the features of the particles and attempt to improve the search performance of the algorithm by introducing new computational strategies. On the other hand, numerous scholars have devoted more attention to the detecting process of the particles, trying to improve the efficiency of the algorithm by grouping the particles or dividing the search space. From the perspective of grouping particle swarms, Liang et al. [20] proposed the APSO-SC algorithm in 2015. Throughout the process, the swarm of the APSO-SC is dynamically divided into various subpopulation clusters by a K-means clustering operation, and these clusters are able to exchange information through a circular neighborhood topology. Zhou [21] proposed a multi-choice comprehensive learning strategy for PSO. Vafashoar [22] combined a topology learning strategy with PSO.
Although the PSO has advantages in terms of application and convergence, premature and slow convergence are still its two main drawbacks. To obtain better results, the parameters of the algorithm need to be adjusted when dealing with sophisticated problems. In 2003, a new algorithm named bare-bones particle swarm optimization (BBPSO) was proposed by Kennedy [23]; it gets rid of the velocity term and can be applied to a wide range of problems without manual intervention. The next position of a particle is sampled from a Gaussian distribution, which makes the BBPSO simpler and faster than the PSO. Since then, the BBPSO has attracted great attention from researchers. The BBPSO and its variants have been widely applied to surface vehicles [24], knapsack problems [25], and other fields.
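The canonical bare-bones update can be sketched as follows (a minimal illustration under the usual formulation, not code from any of the cited works; the function and variable names are ours): each coordinate of a particle's next position is drawn from a Gaussian whose mean is the midpoint between the personal best and the global best, and whose standard deviation is their per-dimension distance.

```python
import numpy as np

def bbpso_step(pbest, pbest_val, F, rng):
    """One bare-bones PSO generation: each particle's new position is
    sampled from a Gaussian centered midway between its personal best
    and the global best, with the per-dimension distance as std."""
    gbest = pbest[int(np.argmin(pbest_val))]
    mu = (pbest + gbest) / 2.0            # midpoint of the two attractors
    sigma = np.abs(pbest - gbest)         # per-dimension spread
    X = rng.normal(mu, sigma)             # no velocity term, no tunable weights
    # greedy replacement of personal bests (minimization)
    vals = np.array([F(x) for x in X])
    better = vals < pbest_val
    pbest = np.where(better[:, None], X, pbest)
    pbest_val = np.where(better, vals, pbest_val)
    return pbest, pbest_val
```

Note that the particle currently holding the global best gets a zero standard deviation and therefore resamples its own position, which is one reason later variants add extra diversity mechanisms.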
To improve the performance of the BBPSO, Guo embodied the idea of grouping in [26]: three random particles are placed in the same group and exchange information during the iterations. New strategies for bare-bones PSO have also been presented [27], [28]. Tian [29] proposed a new population-based method that simulates the behavior of electrons. To facilitate the further development of the algorithm, Guo [30] introduced a dynamic grouping strategy into the BBPSO in 2017, named DLS-BBPSO. The particles of this algorithm are divided into different groups dynamically, and each group consists of only one leader and several teammates. In the DLS-BBPSO, the group size and the number of particles in each group are not fixed in advance. Besides, the particles derive their information from the local best rather than the global best. In particular, destruction and recombination are implemented in each generation. Through these methods, the swarm gains more diversity and more chances to escape from local optima.
Campos [31] introduced a variant of the BBPSO to address the shortcomings of PSO, called BBPSO with scale matrix adaptation (SMA-BBPSO). In the SMA-BBPSO, a multivariate t-distribution, together with a rule for adapting its scale matrix, is used to calculate the particle positions. The tails of the t-distribution are heavier than those of the normal distribution, which helps the particles escape from local optima more easily. Along with the SMA-BBPSO, a simple update rule was proposed to adapt the scale matrix associated with each particle.
From the perspective of the search space, Guo [32] proposed the FHBBPSO algorithm in 2019. The FHBBPSO combines fission and fusion strategies to discover new particle positions. The purpose of the fission strategy is to divide the search space, while the fusion strategy aims to narrow it down. During each iteration, under the two strategies, the particles continue to detect the best position and work out the optimal value in the corresponding regions. A set of well-known benchmark functions was used in the experiments, which confirmed the high search capability of the FHBBPSO. From the listed methods, we can see that most existing methods attach great importance to introducing new strategies, grouping particles, and dividing the search space. When turning those algorithms into practical applications, the processes of grouping and dividing may not be very efficient, especially when dealing with complex problems. In the following parts, a multi-perception concept will be introduced into the algorithm to enhance its search capability.
The PSO and its derivative methods have performed well on various problems, but their innate shortcomings are still a big obstacle: population-based algorithms still cannot balance exploration and exploitation. A population can easily get stuck in a local minimum if it has a strong local search capability. On the other hand, if the wide-area search capability is strong, it may be difficult for the population to find the exact local optimum. There is a constant need for PSO variants with higher performance.
The rest of this paper is organized as follows: in Section II, the materials and methods, including the processes and advantages of the TMBPSO, are introduced; in Section III, the details of the verification experiments and the discussion are displayed; in Section IV, the conclusion of this paper is presented.

Algorithm 1 The TMBPSO
1: for each particle in the swarm do
2:   Randomly generate the initial position of X
3:   Record the Pbest_position, the personal best position of each particle
4:   Record the Gbest_position, the global best position of the swarm
5:   memory_gbest(gen) = Gbest_position
6: end for
7: t = 0, where t stands for the iteration count
8: while t < IT do
9:   while X ≠ ∅ do
10:    Randomly select a particle x(i) from X, then remove x(i) from X
11:    for each memory of the global best particle do
12:      Calculate a candidate position with x(i)
13:    end for
14:    Find the best position from all candidate positions
15:    Update p(i); this process is also shown in Equ. 1
16:  end while
17:  Find the candidate gbest, the best position from the new Pbest
18:  Find the best two among the candidate gbest and the MemoryGbest
19:  Update the MemoryGbest: the best position is stored in m_g(1), the second-best position in m_g(2), and so on
20:  t = t + 1
21: end while

II. MATERIALS AND METHODS
In general terms, qualitative methods offer an effective way to improve particle swarm optimization. Most of them use the division of the search space or the combination of particles to ensure a tremendous improvement in optimization efficiency. However, much attention has been paid in prior research to the relationships between the particles, while extra grouping or division of the space may slow down the algorithm when dealing with complicated functions. In this section, more attention is devoted to the structure of the particle itself; therefore, a new method with a memory strategy, the twinning memory bare-bones PSO algorithm (TMBPSO), is presented. The details of each step of the method are described in the following subsections.

A. INITIALIZATION
The first step of the TMBPSO is initialization. Considering that all test functions used in this paper are minimization problems, the globally best particle in the swarm is the particle with the lowest function value. In the same way, we define the ''best position'' as the position with the lowest function value, and we use the best values of the particles as the reference for particle comparison. Thus, the particle with the smallest personal best value is the best. Additionally, some necessary initial conditions are required: the number of particles N; the dimension of the problem D; the fitness function F; the maximum number of iterations T; and the search range R. After all is prepared, all particles are randomly distributed in R, and the first personal best value of each particle is calculated according to F. No additional parameters are required.
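The initialization step above can be sketched as follows (a minimal illustration; the paper specifies N, D, F, T, and R, while the uniform sampling, the tuple form of R, and the function name `initialize` are our assumptions):

```python
import numpy as np

def initialize(N, D, F, R, rng):
    """Initialization sketch: scatter N particles uniformly in the search
    range R = (low, high) and evaluate their first personal best values."""
    low, high = R
    X = rng.uniform(low, high, (N, D))          # random initial positions
    pbest = X.copy()                            # personal best positions
    pbest_val = np.array([F(x) for x in X])     # personal best values (minimized)
    order = np.argsort(pbest_val)
    # seed the twinning memory with the best and second-best initial positions
    memory_gbest = [pbest[order[0]].copy(), pbest[order[1]].copy()]
    return X, pbest, pbest_val, memory_gbest
```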

B. THE TWINNING MEMORY STORAGE MECHANISM
In previous optimization algorithms, the structure of the globally optimal particle does not differ from that of ordinary particles. To boost the search ability of the best particle, we introduce a twinning memory storage mechanism (TMSM) for the globally optimal particle. That is to say, the global best particle owns a twin, which is used to record the second-best position of the particle swarm. From a macro perspective, the global best particle has two layers of memory, i.e., two stored positions (memory_gbest(1) and memory_gbest(2)). Correspondingly, it has two global best values.
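The storage rule of the TMSM can be sketched as keeping the two best positions seen so far (a minimal sketch; the helper name `update_memory` and its signature are our assumptions):

```python
import numpy as np

def update_memory(memory_gbest, candidate, F):
    """TMSM sketch: keep the two best positions among the current two
    memory layers and a new candidate global best (minimization)."""
    pool = [np.asarray(p) for p in memory_gbest] + [np.asarray(candidate)]
    pool.sort(key=lambda p: F(p))                # ascending function value
    # memory_gbest(1) stores the best position, memory_gbest(2) the second best
    return [pool[0].copy(), pool[1].copy()]
```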

C. THE MULTIPLE MEMORY RETRIEVAL STRATEGY
In the process of searching, we apply a multiple memory retrieval strategy (MMRS) to enhance the local-minimum escaping ability of the particle swarm. Since the global best particle owns a twin position, each particle can interact with both memory layers and generate two candidate positions. After that, the next position is selected from the original position and the two candidate positions. The interaction with the two layers of the global best particle increases the possibility of a particle finding the optimal position and accelerates the optimization process.

D. THE OVERALL PROCESS
In this section, we use a particle x(i) as an example to explain the workflow of the TMBPSO. In the t-th generation, the personal best position of x(i) is pbest_t(i), and the global best position of the best particle is gbest_t. By the definition of the TMSM, the global best particle has an additional memory space: memory_gbest_t(1) records the best position and memory_gbest_t(2) records the second-best position. The next position of x(i) is calculated by Equ. 1:

candidate_pbest_{t+1}(θ, i) = Gaus((pbest_t(i) + memory_gbest_t(θ)) / 2, |pbest_t(i) − memory_gbest_t(θ)|),
pbest_{t+1}(i) = Best(pbest_t(i), candidate_pbest_{t+1}(1, i), candidate_pbest_{t+1}(2, i)),     (1)

where candidate_pbest_{t+1}(θ, i) stands for the two candidate positions of x(i) in the (t+1)-th generation, θ = (1, 2) is the counter for the memory spaces, Gaus(κ, ϕ) is a Gaussian distribution with mean κ and standard deviation ϕ, pbest_{t+1}(i) is the personal best position of x(i) in the (t+1)-th generation, and Best() is a function that finds the best individual from the input data. After all particles have updated their personal best positions, the memory of the best particle is updated. To better describe the evolution of the TMBPSO, the pseudo-code is shown in Algorithm 1 and the flowchart is shown in Fig. 1.
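One full generation of the process described above can be sketched as follows. This is an illustration under our reading of the method, assuming the candidate positions follow the standard bare-bones Gaussian form (mean at the midpoint, per-dimension distance as the standard deviation); the function name and signature are ours:

```python
import numpy as np

def tmbpso_generation(pbest, pbest_val, memory_gbest, F, rng):
    """One TMBPSO generation (sketch): each particle draws one Gaussian
    candidate per memory layer (MMRS), keeps the best of its current
    pbest and the two candidates, and the twinning memory (TMSM) is
    refreshed with the new candidate gbest afterwards."""
    for i in range(len(pbest)):
        options = [pbest[i].copy()]
        for m in memory_gbest:                    # the two memory layers
            mu = (pbest[i] + m) / 2.0             # assumed bare-bones mean
            sigma = np.abs(pbest[i] - m)          # assumed bare-bones std
            options.append(rng.normal(mu, sigma))
        vals = [F(o) for o in options]
        k = int(np.argmin(vals))                  # Best(): lowest value wins
        pbest[i], pbest_val[i] = options[k], vals[k]
    # TMSM refresh: best two among the old memory and the new candidate gbest
    pool = list(memory_gbest) + [pbest[int(np.argmin(pbest_val))].copy()]
    pool.sort(key=lambda p: F(p))
    return pbest, pbest_val, [pool[0].copy(), pool[1].copy()]
```

Because every particle keeps its own pbest among the candidates, the memory best is monotonically non-increasing over generations.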

III. EXPERIMENTS

A. EXPERIMENTAL METHODS
To explore the search capabilities of the TMBPSO, the CEC2017 benchmark functions are selected for the experiments. Each test is repeated 37 times, the population size is set to 100, the maximum number of iterations is 1.000 × 10^5, and the dimension is 50.

B. EXPERIMENTAL RESULTS
In this paper, the final difference (FD) is used to assess the efficiency of each algorithm. The FD is defined in Equ. 2:

FD = final_gbest − Theoretical_optimal,     (2)

where final_gbest is the global best value after the final iteration and Theoretical_optimal is the theoretical optimal solution of each test function. A smaller value of FD means that the final optimization result of the algorithm is closer to the theoretical optimal solution and that the algorithm performs better. In addition, a ranking competition is implemented: on each function, the first-ranking algorithm gets 1 point, the second-ranking algorithm gets 2 points, and so on. The average ranking score of the TMBPSO is 1.7241. Obviously, the TMBPSO is the best among the five algorithms.
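The two evaluation devices above can be sketched in a few lines (the function names are ours; the tie-breaking rule is an assumption, as the paper does not state one):

```python
def final_difference(final_gbest, theoretical_optimal):
    """FD (Equ. 2): gap between the best value found and the known optimum
    of the test function; smaller means a better run."""
    return final_gbest - theoretical_optimal

def ranking_points(fd_by_algorithm):
    """Ranking competition on one function: 1 point for the smallest FD,
    2 for the next, and so on (tied FDs share the better rank here)."""
    ordered = sorted(set(fd_by_algorithm.values()))
    rank = {fd: i + 1 for i, fd in enumerate(ordered)}
    return {name: rank[fd] for name, fd in fd_by_algorithm.items()}
```

The average of these per-function points over all benchmark functions gives the ranking score reported for each algorithm.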
More detailed numerical analyses are listed below. In these analyses, when we compare two algorithms, we actually compare their FDs.
7) In f7, the TMBPSO gains the first rank, 5.20% better than the DLSBBPSO, the second-best algorithm.
8) In f8, the TMBPSO gains the first rank, 0.91% better than the DLSBBPSO, the second-best algorithm.
9) In f9, the TMBPSO gains the third rank, 19.19% worse than the DLSBBPSO, the best algorithm.
10) In f10, the TMBPSO gains the first rank, 21.74% better than the BBPSO, the second-best algorithm.
11) In f11, the TMBPSO gains the first rank, 70.63% better than the BBPSO, the second-best algorithm.
12) In f12, the TMBPSO gains the first rank, 48.21% better than the TBBPSO, the second-best algorithm.
25) In f25, the TMBPSO gains the second rank, 1.06% worse than the DLSBBPSO, the best algorithm.
26) In f26, the TMBPSO gains the second rank, 0.22% worse than the DLSBBPSO, the best algorithm.
27) In f27, four algorithms return the same result, which means that these four algorithms are trapped by the local minima and cannot escape.
28) In f28, four algorithms return the same result, which means that these four algorithms are trapped by the local minima and cannot escape.
29) In f29, the TMBPSO gains the first rank, 0.81% better than the TBBPSO, the second-best algorithm.

Comparing the various aspects, the most obvious finding to emerge from the analysis is that the TMBPSO ranks first 18 times, second 5 times, third 4 times, fourth 0 times, and fifth 2 times. The TMBPSO reports significantly better scores than the other four groups, so it is clearly more trustworthy and sturdy. The cooperation of the TMSM and the MMRS enables the TMBPSO to escape from local minima more efficiently. Accordingly, the accuracy of the results obtained by the TMBPSO is greatly improved. For a more visual presentation of the convergence over the iterations, the convergence curves of the TMBPSO, DLSBBPSO, ETBBPSO, BBPSO, and TBBPSO on f1 to f29 are shown in Fig. 2 to Fig. 30, respectively. The scale on the vertical axis stands for the value of the FDs; the scale on the horizontal axis stands for the number of iterations, where 100 on the horizontal axis stands for 10,000 iterations.
This study set out with the aim of assessing nonlinear functions covering unimodal and multimodal problems. To maintain high-precision results on the test functions, the TMBPSO combines the TMSM and the MMRS. The TMSM endows the algorithm with an extra layer of memory to remember the second-best particle. The MMRS enables each particle to interact with both the best particle and the second-best particle. The combination of these two strategies enhances the exploration and yields good performance on both unimodal and multimodal functions. On the 27th and 28th test functions, all the algorithms get the same result, which means all of them are trapped in local minima. Since no difference was found among the five algorithms, this is probably due to the nature of the test functions themselves. Hence, it could conceivably be hypothesized that proposing novel strategies targeted at the nature of these test functions may resolve these key problems. There is abundant room for further progress on this issue.

IV. CONCLUSION
In this investigation, the TMBPSO is proposed to solve nonlinear optimization functions, including unimodal and multimodal problems. The TMBPSO contains two major components: the TMSM and the MMRS. Based on the original global best particle, the TMSM produces a twinning global best particle. In general, the global best particle is endowed with two layers of memory, and the extra layer of memory is used to record the second-best particle. In the MMRS, each particle interacts with the global best particle and its twinning particle and generates two candidate particles; the position of the best of these three particles becomes the next position of the particle. The TMSM and the MMRS work together, which greatly increases the possibility of the particles finding the optimal position and speeds up the optimization process. The TMSM increases the diversity of the particle population. The MMRS enables the particles to rectify themselves in each iteration. Research on this subject has mostly been restricted to the relationships between the particles. In contrast, this work contributes to the existing knowledge of PSO by providing additional memory or storage space; the central idea of this paper is to focus on the structure of the particle itself. One of the significant findings to emerge from this study is that adding extra memory helps the algorithm to better solve existing problems, while no additional parameters or thresholds are needed.
VOLUME 11, 2023