Improved Simplified Particle Swarm Optimization Based on Piecewise Nonlinear Acceleration Coefficients and Mean Differential Mutation Strategy



I. INTRODUCTION
In recent years, swarm intelligence optimization algorithms have become a preferred method for solving complex, nonlinear engineering problems that are difficult to solve by traditional methods. Swarm intelligence optimization algorithms are heuristic search algorithms that simulate biological behavior in nature. Typical swarm intelligence optimization algorithms include the particle swarm optimization algorithm (PSO) [1], the ant colony optimization algorithm (ACO) [2], the grey wolf optimization algorithm (GWO) [3], the whale optimization algorithm (WOA) [4], etc. Among these algorithms, PSO has become the focus of research because of its simple principle, few parameters and high efficiency.
PSO was originally proposed by Kennedy and Eberhart in 1995 [1], inspired by the foraging behavior of birds. In the PSO algorithm, particles update their positions and velocities according to personal best experience and global best experience. Because of its advantages, PSO has been widely applied in many fields, such as stochastic optimization [5], data classification [6], neural network training [7], path planning [8], flexible job shop scheduling problems [9], and so on. However, when solving high-dimensional complex optimization problems, PSO is easily trapped in a local optimum, and it cannot control the balance between exploration and exploitation well in the search process. To overcome these shortcomings, many improved PSO algorithms have been proposed. Generally, these improvements can be classified into two categories: dynamic adjustment of parameters and evolutionary mechanism design.
In the design of parameter adjustment, the inertia weight and the acceleration coefficients are often considered. The inertia weight was first introduced into the PSO algorithm by Shi and Eberhart [10]. On this basis, they further proposed an adaptive fuzzy adjustment strategy for the inertia weight [11], which improved the performance of PSO. Similarly, the inertia weight was dynamically updated by a fuzzy inference system in [12]. Fan and Chiu [13] proposed a time-decreasing inertia weight and validated its effectiveness through computational comparisons. A nonlinearly decreasing inertia weight was presented in [14]. Besides, by using the success rate of the swarm as the feedback parameter to ascertain the situation of particles in the search space, Nickabadi et al. [15] designed an adaptive inertia weight.
Like the inertia weight, the acceleration coefficients are two other crucial parameters of PSO. Some researchers [16]–[18] held the opinion that the acceleration coefficients should be constant values throughout the iterative process. On the contrary, some scholars [19]–[22] considered that acceleration coefficients with a dynamic update strategy could perform better. Ratnaweera et al. [19] proposed a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (TVAC). To improve the solution quality and accelerate the convergence, Chen et al. [20], [21] proposed sine cosine acceleration coefficients (SCAC) and nonlinear dynamic acceleration coefficients (NDAC). Similarly, Tian et al. [22] proposed sigmoid-based acceleration coefficients (SBAC) to balance the global search ability in the initial iterative stage and the local convergence ability in the later stage.
In addition to parameter adjustment strategies, evolutionary mechanism design has also attracted considerable attention. The learning strategies of APSO-SLC [23], CLPSO-LS [24] and PAL-SAPSO [25] effectively maintain the population diversity, so that the proposed algorithms have excellent global search abilities. To overcome premature convergence during the iterations, Cui et al. [26] proposed an enhanced PSO algorithm (GPAMPSO), which combined an asymptotic predicting model with an adaptive mutation strategy. Hu and Li [27] presented a simplified PSO algorithm (SPSO) by discarding the particle velocity. A modified PSO algorithm based on a multiple scale self-adaptive cooperation mutation strategy (MSCPSO) was developed in [28]. Levy flight was introduced into the update mechanism of PSO [29]. Considering optimization problems subject to noise, several improved PSO algorithms, such as DEPSO [30] and opposition-based hybrid PSO algorithms [31], have been proposed with improved performance. Combined with the requirements of application scenarios, a neighborhood-based fuzzy PSO algorithm [32] was designed and applied to an artificial neural network (ANN), and a Taguchi-based PSO was designed in [33] for searching an optimal four-stage charge pattern of Li-ion batteries. By considering the population topology, Liang et al. [34] presented an adaptive PSO based on clustering. Xu et al. [35] designed a dimensional learning strategy. Lynn and Suganthan [36] developed a comprehensive learning strategy. The algorithms APSO-C [34], TSLPSO [35] and HCLPSO [36] were all verified to enhance exploration and exploitation performance.
Besides, many researchers have developed different frameworks to hybridize PSO with other optimization methods. Gong et al. [37] proposed a hybrid PSO algorithm (GL-PSO), which combined the genetic algorithm (GA) with PSO. In GL-PSO, the historical information of particles was processed by crossover, mutation, and selection; under such guidance, the convergence rate and the optimization efficiency of PSO were enhanced. Chegini et al. [38] proposed a hybrid PSO (PSOSCALF), which combined the sine cosine algorithm (SCA) and the Levy flight approach. Other algorithms, such as the artificial bee colony algorithm (ABC) [39], differential evolution algorithms (DE) [40], the Nelder-Mead simplex search method [41] and the estimation of distribution algorithm [42], have been successfully combined with PSO for performance enhancement.
Although the aforementioned PSO variants have shown better optimization performance than the original PSO, to the best of our knowledge, the success of an algorithm in solving a specific set of problems does not guarantee success on all optimization problems of different types and natures. It is therefore still worthwhile to develop new optimization algorithms for solving subsets of problems in different fields, which is the motivation of this work. In order to further improve the optimization performance of PSO, both parameter adjustment and evolutionary mechanism design are considered in this paper. Firstly, a new parameter adjustment strategy named piecewise nonlinear acceleration coefficients (PNAC) is introduced into the simplified PSO algorithm (SPSO), and an improved algorithm called piecewise-nonlinear-acceleration-coefficients-based SPSO (P-SPSO) is proposed. Due to the special piecewise update mechanism of the parameters, P-SPSO is verified to establish a better balance between exploration and exploitation. Then, a mean differential mutation strategy (MDM) is developed for the update mechanism of P-SPSO, and another algorithm named mean-differential-mutation-strategy embedded P-SPSO (MP-SPSO) is proposed. In MP-SPSO, particles are not only guided by the personal best position and the global best position, but also by the exemplars constructed by MDM, which accelerates the population toward the global optimum. To validate the performance of P-SPSO and MP-SPSO, four different sets of experiments are carried out in this paper.
The results show that: 1) the proposed P-SPSO can obtain better solutions than four other classic improved SPSO variants with different acceleration coefficients; 2) the proposed MP-SPSO has better optimization performance than P-SPSO and MDM-based SPSO (M-SPSO); 3) the proposed MP-SPSO is clearly more successful than eight well-known PSO variants; 4) compared with nine other intelligent optimization algorithms, MP-SPSO achieves better performance in terms of solution quality and robustness. Moreover, the proposed MP-SPSO algorithm is successfully applied to a real constrained engineering problem and provides better solutions than the other methods.
The remainder of this paper is organized as follows: Section II briefly describes the concepts of PSO and SPSO. Section III presents the proposed P-SPSO and MP-SPSO algorithms. Section IV sets up different experiments to verify the performance of the proposed P-SPSO and MP-SPSO algorithms. Section V verifies the ability of the MP-SPSO algorithm on a real engineering problem. Finally, the conclusion is given in Section VI.

II. REVIEW OF PSO AND SPSO

A. PSO
Inspired by the foraging behavior of birds, Kennedy and Eberhart [1] first proposed the PSO algorithm in 1995. In PSO, each particle represents a potential solution to an optimization problem. Each particle is identified by a position vector x i = (x i1 , x i2 , . . . , x iD ) and a velocity vector v i = (v i1 , v i2 , . . . , v iD ), where D is the dimension of the search space. Each particle updates its position and velocity according to the personal historical best experience (p ij ) and the global best experience (p gj ). The specific expressions are:

v ij (t+1) = w v ij (t) + c 1 r 1 (p ij − x ij (t)) + c 2 r 2 (p gj − x ij (t))  (1)
x ij (t+1) = x ij (t) + v ij (t+1)  (2)

where w is the inertia weight, c 1 and c 2 are acceleration coefficients, and r 1 and r 2 are random numbers in [0, 1].
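As a concrete illustration, Eqs. (1)-(2) can be sketched in Python, vectorized over the whole swarm; the array shapes and parameter defaults below are illustrative assumptions:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.8, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration, Eqs. (1)-(2), for a whole swarm.

    x, v         : (NP, D) current positions and velocities
    pbest, gbest : personal best positions (NP, D) and global best (D,)
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 drawn uniformly from [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```

In a full optimizer this step would be followed by fitness evaluation and updates of `pbest` and `gbest`, which are omitted here.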

B. SPSO
On the basis of analyzing the evolutionary mechanism of PSO, Hu and Li [27] proposed a simplified PSO algorithm (SPSO). By discarding the particle velocity, SPSO reduces PSO from a second-order difference equation to a first-order one. Experimental results on six typical benchmark functions showed that SPSO improved the performance of PSO. The formula of SPSO can be described as follows:

x ij (t+1) = w x ij (t) + c 1 r 1 (p ij − x ij (t)) + c 2 r 2 (p gj − x ij (t))  (3)
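Under the same conventions as the PSO sketch above, the first-order SPSO update of Eq. (3) can be sketched as follows (defaults are assumptions):

```python
import numpy as np

def spso_step(x, pbest, gbest, w=0.8, c1=2.0, c2=2.0, rng=None):
    """One SPSO iteration, Eq. (3): the velocity term of PSO is discarded,
    so the position itself follows a first-order difference equation."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return w * x + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Note that when x = pbest = gbest, both attraction terms vanish and the position is simply scaled by the inertia weight w.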

III. TWO PROPOSED ALGORITHMS (P-SPSO & MP-SPSO)
PSO has been widely used in many optimization fields because of its easy implementation and high efficiency, but it still suffers from limitations such as slow convergence and premature convergence. To improve the performance of PSO, a new parameter adjustment strategy named piecewise nonlinear acceleration coefficients (PNAC) is considered in this paper, and a modified algorithm named piecewise-nonlinear-acceleration-coefficients-based SPSO (P-SPSO) is developed. To further promote the comprehensive performance of P-SPSO, a new mutation strategy named mean differential mutation (MDM) is introduced into P-SPSO, and another improved method named MP-SPSO is presented. The details of the two proposed algorithms are presented as follows:

A. THE PROPOSED PIECEWISE NONLINEAR ACCELERATION COEFFICIENTS (PNAC)
In basic PSO, c 1 and c 2 are called the cognitive component and the social component, respectively. They play equally important roles in the motions of particles, but they differ in effect: the cognitive component c 1 is responsible for guiding particles toward p ij , while the social component c 2 is responsible for guiding particles toward p gj . The impacts of c 1 and c 2 on particle motion are shown in Fig. 1.
Kennedy and Eberhart pointed out that a value of c 1 larger than c 2 would result in more excessive wandering of particles through the search space; in contrast, a value of c 2 larger than c 1 might lead particles to rush prematurely toward a local optimum. In addition, Kennedy suggested that c 1 and c 2 should be set equal throughout the search process. However, Suganthan [43] held the opinion that the values of the acceleration coefficients should be changed dynamically, and on this basis many researchers have proposed different strategies to update the values of c 1 and c 2 . In Section I, four classic acceleration coefficient adjustment strategies [19]–[22] were introduced; to compare the different design approaches, more specific descriptions are listed in Table 1. Previous studies show that particles should be able to travel ergodically over the whole search space in the initial stage of the iterative procedure, while the convergence ability toward the global optimum should be enhanced in the later stage. Much research has been done to achieve this goal. In this paper, inspired by piecewise functions, a new parameter adjustment strategy named PNAC is designed for the acceleration coefficients (c 1 , c 2 ), and an improved algorithm called piecewise-nonlinear-acceleration-coefficients-based SPSO (P-SPSO) is proposed. In the iterative process of P-SPSO, the parameters c 1 and c 2 are updated according to the designed piecewise functions given in Eqs. (4) and (5), where iter is the current iteration and Iter max denotes the maximum number of iterations. From Eqs. (4) and (5), we can find that the update mechanisms of c 1 and c 2 differ between the early and later stages. To better enhance the learning ability of particles, PNAC uses this piecewise nonlinear update mechanism to dynamically change the values of c 1 and c 2 . Fig. 2 shows the curves of PNAC and the four typical acceleration coefficients. The comparison of PNAC with the four other acceleration coefficient strategies will be discussed in Section IV.
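The exact Eqs. (4) and (5) are not reproduced in this excerpt. The sketch below shows one plausible piecewise nonlinear schedule in the spirit of PNAC; the breakpoint and the quadratic segments are illustrative assumptions, not the paper's formulas:

```python
def pnac(iter_, iter_max, c_init=2.5, c_final=0.5, split=0.5):
    """Illustrative PNAC-style schedule: c1 decays from c_init to c_final
    while c2 grows symmetrically; the update law switches at `split`.

    NOTE: the breakpoint and the quadratic segments are assumptions for
    illustration, not the paper's exact Eqs. (4)-(5)."""
    t = iter_ / iter_max
    if t < split:
        # early stage: progress factor grows slowly, keeping exploration strong
        s = 0.5 * (t / split) ** 2
    else:
        # later stage: progress factor approaches 1 quickly, aiding exploitation
        s = 1.0 - 0.5 * ((1.0 - t) / (1.0 - split)) ** 2
    c1 = c_init - (c_init - c_final) * s
    c2 = c_final + (c_init - c_final) * s
    return c1, c2
```

The two branches meet continuously at the breakpoint (s = 0.5 there), so the coefficient curves have no jump.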

B. THE PROPOSED MEAN DIFFERENTIAL MUTATION STRATEGY (MDM)
In PSO, particles update their velocities and positions according to the personal historical best position (p ij ) and the global best position (p gj ). However, it is well known that once the global best position (p gj ) falls into a local optimum, it may lead particles to rush toward that local optimum, trapping them there and causing them to miss opportunities to jump to far better optima. To address these issues, learning from the differential evolution algorithm (DE) [44] and the mean particle swarm optimization algorithm (MeanPSO) [45], an improved evolutionary mechanism named the mean differential mutation strategy is developed for the P-SPSO algorithm, and a new algorithm named mean-differential-mutation-strategy embedded P-SPSO (MP-SPSO) is proposed. In the whole search process of MP-SPSO, particles are guided not only by p ij and p gj but also by the exemplars constructed by the proposed MDM, which increases the diversity of particles. Due to the MDM strategy, when p gj falls into a local optimum, particles can fly in the direction opposite to p gj . Compared with the original update mechanism, MDM not only enhances the ability to jump out of local optima but also increases the diversity of the whole swarm by making full use of the information of p ij and p gj . Mathematically, the MDM strategy is designed as shown in Eq. (6), where c 3 is the mutation acceleration coefficient and r 3 is a random number in [0, 1]. Based on the above discussion, the pseudo code of MP-SPSO is described in Table 2. Since the pseudo code of P-SPSO is nearly the same as that of MP-SPSO except for the updating equations, it is not listed in detail here.
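The exact Eq. (6) is likewise not reproduced in this excerpt. Following the MeanPSO and DE ideas the text describes, an MDM exemplar might be sketched as below; all specifics (the mean term, the differential term, their weighting) are illustrative assumptions:

```python
import numpy as np

def mdm_exemplar(pbest, gbest, c3=0.6, rng=None):
    """Illustrative mean differential mutation exemplar (NOT the paper's
    exact Eq. (6)): the mean (pbest + gbest)/2 pools the information of
    both guides, while the differential term c3 * r3 * (pbest - gbest)
    lets particles move away from gbest when it is a local optimum."""
    rng = rng or np.random.default_rng()
    r3 = rng.random(pbest.shape)
    return (pbest + gbest) / 2.0 + c3 * r3 * (pbest - gbest)
```

When pbest and gbest disagree, the differential term biases the exemplar toward pbest and away from gbest, which is the escape behavior the text attributes to MDM.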

IV. EXPERIMENTS AND RESULTS ANALYSIS
To fully verify the performance of the proposed P-SPSO and MP-SPSO algorithms, four different sets of experiments are carried out in this section. Firstly, to illustrate the effect of the proposed piecewise nonlinear acceleration coefficients (PNAC) on SPSO, P-SPSO is compared with the basic SPSO and four typical improved SPSO variants with different acceleration coefficients. Secondly, we compare MP-SPSO with SPSO, P-SPSO and mean-differential-mutation-strategy-based SPSO (M-SPSO). Thirdly, we compare the MP-SPSO algorithm with eight well-known PSO variants. Fourthly, the MP-SPSO algorithm is compared with nine other intelligent optimization algorithms.

A. BENCHMARK FUNCTIONS
To validate the performance of the proposed algorithms, twenty-five typical benchmark functions [46]–[57] are adopted in this paper. Among these functions, f 1 − f 11 are unimodal functions, f 12 − f 20 are multimodal functions, and f 21 − f 25 are rotated functions. Table 3 shows detailed information about these functions, including the name, definition, dimension, search range and global optimum.
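For reference, two representative benchmarks of the kinds listed in Table 3 can be written as follows; which indices they carry in Table 3 (if any) is not confirmed by this excerpt:

```python
import numpy as np

def sphere(x):
    """Unimodal Sphere function; global optimum 0 at x = (0, ..., 0)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal Rastrigin function; global optimum 0 at x = (0, ..., 0),
    with many regularly spaced local optima away from the origin."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

Unimodal functions such as Sphere probe exploitation and convergence speed, while multimodal ones such as Rastrigin probe the ability to escape local optima.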

B. COMPARISON OF P-SPSO, PSO AND SPSO WITH DIFFERENT ACCELERATION COEFFICIENTS
In this subsection, to evaluate the feasibility of combining the basic SPSO algorithm with the designed PNAC, the presented P-SPSO is compared with PSO, SPSO, TVAC-SPSO, SCAC-SPSO, NDAC-SPSO and SBAC-SPSO. Specifically, the parameter settings in the experiments are as follows: the swarm size NP = 30, the maximum iteration Iter max = 500, the inertia weight w = 0.8, and the mutation acceleration coefficient c 3 = 0.6. In PSO, the acceleration coefficients are c 1 = c 2 = 2. In SPSO [27], the acceleration coefficients are c 1 = c 2 = 2. For each benchmark function, 30 independent runs are performed by each algorithm. The mean values (Mean) and standard deviations (Std) are employed to evaluate the performance of the algorithms. Simulation results are recorded in Table 4, and the convergence graphs achieved by the seven methods are shown in Fig. 3.
From the results in Table 4, it can be clearly observed that SPSO with different acceleration coefficients achieves better results than PSO in most cases. The superiority of the acceleration coefficients TVAC, SCAC, NDAC and SBAC has been proved in [19]–[22], so we do not go into much detail here. We focus on the comparison of P-SPSO, TVAC-SPSO, SCAC-SPSO, NDAC-SPSO and SBAC-SPSO. According to Table 4, we can find that P-SPSO, TVAC-SPSO, SCAC-SPSO, NDAC-SPSO and SBAC-SPSO have found the global optimal solutions of fourteen functions, and for the remaining functions the proposed P-SPSO algorithm performs better than the four compared SPSO variants. For functions f 18 and f 19 , all five algorithms obtain the same mean values, and the standard deviation of P-SPSO is lower than that of the four compared algorithms, which indicates that P-SPSO has better robustness. For function f 20 , P-SPSO and TVAC-SPSO both show better optimization performance than SCAC-SPSO, NDAC-SPSO and SBAC-SPSO, and the standard deviation of P-SPSO is lower than that of TVAC-SPSO. So it can be concluded that, compared with the four typical acceleration coefficient strategies, the proposed PNAC keeps a better balance between exploration and exploitation throughout the search process and enhances the performance of SPSO effectively. To further validate the superiority of the proposed P-SPSO algorithm over the compared algorithms with statistical significance, the Wilcoxon Signed Ranks Test with a significance level of α = 0.05 is applied and the results are listed in Table 5. From Table 5, we can find that the proposed P-SPSO algorithm shows a significant difference from all compared PSO algorithms.
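As a side note, the Wilcoxon Signed Ranks Test used throughout these comparisons is available in SciPy. A minimal sketch on hypothetical paired per-run errors of two algorithms (the data below are invented for illustration, not the paper's results):

```python
from scipy.stats import wilcoxon

# Hypothetical best-error values from 12 paired runs of two algorithms;
# algorithm A is consistently better than algorithm B in this toy data.
errors_a = [0.11, 0.09, 0.13, 0.08, 0.12, 0.10,
            0.07, 0.14, 0.09, 0.11, 0.10, 0.08]
errors_b = [e + 0.01 * (i + 1) for i, e in enumerate(errors_a)]

# Paired two-sided test on the per-run differences
stat, p_value = wilcoxon(errors_a, errors_b)
significant = p_value < 0.05  # reject H0 at the alpha = 0.05 level
```

Because every paired difference favors algorithm A, the exact test rejects the null hypothesis of equal medians at α = 0.05.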
In order to illustrate the effectiveness of the proposed P-SPSO algorithm, the convergence curves of all algorithms simulated in this subsection for twelve benchmark functions are shown in Fig. 3. To distinctly discriminate the convergence characteristics of the different methods, the differences among the fitness values of the functions are enlarged via a log function (log 10 ).
As shown in Fig. 3, the proposed P-SPSO clearly converges more rapidly to a better solution than PSO, SPSO, TVAC-SPSO, SCAC-SPSO, NDAC-SPSO and SBAC-SPSO in the early stage of evolution on all twelve benchmark functions. As seen in Table 4, the solutions of P-SPSO are also better than those of the compared methods for the majority of functions. Thus, compared with the other six algorithms, the P-SPSO algorithm not only enhances the convergence speed but also improves the optimization accuracy. Therefore, we can conclude that the proposed PNAC is an efficient parameter adjustment strategy for the SPSO algorithm.

C. COMPARISON OF MP-SPSO WITH SPSO, P-SPSO AND M-SPSO
In this subsection, in order to validate the effectiveness of the designed piecewise nonlinear acceleration coefficients and the mean differential mutation strategy, the proposed MP-SPSO algorithm is compared with SPSO, P-SPSO and M-SPSO. To avoid the effect of randomness, each algorithm is run independently 30 times, and the performance of each algorithm is evaluated by the mean values (Mean) and standard deviations (Std), which are shown in Table 6.
From Table 6, we can see that MP-SPSO achieves the best overall results on these fifteen functions. The results of the Wilcoxon Signed Ranks Test are listed in Table 7. As shown in Table 7, the p-values of the three pairwise comparisons are all lower than 0.05, which indicates that the proposed MP-SPSO shows a significant difference from SPSO, P-SPSO and M-SPSO.
Based on the above analysis, we can draw the following two conclusions: 1) the proposed MP-SPSO, P-SPSO and M-SPSO all show better optimization performance than the original SPSO; 2) the proposed MP-SPSO algorithm has better optimization performance than the proposed P-SPSO and M-SPSO.
In order to illustrate the effectiveness of the proposed methods, the convergence curves of the methods simulated in this subsection for nine benchmark functions (f 1 , f 6 , f 8 , f 13 − f 15 , f 21 , f 24 , f 25 ) are shown in Fig. 4. As shown in Fig. 4, MP-SPSO and M-SPSO clearly have faster convergence speed than P-SPSO on all benchmark functions. The MP-SPSO algorithm has the fastest convergence speed on four functions (f 1 , f 6 , f 8 , f 21 ) and the M-SPSO algorithm has the fastest convergence speed on five benchmark functions (f 13 − f 15 , f 24 , f 25 ), which indicates that the proposed mean differential mutation strategy (MDM) can effectively improve the convergence speed of the SPSO algorithm.

D. COMPARISON OF MP-SPSO WITH EIGHT WELL-KNOWN PSO VARIANTS
In this subsection, the MP-SPSO algorithm is compared with eight well-known PSO variants. The key parameter settings for these nine PSO algorithms are shown in Table 8. The simulation results of PSOTD, GPAMPSO, HCLPSO and LFSPSO are taken directly from [47], [26] and [29], and the results of CSPSO, STSRPSO and ALPSO are taken from [28]. Fifteen benchmark functions listed in Table 3 are used for comparison in this subsection. To avoid the effect of randomness, each algorithm is run independently 30 times, and the performance of each algorithm is evaluated by the mean values (Mean) and standard deviations (Std), which are shown in Table 9. '**' indicates that the result cannot be obtained from the original literature.
The bold values denote the best results for each benchmark function among the compared algorithms.
From Table 9, it can be observed that the proposed MP-SPSO outperforms the compared PSO variants on most functions. The results of the Wilcoxon Signed Ranks Test are listed in Table 10, where the p-values clearly show a significant difference between MP-SPSO and all the simulated PSO variants except the PSOTD and MSCPSO algorithms.
Although the p-values for PSOTD and MSCPSO are higher than 0.05, the proposed MP-SPSO algorithm still performs slightly better than those two algorithms according to the results shown in Table 9.

E. COMPARISON OF MP-SPSO WITH NINE OTHER INTELLIGENT OPTIMIZATION ALGORITHMS
To further verify the performance of the MP-SPSO algorithm, it is compared in this subsection with nine other intelligent optimization algorithms presented in recent years. These nine algorithms are GWO (grey wolf optimizer) [3], SPaABC (artificial bee colony algorithm with strategy and parameter adaptation) [51], iDEaSm (differential evolution algorithm using an efficient adapted surrogate model) [52], JADE (adaptive differential evolution with sorting crossover rate) [53], WOA (whale optimization algorithm) [4], SCA (sine cosine algorithm) [54], MVO (multi-verse optimizer) [55], ADEDE (differential evolution algorithm with a self-adaptive parameter control method) [56] and ABCLGII (artificial bee colony algorithm with local and global information interaction) [57]. The key parameter settings for the ten algorithms are shown in Table 11. The results of SPaABC, iDEaSm, JADE, ADEDE and ABCLGII are taken directly from [51], [52], [56], [57]. The program codes of GWO, WOA, SCA and MVO can be found in [3], [4], [54], [55]. Twelve benchmark functions listed in Table 3 are chosen to verify the performance of the different algorithms. To avoid the effect of randomness, each algorithm is run independently 30 times, and the performance of each algorithm is evaluated by the mean values (Mean) and standard deviations (Std), which are shown in Table 12. '**' indicates that the result cannot be directly obtained from the original literature. The best results obtained by the ten algorithms are shown in bold.
From Table 12, it can be clearly seen that the proposed MP-SPSO algorithm surpasses the nine compared algorithms for the majority of functions, achieving the best results on eight functions. Therefore, it can be concluded that, compared with these nine intelligent optimization algorithms, the proposed MP-SPSO achieves better performance in terms of solution quality and robustness. To show the difference between the proposed MP-SPSO algorithm and the nine compared algorithms, the Wilcoxon Signed Ranks Test with a significance level of α = 0.05 is adopted and the results are listed in Table 13. As shown in Table 13, the p-value of each comparison is lower than 0.05, which indicates that the proposed MP-SPSO algorithm shows a significant difference from the nine compared algorithms.

V. A REAL ENGINEERING PROBLEM
The application of intelligent optimization algorithms to real engineering problems is becoming increasingly popular. The structural design problem is a typical optimization problem in the field of engineering optimization. In this section, to validate the performance of MP-SPSO on engineering problems, a pressure vessel design problem is selected.

A. PRESSURE VESSEL DESIGN PROBLEM
The pressure vessel design problem is a typical structural design problem. The goal of this problem is to minimize the total cost, including the costs of material, forming and welding. Fig. 5 shows the schematic of the pressure vessel design problem. There are four variables in this problem: the thickness of the shell (x 1 ), the thickness of the head (x 2 ), the inner radius (x 3 ) and the length of the cylindrical section without considering the head (x 4 ). Besides, the problem is subject to four constraints. This problem has been solved by many intelligent optimization algorithms, such as GWO [3], WOA [4], MVO [55], IAPSO (accelerated PSO algorithm) [58], BA (bat algorithm) [59], CEDE (co-evolutionary differential evolution) [60], CLPSO (comprehensive learning particle swarm optimizer) [61], SMO (social mimic optimization algorithm) [62], HCS-LSAL (hybrid cuckoo search algorithm) [63] and IAHLO (adaptive human learning algorithm) [64]. The results obtained by MP-SPSO are compared with those of these ten algorithms, taken from the references mentioned above. The optimization results of the different algorithms are shown in Table 14.
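The cost function and the four constraints are not reproduced in this excerpt. The sketch below follows the standard pressure vessel formulation commonly used in the cited literature; treat the coefficients as a reference sketch of the common benchmark definition, not as this paper's exact statement:

```python
import math

def pv_cost(x1, x2, x3, x4):
    """Total cost (material, forming, welding), standard formulation."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_constraints(x1, x2, x3, x4):
    """Standard g_i(x) <= 0 constraints for the pressure vessel problem."""
    return [
        -x1 + 0.0193 * x3,    # shell thickness must suit the radius
        -x2 + 0.00954 * x3,   # head thickness must suit the radius
        # minimum enclosed volume requirement (cylinder + hemispherical heads)
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1_296_000,
        x4 - 240.0,           # length limit on the cylindrical section
    ]

def pv_feasible(x):
    """True if all four constraints are satisfied at point x = (x1..x4)."""
    return all(g <= 0 for g in pv_constraints(*x))
```

An optimizer such as MP-SPSO would minimize `pv_cost` over the variable bounds while handling `pv_constraints`, e.g. via a penalty term added to the cost of infeasible particles.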
According to Table 14, the proposed MP-SPSO algorithm exhibits consistent performance and ranks first in optimizing this pressure vessel design problem. In a word, the MP-SPSO algorithm obtains the best feasible optimum for the pressure vessel design problem among the mentioned algorithms. In addition, the convergence curve of MP-SPSO shown in Fig. 6 indicates that MP-SPSO can find the best feasible solution within a small number of iterations.

VI. CONCLUSION
In order to improve the performance of SPSO for solving high-dimensional complex optimization problems, both parameter adjustment and evolutionary mechanism design are considered in this paper. Firstly, a new parameter adjustment strategy named piecewise nonlinear acceleration coefficients (PNAC) is introduced into the SPSO algorithm, and an improved algorithm named piecewise-nonlinear-acceleration-coefficients-based SPSO (P-SPSO) is proposed. The PNAC can effectively balance the global search of the early stage and the local convergence of the later stage. Then, a mean differential mutation strategy (MDM) is developed for the update mechanism of P-SPSO, and another algorithm named mean-differential-mutation-strategy embedded P-SPSO (MP-SPSO) is proposed. The proposed evolutionary mechanism utilizes not only the particle historical information pbests (p ij ) and gbest (p gj ) but also the exemplars constructed by MDM to generate high-quality offspring, while diverse information is injected into the offspring to enhance global exploration. To validate the performance of P-SPSO and MP-SPSO, four different sets of experiments are carried out in this paper. Firstly, the proposed P-SPSO algorithm is compared with PSO and SPSO variants with different acceleration coefficients; due to the special piecewise update mechanism of the parameters, the proposed P-SPSO obtains better solutions than the compared algorithms. Secondly, the MP-SPSO algorithm is compared with SPSO, P-SPSO and M-SPSO, and the results show the effectiveness of PNAC and MDM. Thirdly, the MP-SPSO algorithm is compared with eight well-known PSO variants, and the results prove that MP-SPSO is more successful than these eight variants. Fourthly, the MP-SPSO algorithm is compared with nine other intelligent optimization algorithms, which indicates that MP-SPSO achieves better performance in terms of solution quality and robustness.
Moreover, the proposed MP-SPSO algorithm is successfully applied to a real constrained engineering problem and provides better solutions than the other methods.

ZHENYU WANG received the bachelor's degree from Chongqing Technology and Business University, Chongqing, China, in 2018. He is currently pursuing the master's degree with Foshan University. His current research interests include computational intelligence and swarm intelligent optimization.