Nonlinear Dissipative Particle Swarm Algorithm and Its Applications

A nonlinear dissipative particle swarm algorithm is proposed to address the poor search accuracy of the particle swarm algorithm, especially in the optimization of high-dimensional functions. The algorithm dissipates the particles in a nonlinearly increasing way, avoiding a large amount of unnecessary dissipation at the beginning of the iteration and putting more effort into dissipation at the end, which improves the operating efficiency and global search ability of the algorithm. On this basis, the inertia weight is adjusted with a nonlinearly decreasing strategy, which improves the search ability of the algorithm when no dissipation occurs. The experimental results show that the nonlinear dissipative particle swarm algorithm has superior performance on high-dimensional function optimization problems and mobile robot path planning problems.


I. INTRODUCTION
Particle swarm optimization (PSO) [1], which originated from the study of bird foraging behavior, is an iterative intelligent optimization algorithm. It initializes a set of random solutions and evaluates the objective function at the current position of each particle. At each step, a particle uses its current position, its own historical best position, and the position of one or more best particles in the swarm to determine its movement in the search space. The entire population completes one iteration after every particle has moved once. In this way, the whole population moves toward the optimum of the fitness function like a flock of collaboratively foraging birds; that is, each particle tracks its own individual extremum and the global extremum found by the whole population to search the solution space for the optimal value. Unlike the "survival of the fittest" evolutionary idea of the genetic algorithm [2], however, particle swarm optimization searches for the optimal position through coordination between individual particles. Compared with other bio-inspired algorithms, such as genetic algorithms, the particle swarm algorithm is conceptually simple, has few adjustable parameters, and is easy to implement, which has attracted the attention of many scholars at home and abroad and led to its wide use in complex multimodal optimization problems. The initial PSO algorithm grew out of a nearest-neighbor velocity-matching model, extended with multidimensional search and distance-based acceleration; the concept of inertia weight was then introduced to search the solution space more effectively, gradually forming the basic particle swarm algorithm [3] that is now in common use. However, it is an indisputable fact that the basic particle swarm algorithm suffers from premature convergence.
To address this problem, a large number of solutions have been proposed. Q. Lv et al. [4] used a pheromone mechanism to design the particle behavior and improve the algorithm's search capability. L. L. Kang et al. [5] designed an inertia-free adaptive strategy with elite mutation and inverse particles. B. Q. Lv et al. [6] used the fuzzy rules of a control-system fuzzy controller to improve the algorithm. L. F. Xu et al. [7] used multilevel perturbation to obtain better search results. S. Yi et al. [8] improved the particle swarm algorithm under semi-supervised clustering objectives. H. X. Lu et al. [9] used neural networks to dynamically generate the parameters required by the algorithm. W. B. Liu et al. [10] proposed a particle swarm algorithm with a sigmoid-based weighting strategy to adaptively adjust the acceleration coefficients. X. W. Xia et al. [11] proposed a triple-archive particle swarm algorithm to improve solution accuracy and convergence speed. In addition, many variants of the particle swarm algorithm have emerged for different fields. For example, for the classic NP-hard knapsack problem, researchers have proposed a discrete particle swarm algorithm based on the chaotic behavior of ant colonies [12]. In the field of new energy, researchers have redesigned particle swarm algorithms for PV maximum power point tracking and inverter parameter identification [13][14]. In optics, improved particle swarm algorithms are also playing a role in spectral line separation and spectral absorption [15][16].
The various PSO-based improved algorithms mentioned above have their own merits, but in general, they all try to solve the inherent problem of the original PSO algorithm: the tendency to fall easily into a local optimum.
To address this problem, considering a certain balance between the search speed of the algorithm and the maintenance of population diversity, this paper dissipates particles in a nonlinear incremental way, avoiding a large amount of unnecessary dissipation at the beginning of the iteration and devoting more energy to dissipation at the end of the iteration, which improves the operational efficiency and global search ability of the algorithm. Based on this, a nonlinear dissipative particle swarm optimization (NDPSO) algorithm is proposed by using nonlinear functions to adjust the inertia weights to enhance the ability of the population to utilize the information. In general, the main contribution of this algorithm is to overcome the poor performance of the particle swarm algorithm for high-dimensional function optimization and to improve the operational efficiency and global search capability of the algorithm. By introducing six numerical benchmark test functions for experiments and comparing the analysis with basic PSO, DPSO [17] and other intelligent optimization algorithms [18], the results show that NDPSO is an algorithm with higher search accuracy and more stable performance. In addition, the superiority of this algorithm is shown in experiments on the optimization of artificial potential field parameters for mobile robot path planning.
This paper is organized as follows: Section II briefly introduces the dissipative particle swarm algorithm, Section III proposes the nonlinear dissipative particle swarm algorithm and the implementation process and analyzes its convergence, Section IV shows the simulation experiments, and Section V concludes the paper.

II. DISSIPATIVE PARTICLE SWARM ALGORITHM
To address the drawback that the search capability of the PSO algorithm gradually decreases with the number of iterations, the dissipative PSO (DPSO) of [17] introduces a dissipation operation that gives the particles a greater chance of escaping from local minima. The basic update equations are

v_i(t+1) = ω v_i(t) + c1 r1 (p_i − x_i(t)) + c2 r2 (p_g − x_i(t)),  (1)
x_i(t+1) = x_i(t) + v_i(t+1),  (2)

where v_i is the velocity of particle i, which represents the distance between the present position of particle i and its next target position; x_i is its position; p_i is its historical best position; p_g is the best position found by the population; ω is the inertia weight; c1 and c2 are acceleration coefficients; and r1, r2 are uniform random numbers in [0, 1]. On top of these updates, DPSO applies the dissipation

if rand() < c_v, then v_i = rand() · v_max,  (3)
if rand() < c_l, then x_i = Random(x_min, x_max),  (4)

where c_v and c_l are the velocity and position jump factors, v_max is the maximum velocity, and Random(x_min, x_max) is a uniform random point in the search range.
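For concreteness, the DPSO iteration described above can be sketched as follows. The parameter values (w, c1, c2, the jump factors cv and cl, and the bounds) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def dpso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0,
              v_max=1.0, x_min=-5.0, x_max=5.0, cv=0.001, cl=0.002,
              rng=None):
    """One DPSO iteration: standard PSO update followed by dissipation."""
    rng = np.random.default_rng() if rng is None else rng
    m, d = x.shape
    r1, r2 = rng.random((m, d)), rng.random((m, d))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (1)
    v = np.clip(v, -v_max, v_max)
    x = np.clip(x + v, x_min, x_max)                             # Eq. (2)
    # Dissipation: with small probability, reset velocity and position separately
    chaos_v = rng.random((m, d)) < cv                            # Eq. (3)
    v = np.where(chaos_v, rng.random((m, d)) * v_max, v)
    chaos_x = rng.random((m, d)) < cl                            # Eq. (4)
    x = np.where(chaos_x, rng.uniform(x_min, x_max, (m, d)), x)
    return x, v
```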

III. NONLINEAR DISSIPATIVE PARTICLE SWARM ALGORITHM
DPSO is blind in its selection of the jump factors: when a factor is too large at the beginning of the run, it produces unnecessary dissipation and hurts efficiency, while when it is too small at the end of the run, dissipation does not occur in time and the algorithm easily falls into a local optimum. Therefore, this paper proposes a nonlinear dissipative particle swarm algorithm to improve the management of the population and the particle search method.

A. EQUATIONS PROPOSED FOR THE NDPSO ALGORITHM
As we can see from Equation (1), the magnitude of the velocity indirectly represents the distance of the particle from the best position, and dissipating the velocity and the position each in its own way can easily lead to confusion in the population. Therefore, in this paper, the velocity and position of the particles are dissipated according to a unified standard, i.e., they are dissipated simultaneously when a single condition is met. At the same time, the dissipation of the particles increases nonlinearly as the algorithm runs, as described by the following equations:

if rand() < k, then v_i = rand() · v_max and x_i = Random(x_min, x_max),  (5)
k = k_min + (k_max − k_min) · (iter / Gen)^n,  (6)

where k denotes the jump factor, whose value lies in [0, 1]; k_max represents the maximum jump factor; k_min the minimum jump factor; n is a constant; Gen is the total number of iterations; and iter is the current iteration number.
Since rand() follows a uniform distribution on [0, 1], a smaller k value at the beginning of the iteration avoids a large number of dissipation operations and improves the efficiency of the algorithm; as the number of iterations increases, the speed of the particles gradually decreases, the jump factor k gradually increases, and the dissipation ability of the particles is further strengthened, enhancing the diversity of the population and improving the global search ability of the algorithm.
Of course, dissipation can still occur while k is small, and this low-probability dissipation tends to enhance the search: at the large velocities typical of early iterations, particles are likely to fly past the optimal position and fail to find the optimum. Late in the run, when k is large, there is likewise a small probability of no dissipation; by then a particle has likely already found the optimum, so unnecessary dissipation is reduced.
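As a sketch, the nonlinearly increasing jump-factor schedule and the unified dissipation can be written as follows; k_min, k_max, and the exponent n are assumed illustrative values, since the paper's tuned settings are not reproduced here.

```python
import numpy as np

def jump_factor(it, gen, k_min=0.001, k_max=0.5, n=2):
    """Nonlinearly increasing jump factor, Eq. (6); defaults are assumptions."""
    return k_min + (k_max - k_min) * (it / gen) ** n

def dissipate(x, v, k, v_max, x_min, x_max, rng):
    """Unified dissipation, Eq. (5): a particle's velocity and position reset together."""
    hit = rng.random(x.shape[0]) < k                    # one draw per particle
    v[hit] = rng.random((hit.sum(), x.shape[1])) * v_max
    x[hit] = rng.uniform(x_min, x_max, (hit.sum(), x.shape[1]))
    return x, v
```

Because the schedule is convex in iter/Gen, k stays near k_min for most of the early run and rises quickly only toward the end, which is exactly the behavior argued for above.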
Since a nonlinear adjustment of the inertia weight outperforms the linearly decreasing strategy [19], to improve the search capability of particles that are not dissipated, this paper also adjusts the inertia weight nonlinearly, on top of the nonlinear dissipation operation, so that the two complement each other and reflect the particle flight process more reasonably:

ω = ω_max − (ω_max − ω_min) · (iter / Gen)^n,  (7)

where ω takes values in the range [0.4, 0.95], n is a constant, ω_max denotes the maximum inertia weight, ω_min denotes the minimum inertia weight, and the other symbols are as above. The experimental results show that introducing Equation (7) enhances the search capability of the algorithm: operating nonlinearly on both the dissipation and the inertia weight lets the particles make full use of the information available to the population during the search, and the two strategies complement each other, improving the algorithm's optimization-seeking ability.
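The decreasing inertia-weight schedule of Equation (7) can likewise be sketched; the exponent n = 2 is an assumed value.

```python
def inertia_weight(it, gen, w_max=0.95, w_min=0.4, n=2):
    """Nonlinearly decreasing inertia weight, Eq. (7); n = 2 is an assumption."""
    return w_max - (w_max - w_min) * (it / gen) ** n
```

With a convex schedule, ω stays high (favoring global exploration) for most of the early run and decays quickly only near the end, mirroring the slowly rising jump factor.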

B. CONVERGENCE ANALYSIS OF THE NDPSO ALGORITHM
The NDPSO algorithm proposed in this paper does not change the basic structure of the basic particle swarm algorithm, and its main update mechanism remains similar; it only increases population diversity through the nonlinear dissipation of particles and adjusts the inertia weight with a nonlinear decreasing strategy to improve search performance when no dissipation occurs. Thus, following the method of [20], the algorithm can be proven to converge to an equilibrium point determined by the individual historical optimum and the population optimum. The specific proof is as follows.
To simplify the calculation, let the search space be 1-dimensional and the number of particles be 1, with given initial conditions x(0) and x(1). Writing φ1 = c1 r1, φ2 = c2 r2 and φ = φ1 + φ2, combining Equations (1) and (2) gives the recurrence

x(t+1) = (1 + ω − φ) x(t) − ω x(t−1) + φ1 p_i + φ2 p_g.  (8)

The closed form of Equation (8) is

x(t) = K1 + K2 λ1^t + K3 λ2^t,  (9)

where λ1 and λ2 are the roots of λ² − (1 + ω − φ)λ + ω = 0, and K1, K2, K3 are constants determined by the initial conditions, with K1 = (φ1 p_i + φ2 p_g)/φ. When max(|λ1|, |λ2|) < 1, the exponential terms vanish and we obtain

lim_{t→∞} x(t) = (φ1 p_i + φ2 p_g) / (φ1 + φ2).  (10)

Therefore, the algorithm is guaranteed to converge, to a weighted average of the individual best p_i and the global best p_g, only when the condition max(|λ1|, |λ2|) < 1 is satisfied.
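The convergence condition can be checked numerically. The sketch below iterates the 1-D, 1-particle recurrence with fixed (non-random) φ1, φ2 chosen so that max(|λ1|, |λ2|) < 1, and compares the limit with the weighted average of Equation (10); the specific parameter values are illustrative.

```python
def iterate(w, phi1, phi2, p_i, p_g, x0=0.0, x1=1.0, steps=2000):
    """Iterate x(t+1) = (1 + w - phi) x(t) - w x(t-1) + phi1*p_i + phi2*p_g, Eq. (8)."""
    phi = phi1 + phi2
    prev, cur = x0, x1
    for _ in range(steps):
        prev, cur = cur, (1 + w - phi) * cur - w * prev + phi1 * p_i + phi2 * p_g
    return cur

# w = 0.6, phi = 1.7: characteristic roots of z^2 + 0.1 z + 0.6 = 0 are complex with
# |z| = sqrt(0.6) < 1, so the condition holds and the iteration converges.
limit = iterate(0.6, 0.8, 0.9, 2.0, 5.0)
target = (0.8 * 2.0 + 0.9 * 5.0) / (0.8 + 0.9)  # weighted average, Eq. (10)
```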

C. NDPSO ALGORITHM IMPLEMENTATION PROCESS
The specific implementation steps of the NDPSO algorithm are as follows:
Step 1: Initialize the particle swarm, i.e., randomly assign the initial position and initial velocity of each of the m particles.
Step 2: Given the fitness function, calculate the fitness value of each particle.
Step 3: Determine the inertia weight for the current iteration from Equation (7) and the jump factor k from Equation (6).
Step 4: For each particle, compare its fitness value with the fitness value of the best position p_i it has experienced. Update p_i if the new value is better.
Step 5: For each particle, compare its fitness value with the fitness value of the best position p_g in the population. Update p_g if the new value is better.
Step 6: Perform the dissipation operation on the particles according to Equation (5).
Step 7: Update the velocity and position of each particle using Equations (1) and (2).
Step 8: Check whether the stopping condition is satisfied (a good enough position, or the maximum number of iterations). If it is satisfied, output the optimal value; otherwise, go to Step 2 and continue until the condition is satisfied.
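Putting Steps 1-8 together, a minimal NDPSO loop might look as follows. The swarm size, jump-factor range, exponent n, and velocity cap are assumed defaults for illustration, not the paper's tuned values.

```python
import numpy as np

def ndpso(f, dim, bounds, m=50, gen=2000, c1=2.0, c2=2.0,
          w_max=0.95, w_min=0.4, k_min=0.001, k_max=0.5, n=2, seed=0):
    """Sketch of the NDPSO loop (Steps 1-8); parameter defaults are assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    v_max = 0.2 * (hi - lo)                       # assumed velocity cap
    x = rng.uniform(lo, hi, (m, dim))             # Step 1: positions
    v = rng.uniform(-v_max, v_max, (m, dim))      #         and velocities
    p = x.copy()                                  # personal bests
    fp = np.apply_along_axis(f, 1, x)             # Step 2: fitness
    g = p[np.argmin(fp)].copy()                   # global best
    fg = fp.min()
    for it in range(gen):
        t = (it / gen) ** n
        w = w_max - (w_max - w_min) * t           # Step 3: Eq. (7)
        k = k_min + (k_max - k_min) * t           #          Eq. (6)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fp                          # Step 4: update p_i
        p[better], fp[better] = x[better], fx[better]
        if fp.min() < fg:                         # Step 5: update p_g
            fg = fp.min()
            g = p[np.argmin(fp)].copy()
        hit = rng.random(m) < k                   # Step 6: dissipation, Eq. (5)
        v[hit] = rng.random((hit.sum(), dim)) * v_max
        x[hit] = rng.uniform(lo, hi, (hit.sum(), dim))
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)  # Step 7: Eqs. (1)-(2)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lo, hi)
    return g, fg                                  # Step 8 via fixed iteration budget
```

For example, `ndpso(lambda z: float(np.sum(z ** 2)), dim=5, bounds=(-5.12, 5.12))` minimizes a 5-dimensional sphere function.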

IV. EXPERIMENTAL ANALYSIS
To verify the performance of the NDPSO algorithm, six numerical benchmark test functions are selected for testing in this paper, and Table 1 shows the definitions, value ranges and globally optimal solutions of these six test functions. The population size of all experiments is 50, and each function is run independently 30 times with 2000 iterations.
A large number of experiments were run to choose the maximum and minimum jump factors that give the best results. The default parameters in Ref. [3] are chosen for the basic PSO algorithm, and those in Ref. [17] for the DPSO algorithm. The results for the GA, DE, and ABC algorithms come from [18]. In Table 1, the Rosenbrock function is a complex unimodal function whose global minimum lies in a smooth, narrow parabolic valley; it is difficult for general algorithms to locate this minimum, so the Rosenbrock function is commonly used to evaluate the performance of optimization algorithms. Functions f_2 and f_3 are typical nonlinear multimodal functions; they have wide search spaces, many local minima and tall barriers, and are usually considered very difficult multimodal problems. Similarly, functions f_4 to f_6 are also commonly used to test algorithm performance.
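The six benchmarks named above are standard test functions; common formulations (which may differ in constants from the paper's Table 1) are sketched below. Each attains its global minimum of 0, at the origin for all but Rosenbrock, whose minimum is at (1, ..., 1).

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def rastrigin(x):
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20.0 + np.e)

def zakharov(x):
    s = np.sum(0.5 * np.arange(1, x.size + 1) * x)
    return np.sum(x ** 2) + s ** 2 + s ** 4

def step_fn(x):
    return np.sum(np.floor(x + 0.5) ** 2)
```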

A. PERFORMANCE COMPARISON WITH OTHER INTELLIGENT OPTIMIZATION ALGORITHMS
In this paper, the performance of the NDPSO algorithm is compared with four other intelligent optimization algorithms: GA, PSO, DE, and ABC; the details are shown in Table 2. From Table 2, we find that the NDPSO algorithm outperforms the GA, PSO, DE, and ABC algorithms in both mean and variance when optimizing the Rosenbrock, Rastrigin, Step, and Zakharov functions, and outperforms all but the ABC algorithm on the Griewank and Ackley functions. Moreover, the NDPSO algorithm operated under more severe conditions (e.g., the GA, DE, and ABC algorithms were given 50,000 iterations, while NDPSO was given only 2,000). To verify the performance of the NDPSO algorithm on the Griewank and Ackley functions more thoroughly, its iteration count on these two functions was set to the same 50,000 used for the ABC algorithm, with the other parameters unchanged. The final result shows that the optimal value found by NDPSO is 0 for both the Griewank and Ackley functions, which shows that the NDPSO algorithm has faster convergence and better search accuracy.

B. PERFORMANCE TESTING ON HIGH-DIMENSIONAL FUNCTIONS
This section optimizes the above six functions in 100 dimensions. The optimal values of all six functions are 0, and fitness is the adaptation value of each search; the closer both the current optimal value and the fitness are to 0, the higher the search accuracy of the algorithm. The specific results are shown in Figures 1-6. From Figures 1-6, we find that the optimal value search performance of the NDPSO algorithm is much better than that of the other two particle swarm algorithms when optimizing these six 100-dimensional functions: its search accuracy is much higher than that of the PSO and DPSO algorithms for a given number of iterations, and the results obtained by the NDPSO algorithm are already very satisfactory when the number of iterations is small. This fully indicates that the NDPSO algorithm has higher search accuracy and faster convergence, and it also verifies that in high dimensions the DPSO algorithm outperforms the basic PSO algorithm.
Increasing the dimensionality of a function increases the complexity of the problem and has a large impact on optimization capability. Therefore, to further investigate the performance of the proposed nonlinear dissipative particle swarm algorithm (NDPSO), the mean and variance are tested on the three standard test functions f_1 to f_3 mentioned above as the dimensionality varies from 1 to 1000 (where mean denotes the mean value and deviation the variance).
The changes are shown in Figures 7-12. From Figures 7-12, it is found that when the dimension varies from 1 to 1000, for the Rosenbrock function, the mean value of the proposed method changes from 0 to 0.1535 and its variance changes by less than 0.09, while the mean value of the basic PSO algorithm changes from 0 to 1.1995 × 10^10 and its variance changes by up to 10^17. This fully shows that the proposed algorithm not only has high search accuracy but is also very stable. Similarly, the results obtained on the Griewank function, Rastrigin function, Step function, Ackley function and Zakharov function show the same advantages.

C. OPTIMIZATION OF ARTIFICIAL POTENTIAL FIELD PARAMETERS IN PATH PLANNING FOR MOBILE ROBOTS
The artificial potential field method proposed by Khatib is a planning method obtained by introducing the physical concept of a field into robotics. Its principle is to represent the environment as the superposition of a gravitational field of the target point and repulsive fields of the obstacles, and to guide the robot to avoid obstacles and move toward the target through the virtual forces the potential field exerts on it, finally achieving path planning. However, the artificial potential field method has several drawbacks [21,22]: (1) it easily falls into local minima; (2) it is prone to jitter in front of obstacles; and (3) no path can be found between adjacent obstacles. Given these problems, this paper uses the advantages of the nonlinear dissipative particle swarm algorithm in optimization to select the potential field parameters of the artificial potential field method, in order to overcome its defects, such as local minima and the goal-unreachable problem (goals non-reachable with obstacles nearby, GNRON).
The robot path planning problem can be modeled as a constrained function optimization problem. Let the robot's motion region in the two-dimensional plane be A. If the gravitational gain coefficient α, the repulsive gain coefficient β, or the maximum influence distance ρ_0 of an obstacle on the mobile robot is chosen improperly, the resultant force F on the robot can become 0, at which point the robot is caught in a potential field trap. At the same time, the selection of the forward step length L also affects the path length and the degree of smoothing. Therefore, the parameters to be optimized by the nonlinear dissipative particle swarm algorithm are α, β, ρ_0 and L. The evaluation index is that the optimized parameters should enable the robot to take the shortest path from the starting point to the end point while bypassing the obstacles, with as few failures to reach the target point as possible within a specified number of experiments.
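As a minimal sketch of the quantities being tuned, the attractive and repulsive forces below follow the standard Khatib formulation (an assumption, since the paper's exact field equations are not reproduced here), with α, β, ρ_0 and the step length L as the parameters NDPSO optimizes; the function names are hypothetical.

```python
import numpy as np

def apf_force(q, goal, obstacles, alpha, beta, rho0):
    """Resultant force at position q: attraction toward the goal plus repulsion
    from every obstacle within influence distance rho0 (standard Khatib potentials)."""
    f = alpha * (goal - q)                # attractive force: gradient of 0.5*alpha*d^2
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho <= rho0:
            # repulsive force: gradient of 0.5*beta*(1/rho - 1/rho0)^2
            f += beta * (1.0 / rho - 1.0 / rho0) / rho ** 2 * (diff / rho)
    return f

def apf_step(q, goal, obstacles, alpha, beta, rho0, L):
    """Advance one step of length L along the resultant force direction."""
    f = apf_force(q, goal, obstacles, alpha, beta, rho0)
    norm = np.linalg.norm(f)
    return q if norm == 0 else q + L * f / norm  # norm == 0: potential field trap
```

A zero resultant force (the potential field trap mentioned above) leaves the robot stuck, which is exactly the failure mode the parameter optimization tries to avoid.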
The simulation tests are conducted in an environment of 12 × 12 unit area. The robot's starting point (start) is located at (0, 0), and the goal point (goal) is located at (9, 7.5). To facilitate planning, the robot can be treated as a mass point if each obstacle boundary is expanded outward by 1/2 of the larger of the robot's length and width, which simplifies the description of the problem. Figure 10 shows the situation in which the artificial potential field method falls into a local optimum and the robot cannot reach the target point. Figure 11 shows the path obtained with the NDPSO algorithm, in which the robot avoids the potential field trap and reaches the target point, thus overcoming the defects of the artificial potential field method, such as local minima and the goal-unreachable (GNRON) problem. Table 3 lists the parameter values obtained by optimization for this path. The results of robot path finding with the artificial potential field method using the PSO algorithm, the dissipative particle swarm algorithm (DPSO), and the nonlinear dissipative particle swarm algorithm (NDPSO) over 2000 repeated runs are shown in Table 4. Here, Min is the minimum path length over the 2000 runs, Mean is the average path length, Max is the maximum path length, Deviation is the variance, and Missgoal is the number of failures to find the target point. From Table 4, it can be seen that the proposed NDPSO algorithm achieves the smallest minimum, mean, maximum and variance of path length among the compared algorithms, and the number of failed goal searches in 2000 runs is 39 for NDPSO, versus 216 for PSO and 176 for DPSO.

V. CONCLUSION
The nonlinear dissipative particle swarm algorithm proposed in this paper rests on two improvements: 1) dissipating particles in a nonlinearly increasing manner, which avoids a large amount of unnecessary dissipation at the beginning of the iteration while devoting more effort to dissipation at the end, improving the operating efficiency and global search capability of the algorithm; and 2) adjusting the inertia weight with a nonlinearly decreasing strategy, which improves the search ability of the algorithm when no dissipation occurs. The experimental results show that the search capability of the NDPSO algorithm is significantly higher than that of the basic PSO and dissipative particle swarm algorithms, and it has advantages over other intelligent optimization algorithms. The performance of the algorithm is very stable, making it well suited to high-dimensional complex function optimization problems and mobile robot path planning problems.