A Novel Hybrid Quantum Particle Swarm Optimization With Marine Predators for Engineering Design Problems

The computational efficiency of quantum particle swarm optimization (QPSO) is significantly higher than that of traditional particle swarm optimization when solving the parameters of an optimization problem model. However, because QPSO relies only on the historical best position and the global best position, its exploration and exploitation abilities are weak. Inspired by the multi-stage search strategies of the marine predators algorithm, we propose a novel hybrid quantum particle swarm optimization algorithm with marine predators (HMPQPSO) in this paper. The evolutionary process of the algorithm is divided into two stages: in the first, the Brownian motion of the predator is introduced into the exploration, whose randomness and uniformity expand the solution space of the particles; in the second, a Levy motion strategy and a dynamic parameter strategy are used to update positions, which accelerates the convergence of the algorithm and enhances population diversity. Meanwhile, both fish aggregation devices (FADs) and an opposition-based learning strategy are incorporated to increase the diversity of the population and prevent premature particle aggregation. The algorithm is applied to distinct types of CEC2017 benchmark test functions and four multidimensional nonlinear structural design optimization problems, and compared with other recent algorithms. The results demonstrate that the convergence speed and accuracy of HMPQPSO are notably superior to those of the other algorithms.


I. INTRODUCTION
Intelligent algorithms are inspired by the laws of nature and imitate the collective behavior of social groups, and they can be applied to solve real-life optimization problems [1]. Examples include the genetic algorithm (GA) [2], differential evolution (DE) [3], particle swarm optimization (PSO) [4], the grey wolf optimizer (GWO) [5], the artificial fish-swarm algorithm (AFSA) [6], the artificial bee colony algorithm (ABC) [7], the marine predators algorithm (MPA) [8], and so on. Compared with classical optimization methods, these algorithms have the advantages of being flexible, independent of derivatives, and quick and effective when dealing with discrete variables. Owing to its few parameters and high efficiency, particle swarm optimization is one of the most widely used of these methods. It is applied in fields such as computer science, mathematics, physics, chemistry and aerospace science.

(The associate editor coordinating the review of this manuscript and approving it for publication was Geng-Ming Jiang.)
PSO is designed to simulate the predatory behavior of a flock of birds. Each individual in the population is called a particle, and each particle has two properties: velocity and position [9]. The particles are guided to the global optimal solution by adjusting their velocity and position. At the same time, quantum mechanics gradually came into the view of researchers: in the mathematical description of a quantum system, matter moves in a Hilbert space that expresses its quantum behavior. Moreover, owing to the large number of parameters to configure and the low randomness of position changes in PSO, Sun et al. [10] combined PSO with quantum evolutionary theory and proposed quantum particle swarm optimization (QPSO). The algorithm assumes that the particles have quantum behavior. Because such a particle's position and velocity cannot be determined accurately at the same time, its state is represented by a wave function. By solving Schrödinger's equation and applying the Monte Carlo method, Sun et al. obtained the probability density function of a particle appearing at a given point, and from it the position update equation of the particle.
Although applying Schrödinger's equation and the Monte Carlo method to locate the particles is an advantage of QPSO, its overreliance on the historical optimal solution leads to early aggregation of particles. This drawback is caused by the imbalance between exploration and exploitation, and how to balance the two reasonably is a key question in the study of intelligent algorithms today. In QPSO, optimal selection of parameters and reasonable control of population diversity are important factors in improving the speed and precision of convergence. Fortunately, the marine predators algorithm [8] has exactly such advantages and can help QPSO balance its exploration and exploitation capabilities; under these conditions, the original QPSO algorithm is improved in this paper. The marine predators algorithm (MPA) was proposed by Afshin Faramarzi et al. Specifically, marine predators use a Levy strategy in environments with low prey concentration and a Brownian motion strategy in environments with abundant prey. MPA is divided into three stages based on the speed ratio between predator and prey, using segmented learning to find the best action strategy for both predators and prey: at a low speed ratio (v = 0.1), the best strategy for the predator is Levy movement while the prey performs Brownian or Levy motion; at the unit speed ratio (v = 1), if the prey moves in a Levy manner, the best strategy for the predator is Brownian motion; at a high speed ratio (v ≥ 10), the best strategy for the predator is not to move at all, while the prey performs either Brownian or Levy motion. In other words, the movement of marine predators is determined by the prey concentration in the environment. By also considering environmental effects that change the behavior of predators and prey, whether natural (eddy formation) or anthropogenic, the algorithm can more easily escape local optima and find the global optimal solution.
In this paper, we apply the multi-stage policy mechanisms of the marine predators algorithm to QPSO and propose a novel quantum particle swarm optimization algorithm, namely the hybrid quantum particle swarm optimization with marine predators (HMPQPSO). The algorithm borrows a different strategy from each stage of the marine predators algorithm. In the early stage, the particles perform Brownian motion in order to expand their solution space, and randomly selected particle positions are used in the update equations to reduce local aggregation of particles; this stage makes the motion of the particles more random. In the later stage, Pd with a dynamic parameter strategy is chosen to control the position update of the particles so as to speed up the search, and a Levy motion strategy with self-adjusting weights is introduced. This allows a particle to search further for the optimal value within a small neighborhood with high probability, while retaining a small probability of jumping to other regions, which prevents stagnation. In addition, the opposition-based learning strategy and the fish aggregation devices strategy are used to expand the diversity of the population. This paper is organized as follows. The literature review and related work on Brownian motion and the Levy flight strategy are presented in Section 2. Section 3 reviews the classical QPSO and MPA, and Section 4 details the proposed HMPQPSO. Section 5 describes the test problems and experimental setup used to analyze the algorithm and compare its performance with other algorithms. Section 6 applies it to four multidimensional nonlinear structural design optimization problems. Finally, a short conclusion is given in Section 7.

II. RELATED WORK

A. LITERATURE REVIEW
The PSO's susceptibility to premature convergence is one of its most significant flaws. The quantum particle swarm optimization algorithm somewhat alleviates this problem, but it still tends to fall into local optima. Research on improving QPSO mainly follows two directions: 1) the model coefficients; 2) hybrid algorithms.
There is no fixed parameter setting that fits all problems, or even the different phases of a single problem. Drawing on the strengths of other algorithms and combining QPSO with them is also a popular research direction. The relationships between individual and global optimal positions, with their different interaction patterns, are important guarantees of population diversity and algorithm convergence [11]. To improve the exploration and exploitation of QPSO, many scholars have conducted in-depth research on it.
In terms of model coefficients, R. K. Agrawal et al. proposed the enhanced quantum particle swarm optimization (e-QPSO) [12]. They introduced a new parameter gamma to expand the diversity of the population, and improved the original parameter α so as to adaptively balance the historical best position against the global best position. In [13], a particle search capability factor that dynamically adjusts the shrinkage-expansion coefficient is proposed; this improvement is called dynamic quantum particle swarm optimization (DQPSO). There is also a modified quantum particle swarm optimization algorithm (MQPSO) [14]. It uses a dynamic parameter strategy with the goal of simplifying the algorithm, together with two further approaches that let the algorithm trade off exploration against exploitation. The quantum entanglement stimulated particle swarm optimization algorithm (QEPSO) [15] introduces entangled states while using quanta for local search. These approaches effectively reduce the algorithm's heavy dependence on the best positions, and greatly increase its optimization speed.
In terms of hybrid algorithms, the shortcomings of QPSO can be compensated for by combining the advantages of other algorithms. Xiaoyan Liu et al. introduced two strategies into QPSO, Levy flight (LF) and straight flight (SF) [16], which allows high-dimensional problems to be solved better. The hybrid QPSO algorithm [17] based on an advanced cuckoo search strategy and adaptive Gaussian has worked well in solving ordinary differential equations. Chun Xia Yang et al. proposed the hybrid QPSO algorithm (HQPSO) [18], which solves the incomplete inventory problem more efficiently, and with higher accuracy, by exploiting the complementary advantages of a linear reconstruction algorithm and a stochastic optimization algorithm. In addition, there is a hybrid binary tournament QPSO algorithm (AGQPSO) in [19], a hybrid metaheuristic based on binary ranking.
Even though QPSO has been studied by many scholars, its exploration and exploitation abilities can still be improved. In MPA, marine predators use the randomness and uniformity of Brownian motion in the early stage to expand the exploration space of the solution and improve the exploration ability of the algorithm; in the later stage they use Levy motion to focus on the prey. Owing to the characteristics of Levy motion, the predators can carry out both small- and large-range exploration. There are also fish aggregation devices (FADs), which give the predators more opportunities to find prey. These features make up exactly for the weaknesses of QPSO, so combining the two algorithms is a natural choice.
The following part introduces the two key movements of the predators in MPA, namely Brownian motion and Levy motion.

B. BROWNIAN MOTION
Brownian motion is the ceaseless, irregular motion of particles suspended in a liquid or a gas. It is a continuous stochastic process with normally distributed independent increments, and it is also a Markov process. Its step size is drawn from a normal (Gaussian) distribution. The probability density function controlling the motion at a point x is

f(x; µ, σ²) = (1/√(2πσ²)) exp(−(x − µ)² / (2σ²)),

where the mean µ is 0 and the variance σ² is 1 in the PDF formula.
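As a concrete illustration, Brownian increments can be sampled directly from the standard normal distribution described above. This is a minimal sketch, not code from the paper; the function name and usage are ours.

```python
import numpy as np

def brownian_steps(n_steps, dim, seed=None):
    """Brownian-motion increments: i.i.d. draws from N(0, 1),
    i.e. the zero-mean, unit-variance Gaussian PDF given above."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=1.0, size=(n_steps, dim))

# A 2-D Brownian trajectory is the running sum of its increments.
steps = brownian_steps(1000, 2, seed=0)
trajectory = np.cumsum(steps, axis=0)
```

Because every step is drawn from the same unit-variance Gaussian, the step sizes are uniform and controllable, which is the property the text exploits for exploration.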

C. LEVY MOTION
Levy flight is a random-walk model whose step lengths follow the Levy distribution. Levy [20] proposed this probability distribution, and the resulting process is Markovian. Levy flight is characterized by short-distance exploration with large probability and long-distance exploration with small probability. It has been shown experimentally that many animals and marine organisms follow the Levy flight pattern as their optimal foraging strategy [21], [22]. A Levy step can be generated by Mantegna's algorithm as

Levy = η / |v|^(1/γ).

The parameter γ = 1.5 is the power-law component; η and v obey normal distributions, η ∼ N(0, σ_η²) and v ∼ N(0, 1).

The standard deviation corresponding to the above equation satisfies

σ_η = [Γ(1 + γ) sin(πγ/2) / (Γ((1 + γ)/2) · γ · 2^((γ−1)/2))]^(1/γ).
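A hedged sketch of Mantegna's generator for such Levy steps (the helper name is ours; γ = 1.5 as in the text):

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n_steps, dim, beta=1.5, seed=None):
    """Mantegna's algorithm: step = eta / |v|**(1/beta), with
    eta ~ N(0, sigma_eta^2) and v ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    sigma_eta = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    eta = rng.normal(0.0, sigma_eta, size=(n_steps, dim))
    v = rng.normal(0.0, 1.0, size=(n_steps, dim))
    return eta / np.abs(v) ** (1 / beta)

steps = levy_steps(2000, 2, seed=1)
```

Most generated steps are short, but the heavy tail occasionally produces a very large stride, which is exactly the behavior described in the text.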
In [23], Levy flight is considered the best search mode for marine predators in natural environments with low prey concentration.
The trajectories of Brownian and Levy motion in 2-D and 3-D space are depicted in Figure 1. From the figures, we can see that Brownian motion covers a wider range of random positions in space than Levy motion. In other words, Brownian motion can not only explore areas that Levy flight cannot reach but also cover a large portion of the solution space, and its step size is more uniform and controllable than Levy's. Brownian motion is therefore better suited to the exploration process of the algorithm, as it conveniently expands the range of random particle positions in the solution space.
In addition, it can be clearly observed in the figure that Levy flight makes smaller steps at high frequency and larger steps at low frequency. Like Brownian motion, it is random in nature, which is exploited when Levy flight is applied in the algorithm. Searching within a smaller area enhances the local exploitation of the algorithm: this precise and deep search can find the global optimal position near the historical optimal position. Because Levy flight takes large strides with small probability, it also helps the algorithm probabilistically jump out of local optima and prevents premature particle aggregation. Combining the two motion strategies makes full use of the solution space while locating the optimal solution more locally and precisely.

III. CLASSICAL ALGORITHMS

A. PARTICLE SWARM OPTIMIZATION ALGORITHM
The particle swarm optimization proposed by Eberhart and Kennedy in 1995 [4] is a swarm intelligence algorithm. The velocity and position updates are the core of the PSO algorithm:

V_i = ω V_i + c1 r1 (Pbest_i − X_i) + c2 r2 (Gbest − X_i),
X_i = X_i + V_i,

where V_i = [V_{i,1}, V_{i,2}, . . . , V_{i,D}] is the velocity of particle i and X_i = [X_{i,1}, X_{i,2}, . . . , X_{i,D}] is its position (i = 1, 2, . . . , N, where N is the population size and D is the dimensionality of the particle). ω ≥ 0 is the inertia weight, the coefficient maintaining the original velocity: a small ω enhances the local search ability of PSO, while a large ω enhances its global search ability. c1 and c2 are the learning factors, which regulate the step of a particle's flight toward the historical optimal solution (Pbest) and the global optimal solution (Gbest), respectively, and are generally set to c1 = c2 = 2; r1 and r2 are random numbers in [0, 1]. Pbest_i = [P_{i,1}, P_{i,2}, . . . , P_{i,D}] is the individual optimum and Gbest = [G_1, G_2, . . . , G_D] is the population optimum. The stochastic cognitive component is computed from the vector difference between the individual best solution found so far and the current solution, while the stochastic social component is computed from the vector difference between the global best solution found so far and the current solution; both force the i-th solution to move toward the best solutions found so far. The PSO algorithm uses Pbest and Gbest to guide the motion of the particle swarm and thus achieves a high convergence rate.
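The velocity and position updates above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; it uses the common constriction-style constants (w = 0.729, c1 = c2 ≈ 1.494) rather than c1 = c2 = 2 so that the toy run converges reliably.

```python
import numpy as np

def pso_step(X, V, Pbest, Gbest, w=0.729, c1=1.494, c2=1.494, seed=None):
    """One PSO iteration: V <- w*V + c1*r1*(Pbest - X) + c2*r2*(Gbest - X),
    then X <- X + V, on an (N x D) population."""
    rng = np.random.default_rng(seed)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (Pbest - X) + c2 * r2 * (Gbest - X)
    return X + V, V

# Toy run on the sphere function f(x) = sum(x_j^2).
f = lambda X: (X ** 2).sum(axis=1)
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(20, 3))
V = np.zeros_like(X)
Pbest = X.copy()
Gbest = Pbest[np.argmin(f(Pbest))].copy()
for t in range(200):
    X, V = pso_step(X, V, Pbest, Gbest, seed=t)
    improved = f(X) < f(Pbest)
    Pbest[improved] = X[improved]
    Gbest = Pbest[np.argmin(f(Pbest))].copy()
```

After 200 iterations the swarm concentrates near the sphere's minimum at the origin, illustrating the Pbest/Gbest guidance mechanism.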

B. QUANTUM PARTICLE SWARM OPTIMIZATION ALGORITHM
The quantum particle swarm optimization was proposed by Sun et al., who assumed that all particles have quantum behavior. By solving the Schrödinger equation and using the Monte Carlo method, they obtained the probability density function of a particle's appearance at a given point, and from it the position equation of the particle. Since the position and velocity of a particle with quantum behavior cannot be determined accurately at the same time, a wave function is used to represent the particle's state. The specific update equations are

mbest = (1/M) Σ_{i=1}^{M} Pbest_i,
p_i = ϕ Pbest_i + (1 − ϕ) Gbest,
X_i = p_i ± α |mbest − X_i| ln(1/u),

where ϕ and u are random numbers in [0, 1], M is the population size, mbest is the average optimal position of the particles, and α is the shrinkage-expansion factor that controls the convergence rate. In most articles, QPSO iterates with α given by a linear equation over [0.5, 1]:

α = 0.5 + 0.5 × (Max_Iter − Iter) / Max_Iter,

where Iter is the current iteration number and Max_Iter is the maximum number of iterations.
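A minimal sketch of the QPSO update described above (illustrative names, not the authors' code), run on a toy sphere problem:

```python
import numpy as np

def qpso_step(X, Pbest, Gbest, alpha, seed=None):
    """One QPSO iteration: mbest is the mean personal best, p the local
    attractor, and X <- p +/- alpha * |mbest - X| * ln(1/u)."""
    rng = np.random.default_rng(seed)
    mbest = Pbest.mean(axis=0)
    phi, u = rng.random(X.shape), rng.random(X.shape)
    p = phi * Pbest + (1 - phi) * Gbest
    sign = np.where(rng.random(X.shape) < 0.5, 1.0, -1.0)
    return p + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u)

# Toy run on the sphere function, with alpha decaying linearly from 1.0 to 0.5.
f = lambda X: (X ** 2).sum(axis=1)
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(20, 3))
Pbest = X.copy()
Gbest = Pbest[np.argmin(f(Pbest))].copy()
max_iter = 300
for t in range(max_iter):
    alpha = 0.5 + 0.5 * (max_iter - t) / max_iter
    X = qpso_step(X, Pbest, Gbest, alpha, seed=t)
    improved = f(X) < f(Pbest)
    Pbest[improved] = X[improved]
    Gbest = Pbest[np.argmin(f(Pbest))].copy()
```

Note that QPSO keeps no velocity; the ln(1/u) factor alone controls how far a particle samples around its attractor p.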
Algorithm 1 HMPQPSO
1: Initialize the particle parameters according to (7), including x_0 and the individual optimal position P_0;
2: Calculate the fitness value f(x_i), where i = 1, . . . , N;
3: Store the global best position Gbest;
4: Using (14), apply marine memory and FADs, and update the position x_{t+1} and fitness value f(x_{t+1});
5: while t < Max_Iter do
6:   % the first stage %
7:   if t < (1/2) Max_Iter then
8:     for i = 1, . . . , N do
9:       Update the position x_{t+1} using (19);
10:    end for
11:  end if
12:  % the second stage %
13:  if t > (1/2) Max_Iter then
14:    for i = 1, . . . , N do
15:      Update P_d using (20) and the position x_{t+1} using (21);
16:    end for
17:  end if
18:  Update Pbest and the fitness value f(x_{t+1});
19:  Update Gbest_new using the opposition-based learning strategy (17);
20:  Using (14), apply marine memory and FADs, and update the position x_{t+1} and fitness value f(x_{t+1});
21: end while

C. MARINE PREDATORS ALGORITHM

First, the initial positions are uniformly distributed in the search space, as in other intelligent algorithms:

X_0 = X_min + λ (X_max − X_min),

where X_max and X_min are the maximum and minimum values of the variables, respectively, and λ is a random number in the range [0, 1].
At the beginning of MPA, an Elite matrix and a Prey matrix are constructed. The Elite matrix specifies the best predators, whose purpose is to supervise the search and find the prey based on the location information of the prey. It is an n × d matrix whose rows are copies of the top predator vector X^I:

Elite = [X^I; X^I; . . . ; X^I].
X^I is the elite (top) predator vector. Unlike in other algorithms, the elite predators in the matrix are replaced whenever a better predator emerges during evolution, since while a predator is looking for prey, the prey is itself acting as a predator in search of its own food. The prey is defined in another matrix, the Prey matrix, according to which the predators update their positions; the two matrices are linked in that the Elite matrix is constituted from the initialized prey. The Prey matrix is an n × d matrix whose i-th row is the position vector of the i-th prey:

Prey = [X_{1,1} . . . X_{1,d}; . . . ; X_{n,1} . . . X_{n,d}].
These two matrices play a crucial role in the whole optimization process. Considering the nature of the movement of predators and prey, MPA is divided into three phases based on speed ratios. The specific steps are as follows.
1) High-speed ratio, or prey moving faster than the predators. This scenario occurs mainly in the initial phase, when population exploration matters more than exploitation. In the case of a high speed ratio (v ≥ 10), or prey moving faster than the predator, the best strategy for the predator is not to move at all:

stepsize_i = R_B ⊗ (Elite_i − R_B ⊗ Prey_i),
Prey_i = Prey_i + P · R ⊗ stepsize_i,   while t < (1/3) Max_Iter,

where t denotes the current iteration number and Max_Iter the maximum number of iterations. R_B denotes Brownian motion and is a vector of random numbers drawn from a normal distribution. ⊗ represents term-by-term multiplication, and multiplying R_B with the prey simulates the motion of the prey. P = 0.5, and R is a vector of uniform random numbers in [0, 1].
Step 1) runs during the first third of the iterations and provides a powerful global search capability when the step size is large or the movement is fast. 2) Unit speed ratio, or prey moving at almost the same speed as the predators. At the unit speed ratio both predators and prey are searching for food; in this phase both exploration and exploitation are vital, with the prey responsible for exploitation and the predators for exploration. According to the movement rules of marine predators, at v ≈ 1, if the prey moves in a Levy manner, the best strategy for the predator is Brownian motion. The first half of the population therefore moves as follows:

stepsize_i = R_L ⊗ (Elite_i − R_L ⊗ Prey_i),
Prey_i = Prey_i + P · R ⊗ stepsize_i,   for i = 1, . . . , n/2.
R_L represents Levy motion, a vector of random numbers based on the Levy distribution. R_L ⊗ Prey_i simulates the prey's movement, and this part is mainly responsible for exploitation. The second half of the population moves as follows:

stepsize_i = R_B ⊗ (R_B ⊗ Elite_i − Prey_i),
Prey_i = Elite_i + P · CF ⊗ stepsize_i,   for i = n/2, . . . , n,

where CF = (1 − t/Max_Iter)^(2t/Max_Iter) is an adaptive parameter that controls the movement step of the predator. R_B ⊗ Elite_i indicates that the trajectory of the predator is Brownian, and the prey updates its position according to the movement of the predators. 3) Low speed ratio, or predators moving faster than the prey.
This scenario generally occurs in the last stage of the evolutionary process and mainly improves the exploitation capacity of the population. The best strategy for the predators in this stage is Levy movement:

stepsize_i = R_L ⊗ (R_L ⊗ Elite_i − Prey_i),
Prey_i = Elite_i + P · CF ⊗ stepsize_i,   while t > (2/3) Max_Iter,

where R_L ⊗ Elite_i simulates the movement of the predators to help update the position of the prey.
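The three phase updates can be sketched together as below. This is a reconstruction under our reading of the equations (function and variable names are ours), with P = 0.5 and CF as defined above.

```python
import numpy as np
from math import gamma, pi, sin

def _levy(shape, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-distributed random numbers.
    rng = np.random.default_rng(rng)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

def mpa_phase_update(prey, elite, t, max_iter, P=0.5, seed=None):
    """One MPA position update, dispatching on the current phase."""
    rng = np.random.default_rng(seed)
    n, d = prey.shape
    RB = rng.normal(size=(n, d))          # Brownian random vector
    RL = _levy((n, d), rng=rng)           # Levy random vector
    R = rng.random((n, d))
    CF = (1 - t / max_iter) ** (2 * t / max_iter)
    new = prey.copy()
    if t < max_iter / 3:                  # phase 1: pure exploration
        new = prey + P * R * (RB * (elite - RB * prey))
    elif t < 2 * max_iter / 3:            # phase 2: half prey (Levy), half predator (Brownian)
        h = n // 2
        new[:h] = prey[:h] + P * R[:h] * (RL[:h] * (elite[:h] - RL[:h] * prey[:h]))
        new[h:] = elite[h:] + P * CF * (RB[h:] * (RB[h:] * elite[h:] - prey[h:]))
    else:                                 # phase 3: pure exploitation around the elite
        new = elite + P * CF * (RL * (RL * elite - prey))
    return new

prey = np.zeros((10, 4))
elite = np.ones((10, 4))
outs = [mpa_phase_update(prey, elite, t, 90, seed=t) for t in (0, 45, 89)]
```

The phase boundaries at one third and two thirds of the iterations match the speed-ratio schedule described in the text.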

D. VORTEX FORMATION AND FISH AGGREGATION DEVICES (FADs) EFFECTS
For marine predators and prey, the influence of environmental factors on their behavior must also be considered. In the marine predators algorithm, FADs are regarded as local optima, and their effect is modeled as trapping the search at these points in the search space. Modeling the FADs effect lets the population make longer jumps and thus avoid stagnation. The FADs effect is expressed as (14):

Prey_i = Prey_i + CF [X_min + R ⊗ (X_max − X_min)] ⊗ U,   if r ≤ FADs,
Prey_i = Prey_i + [FADs (1 − r) + r] (Prey_{r1} − Prey_{r2}),   if r > FADs,   (14)

where r is a random number in [0, 1] and FADs is a constant that affects the optimization process, usually FADs = 0.2. r1 and r2 are two random subscripts of Prey, 1 ≤ r1, r2 ≤ n. U is a binary vector containing 0s and 1s: with random a random number in [0, 1], each element of U_i is defined as 0 if random < FADs and 1 otherwise.
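A sketch of the FADs jump as we reconstruct it (names are ours; FADs = 0.2):

```python
import numpy as np

def fads_effect(prey, x_min, x_max, CF, FADs=0.2, seed=None):
    """Apply the FADs long-jump operator of (14) to the whole Prey matrix."""
    rng = np.random.default_rng(seed)
    n, d = prey.shape
    r = rng.random()
    if r <= FADs:
        # Binary mask U: element is 0 where random < FADs, else 1.
        U = (rng.random((n, d)) >= FADs).astype(float)
        return prey + CF * (x_min + rng.random((n, d)) * (x_max - x_min)) * U
    # Otherwise jump along the difference of two random prey rows.
    r1, r2 = rng.integers(0, n, size=2)
    return prey + (FADs * (1 - r) + r) * (prey[r1] - prey[r2])

prey = np.random.default_rng(0).uniform(-5, 5, size=(8, 3))
jumped = fads_effect(prey, -5.0, 5.0, CF=0.5, seed=0)
```

Either branch perturbs every prey at once, which is what gives the population its occasional long-range jump out of a stagnant region.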

E. MARINE MEMORY
This step mainly updates the Elite matrix. The fitness of each individual in the Prey matrix is calculated; if it is better than the fitness of the corresponding position in the Elite matrix, that individual replaces the corresponding elite individual. The fitness of the best individual in the whole Elite matrix is then evaluated: if it meets the stopping requirement, the algorithm stops; otherwise, the iteration continues.
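The memory step can be written as a simple row-wise replacement. This is a sketch for a minimization problem; the names are ours.

```python
import numpy as np

def marine_memory_update(elite, elite_fit, prey, prey_fit):
    """Replace an elite row whenever the corresponding prey has
    better (lower) fitness; return the updated matrix and fitnesses."""
    improved = prey_fit < elite_fit
    elite = np.where(improved[:, None], prey, elite)
    elite_fit = np.where(improved, prey_fit, elite_fit)
    return elite, elite_fit

f = lambda X: (X ** 2).sum(axis=1)
elite = np.ones((4, 2))
prey = np.zeros((4, 2))
prey[0] = [2.0, 2.0]          # worse than the elite row it challenges
elite2, fit2 = marine_memory_update(elite, f(elite), prey, f(prey))
```

Here rows 1-3 of the Elite matrix are replaced (prey fitness 0 beats 2) while row 0 is kept (8 loses to 2).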

IV. HYBRID QUANTUM PARTICLE SWARM OPTIMIZATION WITH MARINE PREDATORS

A. OPPOSITION-BASED LEARNING STRATEGY ENHANCES POPULATION DIVERSITY
The opposition-based learning strategy (OBL) has been applied in a variety of intelligent algorithms. The essence of OBL is to generate opposite candidate solutions that participate in population evolution, with the aim of expanding the search range of the population. The core idea of OBL is to obtain optimal solutions using both an estimate and its opposite estimate.
Assuming there is a feasible solution κ in the search space, the opposition-based learning strategy can be expressed as

κ̂^t_{ij} = rand · (m^t_{ij} + n^t_{ij}) − κ^t_{ij},

where κ^t_{ij} is the position of the i-th particle κ in the j-th dimension at the t-th iteration, rand is a random number in [0, 1], t = 1, . . . , Max_Iter; m^t_{ij} is the historical minimum value of the i-th particle in the j-th dimension at the t-th iteration, and n^t_{ij} is the corresponding historical maximum value. In this paper OBL is applied to obtain Gbest_new, where τ is drawn from a normal distribution.
In the initial stage, the global optimal solution of the particle is updated using OBL, with the aim of constructing a new optimal solution whose fitness is re-evaluated, thereby improving the quality of the population.
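The opposite-solution construction can be sketched as below. This illustrates the generalized OBL formula only; the τ-based Gbest_new variant from the text is not reproduced here.

```python
import numpy as np

def opposite_solution(kappa, m, n, seed=None):
    """Generalized OBL: the opposite of kappa within per-dimension
    bounds [m, n] is rand * (m + n) - kappa."""
    rng = np.random.default_rng(seed)
    return rng.random(kappa.shape) * (m + n) - kappa

# With symmetric bounds (m + n = 0) the opposite reduces to reflection.
kappa = np.array([[4.0, -3.0]])
m = np.array([-5.0, -5.0])
n = np.array([5.0, 5.0])
opposite = opposite_solution(kappa, m, n, seed=0)
```

In practice both κ and its opposite are evaluated and the better one is kept, which is how OBL widens the search at low cost.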

B. UPDATE STRATEGIES
In accordance with the movement strategy of the predators, the algorithm is primarily responsible for exploration at the start. In the first stage, the solution search space is expanded. In the middle stage, exploration is incrementally replaced by exploitation, and the two proceed simultaneously. The later stage is largely responsible for exploitation, preventing particles from moving too quickly and thereby missing the global optimal solution. This segmented processing balances the capabilities of exploration and exploitation, an idea also applied in other heuristic algorithms. In this paper, the predator movement strategy is applied to the position update of quantum particle swarm optimization, and the predator strategy is adapted to the characteristics of the particle swarm algorithm so that the new algorithm combines the advantages of both more conveniently.
In the evolutionary process of this article, the particles are updated in two iterative stages. In each stage the population chooses a different update method.
When particle i is in the first half of the iterative process, Brownian motion can cover a region with more uniform and controlled steps than the Levy strategy, so all particles in this stage imitate the prey and perform Brownian motion. This stage is responsible for finding more of the search space. To make the motion of the particles more random, the position Xrand of a randomly selected particle is introduced into the update equation at this stage, which further expands the search space; using random particle positions reduces the influence of the historical and global optima on particle updates and prevents early particle aggregation and premature convergence in the solution space. The update equation of this phase is given in (19), where t denotes the current iteration number and F is the weight coefficient representing the degree to which the extracted position information of other particles is used. F is a normally distributed random number with mean 0.5 and variance 0.3; α is the shrinkage-expansion factor; R_B represents Brownian motion, a vector of random numbers based on a normal distribution; and Xrand is the position of a random particle different from the current one. In this stage the amplitude of the particles' motion is much higher than in QPSO, meaning the motion of the particles is more random.
When particle i is in the second half of the iterative process, it needs to find nearby food. If the particles remain confined to their own neighborhoods, however, excellent solutions farther away will be ignored. At this point the Levy strategy can cover a region with more precise and deeper steps than Brownian motion, so all particles in this stage imitate the prey and follow the Levy strategy. This stage is responsible for finding the global optimal solution in the vicinity of the local optimal solution; in short, it is responsible for exploitation. At the same time, the Levy strategy occasionally takes large steps with small probability. This feature allows particles stuck in a local optimum to leave that region through Levy's random-walk behavior and prevents stagnation: the particles probabilistically undertake long-distance exploration, which can increase their opportunities to find better solutions. The corresponding update strategy is given in (20) and (21) below.
While Iter > (1/2) Max_Iter, the first improvement concerns P_d: we introduce the dynamic parameter λ, which varies linearly from λ_i = 2.5 to λ_f = 0.5 as the iterations proceed. That is, as the number of iterations increases, the correlation of P_d with Pbest gradually decreases while its correlation with Gbest gradually increases, which is consistent with the particles' greater reliance on the social component in the exploitation phase. β is a random number in [0, 1].
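Under our reading, λ interpolates linearly between λ_i = 2.5 and λ_f = 0.5 over the run (the exact P_d formula (20) is not reproduced here); the schedule is simply:

```python
def dynamic_lambda(t, max_iter, lam_i=2.5, lam_f=0.5):
    """Linear schedule from lam_i at t = 0 to lam_f at t = max_iter."""
    return lam_i + (lam_f - lam_i) * t / max_iter
```

For example, halfway through a 100-iteration run the parameter takes the midpoint value 1.5.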
R_L represents Levy motion, a vector of random numbers based on the Levy distribution; this part is mainly responsible for exploitation. The position update in this phase is controlled by introducing self-adjusting weights ψ into the position update formula. Finally, the position is further adjusted using the learning parameter ξ = 0.5 with the aim of controlling the distance moved. The control parameters in this step are designed to prevent the Levy movement from straying too far from the optimal solution.
The way the position is updated in the different stages plays a crucial role in the search process of QPSO. In the first half of the iterations, Brownian motion provides the particles with more search area owing to the uniformity of its step size, while random particle selection makes particle activity more random and flexible. In the second half of the iterations, the Levy motion strategy and self-adjusting weights are used for the update, which further speeds up the particles' convergence to the global optimum; and because Levy flight takes large steps with small probability, the particles are not limited to a small piece of the solution space. On top of this, the opposition-based learning strategy increases the diversity of the population, and the FADs effect is used throughout the iterative process to prevent premature aggregation of particles. This design effectively balances the two properties of exploration and exploitation and greatly improves the performance of QPSO. In other words, the algorithm in this paper both expands the solution space of QPSO and speeds up the particles' convergence to the optimal solution.
The pseudocode for HMPQPSO is given in Algorithm 1, and the algorithm flowchart is shown in Figure 2.

C. COMPUTATIONAL COMPLEXITY OF HMPQPSO
Compared with the classical QPSO, the QPSO with the predators model and OBL requires additional operator computations. In each generation, diversity measurements are taken to determine population diversity and, consequently, add to the computational complexity. The population diversity depends mainly on the distance measure, whose cost is O(D × N × (N − 1)/2). It is worth noting that the total computational complexity of the predators model is

O(T_max × N × (D + Cof)),

where T_max is the maximum number of iterations, N is the population size, Cof is the cost of one function evaluation, and D is the dimensionality.
The computational complexity of the opposite operator is O(D × N) when all individuals in the population are considered. The complexity of classical QPSO is O(T_max × D × N), so the total computational complexity of HMPQPSO is

O(T_max × (D × N + D × N × (N − 1)/2 + N × Cof)),

which can be simplified to O(T_max × D × N²). In addition, the other additional computational costs are negligible compared with the fitness evaluations.

V. EXPERIMENTAL VERIFICATION AND ANALYSIS

A. BASELINE TEST FUNCTIONS
In this paper, the performance of the new algorithm is evaluated on the CEC2017 benchmark functions. Of the 30 functions, 29 are selected for testing; F2 is excluded because it exhibits strong instability in high dimensions. The CEC2017 benchmark functions fall into four main categories. The single-peaked functions (F1-F3) exhibit narrow-ridge properties; they are non-separable and smooth, and challenge the exploitation capability of the algorithm. The simple multi-peaked functions (F4-F10) have many local optimal solutions and test the exploration ability. The hybrid functions (F11-F20) have the smallest deviations between local and global optima. The composition functions (F21-F30) combine all the properties of the above functions and are designed to test the overall performance of the algorithm. The details are shown in Table 1. Compared with other test suites, CEC2017 contains more functions and better reflects the performance of an algorithm.

B. ALGORITHMS FOR COMPARISON AND PARAMETER SETTINGS
All algorithms were implemented in MATLAB 9.10 (R2021a). To evaluate the algorithms fairly, we select several representative, state-of-the-art QPSO and PSO variants. The specific algorithms and their parameters are shown in Table 2. The population size is 100. The dimensions are set to 10 and 30, with 1000 and 3000 iterations, respectively. The starting search points are randomly generated within the same initialization range. To reduce the influence of other variables on the results, these parameters and settings are applied uniformly to all algorithms. To make the results statistically meaningful, every algorithm was run 30 times independently on each benchmark function, and the mean (Mean) and standard deviation (Std) were recorded. The best results among the nine algorithms are indicated in bold.
The Wilcoxon signed-rank test, a non-parametric counterpart of the paired t-test, is used for pairwise comparisons so that the conclusions are statistically sound. It compares the performance of the proposed algorithm with each of the other algorithms. At the chosen significance level, the symbols ''+'', ''−'' and ''='' indicate that the improved particle swarm optimization algorithm is significantly better than, significantly worse than, or statistically equivalent to the compared algorithm. The published parameters of these algorithms are kept consistent throughout this paper.
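The pairwise comparison described above can be sketched as follows. This minimal version computes only the signed-rank statistic W = min(W+, W−) for two paired result vectors; the paper's tables additionally report significance symbols, which require the corresponding critical values or p-values.

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples a, b.
    Zero differences are discarded; tied |d| values get average ranks."""
    d = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):
        j = i
        # extend j over the group of equal |d| values (a tie group)
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_plus, w_minus)
```

For instance, paired results with differences [1, -2, 3, -4, 5] give W+ = 9 and W− = 6, so W = 6.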

C. 10-D EXPERIMENTAL RESULTS
To demonstrate the advantages of HMPQPSO, this section compares the test results of HMPQPSO and the other algorithms on the 10-D CEC2017 benchmark functions. The results of the nine algorithms are shown in Tables 5 and 6, and representative functions are selected for convergence speed and accuracy analysis. Figure 3 shows the convergence curves of the nine algorithms on the CEC2017 benchmark functions, where the vertical coordinate is f(x) − f(x*).
The number of function evaluations can be expressed as FEs = N × Max_Iter, where N is the population size and Max_Iter is the number of iterations. The final results of the 10-D function search are shown in Tables 5 and 6. For the unimodal functions, HMPQPSO obtains the best performance on both F1 and F3; its mean and standard deviation are much better than those of the other algorithms. For the F3 function in particular, HMPQPSO converges directly to the optimal value, an effect the other algorithms cannot achieve. The convergence of each algorithm on the F3 function can be seen in Figure 3(a); the convergence speed of our algorithm is clearly faster than that of the other algorithms. The second-ranked algorithm is BLPSO; however, its convergence curve is segmented, and it fails to reach the accuracy of HMPQPSO by the end.
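The evaluation budget implied by these settings (N = 100 with 1000 or 3000 iterations) can be computed directly. This sketch assumes one evaluation per particle per iteration and ignores any extra evaluations incurred by the OBL or FADs operators.

```python
def total_evaluations(pop_size, max_iter):
    """FEs = N * Max_Iter: every particle is evaluated once per iteration
    (extra evaluations from OBL or FADs restarts are not counted here)."""
    return pop_size * max_iter
```

With the paper's settings, the 10-D budget is total_evaluations(100, 1000) = 100,000 evaluations and the 30-D budget is total_evaluations(100, 3000) = 300,000.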
For the simple multimodal functions, HMPQPSO obtains good results on all of F4-F10 and even finds the optimal value on F9. Except for F10, where it is less stable than BLPSO, HMPQPSO shows strong optimization ability throughout. As Figure 3(c) shows, the accuracy of HMPQPSO and BLPSO is very similar, as is that of GaussQPSO and DQPSO; yet among these four algorithms, HMPQPSO converges earliest.
HMPQPSO also performs well on the hybrid functions. On F16, BLPSO performs best, ranking first in both mean and standard deviation, with HMPQPSO second. On F17, HMPQPSO has the best mean, while BLPSO has the best standard deviation; that is, HMPQPSO is more accurate on F17, while robustness is BLPSO's strength. Except for these two functions, our algorithm achieves the best results on all the other hybrid functions. The results for F15 are shown in Figure 3(f): the final values of BLPSO and HMPQPSO are similar, but HMPQPSO needs far less effort to reach the target, whereas BLPSO is still struggling to find the target value in the middle and late stages.
On the composition functions, HMPQPSO maintains good performance. From F21 to F30, our algorithm ranks first in optimal mean value. Although it shows some volatility on F21, F24, F25, F26 and F28, the overall strength of HMPQPSO remains outstanding. As Figure 3(i) shows, the accuracy and speed of HMPQPSO are firmly at the front. This indicates that HMPQPSO dominates on functions with composite characteristics, and that FADs and marine memory help it escape local optima.
Table 3 gives the final ranking and rank test of all algorithms. According to the final ranking, our algorithm performs best among all algorithms on the 10-dimensional CEC2017 benchmark functions: with 28 best means out of 29 benchmark functions, it ranks first in totalRank.

D. 30-D EXPERIMENTAL RESULTS
In this section, the number of function evaluations is again FEs = N × Max_Iter. The final results of the 30-dimensional function search are shown in Tables 7 and 8. For the unimodal functions, HMPQPSO obtains the best performance on both F1 and F3, with a much stronger optimization ability than the other algorithms: their results on F1 and F3 are far larger than the optimal values, while HMPQPSO gets very close to them. The convergence of each algorithm on the F3 function can be seen in Figure 4(a). Owing to the FADs and the opposition-based learning strategy, our algorithm keeps searching for the optimal value throughout the iterations, whereas the particles of the other algorithms aggregate in the initial stage and show no sign of escaping the local optimum later on.
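The opposition-based learning step credited above can be sketched as a reflection of each particle within its search bounds. This is the generic OBL operator; whether the original point or its opposite is kept afterward is assumed to be decided greedily by fitness, which this excerpt does not specify.

```python
def opposite_point(x, lb, ub):
    """Opposition-based learning: reflect each coordinate of x within its
    search bounds [lb_i, ub_i], giving the opposite candidate lb + ub - x."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]
```

For example, the opposite of 2.0 in [0, 10] is 8.0; a particle near one edge of the domain gets a candidate near the other edge, which counteracts premature aggregation.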
Among the simple multimodal functions, HMPQPSO obtains good results on all of F4-F10 and ranks first in mean on every one of them. However, it shows some volatility here: it ranks second in standard deviation on F4, F5, F7 and F8, where BLPSO wins, and on F10 its standard deviation only reaches sixth place, with HPSOBOA rushing to first. In other words, HMPQPSO still finds the best solutions on these functions, but its robustness is not always the best. Figure 4(c) shows that MQPSO has the worst convergence ability.
The particles of MQPSO stagnate during the iterative process. On the F8 function, HMPQPSO also does not converge prematurely in the early stage; even in the middle stage it keeps jumping out of local optima and searching for the global best. BLPSO also keeps jumping in the late stage, but its result is clearly not as good as that of HMPQPSO.
On the hybrid functions, the HMPQPSO algorithm performs equally well: it achieves the best mean on every one of F11-F20. Its standard deviation is also strong, never falling below second place, and it is the best on seven of the ten functions. On F17, HMPQPSO has the best mean, but BLPSO has the best standard deviation. In Figure 4(e), DQPSO and eQPSO reach similar final values, but their convergence processes differ greatly. The convergence curve of BLPSO shows short downward jumps throughout the iterations; nevertheless, its optimal mean still lags far behind HMPQPSO. The particles of HMPQPSO, by contrast, locate a position very close to the optimal point at a very early stage; in the middle and late stages, the Levy flight lets the particles find the optimal solution in a local region with high probability.
Among the composition functions, the mean result of HMPQPSO again ranks first, and no standard deviation ranks lower than second except for F21, which ranks third. This indicates that, across the CEC2017 functions, HMPQPSO has a stronger optimization ability than the other algorithms in the comparison. From F1 to F30, our algorithm ranks first in the mean of the found optima. Even though it shows some volatility on individual functions, the overall strength of HMPQPSO remains outstanding: it combines strong optimization ability with very good robustness. In Figure 4(i), HMPQPSO avoids the tendency of the other QPSO algorithms to fall into local optima, and its particles move closer to the optimal point; in addition, its convergence speed is fast. The algorithm in this paper is a well-improved QPSO.
Table 4 gives the final ranking and rank test of all algorithms. According to the final ranking, our algorithm performs best among all algorithms on the 30-dimensional CEC2017 benchmark functions in terms of the optimal mean: with 29 best means out of 29 benchmark functions, it ranks first in totalRank.
In the 10-D totalRank, BLPSO ranks second and GaussQPSO third; in the 30-D totalRank, BLPSO again ranks second and eQPSO third, while the remaining QPSO variants perform far worse than the top three. In particular, FQPSO and MQPSO obtain extremely poor results on the CEC2017 benchmark functions; these two algorithms may be more practical in other fields, since each problem calls for a problem-specific analysis.

E. EXPERIMENT ANALYSIS
The results of the Wilcoxon signed-rank test are given in Tables 3 and 4 to analyze the differences between HMPQPSO and each of the other algorithms. The symbols +, = and − represent win, tie and lose, respectively: on the CEC2017 test functions, HMPQPSO outperforms the compared algorithm on w functions, performs similarly on t functions, and lags behind on l functions. At dimension 10, HMPQPSO beats BLPSO on 28 functions, while BLPSO performs better on one; the six QPSO variants perform worse than HMPQPSO on all functions, which may be caused by the over-reliance of QPSO on the historical optimum. At dimension 30, HMPQPSO has the most wins against every other algorithm, indicating that HMPQPSO is more valuable for the CEC2017 test functions.
It can be seen that the algorithm in this paper outperforms the other eight algorithms on both the 10-D and 30-D CEC2017 benchmark functions. From the convergence plots we can easily see that the QPSO variants are not as good as the basic PSO variants. This may be because QPSO depends strongly on the historical optimal position, which reduces randomness; in this situation the particles are particularly prone to aggregation and the algorithm easily falls into a local optimum. HMPQPSO overcomes this drawback well by using the stochastic motion of marine predation and parameter control, and thus even outperforms the PSO variants.
In summary, HMPQPSO shows strong optimization ability on the CEC2017 benchmark functions by using multi-strategy motion together with marine memory and FADs for escaping local optima. In particular, it performs excellently on the complex functions F11-F30. Thanks to the Brownian-motion search strategy, a particle can expand its solution space, and the Levy-flight strategy lets a particle jump out of the neighborhood of individual optimal solutions.
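The two step-size distributions named above can be sketched as follows. The Levy steps use Mantegna's algorithm with the commonly used stability index β = 1.5; the paper's exact parameterization is an assumption here.

```python
import math
import random

def brownian_step(dim):
    """Brownian (standard normal) step used in the exploration phase:
    uniform-scale random moves that spread the swarm over the space."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def levy_step(dim, beta=1.5):
    """Levy-flight step via Mantegna's algorithm: mostly small moves with
    occasional long jumps, used in the exploitation phase to escape
    neighborhoods of individual optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return [random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)
            for _ in range(dim)]
```

A position update would then look like `x[i] += step_size * s[i]` with `s` drawn from one of the two generators depending on the search phase.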

VI. STRUCTURAL DESIGN OPTIMIZATION PROBLEMS
A. PROBLEM DESCRIPTION
In this section, the algorithm in this paper is applied to four optimization problems: Himmelblau's nonlinear optimization problem for constrained optimization, and the pressure vessel design, welded beam design and gear train design engineering optimization problems. These problems have linear and nonlinear constraints, and other improved algorithms have already been applied to them, so the proposed algorithm can be verified through comparative experiments. To eliminate randomness and variability, 30 independent trials are performed on each problem, with a randomly generated population of size 20 × D, where D is the number of dimensions; the obtained optimal value, mean optimal value, variance and median are computed separately. The termination condition is a maximum of 1000 iterations or a relative error of 10^-6.
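The termination rule can be sketched as follows. Interpreting "relative error" as the relative improvement of the best objective value between accepted improvements is an assumption, since the text does not define it.

```python
def run_until(step, max_iter=1000, tol=1e-6):
    """Run `step` (one optimizer iteration returning the current best value)
    until max_iter iterations or until an improvement is relatively
    smaller than tol. Returns (best_value, iterations_used)."""
    best = step()
    evals = 1
    for _ in range(max_iter - 1):
        new = step()
        evals += 1
        if new < best:
            improvement = abs(best - new) / max(abs(best), 1e-300)
            best = new
            if improvement < tol:
                break  # relative improvement below tolerance: converged
    return best, evals
```

For example, feeding it the sequence 10, 5, 5 − 1e-9 stops at the third value, because the last improvement is relatively tiny.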

B. HIMMELBLAU'S NON-LINEAR OPTIMIZATION PROBLEM
The problem was originally proposed by Himmelblau and is widely used as a benchmark for nonlinear constrained optimization. It has five positive variables X = [x1, x2, x3, x4, x5], six nonlinear inequality constraints and ten boundary conditions, expressed as (24). The constraint functions in (24) read

g1(X) = 85.334407 + 0.0056858 x2 x5 + 0.0006262 x1 x4 − 0.0022053 x3 x5,
g2(X) = 80.51249 + 0.0071317 x2 x5 + 0.0029955 x1 x2 − 0.0021813 x3^2,
g3(X) = 9.300961 + 0.0047026 x3 x5 + 0.0012547 x1 x3 + 0.0019085 x3 x4.
The problem has been solved by many algorithms, such as GA and PSO. The statistical results of HMPQPSO and the other algorithms are shown in Table 9: all algorithms attain similar optimal values, while our algorithm has a smaller variance and reaches the optimal value most often. Comparing the results in Table 10, the algorithm of He et al. gives infeasible solutions because it violates the g1 constraint. The optimal solution of the proposed algorithm is X = [78, 33, 29.995, 44.99476, 36.77792], with a corresponding objective function value of −30665.53867. From the optimal value, the mean optimal value and the variance of the objective function values over 30 independent runs, it can be seen that, while obtaining the optimal solution, our algorithm runs more efficiently and is more stable. Figure 5(a) shows the convergence curve of our algorithm on the Himmelblau nonlinear optimization problem: the algorithm converges efficiently toward the optimal value in the early stage, and it can also jump out of local optima and refine the solution in the middle and late stages. Thanks to the powerful exploration and exploitation abilities of our algorithm, the curve converges quickly and precisely.
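The reported solution can be sanity-checked against the problem model. In the sketch below, the objective function and the constraint bounds (0 ≤ g1 ≤ 92, 90 ≤ g2 ≤ 110, 20 ≤ g3 ≤ 25) are taken from the standard formulation of Himmelblau's problem, since the full (24) is not legible in this excerpt; the g-function coefficients and signs follow this paper (note that some published versions use +0.0021813 x3^2 in g2).

```python
def himmelblau(x):
    """Objective and constraints of Himmelblau's nonlinear problem.
    The objective is the standard formulation (assumed, not printed here);
    the constraint coefficients follow this paper's statement of (24)."""
    x1, x2, x3, x4, x5 = x
    f = 5.3578547 * x3**2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141
    g1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    g2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 - 0.0021813 * x3**2
    g3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    return f, (g1, g2, g3)
```

Evaluating the reported X = [78, 33, 29.995, 44.99476, 36.77792] gives an objective value of about −30665.5, with g1 and g3 active at their bounds (g1 ≈ 92, g3 ≈ 20) up to the rounding of the printed solution.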

C. DESIGN OF PRESSURE VESSEL
The pressure vessel design problem was first introduced by Kannan and Kramer [34]. The vessel is a cylinder capped at both ends by hemispherical heads, as shown in Figure 6; it has a working pressure of 2000 psi and a maximum volume of 750 ft^3. The objective is to minimize the manufacturing cost of the pressure vessel, which mainly includes the costs of material, forming and welding. There are four design variables: the shell thickness (Ts), the head thickness (Th), the inner radius of the vessel (R) and the length of the cylindrical section without heads (L); the first two must be integer multiples of 0.0625. Denoting the four variables by x1, x2, x3, x4, the model is given as (25), at the bottom of the next page.
Many algorithms have been applied to this problem, such as [31], [35], and [36]. In Table 11, the objective function value of HMPQPSO is 5880.67092, which is much better than that of the other algorithms; even its median value of 5881.53669 beats their optimal values. Table 12 lists the optimal solution corresponding to each algorithm's objective function value; the optimal solution of HMPQPSO is X = [0.89971, 0.44286, 46.61698, 127.6749], which does not exceed the range of the constraints. As shown in Figure 5(b), the convergence curve of our algorithm drops substantially and converges rapidly in the early stage, then decreases gently in the middle and late stages, continuing to search efficiently for the global optimum near the local optimum, by which point QPSO has already fallen into a locally optimal solution. In the convergence graph, the value initially found by QPSO is better than that of HMPQPSO; however, our algorithm is able to converge quickly in the early iterations. This is attributed to the powerful exploration capability of MPA, which, once incorporated into QPSO, improves its performance significantly.

D. WELDED BEAM DESIGN PROBLEM
The welded beam design optimization problem is a commonly used engineering optimization problem with the structure shown in Figure 7. It was first proposed by Coello [39] in order to find the minimum manufacturing cost of a welded beam. The cost is constrained by the shear stress (τ), the bending stress in the beam (σ), the buckling load on the rod (P_C), the deflection at the beam end (δ) and lateral restraints. There are four optimization variables: weld thickness (h), length of the welded joint (l), beam height (t) and beam thickness (b). The mathematical model of the welded beam design problem is given below, where x1, x2, x3, x4 correspond to the four variables, τ_max = 13,600 psi is the maximum allowable shear stress in the weld, σ_max = 30,000 psi is the maximum allowable bending stress in the beam, and P = 6000 lb is the load. The two components of the shear stress are the principal stress τ1 and the secondary stress τ2.
The calculation results of the different algorithms for this problem are shown in Table 13. The objective function value of HMPQPSO is 1.6953, lower than the 1.6955055 of CSA and much lower than those of the other algorithms, showing a greatly improved optimization ability. The optimal solutions of all algorithms are organized in Table 14; the optimal solution of HMPQPSO is X = [0.19040, 3.549096, 9.036395, 0.205740], which lies within the corresponding constraints.

E. GEAR TRAIN DESIGN
The gear train design problem is an unconstrained optimization problem, first proposed by Sandgren [35]; the objective is to minimize the squared deviation of the gear ratio of the train from a required value of 1/6.931. The four decision variables defining the gear ratio are X = (Td, Tb, Ta, Tf) = (x1, x2, x3, x4), and the mathematical model of the problem is

Minimize f(X) = (1/6.931 − (x1 x2)/(x3 x4))^2.

The solutions obtained by HMPQPSO and the other algorithms are shown in Tables 15 and 16. The accuracy of the HMPQPSO algorithm is higher than that of the other algorithms, and its objective function value and variance are smaller. The optimal solution obtained by HMPQPSO is X = [12, 12, 17.299482, 57.693492], with a corresponding objective function value of 1.3363e-11. As Figure 5(d) shows, QPSO also optimizes this problem: both QPSO and HMPQPSO converge rapidly in the early stages, but HMPQPSO converges faster and more accurately than the QPSO algorithm.
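The reported optimum can be checked against the model above. The objective is the standard gear train formulation; evaluating the printed (rounded) solution gives a near-zero error, though not necessarily the exact reported value.

```python
def gear_ratio_error(x):
    """Gear train objective: squared deviation of the train ratio
    x1*x2/(x3*x4) from the required ratio 1/6.931."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2
```

At the reported X = [12, 12, 17.299482, 57.693492] the ratio 144/998.07 matches 1/6.931 to about six decimal places, so the squared error is vanishingly small.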
All in all, the exploration and exploitation capabilities of the HMPQPSO algorithm are greatly improved on practical problems, especially the above four structural design problems, and its efficiency in finding the optimal solution also increases tremendously.

VII. CONCLUSION
In this paper, a novel hybrid quantum particle swarm optimization with marine predators is proposed. The algorithm has two main search phases, namely the Brownian motion and the Levy motion used by MPA. The Brownian-motion search combined with the traditional QPSO expands the search area of the particle swarm. The Levy motion combined with the traditional QPSO avoids the aggregation of particles and finds the global optimal solution near local optima: it improves the exploitation capability of QPSO while allowing long-distance moves with small probability, which avoids getting trapped in a locally optimal region. Therefore, both new search phases help to increase the performance of QPSO. In addition, to create a better environment for the randomness of the particles, we introduce the opposition-based learning strategy and fish aggregation devices. The proposed HMPQPSO has been fully evaluated on the CEC2017 functions and compared with established related optimization algorithms, including six QPSO variants and two PSO variants. The experimental results show that HMPQPSO achieves competitive or even better performance on most functions, especially on composition functions with complex landscapes. On the engineering optimization problems, HMPQPSO also obtains better objective function values than the other algorithms, and the resulting optimal solutions satisfy the constraints. This indicates that HMPQPSO can be well applied to engineering optimization problems.
In the future, we will further improve the optimization performance of the algorithm; much work deserves further investigation. For example, the stability of HMPQPSO is not yet satisfactory, so one possible direction is to combine it with a stable search strategy to further improve its performance. In addition, the predators model is a general optimization framework that can be applied to other metaheuristic algorithms, such as differential evolution (DE), the genetic algorithm (GA) and the artificial bee colony algorithm (ABC).

DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

ETHICAL APPROVAL
This paper does not contain any studies with human participants or animals performed by any of the authors.