Multi-Strategy Fusion Improved Adaptive Hunger Games Search

Aiming at the drawbacks of the Hunger Games Search (HGS) algorithm, such as slow convergence and a tendency to fall into local optima, a Multi-strategy fusion Improved Adaptive Hunger Games Search (MIA-HGS) algorithm is proposed. Firstly, a good point set is employed to generate a more diverse initial population. Secondly, since the control strategy selection parameter is fixed in the original HGS algorithm, an adaptive adjustment parameter is proposed to replace it; its dynamically tuned update strategy strengthens the global search ability. Finally, to further help the algorithm jump out of local optima, a mutation operation based on logarithmic spiral opposition-based learning is performed on the population when a stagnation condition is met. Simulation experiments are carried out on 23 benchmark functions and the UAV path planning problem. The results show that MIA-HGS solves more accurately and converges more rapidly than the original HGS algorithm, leading on 69.5% of the tested functions and tying with HGS on 21.7% of them. It also shows better performance than the other algorithms on the UAV flight planning problem.


I. INTRODUCTION
In solving optimization problems [1], swarm intelligence optimization algorithms have become a research hotspot due to their simple structure and easy implementation [2], [3], [4]. In recent years, a large number of intelligent optimization algorithms based on population foraging have been introduced, for example the Particle Swarm Optimization algorithm (PSO) [5], the Grey Wolf Optimizer algorithm (GWO) [6], the Sparrow Search Algorithm (SSA) [7], and the Whale Optimization Algorithm (WOA) [8]. These optimization algorithms have a similar structure, and each has its own advantages and disadvantages.
The Hunger Games Search algorithm (HGS) [9] is a swarm intelligence optimization algorithm proposed by Yang et al. in 2021 to simulate hungry-animal foraging. The algorithm is not only a simulation of animal foraging behavior but also takes into account the physical and psychological factors of the animals, and is made to fit the behavioral logic of most animals foraging for food. It has been shown in the original literature [9] to be significantly better than the Particle Swarm Optimization algorithm (PSO), the Differential Evolution algorithm (DE), the Whale Optimization Algorithm (WOA), and the Grey Wolf Optimizer algorithm (GWO) in terms of optimization accuracy and stability. Due to its superior performance in finding the best solution, it has been applied in various fields such as wireless sensor networks [10], friction welding [11], solar photovoltaics [12], soil winding rate [13], cyber security [14], medical image processing [15], and optimal charging/discharging decisions for energy storage [16].
Although HGS has good performance and has been used successfully in various fields, it also suffers from the common problems of swarm intelligence optimization algorithms, namely the tendency to fall into local optima and slow convergence. Since HGS was proposed only recently, few scholars have improved it. For example, literature [17] used different chaotic mappings to increase population diversity at different times; literature [18] proposed an improved HGS algorithm with multi-strategy integration by exploiting the global search capability of the Multi-Strategy (MS) framework; literature [19] combined HGS with WOA; and literature [20] increased the algorithm's ability to jump out of local optima through a logarithmic spiral opposition-based learning mechanism.
This study is dedicated to remedying the shortcomings of the traditional HGS algorithm and enhancing its overall optimization capability. Therefore, this paper proposes a Multi-strategy fusion Improved Adaptive Hunger Games Search algorithm (MIA-HGS) that improves the original HGS algorithm at three levels. Firstly, at the population initialization stage, the good point set strategy [21] is used to enhance initial population diversity; secondly, the strategy selection parameter is given adaptive adjustment ability, so that different search strategies are selected to improve the search performance of the algorithm; finally, stagnation detection and a logarithmic spiral opposition-based learning mechanism are introduced to increase the algorithm's ability to jump out of local optima.

II. HUNGER GAMES SEARCH ALGORITHM (HGS)
The Hunger Games Search algorithm is designed to simulate the activity of herd animals during foraging. In nature, animals hide from natural predators, cooperate with each other, or hunt alone, creating a natural law of survival of the fittest. In this brutal competition, animals inevitably have to improve themselves, i.e., evolve, in order to improve their chances of survival. The hungrier they are, the more they crave food, which drives them to ensure their survival; at the same time, the stronger they are, the more likely they are to get food. This phenomenon is called nature's hunger game [22].

A. APPROACH FOOD
The foraging process involves both teamwork and individual action, with the animals foraging in the following ways: Game 1 simulates animals foraging alone, i.e., looking for food near themselves; Game 2 and Game 3 simulate group cooperative foraging, looking for food around individuals in the population that have already found food (the best adapted). R is a random number in [−a, a], where a is a convergence factor that decreases linearly from 2 to 0 with increasing iterations; r1 and r2 are both random numbers in [0, 1]; randn(1) is a normally distributed random number with mean 0 and standard deviation 1; W1 and W2 denote hunger weights, which are adjusted to control foraging based on the influence of environmental and psychological factors; Xb denotes the global optimal position; t represents the current iteration number; X(t) denotes the current individual position; and the parameter l = 0.08. E is calculated as follows: where F(i) denotes the fitness of each individual, BF is the best fitness obtained during the current iteration, and sech(·) is the hyperbolic secant function.
R is calculated as follows: where r3 is a random number in [0, 1] and T represents the maximum number of iterations.
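The equation images for Eq. (1)-(5) are not legible in this copy. Based on the variable definitions above and the formulation in the original HGS paper [9], they can be reconstructed as follows (a reconstruction that should be checked against the original; the equation numbering follows the references later in the text):

```latex
% Eq. (1): position update (the three "games")
\vec{X}(t+1) =
\begin{cases}
\vec{X}(t)\cdot\bigl(1+\mathrm{randn}(1)\bigr), & r_1 < l \\[4pt]
\vec{W}_1\cdot\vec{X}_b + \vec{R}\cdot\vec{W}_2\cdot\bigl|\vec{X}_b-\vec{X}(t)\bigr|, & r_1 > l,\; r_2 > E \\[4pt]
\vec{W}_1\cdot\vec{X}_b - \vec{R}\cdot\vec{W}_2\cdot\bigl|\vec{X}_b-\vec{X}(t)\bigr|, & r_1 > l,\; r_2 < E
\end{cases}

% Eq. (2)-(3): strategy selection parameter E
E = \operatorname{sech}\bigl(\lvert F(i)-BF\rvert\bigr), \qquad
\operatorname{sech}(x) = \frac{2}{e^{x}+e^{-x}}

% Eq. (4)-(5): shrinking random range R
\vec{R} = 2a\cdot r_3 - a, \qquad a = 2\left(1-\frac{t}{T}\right)
```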

B. HUNGER ROLE
The hunger characteristics of individuals in the search are modelled mathematically. The formula for W1 in Eq. (1) is as follows, and the formula for W2 in Eq. (1) follows it: where hungry represents the hunger level of each individual; N is the total number of individuals; SHungry is the sum of the hunger levels of all individuals, i.e., sum(hungry); and r3, r4, and r5 are all random numbers in [0, 1]. hungry(i) is calculated as follows: in each iteration, the hunger of the individual closest to the food (with the best fitness) is set to 0. H is calculated as follows: where r6 and r7 are both random numbers in [0, 1]; LH is the lower bound of H, taken as LH = 100; WF is the worst fitness obtained during the previous iteration; and UB and LB denote the upper and lower bounds of the search space, respectively.
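The equation images for the hunger model, Eq. (6)-(10), are likewise missing. A reconstruction from the variable definitions above and the original HGS formulation [9] (to be checked against the original paper):

```latex
% Eq. (6)-(7): hunger weight W1
\vec{W}_1(i) =
\begin{cases}
\mathit{hungry}(i)\cdot\dfrac{N}{\mathit{SHungry}}\cdot r_4, & r_3 < l \\[6pt]
1, & r_3 > l
\end{cases}

% Eq. (8): hunger weight W2
\vec{W}_2(i) = \bigl(1-\exp(-\lvert \mathit{hungry}(i)-\mathit{SHungry}\rvert)\bigr)\cdot r_5\cdot 2

% Eq. (9): hunger update (reset to 0 for the current best individual)
\mathit{hungry}(i) =
\begin{cases}
0, & F(i) = BF \\
\mathit{hungry}(i) + H, & \text{otherwise}
\end{cases}

% Eq. (10): hunger increment H, bounded below by LH
TH = \frac{F(i)-BF}{WF-BF}\cdot r_6\cdot 2\cdot(UB-LB), \qquad
H = \begin{cases} LH\cdot(1+r_7), & TH < LH \\ TH, & TH \ge LH \end{cases}
```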

III. ALGORITHM IMPROVEMENT STRATEGIES

A. GOOD POINT SET STRATEGY
In the original HGS algorithm, the initialized population is generated randomly within the search range, which introduces large uncertainty and makes it difficult to guarantee population diversity. This directly leads to a significant reduction in the convergence speed of the algorithm. In contrast, the good point set [23] solves this problem to the fullest extent possible: the initial population it generates is more evenly distributed within the search range, which greatly improves population diversity. It has been shown in the literature [24], [25], [26] that good point sets can effectively improve the convergence speed of the algorithm. Therefore, this paper adopts the good point set to enhance the diversity of the initial population. The good point set is defined by p_k = mod(2 cos(2πk/s), 1), k = 1, 2, ..., m, where p stands for the good point; n indicates the population size; mod() is the remainder function, meaning that p_k takes only the fractional part of 2 cos(2πk/s); s is the smallest prime number satisfying (s − 3)/2 ≥ m; and m represents the spatial dimension of the search. When initializing the population, it is sufficient to map the set of good points to the search space, where UB_j and LB_j denote the upper and lower search bounds of the j-th dimension.

In order to demonstrate more intuitively the effect of initializing the population with the good point set strategy, this paper uses the random strategy and the good point set strategy to generate 100 discrete points each in a 1 × 1 2D space, and the comparison is shown in Fig. 1. As can be seen from Fig. 1, the randomly generated discrete points are messy and some even overlap. With the good point set strategy, however, the discrete points are evenly distributed in space with no overlap, which greatly improves the diversity of the population and lays a good foundation for the subsequent iterative search.
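The good point set initialization described above can be sketched as follows. This is a minimal sketch assuming the classical construction, in which the i-th individual takes the fractional parts {p_k · i}; the exact Eq. (11)-(12) of the paper are not legible in this copy:

```python
import numpy as np

def good_point_set(n, m, lb, ub):
    """Initialize n individuals in m dimensions with a good point set.

    p_k = mod(2*cos(2*pi*k/s), 1) for k = 1..m, with s the smallest prime
    satisfying (s - 3)/2 >= m; the i-th individual uses the fractional
    parts {p_k * i}, mapped into the bounds [lb, ub].
    """
    def is_prime(q):
        return q >= 2 and all(q % d for d in range(2, int(q ** 0.5) + 1))

    # smallest prime s with (s - 3)/2 >= m
    s = 2 * m + 3
    while not is_prime(s):
        s += 1

    k = np.arange(1, m + 1)
    p = np.mod(2.0 * np.cos(2.0 * np.pi * k / s), 1.0)  # the good point
    i = np.arange(1, n + 1).reshape(-1, 1)
    pts = np.mod(p * i, 1.0)                            # n x m points in [0, 1)
    return lb + pts * (ub - lb)                         # map to the search space

# e.g. 100 points in the unit 2D square, as in the Fig. 1 comparison
pop = good_point_set(100, 2, np.zeros(2), np.ones(2))
```

Plotting `pop` against `np.random.rand(100, 2)` reproduces the kind of comparison shown in Fig. 1.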

B. ADAPTIVE STRATEGY ADJUSTMENT
The parameter l in the central equation of HGS (Eq. (1)) controls the choice of strategy. In the original HGS, it is common to take l = 0.08, so that the probability of an animal foraging alone is only 8%, which means that 92% of the individuals in the population search around the optimal individual from the beginning, completely discarding their previous positions. This leads to premature convergence, directly losing population diversity and increasing the probability of the algorithm falling into a local optimum. Although the original paper also discussed this parameter experimentally, the maximum value tested was only 0.1.
In this study, we improve the parameter l and propose an adaptive adjustment strategy. In the early stage of the algorithm, population diversity should be maintained, so the population searches more in its own vicinity, providing a good basis for the later search and reducing the probability of falling into a local optimum. In the middle stage, cooperation within the population should be strengthened, i.e., searching near the optimal individuals to form fast convergence in a local area, so that the accuracy of the algorithm improves rapidly. In the later stage, the algorithm has reached a certain accuracy; to improve it further and to check whether other extreme points exist, individual search should again dominate in order to find a better extreme point.
In summary, five different parameter adjustment strategies based on the normal distribution, the power function, and the Cauchy distribution are constructed in this paper, and their comparison is shown in Fig. 2. After experimental testing and verification, the parameter adjustment strategy based on the normal distribution was finally chosen, with the parameter l changing with the iteration number t according to Eq. (13), where T is the maximum number of iterations set for the algorithm. Testing showed the best results when µ = 0.5, σ² = 0.45, and b = 0.8.
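Eq. (13) itself is not legible in this copy. The sketch below is therefore an assumption: an inverted-Gaussian form chosen to match the described behaviour (l large in the early and late stages to favour individual search, small in the middle to favour cooperative search), using the stated values µ = 0.5, σ² = 0.45, b = 0.8:

```python
import numpy as np

# Assumed shape for the normal-distribution-based adaptive parameter l(t);
# the paper's exact Eq. (13) should be consulted for the true form.
MU, SIGMA2, B = 0.5, 0.45, 0.8

def adaptive_l(t, T):
    bell = np.exp(-((t / T - MU) ** 2) / (2.0 * SIGMA2))  # peaks mid-run
    return B * (1.0 - bell)  # small mid-run (cooperation), larger at both ends
```

Under this assumption, l is largest at t = 0 and t = T and reaches its minimum at t = T/2, matching the early/middle/late behaviour described above.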

C. LOGARITHMIC SPIRAL OPPOSITION-BASED LEARNING
To further enhance the ability of the algorithm to jump out of local optima and, at the same time, speed up its convergence, a stagnation check is added: when the population's optimal fitness value does not change over a certain number of iterations, the algorithm is judged to have entered a stagnant state, at which point the logarithmic spiral opposition-based learning mechanism is introduced. Although this may increase the algorithm's time complexity, the condition mostly triggers in the middle to late stages, i.e., when the algorithm has largely confirmed the optimal solution. However, that solution may be a local optimum, so appropriate mutation of the population is required to make the algorithm jump out of it. The logarithmic spiral opposition-based learning mechanism is based on opposition-based learning (OBL) [27], incorporating logarithmic spirals.
OBL maps the current solution X to its opposite in the feasible domain [LB, UB], i.e., X_op = UB + LB − X. This changes the current solution X radically, greatly improving the ability of the algorithm to jump out of local optima. This relatively extreme approach, however, only mutates the current solution X in a fixed direction; although it jumps out of the local optimum, it does not necessarily facilitate the later search. Therefore some randomness needs to be added so that the algorithm covers more regions, as in Eq. (14), where r8 and r9 are random numbers between 0 and 1 and Xb is the global optimal solution. Secondly, the search area is further extended by fusing the logarithmic spiral into Eq. (14).
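Eq. (14)-(16) are not legible in this copy, so the sketch below is an assumption built only from the pieces the text does give: a randomized opposite point of the form r8·(UB + LB) − r9·Xb, perturbed by a logarithmic spiral term to widen the search region:

```python
import numpy as np

def spiral_obl(X, Xb, lb, ub, rng=np.random.default_rng()):
    """Assumed logarithmic spiral OBL mutation (not the paper's exact Eq. (14)-(16))."""
    r8, r9 = rng.random(), rng.random()
    x_op = r8 * (ub + lb) - r9 * Xb            # randomized opposite solution
    q = rng.uniform(-1.0, 1.0)                 # spiral position parameter
    spiral = np.exp(q) * np.cos(2.0 * np.pi * q)
    x_new = x_op + spiral * np.abs(x_op - X)   # logarithmic spiral perturbation
    return np.clip(x_new, lb, ub)              # keep within the feasible domain
```

In line with the greedy idea described next, the mutated solution would replace X only if its fitness improves.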
Finally, although logarithmic spiral OBL can change the population, such updates do not necessarily move in a good direction, so this paper adopts a greedy idea, retaining only those solutions whose fitness improves after the position change and discarding those that move in a worse direction.

D. ALGORITHM STEPS
The steps of the Multi-strategy fusion Improved Adaptive Hunger Games Search algorithm (MIA-HGS) are as follows:
Step 1: Set the basic parameters of MIA-HGS;
Step 2: Initialize the population X_1, X_2, ..., X_N using the good point set strategy, Eq. (11)-(12);
Step 3: Calculate the fitness of each individual according to the fitness function, then update and record the population's optimal fitness BF and optimal position X_b;
Step 4: Calculate the strategy selection parameter l according to Eq. (13), the strategy selection parameter E according to Eq. (2)-(3), the convergence factor R according to Eq. (4)-(5), and the hunger weights W_1 and W_2 according to Eq. (6)-(10); finally, update the population positions according to Eq. (1);
Step 5: When the population's optimal fitness BF in Step 3 has not been updated for five or more iterations, introduce the logarithmic spiral OBL mechanism to mutate all individuals in the population, i.e., Eq. (14)-(16);
Step 6: If the current iteration number t exceeds the set maximum, proceed to Step 7; otherwise jump back to Step 3;
Step 7: End the MIA-HGS algorithm and output the optimal fitness BF and optimal position X_b.
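The steps above can be sketched as a compact loop. This is a simplified illustration, not the paper's exact method: it uses random initialization, a fixed l, and unit hunger weights in place of Eq. (11)-(13) and (6)-(10), and only marks where the stagnation-triggered spiral OBL mutation would run:

```python
import numpy as np

def mia_hgs(fitness, n, dim, lb, ub, T, stall_limit=5, rng=np.random.default_rng(0)):
    """Simplified MIA-HGS-style loop (Steps 1-7); several components are stubs."""
    X = lb + rng.random((n, dim)) * (ub - lb)   # Step 2 (good point set in the full method)
    F = np.apply_along_axis(fitness, 1, X)
    best = np.argmin(F)
    Xb, BF = X[best].copy(), F[best]            # Step 3
    stall = 0
    for t in range(1, T + 1):                   # Step 4 onwards
        l = 0.08                                # adaptive l(t) in the full method
        a = 2.0 * (1.0 - t / T)
        for i in range(n):
            E = 1.0 / np.cosh(abs(F[i] - BF))   # sech(|F(i) - BF|)
            r1, r2 = rng.random(), rng.random()
            R = 2.0 * a * rng.random() - a
            if r1 < l:                          # Game 1: forage alone
                X[i] = X[i] * (1.0 + rng.standard_normal())
            elif r2 > E:                        # Game 2: cooperative search
                X[i] = Xb + R * np.abs(Xb - X[i])
            else:                               # Game 3: cooperative search
                X[i] = Xb - R * np.abs(Xb - X[i])
            X[i] = np.clip(X[i], lb, ub)
        F = np.apply_along_axis(fitness, 1, X)
        best = np.argmin(F)
        if F[best] < BF:
            Xb, BF, stall = X[best].copy(), F[best], 0
        else:
            stall += 1                          # Step 5: stagnation detection
        if stall >= stall_limit:
            stall = 0                           # spiral OBL mutation of the population
    return Xb, BF                               # Step 7
```

On a simple sphere function this sketch converges toward the origin, which is enough to illustrate the control flow of Steps 3-6.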
The MIA-HGS algorithm flow is shown in Fig. 3.

IV. SIMULATION EXPERIMENTS AND RESULTS ANALYSIS
In order to test the optimization performance of the MIA-HGS algorithm, 23 benchmark test functions from the literature [9] are selected for experimental comparison in this paper, and the Wilcoxon signed-rank test [28] is applied to identify significant differences between the algorithms. These 23 benchmark functions are used in almost all articles on swarm intelligence optimization algorithms. They are shown in Tab. 1, where F1−F7 are high-dimensional single-peaked functions, F8−F13 are high-dimensional multi-peaked functions, and F14−F23 are fixed-dimensional multi-peaked functions. In the Wilcoxon signed-rank test, a result of '+' indicates that MIA-HGS performed significantly better than its competitor. The experimental environment was an AMD Ryzen 7 5800X CPU @ 3.80 GHz, 16.00 GB of RAM, Windows 11, and Matlab R2018a. All the algorithms selected in this paper used the same settings: a population size of 30, a maximum of 500 iterations, and 30 independent runs. The average and standard deviation of these 30 independent runs were taken as the evaluation indices of each algorithm.

A. COMPARISON WITH CLASSICAL OPTIMIZATION ALGORITHMS
The SSA parameters are set as in the literature [7]: ST = 0.8, SD = 0.1, PD = 0.2. The rest of the algorithm data were obtained from the literature [29]. A comparison of the algorithm data is shown in Tab. 2.
As can be seen from Tab. 2, MIA-HGS has higher search accuracy than HGS on 69.5% of the tested functions and equals HGS on 21.7% of them. Firstly, the high-dimensional single-peaked functions F1−F7 have only one extreme point and mainly test the local fast-convergence ability of the algorithm. MIA-HGS equals HGS only on F1 and is ahead on the rest, which proves that MIA-HGS improves on the fast local convergence of HGS. Secondly, the high-dimensional multi-peaked functions F8−F13 have multiple local optima and mainly test the algorithm's global search and its ability to jump out of local optima. MIA-HGS and HGS both find the global optimum every time on F9−F11, leaving no room for improvement, so the results are equal; however, MIA-HGS improves on all the remaining test functions. These results demonstrate that MIA-HGS has stronger global search and is better at escaping local optima than HGS. Finally, on the fixed-dimensional multi-peaked functions F14−F23, whose dimensionality is low and computation simple, MIA-HGS cannot fully exploit its advantages; it leads HGS on F14−F15 and F19−F23, while on F17−F18 it matches HGS in search accuracy, with only a very small difference in standard deviation. Meanwhile, the Wilcoxon signed-rank test shows that MIA-HGS is significantly better than HGS on 60% of the tested functions and inferior on only 17%. In summary, these results fully demonstrate that the improvements proposed in this paper are effective.
Compared to the five classical swarm intelligence optimization algorithms, MIA-HGS ranked first in optimization results on 82.6% of the tested functions; tied with individual algorithms in optimization accuracy on 8.6% of the tested functions, with a slightly lower standard deviation; and ranked second on 4.3% of the tested functions. By the Wilcoxon signed-rank test, MIA-HGS was ahead of PSO, GWO, and WOA across the board, and was behind SSA in only five cases. It can be concluded that MIA-HGS has clear advantages over the classical swarm intelligence optimization algorithms.

B. COMPARISON WITH IMPROVED HGS ALGORITHM
As HGS was proposed in 2021, there are few relevant improvements, and researchers have chosen different test functions, so this paper collects and replicates experiments as far as possible. The comparison is carried out with the Chaotic Hunger Games Search algorithm (Chaotic-HGS) [17], the Multi-Strategy ensemble Hunger Games Search algorithm (MS-HGS) [18], the Logarithmic spiral Opposition-Based Learning Hunger Games Search algorithm (LsOBL-HGS) [20], and the Elite Opposition-Based Learning and t-Distribution Hunger Games Search algorithm (EtHGS) [22]. The experimental data for Chaotic-HGS are taken directly from Scenario 2, which gives the best improvement in the literature [17]. The experimental data for MS-HGS were obtained directly from the literature [18]; the data for LsOBL-HGS were obtained by replicating the literature [20] with parameter l = 0.08; and the data for EtHGS were mostly taken from the literature [22], with the remainder obtained by experimental replication with parameters l = 0.03 and p = 0.8.
Tab. 3 shows the experimental results of MIA-HGS and the four improved HGS algorithms. MIA-HGS ranks first on F1−F4, F6−F11, F13, and F19; on F17−F18 and F21−F23 its mean search result reaches the optimum and is only slightly inferior to the first-ranked algorithm in standard deviation; and it ranks second on F12 and F14−F15. The average ranking results (the smaller the average ranking, the better the performance) are MIA-HGS < EtHGS < Chaotic-HGS < LsOBL-HGS < MS-HGS. The Wilcoxon signed-rank test results also show that MIA-HGS explores and exploits the target solution space more quickly than the other algorithms. In summary, MIA-HGS has the most comprehensive performance among the algorithms discussed in this paper.
To demonstrate more intuitively how MIA-HGS, the other four improved HGS algorithms, and the original HGS perform in finding the best results, this paper selects two functions from each of the three types among the 23 benchmark functions, for a total of six test functions, and plots their iteration curves. Since all six algorithms converge within 50 generations, the population size is set to 30 and the number of iterations to 50; the result is shown in Fig. 4. From the iteration curves in Fig. 4, we can see that the initial fitness of MIA-HGS is relatively low, which fully reflects that the good point set strategy generates initial points with great population diversity and provides a good basis for the convergence of later iterations. The starting point of MIA-HGS is not the lowest on F1, F3, and F18 in Fig. 4, but it catches up in subsequent iterations. In the convergence comparison plots of Fig. 4, the MIA-HGS convergence curve lies lowest for most of the run, fully reflecting its speed advantage in the iterative search.

V. MIA-HGS APPLIED TO UAV 3D PATH PLANNING
In order to verify the effectiveness of MIA-HGS in solving practical problems, it is applied to UAV 3D path planning. The UAV 3D path planning problem is a constrained optimization problem: during a mission, the UAV needs to reach a specified location from an initial position. When planning the path, factors such as terrain and obstacles need to be considered to avoid collisions between the UAV and mountain peaks while keeping the path length as short as possible.

A. TOPOGRAPHIC AREA MODEL
The mathematical model for mountain terrain in 3D path planning is as follows [30]: where n represents the total number of peaks in the 3D map; (x_i, y_i) represents the central coordinates of the i-th peak; h_i represents the height of the highest point of the i-th peak; and x_si and y_si represent the decay coefficients of the i-th peak along the x and y axes, respectively, which control the slope of the peak.
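The terrain equation image is missing from this copy. A sketch assuming the standard exponential mountain model, z(x, y) = Σ_i h_i · exp(−((x − x_i)/x_si)² − ((y − y_i)/y_si)²), with purely illustrative peak values:

```python
import numpy as np

def terrain_height(x, y, peaks):
    """Sum-of-Gaussians mountain model; peaks: list of (x_i, y_i, h_i, x_si, y_si)."""
    z = np.zeros_like(np.asarray(x, dtype=float))
    for xi, yi, hi, xsi, ysi in peaks:
        z += hi * np.exp(-((x - xi) / xsi) ** 2 - ((y - yi) / ysi) ** 2)
    return z

# illustrative peak parameters (not the values from Tab. 4)
peaks = [(20, 20, 30, 8, 8), (50, 40, 25, 10, 6)]
z = terrain_height(20.0, 20.0, peaks)  # height at the first peak's center
```

A point (x, y, z_uav) then collides with the terrain exactly when z_uav <= terrain_height(x, y, peaks), which is the constraint used in the path model below.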

B. PATH MODEL
In 3D spatial path planning, to avoid obstacles the UAV flight trajectory must be a 3D spatial curve, i.e., continuous with no abrupt changes in curvature or deflection. However, a swarm intelligence optimization algorithm can only optimize over a set of sample points rather than a curve. In order to realistically simulate the UAV trajectory, this paper uses cubic B-spline interpolation to smoothly connect the discrete points. k discrete sample points are set as intermediate path nodes, i.e., {M_1(x_1, y_1, z_1), ..., M_k(x_k, y_k, z_k)}; the start and end points are S(x_s, y_s, z_s) and G(x_g, y_g, z_g) respectively, forming the complete node sequence {S, M_1, ..., M_k, G}. The coordinates x_s, x_1, ..., x_k, x_g; y_s, y_1, ..., y_k, y_g; and z_s, z_1, ..., z_k, z_g of these nodes are fitted using cubic B-spline interpolation, resulting in a smooth 3D spatial curve.
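The smoothing step can be sketched with SciPy's parametric spline routines; the intermediate node coordinates below are illustrative placeholders for the values the optimizer would produce, while the start and goal match the experiment in the next subsection:

```python
import numpy as np
from scipy import interpolate

# node sequence {S, M_1, ..., M_k, G}; intermediate nodes are illustrative
nodes = np.array([
    [10.0, 80.0,  5.0],   # start S
    [25.0, 60.0, 12.0],   # M_1
    [40.0, 45.0, 18.0],   # M_2
    [50.0, 20.0, 10.0],   # M_3
    [60.0,  0.0,  5.0],   # goal G
])

# k=3 gives a cubic B-spline; s=0 forces interpolation through every node
tck, u = interpolate.splprep(nodes.T, k=3, s=0)
u_fine = np.linspace(0.0, 1.0, 200)
x, y, z = interpolate.splev(u_fine, tck)  # 200 samples of the smooth 3D curve
```

The path length can then be approximated by summing the Euclidean distances between consecutive samples, and each sample can be checked against the terrain constraint z > z_h(x, y).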
In the path planning process, in order to prevent collisions between the UAV and the mountains, the 3D spatial curve of the simulated route must not cross or overlap the terrain, meaning that the height z of the curve at any given horizontal coordinates must be greater than the peak height z_h, i.e., z > z_h(x, y).

C. EXPERIMENTAL RESULTS AND ANALYSIS
In order to verify the effectiveness of the MIA-HGS algorithm in 3D spatial path planning, simulation experiments were conducted using Matlab R2018a and compared with several classical optimization algorithms. The experimental space is 80 × 80 × 40 (km³) and contains 6 peaks; the specific data of each peak are shown in Tab. 4. The coordinates of the start and end points are (10, 80, 5) and (60, 0, 5), respectively. The three-dimensional spatial topography is shown in Fig. 5.
The initial population size of the algorithms involved in the experiments was all set to 30, the maximum number of iterations was 100, and the rest of the parameters were kept consistent with the experiments above. The convergence curves of the algorithm iterations are shown in Fig. 6, and the results are shown in Tab. 5.
From Fig. 6 and Tab. 5, it can be seen that MIA-HGS also has strong optimization capability in 3D route planning: the final shortest route obtained by MIA-HGS is 96.71 km, better than the 100.93 km planned by the HGS algorithm. The HGS and GWO algorithms fall into local optima, while PSO comes close to the global optimum but converges too slowly. The average result of MIA-HGS is also better than that of HGS. MIA-HGS takes a similar amount of time to HGS in the initialization and adaptive strategy phases, but the final phase is more time-consuming. The jump-out-of-local-optimum phase generates a new population based on the original one and greedily retains the better solutions; it consumes time similar to population initialization, with time complexity O(n). However, this phase is executed only conditionally, not in every iteration. Overall, the total time complexity of MIA-HGS increases, but since this phase allows the algorithm to find the global optimum quickly, the extra iteration time is well spent. The final route generated by MIA-HGS is shown in Fig. 7: the route completely bypasses the peaks and the curve is smooth and continuous. The simulation results show that the MIA-HGS algorithm successfully solves the 3D path planning problem.
The adaptive parameters in MIA-HGS remain key to controlling whether the population chooses global search or local exploration. The mutation is introduced in the middle to late stages and does not affect the choice between global search and local exploration.

VI. CONCLUSION AND FUTURE PERSPECTIVES
This study proposes a Multi-strategy fusion Improved Adaptive Hunger Games Search algorithm (MIA-HGS) to address the disadvantages of the Hunger Games Search algorithm (HGS): poor initial population diversity, low solution accuracy, and a tendency to fall into local optima. The algorithm uses a good point set strategy to enhance the diversity of the initial population and provide a good basis for finding an optimum. The original HGS algorithm uses a fixed parameter to control the selection of the update strategy during convergence; in this paper, that fixed parameter is replaced by an adaptive one, which automatically adjusts its value according to the iteration number and selects the update strategy accordingly, further improving the global search ability and reducing the probability of the population falling into a local optimum. When the population stagnates, a logarithmic spiral opposition-based learning strategy is introduced to enable the algorithm to jump out of the local optimum in time and accelerate the search. These improvements lead to significant gains in the accuracy, stability, and convergence speed of the algorithm.
Finally, through simulation experiments on 23 benchmark test functions and comparisons with five traditional optimization algorithms and four improved HGS algorithms, the experimental results show that MIA-HGS effectively improves the original algorithm's ability to search globally and jump out of local optima in high-dimensional problems, improving convergence accuracy and accelerating convergence. To verify the ability of MIA-HGS to solve practical problems, this paper applies it to UAV trajectory planning, where it also achieves good results and accomplishes the planning objectives well. The next step will be to apply the MIA-HGS algorithm to more practical engineering problems, such as friction welding, image segmentation, and bridge design, to verify its ability to solve larger and more complex practical problems.
DAMING ZHANG received the Ph.D. degree in resource information and decision-making from Northeastern University, China, in 2017. He is currently an Associate Professor and a Master's Supervisor with the School of Information Science and Engineering, Guilin University of Technology. His main research interests include resource information, decision-making, and software engineering.
YANQING ZHAO is currently pursuing the master's degree with the School of Information Science and Engineering, Guilin University of Technology, Guilin, China.
JUNJIE DING is currently pursuing the master's degree with the School of Information Science and Engineering, Guilin University of Technology, Guilin, China.
ZIJIAN WANG is currently pursuing the master's degree with the School of Information Science and Engineering, Guilin University of Technology, Guilin, China.
JIAQING XU is currently pursuing the master's degree with the School of Information Science and Engineering, Guilin University of Technology, Guilin, China.