Improved Whale Optimization Algorithm Based on Nonlinear Adaptive Weight and Golden Sine Operator

The whale optimization algorithm (WOA) is a swarm intelligence-based algorithm that simulates the predation behavior of whale populations in the sea. To address the shortcomings of WOA, such as low precision and slow convergence, an improved whale optimization algorithm based on a nonlinear adaptive weight and a golden sine operator (NGS-WOA) is proposed. NGS-WOA first introduces a nonlinear adaptive weight so that search agents can adaptively explore the search space and balance the exploitation and exploration stages. Second, an improved golden sine operator is incorporated into WOA. Because of the special relationship between the sine function and the unit circle, traversing the sine function is equivalent to scanning the unit circle; the search agent therefore performs an efficient search along a sine route, improving the convergence speed and global exploration capability of the algorithm. At the same time, the golden section coefficient allows search agents to exploit with a fixed shrink step, so that they can move toward regions with excellent results, improving the optimization accuracy and local exploitation ability of the algorithm. In the simulation experiments, the golden sine algorithm (GoldSA), whale optimization algorithm (WOA), particle swarm optimization (PSO), firefly algorithm (FA), fireworks algorithm (FWA), sine cosine algorithm (SCA), and NGS-WOA are compared, and the effectiveness of the proposed improvement strategies is verified. Finally, the improved WOA is applied to high-dimensional and engineering optimization problems. The experimental results show that the improved strategies effectively enhance the performance of the algorithm, giving NGS-WOA strong global convergence and the ability to avoid falling into local optima.


I. INTRODUCTION
Meta-heuristic algorithms have been applied to more and more engineering problems over the past two decades. They are inspired by simple principles in nature, do not require gradient information, are easy to implement, and can be used to solve a wide range of problems across different areas. Generally, meta-heuristic algorithms build mathematical models by simulating natural phenomena. They mainly include evolution-based algorithms, swarm intelligence-based algorithms, and physical phenomenon-based algorithms.
The evolution-based algorithms are inspired by Darwin's theory of evolution and retain the best individuals of the population as offspring for further operations. Evolution-based algorithms include the Genetic Algorithm (GA) [1], Evolutionary Strategy (ES) [2], Differential Evolution (DE) [3], Biogeography-Based Optimization (BBO) [4], Probability-Based Incremental Learning (PBIL) [5], etc. The swarm intelligence-based algorithms build mathematical models by simulating the intelligent behavior of biological swarms (foraging methods, migration routes, mating choices, and information-sharing mechanisms), e.g., (ASO) [36], the Social Based Algorithm (SBA) [37], Taboo Search (TS) [38], etc. In general, swarm intelligence-based algorithms outperform evolution-based algorithms. First, evolution-based algorithms do not retain search-space information after producing the next generation. For example, genetic algorithms form a new population through selection, crossover, and mutation, without an information-exchange mechanism between individuals. Swarm intelligence-based algorithms, by contrast, usually retain information about the search space; for example, the dragonfly algorithm uses the concept of neighborhoods so that individual dragonflies can move and forage based on each other's information. Second, swarm intelligence-based algorithms typically involve fewer parameters and operators than evolution-based algorithms, giving them higher flexibility and making them applicable to optimization problems in different disciplines and situations. It is worth mentioning that heuristic algorithms based on Mathematical Programming (MP) techniques are relatively novel algorithms proposed in recent years. At present, the related literature is limited: the Basic Optimization Algorithm (BOA) [39], [40] uses basic mathematical operators and a displacement parameter pointing to the optimal value for optimization.
The Sine Cosine Algorithm (SCA) [41] builds a mathematical model by adaptively and equally using sine and cosine search methods. The Golden Sine Algorithm (GoldSA) [42] was inspired by the sine function in trigonometry; it combines the sine-function route with the golden section coefficient to solve optimization problems.
The whale optimization algorithm (WOA) [43] is a swarm intelligence-based algorithm proposed by Seyedali Mirjalili in 2016. It solves optimization problems by establishing a mathematical model that simulates whale predation behavior. Because it relies on simple concepts and contains few operators, it has attracted the attention of many scholars. However, WOA still has defects when solving optimization problems, which have inspired in-depth research and many improved methods and practical applications. Majdi M. Mafarja et al. designed two hybrid models based on WOA for feature selection problems [44]. In the first model, the simulated annealing (SA) algorithm is embedded in WOA; in the second, SA is used to improve the best solution found in each iteration of WOA. Experimental results show that both hybrids can improve classification accuracy, although the effect of the improved algorithm on benchmark functions was not demonstrated. For the problem of determining optimal thresholds in image segmentation, traditional methods based on Otsu and Kapur are suitable for the two-level threshold problem but become time-consuming when multiple threshold levels are required. Mohamed Abd El Aziz et al. used WOA to determine optimal thresholds for image segmentation [45], compensating for this time-consuming disadvantage; however, the performance of the improved algorithm in terms of convergence speed and optimization accuracy is unknown. Diego Oliva et al. used chaotic maps to calculate and automatically adjust the internal parameters of WOA and applied the resulting algorithm to the parameter estimation problem of photovoltaic cells [46]. Experiments show that the proposed algorithm improves the ability to find the best solution.
At the same time, the robustness of the algorithm was also effectively improved, although its effectiveness requires more experiments to verify. Jianzhou Wang et al. proposed a multi-objective whale optimization algorithm (MOWOA) applied to a wind-speed prediction system for wind power generation [47]; it optimizes the weights and thresholds of the Elman neural network used in the prediction system and achieves better prediction accuracy and stability, though the effectiveness of MOWOA also requires further experimental verification. Rohit Salgotra et al. improved the whale algorithm using the concepts of opposition-based learning, exponentially decreasing parameters, and eliminating or re-initializing the worst particles [48]. Experimental results show that all proposed versions outperform the original WOA, and the influence of dimensionality and population size on the algorithm is analyzed. WOA has a strong ability to solve different optimization problems, but it still suffers from low optimization accuracy, slow convergence, and a tendency to fall into local optima. At the same time, according to the No Free Lunch theorem (NFL) [49], no single algorithm can solve all optimization problems, and an improved algorithm may outperform others on some problems. This strongly supports the innovation and necessity of this paper.
The structure of the WOA algorithm is simple, and it has strong optimization ability. However, its shortcomings are also obvious. First, the location update mechanism of WOA is governed by random parameters, which introduces randomness. Randomness allows the algorithm to explore more areas, but randomly selecting among mathematical models may mean the selected model is not optimal. At the same time, the spiral update mechanism may trap search agents on a logarithmic spiral, which reduces population diversity. Finally, the location update mechanism includes parameters designed to balance the exploitation and exploration phases; because these parameters are random, they bring uncertainty to the algorithm. In view of these shortcomings, this paper proposes an improved WOA based on a nonlinear adaptive weight and a golden sine operator (NGS-WOA). The nonlinear weight changes adaptively over the iterations and guides the search agents as they explore the search space, finding a suitable transition between local exploitation and global exploration. Because of the special relationship between the sine function and the unit circle, traversing the sine function is equivalent to scanning the unit circle; the improved golden sine operator enables the search agent to scan the search space efficiently along a sinusoidal route, improving the algorithm's global exploration capability. At the same time, the golden section coefficient makes the search agent move toward regions with good results with a fixed contraction, improving the algorithm's local exploitation ability. In addition, the golden sine operator updates the optimal solution found in each iteration of WOA, which reduces the randomness of the algorithm. The remaining sections are arranged as follows. Section II introduces the whale optimization algorithm.
Section III introduces the improved strategies and the detailed steps of NGS-WOA. Section IV discusses the simulation experiments and analysis of NGS-WOA: first, the improved WOA is compared with other algorithms; the effectiveness of the improved strategies is then verified; finally, the improved WOA is applied to high-dimensional and engineering optimization problems. Section V concludes the paper.

II. WHALE OPTIMIZATION ALGORITHM

A. MATHEMATICAL MODEL OF WHALE OPTIMIZATION ALGORITHM
Whales are intelligent animals [50] with a distinctive predation process. To transform this process into a mathematical model, WOA regards the optimal value in the search space as prey. The search agent continuously explores the search space through the location update mechanism and finally moves toward the area where the optimal value is located. To further simulate whale predation, WOA designs two mathematical models, namely the shrinking enclosing mechanism and the spiral update mechanism, which are used with equal probability. To implement this, the algorithm first generates a random number p in [0,1]: if p < 0.5, the shrinking enclosing mechanism is chosen; if p ≥ 0.5, the spiral update mechanism is chosen. The two mechanisms are described as follows.
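The equal-probability choice between the two mechanisms can be sketched as follows (a minimal illustration in Python; the function name is ours, not the paper's):

```python
import random

random.seed(0)  # reproducible illustration

def choose_mechanism(rng=random):
    """Pick one of WOA's two position-update models with equal probability."""
    p = rng.random()  # p drawn uniformly from [0, 1)
    return "shrinking_enclosing" if p < 0.5 else "spiral_update"

counts = {"shrinking_enclosing": 0, "spiral_update": 0}
for _ in range(10_000):
    counts[choose_mechanism()] += 1
# Over many draws, each mechanism is chosen roughly half the time.
```

Because p is drawn fresh for every agent in every iteration, the choice of model is itself a source of the randomness discussed later in this paper.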
The shrinking enclosing mechanism simulates the process of whale predation and is described as follows:

D = |C · X_p(t) − X(t)|  (1)
X(t + 1) = X_p(t) − A · D  (2)

where X indicates the position vector of the search agent, t is the current iteration number, X_p(t) indicates the position of the current optimal value (the prey), and A · D = A · |C · X_p(t) − X(t)| indicates the indentation distance of the search agent toward the current optimal value, which simulates the whale approaching the prey. A and C are coefficient vectors used to control the movement of the search agent, described as:

A = 2a · r − a  (3)
C = 2r  (4)

where r is a random number in [0,1] and a indicates the convergence factor, which gradually decreases from 2 to 0 over the iterations, so that A is a random value in [−a, a]. It is not difficult to see that, as the convergence factor a decreases, the shrinking enclosing mechanism realizes the continuous shrinking process of the whales. In fact, in addition to shrinking around and enclosing the prey, whale populations also search for prey randomly based on the positions of individuals. The random search of the whale population is described as:

X(t + 1) = X_rand(t) − A · |C · X_rand(t) − X(t)|  (5)

where A has the same meaning as in Eq (2) and X_rand(t) indicates a position vector selected at random from the current population. It can be seen from Eq (5) that, by selecting random vectors, the search agents realize the process of whales randomly finding prey. The shrinking enclosing mechanism can thus simulate both the enclosing of prey and the random search for prey, switching between the two search modes through the coefficient vector A. The model of the shrinking enclosing mechanism is shown in Fig. 1, where (x, y) indicates the position of the current search agent and (x*, y*) indicates the position of the current optimal value. The coefficient vector A can guide search agents through the exploitation or exploration process.
When |A| ≤ 1, the search agent moves closer to the current optimal value, which reflects the local exploitation ability of the algorithm. When |A| > 1, the search agent randomly searches the area away from the current optimal value, which reflects the global exploration ability of the algorithm. By closing in on or spreading out from the area of the optimal value, the shrinking enclosing mechanism lets the algorithm balance exploitation and exploration well.
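The two modes of the shrinking enclosing mechanism can be sketched per coordinate as follows (a one-dimensional illustration with our own function names; in the algorithm, A and C are drawn per agent and the update is applied to every dimension):

```python
import random

random.seed(1)  # reproducible illustration

def coefficients(a, rng=random):
    """A = 2*a*r - a, so A lies in [-a, a]; C = 2*r' with r' in [0, 1)."""
    A = 2 * a * rng.random() - a
    C = 2 * rng.random()
    return A, C

def shrink_encircle(x, x_prey, a, rng=random):
    """Exploitation step: move toward the prey by the indentation distance."""
    A, C = coefficients(a, rng)
    return x_prey - A * abs(C * x_prey - x)

def random_search(x, x_rand, a, rng=random):
    """Exploration step: move relative to a randomly chosen agent instead."""
    A, C = coefficients(a, rng)
    return x_rand - A * abs(C * x_rand - x)

# As the convergence factor a shrinks to 0, A -> 0 and the agent
# collapses exactly onto the reference position.
```

This makes the role of the convergence factor concrete: with a = 0 the agent lands exactly on the prey, while larger a permits moves far beyond it.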
In addition to the shrinking enclosing mechanism, the other mathematical model of the whale optimization algorithm is the spiral update mechanism, which can be described as:

X(t + 1) = D' · e^(b·l) · cos(2πl) + X_p(t)  (6)

where b is a constant that defines the shape of the logarithmic spiral, l is a random number in [−1,1] used to control the indentation effect of the search agent, and D' indicates the distance between the i-th search agent (whale) and the current optimal value, calculated by:

D' = |X_p(t) − X(t)|  (7)

The model of the spiral update mechanism is shown in Fig. 2.
It can be seen from Fig. 2 that the spiral update mechanism first calculates the distance between the search agent and the current optimal value, and then creates a logarithmic spiral route between them to ensure that the search agent moves closer to the target value. In addition, Fig. 2 shows the effect of the parameter l on the indentation of the search agents: decreasing l strengthens the indentation toward the current optimal value and increases the local search ability of the algorithm, while increasing l weakens the indentation and reduces the probability of the algorithm falling into a local optimum.
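The spiral move can be sketched per coordinate as follows (illustrative Python; b = 1 follows the common WOA default, and the function name is ours):

```python
import math
import random

random.seed(2)  # reproducible illustration

def spiral_update(x, x_prey, b=1.0, rng=random):
    """Logarithmic-spiral step: d * e^(b*l) * cos(2*pi*l) + x_prey."""
    l = rng.uniform(-1.0, 1.0)   # controls the indentation effect
    d = abs(x_prey - x)          # distance to the current optimal value
    return d * math.exp(b * l) * math.cos(2 * math.pi * l) + x_prey

# An agent already at the prey position stays there, since d = 0.
```

The factor e^(b*l)·cos(2πl) is at most e^b in magnitude, so the move stays within a bounded multiple of the current distance to the prey.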

B. PSEUDO CODE AND DISCUSSION OF WOA ALGORITHM
The pseudo code of WOA is described as follows.

Initialize the whale population X_i (i = 1, 2, 3 ... n)
Calculate the fitness of each search agent and select the best agent X_p
While t < maximum number of iterations
  For each search agent
    Update a, A, C, l and p
    If p < 0.5
      If |A| < 1, update the position by Eq (2)
      Else, select a random agent X_rand and update the position by Eq (5)
    Else, update the position by Eq (6)
  End For
  Check if any search agent goes beyond the search space and amend it
  Calculate the fitness of each search agent
  Update X_p if there is a better solution
  t = t + 1
End While
Return X_p

From the pseudo code and the mathematical model, it can be seen that WOA has the following advantages: 1) The concept and structure of the algorithm are simple and easy to implement. 2) WOA regards the current best solution as the prey; the search agents update their positions according to the prey, so they can explore and exploit without losing the target value and move toward excellent areas of the search space. 3) WOA retains the best solution in each iteration, so even if the quality of the entire population declines, the prey is not affected. 4) The position update mechanism designed in WOA considers the balance between exploitation and exploration and can help avoid falling into local optima. 5) As a swarm intelligence algorithm, WOA continuously improves the initial random solutions; compared with individual-based algorithms, it has better global exploration.
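The pseudo code above can be condensed into a short runnable sketch (minimization on a box-constrained problem; the parameter defaults and the sphere test function are ours, not the paper's):

```python
import math
import random

def woa(fitness, dim, lb, ub, n_agents=20, max_iter=200, b=1.0, seed=1):
    """Minimal whale optimization algorithm for minimisation."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=fitness)[:]              # prey: best solution so far
    for t in range(max_iter):
        a = 2.0 * (1 - t / max_iter)           # convergence factor: 2 -> 0
        for i in range(n_agents):
            r, p = rng.random(), rng.random()
            A, C = 2 * a * r - a, 2 * rng.random()
            l = rng.uniform(-1.0, 1.0)
            if p < 0.5:                        # shrinking enclosing mechanism
                ref = best if abs(A) < 1 else X[rng.randrange(n_agents)]
                X[i] = [ref[j] - A * abs(C * ref[j] - X[i][j])
                        for j in range(dim)]
            else:                              # spiral update mechanism
                X[i] = [abs(best[j] - X[i][j]) * math.exp(b * l)
                        * math.cos(2 * math.pi * l) + best[j]
                        for j in range(dim)]
            X[i] = [min(max(v, lb), ub) for v in X[i]]  # amend out-of-bounds agents
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):      # keep the best solution (prey)
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)       # unimodal test function
sol = woa(sphere, dim=5, lb=-10.0, ub=10.0)
```

On the 5-dimensional sphere function this sketch converges close to the origin, illustrating the exploitation behavior described above.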
At the same time, according to the discussion of WOA in the introduction, its potential flaws should also be considered: 1) Too many random parameters bring uncertainty to the algorithm. 2) WOA designed parameters to balance the exploitation and exploration phases, but these parameters are random; they should instead change adaptively with the iterations. 3) The WOA algorithm easily falls into local optima on multimodal functions; improvement strategies need to be introduced to further enhance the transition between exploitation and exploration.

III. IMPROVED WHALE OPTIMIZATION ALGORITHM BASED ON NONLINEAR ADAPTIVE WEIGHT AND GOLDEN SINE OPERATOR

A. NONLINEAR ADAPTIVE WEIGHT
Local exploitation and global exploration are two important stages of an optimization algorithm. Excessive exploitation suppresses the tendency of search agents to move toward the global optimum, while excessive exploration reduces the optimization accuracy. Therefore, finding an appropriate transition between the two stages improves the performance of the optimization algorithm. WOA designed two mathematical models, the shrinking enclosing mechanism and the spiral update mechanism. In the shrinking enclosing mechanism, when |A| ≤ 1 the search agent moves closer to the area of the target value, which reflects the exploitation ability; when |A| > 1 the search agent randomly searches the area away from the target value, which reflects the exploration ability. In the spiral update mechanism, decreasing the parameter l strengthens the indentation of the search agent toward the target value, reflecting exploitation, while increasing l weakens this indentation, reducing the probability of the algorithm falling into a local optimum and reflecting exploration. The changes of the parameters A and l therefore directly affect the balance between exploitation and exploration, and in turn the performance of the algorithm. However, A and l change randomly, which brings randomness to the algorithm. Therefore, this paper introduces a nonlinear adaptive weight so that A and l can be adjusted adaptively with the number of iterations, reducing the impact of randomness on the performance of the algorithm.
The proposed nonlinear adaptive weight C1 depends on t, the current iteration number, T, the total number of iterations, and an adjustment factor k, which tunes the convergence behavior of the weight. The influence of the adjustment factor k on C1 is shown in Fig. 4, from which it can be seen that k = 1/2 gives the best convergence effect: the weight decreases slowly in the early stage and sharply in the later stage, which effectively balances exploration and exploitation. After introducing the nonlinear adaptive weight C1, Eqs (2) and (5) of the shrinking enclosing mechanism are updated accordingly, yielding Eqs (9) and (10).
After introducing the nonlinear adaptive weight C1, Eq (6) of the spiral update mechanism is likewise updated, yielding Eq (11).
During the first half of the iterations, the downward trend of the nonlinear adaptive weight C1 is slow, and the values of the parameters A and l do not decrease significantly; the shrinking enclosing mechanism and the spiral update mechanism can therefore let the search agents perform a global search, explore areas of the search space with excellent results, and improve the exploration capability of the algorithm. As the number of iterations increases, the downward trend of the weight accelerates and the values of A and l decrease significantly; the two mechanisms then make the search agents move closer to the target-value area, which improves the exploitation ability of the algorithm. The nonlinear adaptive weight thus adjusts A and l adaptively, balancing the exploration and exploitation stages of the algorithm.
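This excerpt does not reproduce the paper's exact expression for C1, so the sketch below only assumes a form matching the behavior described above (slow decay early, sharp decay late, with the curvature set by k); it is an illustration, not the paper's equation:

```python
def nonlinear_weight(t, T, k=0.5):
    """Assumed nonlinear adaptive weight: C1 = 1 - (t / T) ** (1 / k).
    NOT the paper's exact equation; chosen only to decay slowly at first
    and sharply near the end of the run, as the text describes."""
    return 1.0 - (t / T) ** (1.0 / k)

# The weight would then damp the random parameters over time,
# e.g. A_weighted = nonlinear_weight(t, T) * A.
```

With k = 1/2 this reduces to 1 − (t/T)², which drops by only 1% over the first tenth of the run and then falls steeply, mirroring the curve behavior attributed to Fig. 4.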

B. GOLDEN SINE OPERATOR
The Golden Sine Algorithm (GoldSA) is a math-inspired heuristic algorithm proposed by Erkan Tanyildizi in 2017, inspired by the sine function. GoldSA uses the sine function combined with the golden section coefficient to perform an iterative search, and it has good robustness and convergence speed. The sine function, abbreviated sin, has the range [−1,1]; it is a periodic function that repeats its values at regular intervals, with period 2π. The sine function has a special relationship with the unit circle, shown in Fig. 5: the values of the sine function correspond to the y-coordinates of the points on the unit circle centered at the origin with radius 1. Traversing the points of the sine function is therefore equivalent to traversing all the points on the unit circle, and scanning the unit circle via the sine function is similar to exploring a search space.
In the 4th century BC, the ancient Greek mathematician Eudoxus was the first to study the golden section systematically, establishing the theory of proportion. Around 300 BC, Euclid absorbed the research results of Eudoxus when writing the Elements and further systematically discussed the golden section, producing the earliest treatise on the subject. Shapes designed in this proportion are very elegant, with the most harmonious dimensions between the whole and its parts; the golden section is therefore widely used in the field of art. The golden section coefficient is derived from the following equation:

(a + b) / a = a / b,  a > b > 0  (12)

Letting x = a / b, Eq (12) can be described as the following equations:

x = 1 + 1/x  (13)
x² − x − 1 = 0  (14)

Solving Eq (14) gives the golden ratio x = (1 + √5)/2 ≈ 1.618, whose reciprocal is the golden section coefficient:

τ = (√5 − 1)/2 ≈ 0.618  (15)

The golden section coefficient does not require gradient information, and each step requires only one evaluation. At the same time, the shrink step of the golden section coefficient is fixed. Therefore, combining the sine function with the golden section coefficient can find the maximum or minimum of a unimodal function faster. The combination of the sine function and the golden section coefficient is shown in Fig. 6. The golden sine algorithm adds the golden section coefficient in the process of updating the position of the search agent, so that the algorithm can continuously reduce the search space. The algorithm searches in areas that produce excellent results instead of the entire search space, which greatly improves the convergence speed of the algorithm.
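The derivation of the golden section coefficient can be checked numerically:

```python
import math

# Golden section: split a segment so that (a + b) / a = a / b.
# With x = a / b this gives x**2 - x - 1 = 0, whose positive root is
# the golden ratio; the golden section coefficient is its reciprocal.
phi = (1 + math.sqrt(5)) / 2   # golden ratio, ~1.6180339887
tau = (math.sqrt(5) - 1) / 2   # golden section coefficient, ~0.6180339887

assert abs(phi ** 2 - phi - 1) < 1e-12   # phi solves x**2 - x - 1 = 0
assert abs(1 / phi - tau) < 1e-12        # tau is the reciprocal of phi
assert abs(phi - 1 - tau) < 1e-12        # equivalently, tau = phi - 1
```

The identity τ = φ − 1 = 1/φ is what makes the shrink step fixed: each reduction keeps the same proportion of the remaining interval.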
Like other population-based optimization algorithms, the golden sine algorithm first randomly generates an initial population in the search space:

V = rand(N, D) · (ub − lb) + lb  (16)

where rand(N, D) indicates the positions of N search agents randomly distributed in the D-dimensional space, ub is the upper bound of the search space, and lb is the lower bound of the search space. At this point, the population initialization of the golden sine algorithm is complete. To improve the random population and enable the search agents to move toward the target value, the golden sine algorithm uses the Golden Section Search (GSS) to update the positions of the search agents. The flowchart of the GSS method is shown in Fig. 7. The equation for updating the population position is described as follows:

V(t + 1) = V(t) · |sin(r1)| − r2 · sin(r1) · |x1 · D(t) − x2 · V(t)|  (17)
where V(i, j) indicates the position of the i-th search agent in the j-th dimension and D(j) indicates the position of the target value. r1 and r2 are random numbers, r1 ∈ [0, 2π] and r2 ∈ [0, π]. x1 and x2 are coefficients calculated through the golden section, which drive the search agent closer to the target value:

x1 = a + (1 − τ) · (b − a)  (18)
x2 = a + τ · (b − a)  (19)

In the golden sine algorithm, the equations for calculating the coefficients x1 and x2 are the same as in the GSS method. To adapt them to the optimization setting, the values of a and b change with the target value, which in turn changes x1 and x2; the initial values of a and b are −π and π. The flowchart of this parameter change is shown in Fig. 8.
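The interior points x1 and x2 of the classical golden section search, which GoldSA initializes on [−π, π], can be sketched as follows (illustrative; variable names follow the text):

```python
import math

TAU = (math.sqrt(5) - 1) / 2   # golden section coefficient, ~0.618

def gss_points(a, b):
    """Interior points of the golden section search on the interval [a, b]."""
    x1 = a + (1 - TAU) * (b - a)
    x2 = a + TAU * (b - a)
    return x1, x2

# GoldSA starts from a = -pi, b = pi and shrinks [a, b] toward whichever
# interior point gives the better objective value, so each step keeps a
# fixed fraction TAU of the current interval.
x1, x2 = gss_points(-math.pi, math.pi)
```

On the symmetric initial interval the two points are mirror images of each other, and x2 always sits a fraction τ of the way across the interval, which is the "fixed shrink step" described above.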
The pseudo code of GoldSA is described as follows.

Initialize the population V by Eq (16)
While t < maximum number of iterations
  Update parameters a, b, x1 and x2
  Update the position of each search agent by Eq (17)
  Update the target value D if there is a better solution
  t = t + 1
End While
Return D

C. IMPROVED WHALE OPTIMIZATION ALGORITHM (NGS-WOA)
The original WOA designed its mathematical models (the shrinking enclosing mechanism and the spiral update mechanism) to balance exploitation and exploration. However, the parameters that switch between exploitation and exploration are random, which brings uncertainty to the algorithm. Therefore, the nonlinear adaptive weight is introduced in this paper: parameters A and l are adjusted adaptively so that the location update mechanism can find a suitable transition between the exploration and exploitation phases. With the introduction of the nonlinear adaptive weight, the shrinking enclosing mechanism is updated to Eqs (9) and (10), and the spiral update mechanism is updated to Eq (11).
The main goal of an optimization algorithm is to find the optimal areas in the search space and to ensure that search agents explore these areas as completely as possible. Optimization problems generally have a wide search space, and narrowing the search range improves the algorithm's global convergence and local exploitation. To achieve this, this paper introduces an improved golden sine operator into the whale optimization algorithm. The sine function has a special relationship with the unit circle: traversing all points on the sine function is equivalent to scanning the unit circle, so the search agent can scan the search space more efficiently along a sinusoidal route, which improves the global exploration capability of the algorithm. At the same time, the golden section coefficient enables the search agent to update its distance and direction with a fixed step size, continuously narrowing the space to be explored. Search agents search in areas that return excellent results, rather than the entire search space, improving the local exploitation capability of the algorithm. The improved golden sine operator adds indentation coefficients to the golden sine operator and enhances the moving effect of the search agent, so that agents can quickly move into excellent regions and the convergence speed of the algorithm improves. The improved golden sine operator is used to update the position of the search agent, where r1 and r2 are random numbers (r1 controls the moving distance of the search agent and r2 its moving direction), and m1 and m2 are the improved indentation coefficients, which drive search agents closer to the target value in longer steps. Fig. 9 is the flowchart of NGS-WOA. The pseudo code of NGS-WOA is as follows.
Initialize the whale population X_i (i = 1, 2, 3 ... n)
Calculate the fitness of each search agent and select the best agent X_p
While t < maximum number of iterations
  Update a, A, C, l, p and the nonlinear adaptive weight C1
  For each search agent
    If p < 0.5
      If |A| < 1, update the position by Eq (9)
      Else, select a random agent X_rand and update the position by Eq (10)
    Else, update the position by Eq (11)
  End For
  Check if any search agent goes beyond the search space and amend it
  Calculate the fitness of each search agent and update X_p if there is a better solution
  Update X_p with the improved golden sine operator
  t = t + 1
End While
Return X_p
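The golden-sine refinement of the best solution can be sketched per coordinate as follows. The excerpt does not define the improved coefficients m1 and m2, so they are left as inputs here and the values used below are purely illustrative:

```python
import math
import random

def golden_sine_step(v, d, m1, m2, rng=random):
    """GoldSA-style update of a coordinate v toward the target d.
    r1 controls the moving distance, r2 the moving direction; m1 and m2
    stand in for the paper's improved indentation coefficients, whose
    exact definition is not given in this excerpt."""
    r1 = rng.uniform(0.0, 2.0 * math.pi)
    r2 = rng.uniform(0.0, math.pi)
    return v * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(m1 * d - m2 * v)

rng = random.Random(4)
new_v = golden_sine_step(2.0, 2.0, m1=1.0, m2=1.0, rng=rng)
# With v == d and m1 == m2, the pull term vanishes and the result is
# v * |sin(r1)|, which never exceeds |v|.
```

Because |sin(r1)| ≤ 1, an agent already coinciding with the target cannot be pushed away, while the second term pulls agents toward the weighted target position otherwise.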

D. TIME COMPLEXITY ANALYSIS
The time complexity of an algorithm is a function that describes the growth of its running time, usually expressed in big-O notation. To calculate it, the number of operation units of the algorithm is estimated, so the total running time and the number of operation units differ by at most a constant factor. In an optimization algorithm, the time complexity is related to the number and structure of its operation units. For WOA, it mainly depends on the number of search agents, the number of iterations, and the location update mechanism; for the improved NGS-WOA, it additionally depends on the improvement strategies. To evaluate the impact of the improved strategies on the running cost, the time complexities of WOA and NGS-WOA are analyzed.
The time complexity of each operation unit of WOA is described as follows.
1) The N search agents are distributed in the D-dimensional search space, which needs to run N · D times.
2) Calculate the fitness value of each search agent and select the best agent as the prey, which needs to run [N · (N − 1)]/2 times.
3) Parameters a, A, C, l and p are updated respectively, which needs to run 5 times.
4) The position update operation of N search agents in the D-dimensional space, which needs to run N · D times.

5) Output the optimal value (prey), which needs to run once.
Each of the above operation units is executed over T iterations. Therefore, the total time complexity of WOA is O(WOA) = T · (N · D + (N² − N)/2 + 6). The time complexity of each operation unit of the NGS-WOA algorithm is described as follows.
1) The N search agents are distributed in the D-dimensional search space, which needs to run N · D times.
2) Calculate the fitness value of each search agent and select the best agent as the prey, which needs to run [N · (N − 1)]/2 times.
3) Parameters a, A, C, l and p are updated respectively, which needs to run 5 times.
4) The nonlinear adaptive weight C1 is updated, which needs to run once.
5) The position update operation of the N search agents in the D-dimensional space needs to run N · D times.
6) Parameters r1, r2, m1 and m2 are updated respectively, which needs to run 4 times.
7) The N search agents perform the golden sine operation in the D-dimensional space, which needs to run N · D times.
8) Output the optimal value (prey), which needs to run once.
Each of the above operation units is executed over T iterations. Therefore, the total time complexity is O(NGS-WOA) = T · (N · D + (N² − N)/2 + 11) + T · N · D. The golden sine operator (GS) is used to update the optimal solution found in each iteration of the WOA algorithm.
Therefore, compared with WOA, NGS-WOA increases the time cost. Alternatively, GS can be embedded in WOA as a local operator; although this does not increase the time cost, it has only a limited effect on the optimization performance. The influence of the introduction position of the GS operator on the optimization performance of WOA is discussed in detail in Section IV-B.
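Using the per-iteration counts derived above, the extra cost of NGS-WOA relative to WOA can be tallied directly (a sketch of the bookkeeping, not a benchmark; the sample N, D and T are ours):

```python
def ops_woa(N, D, T):
    """Operation count from the WOA accounting: T * (N*D + (N^2 - N)/2 + 6)."""
    return T * (N * D + (N * N - N) // 2 + 6)

def ops_ngs_woa(N, D, T):
    """NGS-WOA adds the weight update, four extra parameters and a second
    N*D pass for the golden sine operator:
    T * (N*D + (N^2 - N)/2 + 11) + T*N*D."""
    return T * (N * D + (N * N - N) // 2 + 11) + T * N * D

# Both totals are O(T * (N*D + N^2)); NGS-WOA pays an extra T * (N*D + 5).
extra = ops_ngs_woa(30, 10, 500) - ops_woa(30, 10, 500)
```

The asymptotic order is unchanged, so the additional cost of the improvement strategies is a constant-factor overhead per iteration.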

IV. SIMULATION EXPERIMENTS AND RESULT ANALYSIS A. BENCHMARK FUNCTIONS
The simulation experiments used 28 benchmark functions to evaluate the performance of NGS-WOA. These test functions can be divided into three categories: unimodal functions, multimodal functions, and combined functions. Functions F1-F22 come from the CEC2005 test set; as a classic test set, it can comprehensively evaluate the performance of the algorithm. Functions F23-F28 are CEC2017 test functions; as the latest test set, they increase the credibility of the experiments. Functions F1-F7 are unimodal functions with only one global optimum and are used to evaluate the local exploitation ability and convergence speed of the algorithm. Functions F8-F13 are multimodal functions; unlike unimodal functions, they have multiple local optima, and the number of local optima grows with the size of the problem, making them an important reference for evaluating the exploration ability of the algorithm. Functions F14-F22 are combined (fixed-dimension multimodal) functions; unlike the multimodal functions, they have fewer dimensions and a small number of local optima, and their dimensions cannot be adjusted. The global optimum of each fixed-dimension multimodal function is shifted to test the optimization accuracy of the algorithm. The benchmark functions and their specific information are shown in Table 1.

B. ANALYSIS OF THE INTRODUCTION POSITION OF GS OPERATOR
The position at which the GS operator is introduced may affect the optimization performance of the WOA algorithm. This paper considers two methods of introducing the GS operator. In the first method, the GS operator is used to optimize the optimal solution found in each iteration of WOA. In the second method, the GS operator is embedded in WOA as a local operator, replacing the spiral update mechanism. The two methods are compared to explore their effectiveness. The convergence curves of some functions are shown in Fig. 10. However, a convergence curve only shows the convergence speed and optimization accuracy of an algorithm, so the mean and variance listed in Table 2 are used to show average accuracy and robustness intuitively. For each function, each algorithm was run independently 10 times. It can be seen from the convergence curves that the first method achieves better convergence speed and optimization accuracy than the second method, and it also has obvious advantages in average accuracy and robustness.
The choice among the WOA mathematical models is determined by random parameters, which increases the randomness of the algorithm. The first method updates the optimal solution found in each iteration, which can effectively counter this randomness; its advantage is that it significantly improves the performance of the algorithm, while its disadvantage is a small increase in time complexity. The second method embeds the GS operator in WOA, but the probability that the GS operator is selected is still determined by random parameters, so the randomness of the algorithm is not eliminated and the performance is not significantly improved; its advantage is that it does not increase the time complexity. Although the first method adds a small amount of time complexity, its optimization performance is significantly better. Therefore, this paper adopts the first method for introducing the GS operator.
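The structural difference between the two introduction methods can be sketched as two single-iteration step functions. The WOA and GS moves are passed in as stand-ins (`woa_update`, `gs_update` are hypothetical placeholders, not the paper's exact operators); what matters is where the GS move sits in the control flow.

```python
import numpy as np

def method1_step(woa_update, gs_update, X, best, objective):
    """Method 1: run the normal WOA update over the whole population,
    then apply the GS operator to the iteration-best solution
    (one extra evaluation, kept only if it improves the best)."""
    X = woa_update(X, best)
    fits = np.array([objective(x) for x in X])
    if fits.min() < objective(best):
        best = X[np.argmin(fits)].copy()
    cand = gs_update(best, best)            # GS refines the current best
    if objective(cand) < objective(best):
        best = cand
    return X, best

def method2_step(woa_move, gs_update, X, best, objective, rng):
    """Method 2: embed GS inside WOA in place of the spiral branch;
    whether GS fires for an agent still depends on a random parameter,
    so the algorithm's randomness is not eliminated."""
    for i in range(len(X)):
        if rng.random() < 0.5:
            X[i] = woa_move(X[i], best)     # encircling / searching branch
        else:
            X[i] = gs_update(X[i], best)    # GS replaces the spiral update
    fits = np.array([objective(x) for x in X])
    if fits.min() < objective(best):
        best = X[np.argmin(fits)].copy()
    return X, best
```

In both sketches the greedy updates guarantee the best solution never degrades; method 1 simply spends one extra evaluation per iteration on a deterministic refinement of the best, which matches the observed accuracy/time trade-off.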

C. COMPARISON OF NGS-WOA WITH OTHER ALGORITHMS
In order to objectively evaluate the performance of the improved WOA, this paper selects the whale optimization algorithm (WOA), gold sine algorithm (GoldSA), particle swarm optimization (PSO) algorithm, firefly algorithm (FA), fireworks algorithm (FWA), sine cosine algorithm (SCA), and NGS-WOA for comparison experiments. It should be noted that NGS-WOA uses the GS operator to update the optimal solution in each iteration, so NGS-WOA performs two position update evaluations per iteration, while the other algorithms perform only one. To maintain fairness, the number of iterations of NGS-WOA is set to 500, and the number of iterations of the other algorithms is set to 1000; the total number of position update evaluations of NGS-WOA is then 2 × 500, consistent with the other algorithms (1 × 1000). The parameter settings of the algorithms are listed in Table 3. The convergence curves of the algorithms are shown in Fig. 11. However, the convergence curves do not show the average accuracy and robustness of the algorithms. To further evaluate performance, each algorithm was run independently 10 times on each test function. The experimental results are listed in Table 4. The three indicators in Table 4 are the optimal value, mean value, and variance, which are used to evaluate the optimization accuracy, average accuracy, and robustness of the algorithms.
It can be seen from the convergence curves that NGS-WOA is superior to the other algorithms on 50% of the functions and is the best algorithm in this experiment. For the unimodal functions F1-F7, the improved WOA is inferior to the PSO algorithm and FA only on function F6. NGS-WOA has obvious advantages on functions F1-F5, which shows that the improved strategy proposed in this paper can effectively improve the local exploitation ability and convergence speed of WOA. For the multimodal functions F8-F13, NGS-WOA has obvious advantages on functions F8-F11. In particular, for functions F9 and F11, NGS-WOA and GoldSA find the theoretical optimal value within 100 iterations and exit the iteration. The improved WOA is inferior to the PSO algorithm and FA only on functions F12 and F13. This verifies that the introduction of the improved golden sine operator enables search agents to move into excellent areas of the search space. The convergence curves of the multimodal functions show that the improved algorithm can avoid stagnation at local optima and has strong global exploration ability. The combined functions F14-F22 have small dimensions and are mainly used to evaluate the optimization accuracy of the algorithms, so the differences in convergence speed are not obvious. For functions F23-F28, the improved algorithm has obvious advantages; the convergence speed of NGS-WOA is worse than that of GoldSA only on function F23. As can be seen from Table 4, NGS-WOA found the theoretical optimal value on 54% of the functions, WOA on 18%, GoldSA on 46%, the PSO algorithm, FA, and FWA each on 7%, and SCA on 3%. The improved WOA has the best optimization accuracy in this experiment.
NGS-WOA also performs well in terms of average accuracy and robustness. NGS-WOA is more competitive than other algorithms, which shows that the improved WOA has better optimization ability, and the excellent robustness makes the algorithm less susceptible to randomness.
In order to objectively present the comparison between NGS-WOA and the other algorithms, this paper uses the Wilcoxon rank sum test [51] to further analyze the experimental results. As a non-parametric statistical test, the Wilcoxon rank sum test uses sample ranks instead of sample values, which makes it possible to analyze whether the difference between two samples is significant; it can also test whether the distribution functions of two populations are equal. Table 5 describes the p-values of the Wilcoxon rank sum test. The statistical results determine the level of difference between NGS-WOA and each other algorithm, recorded as the p-value. If p exceeds 0.05, there is no significant difference between the two samples. If p is less than 0.05 and close to 0, there is a significant difference between the two samples. If the p-value is NaN, there is no difference between the two samples. The test results listed in Table 5 show that the p-value of NGS-WOA is less than 0.05 for most functions. The Wilcoxon rank sum test further shows that NGS-WOA has significant advantages over the other algorithms.
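The p-value procedure above can be reproduced with SciPy's rank-sum test. One caveat: where the paper reports NaN for two identical result samples, `scipy.stats.ranksums` instead returns p = 1 (the z-statistic is 0), which corresponds to the same "no difference" verdict.

```python
import numpy as np
from scipy.stats import ranksums

def p_value_verdict(runs_a, runs_b, alpha=0.05):
    """Wilcoxon rank-sum test on two algorithms' independent-run results,
    interpreted as in the paper: p < 0.05 -> significant difference;
    p >= 0.05 -> no significant change between the two samples."""
    _, p = ranksums(runs_a, runs_b)
    return p, ("significant" if p < alpha else "not significant")
```

For example, comparing 10 runs of two algorithms whose results come from clearly separated distributions yields a p-value far below 0.05, while two identical constant samples yield p = 1.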
In summary, the results in this section show that NGS-WOA is the best algorithm, followed by GoldSA and then WOA. NGS-WOA is not only better than WOA but also better than GoldSA, which verifies the effectiveness of the improved strategy proposed in this paper. The nonlinear weight enables the position update mechanism of the whale algorithm to adaptively balance the exploration and exploitation phases. In addition, the introduction of the improved golden sine operator enhances the movement of the search agents, so that they search efficiently only in areas that produce good results, which improves the exploitation and exploration capabilities of the algorithm.

D. COMPARISON WITH OTHER IMPROVED ALGORITHMS
Section C compared NGS-WOA with several basic algorithms, and the experimental results show that NGS-WOA has obvious advantages. To further explore the performance of NGS-WOA, this paper selects other improved algorithms for comparative experiments: IWOA [52], CPWOA [53], PSOGSA [54], and VPSO [55]. Functions F1-F28 are used in the experiment. The convergence curves of some functions are shown in Fig. 12. For each function, each algorithm was run independently 10 times, and the statistical results are shown in Table 6. It can be seen from the experimental results that NGS-WOA has the best optimization ability, with obvious advantages in convergence speed, average accuracy, and robustness. IWOA is the second-best algorithm, with good accuracy on many functions, and CPWOA is the third best. In summary, the comparison experiments with other improved algorithms further verify that NGS-WOA has strong performance.

E. EFFECTIVENESS ANALYSIS OF IMPROVED STRATEGIES
This paper proposes two improvements to the WOA. First, the nonlinear adaptive weight is introduced to enable the algorithm to balance exploitation and exploration. Second, to further increase the exploitation and exploration capabilities of the algorithm, an improved golden sine operator is introduced. Section B preliminarily verified that the improved strategies can effectively improve the performance of the whale optimization algorithm. To further evaluate their effect, the improved WOA (NGS-WOA), the whale optimization algorithm (WOA), and a WOA based only on the golden sine operator (GS-WOA) were selected for comparison experiments on functions F1-F22; the convergence curves are shown in Fig. 13. Each function was run independently 10 times, and the experimental results are listed in Table 7. It can be seen from the convergence curves that, for both the unimodal functions F1-F7 and the multimodal functions F8-F13, NGS-WOA has better convergence speed and optimization accuracy than WOA, which indicates that the two improvement strategies proposed in this paper can effectively improve the performance of the algorithm. Because the dimensions of functions F14-F22 are small, the differences between the algorithms on them are not obvious. The WOA based on the golden sine operator (GS-WOA) is significantly better than WOA, which shows that the improved sine operator lets the search agents scan the space efficiently and improves the exploration ability of the algorithm. The shrink coefficient enhances the movement of the search agents; they move into excellent areas, improving the exploitation of the algorithm. NGS-WOA has a better optimization effect than GS-WOA, indicating that the introduction of the nonlinear adaptive weights further balances exploitation and exploration.
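The role of the nonlinear adaptive weight in this ablation can be visualized with a small sketch. The exact weight formula is defined earlier in the paper and is not reproduced in this section, so the quadratic decay below is an illustrative stand-in, not the paper's equation; the point is only the qualitative behavior: a large weight early (wide exploration) decaying nonlinearly to a small weight late (fine exploitation).

```python
import numpy as np

def nonlinear_weight(t, T):
    """Illustrative nonlinear adaptive weight (ASSUMED form, not the
    paper's exact formula): decays slowly at first, then quickly,
    shifting the search from exploration toward exploitation."""
    return 1.0 - (t / T) ** 2

# Weight trajectory over a full run of T iterations
T = 500
w = np.array([nonlinear_weight(t, T) for t in range(T + 1)])
```

Any monotonically decreasing nonlinear schedule with w(0) = 1 and w(T) = 0 exhibits the same exploration-to-exploitation transition that the ablation attributes to the adaptive weight.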
For each function, the results of 10 independent runs of the algorithm are selected for Wilcoxon rank sum test. Table 8 describes the p-value test for Wilcoxon rank sum test. It can be seen from the p-value test that NGS-WOA has significant advantages over WOA and GS-WOA, which indicates that a variety of improvement strategies can significantly improve the performance of the algorithm.

F. HIGH-DIMENSIONAL FUNCTION OPTIMIZATION
The search spaces of practical optimization problems are complex, high-dimensional, and noisy. For example, different data sets lead to neural networks with different structures; when an optimization algorithm is used as a neural network trainer, more than 50 dimensions may need to be optimized [52]. To verify the effectiveness of the algorithm for solving high-dimensional optimization problems, this paper applies NGS-WOA to high-dimensional function optimization experiments. The dimensions of the fixed-dimension functions F14-F22 cannot be changed, so the dimension D = 30 of functions F1-F13 is increased to D = 50, D = 100, D = 150, and D = 200, respectively. The experimental results are listed in Table 9. Generally, benchmark functions create the search space based on dimensions, number of variables, and constraints. The computational cost of an algorithm increases with the dimension, and the algorithm becomes prone to the curse of dimensionality. For the unimodal functions F1-F7, the search range grows as the dimension increases, and the exploitation capability of the search agents decreases. In addition, an increase in dimension increases the number of local optima of the multimodal functions F8-F13, which may cause stagnation at local optima; if the exploration capability of an algorithm is weak, its search agents cannot jump out of local optima. High-dimensional optimization problems are therefore challenging. It can be seen from the experimental results that NGS-WOA found the theoretical optimal value of 54% of the functions in all four high-dimensional settings. Among them, for the unimodal functions F1-F7, the increase in dimension has no obvious effect on the improved WOA, which indicates that NGS-WOA has strong exploitation ability.
For the multimodal functions F8-F13, the average accuracy and robustness of NGS-WOA are excellent, without succumbing to the curse of dimensionality, which shows that the algorithm has strong exploration ability. It should be noted that the optimal value of function F8 is f_min = −418.9829 · D; the global optimal value shifts with the dimension, and NGS-WOA can still find the theoretical optimal value. To verify the performance difference of the improved WOA in different dimensions, Table 10 describes the p-value test of the Wilcoxon rank sum test. The p-value results show that, for most functions, there is no difference in performance between high-dimensional and low-dimensional optimization. The Wilcoxon rank sum test further illustrates that the improved WOA has strong optimization ability and robustness and can avoid stagnation at local optima; NGS-WOA can effectively solve high-dimensional optimization problems. To explore the impact of varying the population and number of iterations on high-dimensional optimization performance, Table 11 shows the statistical results of different settings in 100 dimensions. The experimental results show that increasing the population and the number of iterations has little effect on NGS-WOA.
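The dimension-dependent optimum of F8 is easy to check numerically. Assuming the classical numbering in which F8 is Schwefel's function, its theoretical optimum f_min = −418.9829 · D is attained near x_i ≈ 420.9687 in every dimension, which is what makes it a good probe of whether an algorithm tracks a shifting global optimum:

```python
import numpy as np

def schwefel(x):
    """F8 (Schwefel's function): the global optimum shifts with the
    dimension, f_min = -418.9829 * D, attained near x_i = 420.9687."""
    x = np.asarray(x, dtype=float)
    return float(-np.sum(x * np.sin(np.sqrt(np.abs(x)))))

def f8_optimum(D):
    """Dimension-dependent theoretical optimum used to judge success."""
    return -418.9829 * D
```

Evaluating `schwefel` at the known optimizer for D = 50, 100, 150, and 200 reproduces the scaling of the theoretical optimum used in Table 9.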

G. ENGINEERING OPTIMIZATION DESIGN
To verify the effectiveness of the improved WOA for engineering optimization, NGS-WOA was applied to the optimization design of welded beams. The purpose of welded beam optimization is to minimize the manufacturing cost of the welded beam. The four optimization constraints are the shear stress τ, the bending stress in the beam θ, the buckling load p_c, and the deflection of the beam δ. The four variables to be optimized are the thickness of the weld seam h, the length of the clamped beam l, the height of the bar t, and the thickness of the bar b. The objective function, optimization constraints, and optimization variables of the welded beam optimization problem are expressed by the following equations.
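Since the equation block is not reproduced in this excerpt, the standard welded-beam formulation from the literature can serve as a sketch; the constants (P = 6000 lb, L = 14 in, material moduli, and stress limits) are the commonly used values and are an assumption here, not taken from the paper.

```python
import numpy as np

# Standard welded-beam constants (commonly used values; ASSUMED here)
P, L_BEAM, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Manufacturing cost of the welded beam to be minimized."""
    h, l, t, b = x   # weld thickness, clamped length, bar height, bar thickness
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x):
    """Constraint values g_i(x) <= 0: shear stress, bending stress,
    geometry, deflection, and buckling load."""
    h, l, t, b = x
    tau_p = P / (np.sqrt(2.0) * h * l)                       # primary shear
    M = P * (L_BEAM + l / 2.0)
    R = np.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (np.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                                       # torsional shear
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L_BEAM / (b * t**2)                    # bending stress
    delta = 4.0 * P * L_BEAM**3 / (E * t**3 * b)             # beam deflection
    p_c = (4.013 * E * np.sqrt(t**2 * b**6 / 36.0) / L_BEAM**2) \
          * (1.0 - t / (2.0 * L_BEAM) * np.sqrt(E / (4.0 * G)))  # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, h - b, delta - DELTA_MAX, P - p_c]
```

A design such as (h, l, t, b) = (0.3, 3.0, 9.0, 0.3) satisfies all five constraints, while the well-known near-optimal design (0.2057, 3.4705, 9.0366, 0.2057) yields a cost of about 1.7246, the neighborhood in which the algorithms in Table 12 compete.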
To objectively demonstrate the performance of NGS-WOA, the WOA, the GSA in [43], the GA (Deb) algorithm in [53], [54], and the HS (Lee and Geem) algorithm in [55] are selected. In addition, mathematical methods such as the stochastic method, the simplex method, and successive linear approximation are selected [56]. The optimization results listed in Table 12 show that the improved WOA achieves the best optimization cost, better not only than the other algorithms but also than WOA. The comparison of the statistical results in Table 13 shows that the improved WOA is better than GSA in mean and variance, but inferior to the PSO algorithm and WOA.
The purpose of spring optimization is to minimize the weight of the spring. There are three design variables: the wire diameter d, the mean coil diameter D, and the number of active coils n. The optimal design of the spring is expressed by the following equation. To objectively demonstrate the performance of NGS-WOA, WOA and the GSA in [43] were selected. In addition, mathematical techniques (constraint correction at constant cost [57] and penalty functions [58]) and meta-heuristic techniques such as the PSO algorithm [59], Evolution Strategy (ES) [60], GA [61], improved harmony search (HS) [62], Differential Evolution (DE) [63], and the Ray Optimization (RO) algorithm [64] were selected. The optimization results listed in Table 14 show that the improved WOA achieves the best optimization cost. The comparison of the statistical results in Table 15 shows that the improved WOA is better than GSA, the PSO algorithm, and WOA in terms of mean and variance.
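As with the welded beam, the spring equation is not reproduced in this excerpt, so the standard tension/compression-spring formulation from the literature can stand in; the constraint constants are the commonly used ones and are an assumption here.

```python
import numpy as np

def spring_weight(x):
    """Spring weight to be minimized: f = (n + 2) * D * d^2
    (d: wire diameter, D: mean coil diameter, n: active coils)."""
    d, D, n = x
    return (n + 2.0) * D * d**2

def spring_constraints(x):
    """Standard constraint values g_i(x) <= 0 (deflection, shear stress,
    surge frequency, and geometry); constants are the commonly used ones."""
    d, D, n = x
    g1 = 1.0 - D**3 * n / (71785.0 * d**4)
    g2 = (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4)) \
         + 1.0 / (5108.0 * d**2) - 1.0
    g3 = 1.0 - 140.45 * d / (D**2 * n)
    g4 = (D + d) / 1.5 - 1.0
    return [g1, g2, g3, g4]
```

At the well-known near-optimal design (d, D, n) ≈ (0.05169, 0.35673, 11.2885), the weight is about 0.01267 and the active constraints sit at essentially zero, which is the regime in which the algorithms of Table 14 are compared.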

H. DISCUSSION OF NGS-WOA ALGORITHM
In view of the shortcomings of the WOA algorithm, this article improves it in two respects. First, the nonlinear weight makes the parameters change adaptively with the iterations, eliminating the uncertainty caused by random parameters. Second, the golden sine operator updates the optimal solution found in each iteration of WOA, which makes up for the shortcomings of a mathematical model determined by random parameters and further improves the optimization performance of the algorithm. Compared with other algorithms, NGS-WOA has better optimization performance, with the advantages of high precision and fast convergence. Compared with the original WOA, the disadvantage of NGS-WOA is that it adds a small amount of time complexity; however, the optimization performance is significantly improved. Experimental results show that NGS-WOA can effectively solve high-dimensional optimization and engineering optimization problems, which lays a theoretical foundation for the application of NGS-WOA.

V. CONCLUSION
As a novel swarm intelligence-based algorithm, WOA has a simple structure and a strong ability to solve optimization problems. However, it suffers from too many random parameters and stagnation at local optima. Therefore, this paper first introduces a nonlinear adaptive weight so that the position update mechanism can adaptively transition between the exploration and exploitation stages. In addition, this paper introduces an improved golden sine operator. The search agents explore the search space efficiently along a sinusoidal route, which improves the global exploration capability of the algorithm. At the same time, the golden section coefficient enables the search agents to update their distance and direction with a fixed step size and to search only in areas that produce good results. In addition, the shrink coefficient enhances the movement of the search agents, so that they can quickly move into excellent areas and improve the local exploitation capability of the algorithm. The simulation experiments used 28 test functions to evaluate the performance of the improved WOA (NGS-WOA). Experimental results show that the improved strategy proposed in this paper effectively improves the optimization performance of NGS-WOA, and that it is more competitive than other algorithms. At the same time, NGS-WOA can effectively solve high-dimensional optimization and engineering optimization problems, which is significant for further research on and application of improved whale optimization algorithms.