Grey Prediction Evolution Algorithm Based on Accelerated Even Grey Model

The grey prediction evolution algorithm based on the even grey model (GPEAe) is a pioneer of prediction-based evolutionary algorithms. Its offspring are generated by a first-order inverse accumulating generation operation (1-IAGO) that depends on every prediction of the even grey model. Because the first few predicted values used in the 1-IAGO correspond to already-known original values, this paper first develops an accelerated 1-IAGO (1-AIAGO), which replaces a particular prediction with the corresponding original value. An accelerated even grey model (AEGM(1,1)) based on the 1-AIAGO is then proposed. Finally, this paper proposes a new grey prediction evolution algorithm (GPEAae), which uses the AEGM(1,1) as the reproduction operator of an evolutionary algorithm to forecast the offspring. The performance of GPEAae is verified on the CEC2014 benchmark functions and a set of nine engineering constrained design problems. The experimental results show that GPEAae is superior and highly competitive when compared with GPEAe and other state-of-the-art algorithms. The motivation of GPEAae is to use the iterative framework of evolutionary algorithms to superimpose the weak effect generated by the proposed 1-AIAGO, which replaces an approximate (predicted) value with the corresponding accurate (original) value. MATLAB codes of this article can be found at https://github.com/Zhongbo-Hu/Prediction-Evolutionary-Algorithm-HOMEPAGE.


I. INTRODUCTION
The grey prediction evolution algorithm based on the even grey model (GPEAe) [1], first proposed by Zhongbo Hu et al. in 2019, was constructed by introducing grey prediction theory [2]-[6] into the optimization field. GPEAe is a population-based stochastic search method that predicts the evolutionary direction at the macroscopic level to lead the population gradually toward the global optimum. The main difference between GPEAe and other evolutionary algorithms (EAs) [7]-[11] is that GPEAe adopts prediction theory to generate offspring. Unlike traditional optimization techniques, GPEAe does not rely on gradient information and is able to jump out of local optima. In addition, GPEAe has broad application prospects due to its simple structure, easy coding, few parameters, and satisfactory performance in solving complex numerical optimization problems.
Grey system theory, the theoretical cornerstone of the grey prediction evolution algorithm (GPEA), has been widely applied in many different fields [12]-[15]. Several advanced grey models have been created in the past few years, e.g., the new multivariable grey model with structure compatibility [16], the nonlinear grey Bernoulli model [17], and the new-structure grey Verhulst model [18]. Since the GPEA was proposed, related research has developed along three lines. (I) Introducing new grey models to design new GPEAs: a multivariable grey model [19] was used to design a new GPEA in [20]. (II) Improving the GPEA with evolutionary approaches: a topological opposition-based learning strategy was used to improve the GPEA in [21]. (III) Applying the GPEA to specific optimization problems: the GPEA was applied to the environmental/economic dispatch problem in [22].

In the evolution process, the offspring of GPEAe are generated by a first-order inverse accumulating generation operation (1-IAGO). The 1-IAGO is used to calculate the final expression, which depends on every prediction of the even grey model (EGM(1,1)). Concretely, the EGM(1,1) first converts the unordered original data into a data sequence with a specific law, and then calculates the predicted values of the current and next data sequence. Finally, the 1-IAGO operates on the two predicted values to generate the data of the next trial population. The 1-IAGO is a recursive expression and directly affects the prediction effect of the EGM(1,1). The 1-IAGO helps GPEAe reach good optimization results, but there is still room to improve its prediction effect.
When analyzing the two predicted values in the calculation expression of the 1-IAGO, we find that the first predicted value is already known. This paper therefore first develops an accelerated 1-IAGO (1-AIAGO), which replaces the first predicted value with the corresponding original value. Based on the 1-AIAGO, a new grey model called the accelerated EGM(1,1) (AEGM(1,1)) is then proposed. Operating on the original value helps the 1-AIAGO make better use of the existing information in the data sequence. At the same time, the 1-AIAGO avoids one calculation of a predicted value, so the AEGM(1,1) has lower computational cost than the EGM(1,1).
On the basis of the AEGM(1,1), this paper proposes a new grey prediction evolution algorithm (GPEAae). GPEAae treats the population sequence of the evolutionary algorithm as a time series, and then uses an AEGM(1,1) to predict the reproductive individuals in the evolution process for generating the trial populations. Intuitively, the motivation of GPEAae is to amplify a weak effect in the grey model by using the iterative framework of evolutionary algorithms. Concretely, GPEAae superimposes and amplifies the weak effect generated by the proposed 1-AIAGO, which replaces an approximate (predicted) value with the corresponding accurate (original) value. The main contributions of this paper are as follows.
• Propose a new grey prediction evolution algorithm. As a new grey prediction evolution algorithm, GPEAae has better overall performance and lower computational complexity than GPEAe.
• Verify the advantages of the GPEAae. The performance of GPEAae is verified on the CEC2014 benchmark functions and a set of nine engineering constrained design problems. Compared with other state-of-the-art algorithms, GPEAae shows excellent performance in terms of solution accuracy.
• Propose a new grey prediction model. The AEGM(1,1) can be regarded as a new prediction model in grey prediction theory. It changes the way the predicted values are ultimately generated, and provides inspiration for the study of other grey prediction models.

The rest of this paper is organized as follows. Section 2 briefly presents grey prediction theory. The AEGM(1,1) and GPEAae are introduced with detailed explanations in Section 3. Several numerical experiments are carried out and discussed in Section 4. Finally, Section 5 gives the concluding remarks and future work.

II. PRELIMINARIES: THE GREY PREDICTION THEORY
Grey prediction theory has been widely used and has achieved remarkable results over the past few decades. Its prediction process can generally be divided into three stages that eventually produce the prediction sequence of the raw data. First, two operations (the first-order accumulating generation operation and the background value generation operation) transform the discrete raw data into a data sequence with an approximately exponential growth law. Then, the raw data and the transformed data sequence are fed into a grey prediction model (such as the even grey model) to establish an exponential function for generating the next elements. Finally, an inverse operation (the first-order inverse accumulating generation operation) is performed on the newly generated elements to obtain the forecast elements of the raw data.

A. GREY DATA TRANSFORMING
The purpose of grey data transforming is to convert an unordered non-negative sequence into a nearly exponential, monotonically increasing sequence, providing the data basis for constructing the grey prediction model. The accumulating generation operation (AGO) is the most fundamental data transforming operator in grey prediction theory.
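As a minimal illustration (our own sketch in Python, not code from the authors' MATLAB repository), the first-order AGO simply turns a raw sequence into its running sums:

```python
def ago(y0):
    """First-order accumulating generation operation (1-AGO):
    y1(k) = y0(1) + y0(2) + ... + y0(k)."""
    y1, total = [], 0.0
    for v in y0:
        total += v
        y1.append(total)
    return y1
```

For example, the raw sequence [1, 2, 4] becomes the monotonically increasing sequence [1, 3, 7], which follows an approximately exponential law and is easier to fit.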

Definition 3 (Even Grey Model):
Given y^(0)(k) and z^(1)(k) as shown above, an even grey model (EGM(1,1)) is described as

y^(0)(k) + α z^(1)(k) = β.    (3)

The development coefficient α reflects the development tendency of the time response sequence and the raw data, and the grey action quantity β reflects the changing relationship between the data. To acquire the time response function, the difference equation Eq. (3) of the EGM(1,1) is converted into the corresponding whitenization differential equation

dy^(1)/dt + α y^(1) = β.    (4)

The solution of Eq. (4), known as the time response function, is

y^(1)(t) = (y^(1)(1) − β/α) e^(−α(t−1)) + β/α.    (5)

When t = 1, take y^(1)(1) = y^(0)(1); then the time response sequence (after discretization) of the EGM(1,1) is given by

ŷ^(1)(k+1) = (y^(0)(1) − β/α) e^(−αk) + β/α,  k = 1, 2, . . .    (6)

The ŷ^(1)(k) make up the predicted sequence Ŷ^(1).
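The fitting and time response steps above can be sketched in a few lines of Python (our illustration, assuming the three-data-item setting used later in this paper; with two equations for two unknowns, the least-squares estimate of α and β is exact):

```python
import math

def egm11(y0):
    """Fit an EGM(1,1) to three raw data points y0 = [y0(1), y0(2), y0(3)]
    and return the discretized time response k -> yhat1(k), the predicted
    1-AGO sequence. Assumes alpha != 0 and distinct background values."""
    # 1-AGO sequence and background values z(1)(2), z(1)(3)
    y1 = [y0[0], y0[0] + y0[1], y0[0] + y0[1] + y0[2]]
    z2 = 0.5 * (y1[0] + y1[1])
    z3 = 0.5 * (y1[1] + y1[2])
    # Solve y0(k) + alpha*z(1)(k) = beta for k = 2, 3
    alpha = (y0[2] - y0[1]) / (z2 - z3)
    beta = y0[1] + alpha * z2
    c = beta / alpha
    # Time response: yhat1(k) = (y0(1) - beta/alpha) e^{-alpha (k-1)} + beta/alpha
    return lambda k: (y0[0] - c) * math.exp(-alpha * (k - 1)) + c
```

By construction the time response reproduces the first point exactly, yhat1(1) = y^(0)(1), and extrapolating to k = 4 gives the predicted fourth 1-AGO value used by the reproduction operator.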

C. PREDICTIVE GENERATION FOR RAW DATA
Predictive generation is an especially important step in the EGM(1,1); the prediction sequence of the raw data is ultimately obtained by the inverse operator of the 1-AGO (the 1-IAGO), i.e., ŷ^(0)(k+1) = ŷ^(1)(k+1) − ŷ^(1)(k).

III. THE GREY PREDICTION EVOLUTION ALGORITHM BASED ON AEGM(1,1)
Based on the grey prediction theory introduced above, this section first introduces the accelerated even grey model and then outlines the specific calculation procedures of GPEAae in detail. Special statement: in algorithm design, realizing the tradeoff between the solution accuracy and the complexity of an algorithm is an eternal topic. In this paper, the accelerated even grey model is constructed using three data items to predict the fourth (the reasons for constructing prediction models with three data items are explained in [1]).

A. THE ACCELERATED EVEN GREY MODEL
As an improvement of the EGM(1,1), the proposed accelerated even grey model (AEGM(1,1)) is distinguished by a new operation for generating the predictive data of the raw sequence. The calculation steps of the AEGM(1,1) are introduced as follows.
(i) Data generating and transforming.
In this stage, the accelerated 1-IAGO (1-AIAGO) is employed to obtain the next predictive datum of the raw sequence.

2) TIME COMPLEXITY ANALYSIS
While the 1-IAGO needs two predicted values, ŷ^(1)(3) and ŷ^(1)(4), for prediction, the 1-AIAGO uses the known original value y^(1)(3) and the predicted value ŷ^(1)(4). The 1-AIAGO thus makes full use of the original data information without calculating the predicted value ŷ^(1)(3), which reduces the time complexity of the grey model to some extent. Replacing the approximate value ŷ^(1)(3) with the corresponding accurate value y^(1)(3) seems to save only a small amount of computational overhead. But under the iterative framework of evolutionary algorithms, this weak effect is superimposed and amplified, ultimately having a significant influence on the time complexity of GPEAae.
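The difference between the two operators can be made concrete with a small side-by-side sketch (our own Python illustration; function names are assumptions). The standard 1-IAGO evaluates the time response twice, while the 1-AIAGO replaces one evaluation with the known accumulated value y^(1)(3):

```python
import math

def _fit(y0):
    """EGM(1,1) parameters for three raw data (see Section II).
    Returns the 1-AGO sequence and the time response function;
    assumes alpha != 0."""
    y1 = [y0[0], y0[0] + y0[1], y0[0] + y0[1] + y0[2]]
    z2, z3 = 0.5 * (y1[0] + y1[1]), 0.5 * (y1[1] + y1[2])
    alpha = (y0[2] - y0[1]) / (z2 - z3)
    beta = y0[1] + alpha * z2
    c = beta / alpha
    return y1, (lambda k: (y0[0] - c) * math.exp(-alpha * (k - 1)) + c)

def iago_predict(y0):
    # standard 1-IAGO: yhat0(4) = yhat1(4) - yhat1(3), two evaluations
    _, resp = _fit(y0)
    return resp(4) - resp(3)

def aiago_predict(y0):
    # 1-AIAGO: yhat0(4) = yhat1(4) - y1(3), one evaluation saved
    y1, resp = _fit(y0)
    return resp(4) - y1[2]
```

Per call the saving is just one exponential evaluation, but the operator is invoked once per element, per individual, per generation, so the savings accumulate over the whole run.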

B. THE GPEAae
GPEAae is constructed based on the AEGM(1,1) above; its evolution process is introduced below. At the beginning of GPEAae, the population is randomly initialized to generate potential solutions in the feasible region. During the iterations, the parent populations reproduce their offspring through a reproduction operator inspired by the AEGM(1,1). Finally, by repeatedly applying a selection operator until the termination conditions are reached, GPEAae finds the optimal solutions.

1) INITIALIZATION
Similar to traditional population-based EAs, each generation of the GPEAae population can be expressed as X^g = (x^g_1, x^g_2, . . . , x^g_N), g = 0, 1, . . . , g_max, where g represents the current generation, g_max is the maximum number of generations, and N is the number of individuals in the population. An individual in population X^g is symbolized by x^g_i = (x^g_{i,1}, x^g_{i,2}, . . . , x^g_{i,D}), where D denotes the dimension of the variables. Unlike traditional EAs, the proposed GPEAae needs to initialize three generations of populations. Therefore, 3N individuals are randomly generated in the feasible region and then sorted according to their fitness values. Concretely, the N individuals with the smallest fitness values make up the first generation X^0 (g = 0), the middle N individuals make up the second generation X^1 (g = 1), and the remaining N individuals with the largest fitness values make up the third generation X^2 (g = 2).
In the process of population initialization, the potential individuals are generated by uniformly distributed random numbers within the feasible region. For the ith individual in the first generation population X^0, the jth dimension element is randomly generated by

x^0_{i,j} = L_j + rand(0, 1) · (U_j − L_j),

where i = 1, 2, . . . , N and j = 1, 2, . . . , D, rand(0, 1) represents a random number in the range [0, 1], and L_j and U_j are the lower and upper boundaries of the jth dimension element. Discussion: three initialized populations are essential for GPEAae, and an alternative, effective way to realize the initialization is by means of another metaheuristic.
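The three-population initialization can be sketched as follows (our Python illustration; the function name and signature are assumptions, and fitness is taken as the objective value for minimization):

```python
import random

def initialize(f, N, D, low, up):
    """Randomly generate 3N individuals in [low, up]^D, sort them by
    fitness f (minimization), and split them into the three initial
    generations X0 (best N), X1 (middle N), X2 (worst N)."""
    pop = [[low[j] + random.random() * (up[j] - low[j]) for j in range(D)]
           for _ in range(3 * N)]
    pop.sort(key=f)                       # smallest objective value first
    return pop[:N], pop[N:2 * N], pop[2 * N:]
```

This costs 3N fitness evaluations up front, but gives the reproduction operator a meaningful "time series" of populations from the very first generation.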

2) REPRODUCTION OPERATOR
A novel reproduction operator, named the ae reproduction operator, is designed based on the AEGM(1,1). Broadly speaking, the ae reproduction operator treats the population sequence as a time series and brings three neighbouring populations into the AEGM(1,1) to construct exponential functions for predicting offspring solutions. Specifically, all calculations of the ae reproduction operator are performed at the element level of the individuals in the population.
Consider X^g as a target population; then three neighbouring generations X^{g−2}, X^{g−1}, X^g (g ≥ 2) compose a population series to generate the trial population (denoted as U^g). The individual series is symbolized by x_{r1}, x_{r2}, x_{r3}, which are randomly chosen from X^{g−2}, X^{g−1}, X^g, respectively. The element u^g_{i,j} of the trial vector u^g_i is generated by the AEGM(1,1) prediction

u^g_{i,j} = (x_{r1,j} − β_{i,j}/α_{i,j}) e^(−3α_{i,j}) + β_{i,j}/α_{i,j} − (x_{r1,j} + x_{r2,j} + x_{r3,j}),

where, as defined in Section 2 above, α_{i,j} and β_{i,j} are the development coefficient and the grey action quantity fitted to the three elements x_{r1,j}, x_{r2,j}, x_{r3,j}, respectively. The geometric explanation of the ae reproduction operator is illustrated in Fig. 1. The symbols x_{r1}, x_{r2}, and x_{r3} represent the three original data in the ae reproduction operator. The green dots denote the transformed data after the 1-AGO operation. Based on the transformed data, an exponential function is then constructed. The yellow dot denotes the fourth predicted datum on the exponential function. After a 1-AIAGO operation, the predicted datum u^g_{i,j} is finally generated.
Note: (1) When the three elements of the individual series are close enough, the exponential prediction cannot be constructed to predict the next element; a local search strategy based on random disturbance is then run as a replacement. (2) When two of the three elements are close enough, the exponential prediction conducted by the AEGM(1,1) degenerates into a linear prediction. (3) When an element of a new individual oversteps the feasible region, it is replaced by a random number generated within the feasible region. Please refer to [1] for the specific expressions.
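Putting the operator and its degenerate branches together, the element-level calculation can be sketched as follows (our Python illustration; the exact fallback expressions are given in [1], and the simple forms used here for the degenerate branches are our assumptions):

```python
import math

def ae_element(x1, x2, x3, eps=1e-8):
    """AEGM(1,1)-based prediction of one trial element from three elements
    drawn from generations g-2, g-1, g. Assumes the usual grey-model
    setting of non-negative data; degenerate cases fall back to simpler
    predictions as in the Note in the text."""
    if abs(x2 - x1) < eps and abs(x3 - x2) < eps:
        # all three close: the exponential model is ill-posed; the caller
        # should apply the random-disturbance local search instead
        return x3
    y1_2, y1_3 = x1 + x2, x1 + x2 + x3           # 1-AGO values y(1)(2), y(1)(3)
    z2, z3 = 0.5 * (x1 + y1_2), 0.5 * (y1_2 + y1_3)
    alpha = (x3 - x2) / (z2 - z3)
    if abs(alpha) < eps:
        return x3 + 0.5 * (x3 - x1)              # degenerate: linear prediction (assumed form)
    beta = x2 + alpha * z2
    c = beta / alpha
    yhat1_4 = (x1 - c) * math.exp(-3.0 * alpha) + c   # time response at k = 4
    return yhat1_4 - y1_3                             # 1-AIAGO: subtract the known y(1)(3)
```

A full reproduction step would call this once per dimension j with x_{r1,j}, x_{r2,j}, x_{r3,j}, then clip or resample any element that leaves the feasible region.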

3) SELECTION OPERATOR
To ensure the convergence rate of GPEAae, a greedy selection operator is adopted to reserve the most promising individuals for the next generation. These individuals are obtained by comparing the objective function values of the target individual x^g_i and the trial individual u^g_i. When solving minimization problems, if u^g_i acquires a smaller function value, it is passed to the next generation; otherwise, x^g_i is reserved in the population. This selection operator can be presented as

x^{g+1}_i = u^g_i if f(u^g_i) < f(x^g_i), and x^{g+1}_i = x^g_i otherwise.
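In code the greedy selection is a one-liner (our sketch; whether ties favor the trial or the target is a minor convention, and strict "smaller" is used here to match the text):

```python
def select(f, target, trial):
    """Greedy one-to-one selection for minimization: the trial individual
    replaces the target only if it achieves a strictly smaller value."""
    return trial if f(trial) < f(target) else target
```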

IV. NUMERICAL EXPERIMENTS
In the numerical experiments, 30 CEC2014 benchmark functions and nine engineering constrained design optimization problems are used to compare and evaluate the search performance of GPEAae and GPEAe. As completely new evolutionary algorithms, GPEAae and GPEAe are compared with DE and PSO on the CEC2014 functions. In recent years, DE and PSO have been two of the most widely studied and applied metaheuristics for many kinds of optimization problems, so the comparison experiments with DE and PSO can well reveal the effectiveness and application prospects of GPEAae and GPEAe. In the subsequent nine engineering constrained design experiments, GPEAae and GPEAe are compared with some original and improved well-known algorithms; the comparison results reflect their capability for practical optimization problems. Both GPEAae and GPEAe are coded and executed in MATLAB 2012a (Windows 7; Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz; 4GB RAM). All the experimental results are analyzed and discussed in detail.

A. CEC2014 BENCHMARK FUNCTION EXPERIMENTS
The 30 CEC2014 benchmark functions include unimodal, multimodal, hybrid, and composition functions (F23-F30). The detailed information of the 30 benchmark functions is provided in [23].

1) PARAMETER SETTINGS
To solve the benchmark functions, the same population size of 50 is used by all the compared algorithms (GPEAae, GPEAe, PSO, and DE), and two dimension sizes, 10D and 30D (D denotes the dimension), are simulated separately. As the termination criterion for the four algorithms, the maximum numbers of function evaluations (FEs) of the 10D and 30D simulations are set to 100000 and 500000, respectively. All experimental results are obtained through 30 independent runs, and performance indices such as the best, mean, and standard deviation of the error values F(x) − F(x*) are recorded and assessed, where F(x) is the minimum function value obtained by the best solution and F(x*) is the global optimal function value. The remaining parameters of DE and PSO are set as follows.

2) COMPARISONS ON SOLUTION ACCURACY AND CONVERGENCE RATE OF 10D CEC2014 BENCHMARK FUNCTIONS
In these experiments, GPEAae is compared with GPEAe, PSO, and DE. The statistical results revealing the detailed solution accuracy of the four algorithms are recorded in Table 1 and Table 2, including the Best, Mean, and Std values for the 30 10D benchmark functions over 30 independent runs. The best result of each index among the four compared algorithms is highlighted in boldface, and the average ranks are listed at the end of Table 2. Furthermore, the convergence characteristics of nine representative 10D functions (F3, F7, F15, F16, F17, F19, F24, F28, and F30) are illustrated in Fig. 2; the curves show the convergence rates of the four compared algorithms more clearly. Subfigure (b) in Fig. 2 shows that GPEAae achieves higher solution accuracy with a faster convergence rate than GPEAe, PSO, and DE. As subfigures (c), (e), (f), (g), and (i) in Fig. 2 show, GPEAae has a competitive convergence rate and achieves the highest final solution accuracy. It should be emphasized that the smaller the Best value obtained by an algorithm, the closer the algorithm approaches the theoretical optimum. With respect to the Best index, the function values obtained by GPEAae are the smallest among the four compared algorithms: GPEAae ranks first over the 30 functions with an average ranking of 1.87, while DE, GPEAe, and PSO rank second, third, and last with average rankings of 2.00, 2.40, and 3.57, respectively. This indicates that GPEAae achieves higher solution accuracy than DE, GPEAe, and PSO. However, GPEAae ranks only second in terms of the Mean and Std values, worse than DE but better than GPEAe and PSO. Nevertheless, the statistical results in Table 1 and Table 2 show that GPEAae achieves better solution accuracy than GPEAe on 22 of the 30 functions.
When comparing the Mean and Std values, GPEAae also outperforms GPEAe on 22 and 19 functions, respectively. Based on the above observations, it can be concluded that although GPEAae does not perform best on every performance index, its comprehensive performance is better than that of GPEAe, PSO, and DE in the 10D CEC2014 benchmark function experiments.

3) COMPARISONS ON SOLUTION ACCURACY AND CONVERGENCE RATE OF 30D CEC2014 BENCHMARK FUNCTIONS
To explore the performance of GPEAae in solving high-dimensional functions, the dimension of the 30 benchmark functions is set to 30 in these experiments. Table 3 and Table 4 show the statistical results obtained by GPEAae, GPEAe, PSO, and DE over 30 independent runs. Similar to the 10D CEC2014 experiments above, GPEAae still performs best in terms of solution accuracy. The Best values acquired by GPEAae are the smallest among the four algorithms on functions F1, F4, F9, F10, F11, F15, F16, F18, F21, F23, F29, and F30. It can be observed from the end of Table 4 that GPEAae is superior to GPEAe, PSO, and DE in the average ranking of Best values, and ranks second with regard to the Mean and Std values. In addition, Fig. 3 offers the convergence characteristic curves of nine representative 30D functions (F4, F9, F10, F11, F15, F16, F18, F21, and F29). All the subfigures in Fig. 3 illustrate that GPEAae has the highest solution accuracy among the compared algorithms. In particular, subfigure (f) shows that a fast convergence rate in the middle iterations helps GPEAae finally achieve high solution accuracy. The curves demonstrate that GPEAae exhibits both competitive solution accuracy and a competitive convergence rate compared with the other three competitors. When comparing GPEAae and GPEAe, the solution accuracy of GPEAae is better on eighteen functions, and GPEAae achieves smaller Mean and Std values than GPEAe on more than half of the 30 functions. These comparisons show that GPEAae performs much better than GPEAe on these functions. Even when the dimension of the CEC2014 benchmark functions changes from 10 to 30, GPEAae maintains stable performance. All in all, the experimental results show that although the robustness of GPEAae is not the best, it achieves better solution accuracy than GPEAe, PSO, and DE in the 30D CEC2014 benchmark function experiments.

4) SIGN TEST
As a widely used test method, the sign test [24] is employed to judge whether the difference between two selected algorithms is significant. The Best values obtained by GPEAae and the compared algorithms are selected as the evaluation indicator of the sign test. All the Best values of the 60 functions (30 10D and 30 30D CEC2014 functions) are obtained through 30 independent runs. The signs '+', '≈', and '−' in Table 5 denote that GPEAae has better, nearly the same, and worse performance than the compared algorithm, respectively. The null hypothesis of the sign test is that there is no significant difference between GPEAae and the compared algorithm; it is rejected when the p value is less than 0.05. As shown in the last column of Table 5, the p values (GPEAae vs. GPEAe, GPEAae vs. PSO, GPEAae vs. DE) are 0.015543, 2.21E-08, and 0.051510, respectively. In other words, the performance of GPEAae is significantly better than that of GPEAe and PSO; compared with DE, the advantage of GPEAae is slight and not significant.
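A simple two-sided binomial sign test over win/loss counts (ties discarded) can be sketched as follows; this is our illustration of the general method, and the exact convention in [24] (e.g., the treatment of '≈' ties) may differ:

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided binomial sign test: the probability, under the null
    hypothesis of no difference, of observing a win/loss split at
    least this extreme among n = wins + losses non-tied comparisons."""
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

For example, 8 wins against 2 losses over 10 non-tied functions gives p = 0.109375, which would not reject the null hypothesis at the 0.05 level.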

B. ENGINEERING CONSTRAINED DESIGN EXPERIMENTS
In this investigation, GPEAae and GPEAe are compared with some original and improved well-known algorithms on nine practical engineering design problems. The experimental results reflect the capability of GPEAae and GPEAe when resolving practical optimization problems. The nine engineering cases are as follows: three-bar truss, pressure vessel, tension/compression spring, welded beam, speed reducer, gear train, cantilever beam, heat exchanger and tubular column.

1) PARAMETER SETTINGS
In the past few years, many effective constraint-handling methods have been developed, such as penalty functions and ε-constrained methods [25]. GPEAae and GPEAe employ the ε-constrained method with ε = 1E−6 to handle the constraints. The parameter settings of the nine engineering design problems are presented in Table 6, where 'N', 'T', and 'D' denote the population size, the maximum number of iterations, and the number of optimization parameters, respectively. All experimental results are obtained through 50 independent runs, and performance indices are recorded, including the Best, Mean, Worst, standard deviation (Std), and the number of function evaluations (FEs) needed to reach the current optimum. The objective functions, constraint conditions, and descriptions of the nine engineering design problems are provided in [26] and [27].
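The ε-constrained comparison can be sketched as follows (our illustration of a common formulation of the ε-level comparison, not necessarily the exact variant of [25]; f is the objective value and v is the total constraint violation, 0 for feasible points):

```python
def eps_better(f1, v1, f2, v2, eps=1e-6):
    """epsilon-constrained comparison for minimization: solutions whose
    violation is within eps are compared by objective value; otherwise
    the smaller violation wins."""
    if (v1 <= eps and v2 <= eps) or v1 == v2:
        return f1 < f2          # both (almost) feasible: compare objectives
    return v1 < v2              # otherwise: smaller violation wins
```

With ε = 1E−6, slightly infeasible solutions are treated as feasible during selection, which keeps the search close to active constraint boundaries where many engineering optima lie.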

Case 1 (Three-Bar Truss Design Problem):
The components of this problem are illustrated in Fig. 4. Table 7 presents the statistical results acquired by GPEAae, GPEAe, and seven other algorithms: the ant lion optimizer (ALO) [28], differential evolution with dynamic stochastic selection (DEDS) [29], hybrid particle swarm optimization with differential evolution (PSO-DE) [30], the mine blast algorithm (MBA) [31], differential evolution with level comparison (DELC) [32], a hybrid evolutionary algorithm (HEAA) [33], and the crow search algorithm (CSA) [26]. The optimal function value for this problem is acquired by GPEAae with f(X) = 263.895712 at X = {0.788695, 0.408192}, which means that GPEAae attains better solution accuracy than GPEAe and the other compared algorithms. The FEs index in Table 7 reflects the computational overhead: GPEAae has the smallest FEs, equal to 7700, indicating that it reaches the currently optimal solution with the lowest computational overhead. It can therefore be concluded that GPEAae has better solution accuracy and a better convergence rate than the compared algorithms on this problem.

Case 2 (Pressure Vessel Design Problem):
This problem is a well-known problem from Kannan and Kramer [34]. Fig. 5 shows the pressure vessel and its parameters. Table 8 shows the statistical results acquired by GPEAae, GPEAe, and seven other algorithms, namely, the grey wolf optimizer (GWO) [35], a GA-based co-evolution model (CGA) [36], co-evolutionary PSO (CPSO) [37], a GA using dominance-based tournament selection (DGA) [38], hybrid PSO (HPSO) [39], co-evolutionary differential evolution (CDE) [40], and CSA. For this problem, both GPEAae and GPEAe reach the same optimal function value f(X) = 6059.708025, the smallest among all the algorithms except GWO; in other words, GPEAae and GPEAe perform equally well in terms of solution accuracy. As for computational overhead, the FEs of GPEAae is 20280, slightly better than the 21180 FEs of GPEAe, and the FEs of both are far lower than those of the other compared algorithms. Although the Mean and Std of GPEAae on this problem are mediocre, GPEAae attains competitive solution accuracy with the smallest computational overhead.

Case 3 (Tension/Compression Spring Design Problem):
This problem was taken from Belegundu and Arora [41]. The schematic diagram of this problem is illustrated in Fig. 6. The statistical results of GPEAae and GPEAe are compared with those of nine algorithms, including GWO, modified differential evolution (MDE) [42], CPSO, HEAA, DELC, HPSO, DEDS, MBA, and CSA. As Table 9 shows, all compared algorithms are able to reach the optimal function value f(X) = 0.012665 except CPSO and GWO. In terms of FEs, GPEAae and GPEAe use the same 19980 FEs (higher only than the FEs of MBA); that is, their computational overhead ranks second among all compared algorithms. It is clear that the performance of GPEAae and GPEAe in solution accuracy and convergence rate is basically the same.

Case 4 (Welded Beam Design Problem):
This problem is another structural optimization problem taken from Coello [36]. Fig. 7 illustrates the welded beam and its parameters. Table 10 shows the statistical results: both GPEAae and GPEAe reach the same optimal function value, which is the smallest among the compared algorithms. This means that GPEAae and GPEAe perform equally well in terms of solution accuracy. With regard to computational overhead, GPEAae reaches the global optimum with the second-ranked 35860 FEs, behind only MDE with 24000 FEs. That is to say, GPEAae sacrifices only a little computational overhead to obtain better solution accuracy.
Case 5 (Speed Reducer Design Problem): The components of this problem are presented in Fig. 8. As Table 11 shows, the performance of GPEAae and GPEAe is compared with that of six other algorithms, including MDE, DEDS, HEAA, DELC, PSO-DE, and MBA. The optimal function value for this problem is acquired by GPEAae with f(X) = 2994.468240 at X = {3.499997, 0.700000, 17.000000, 7.300001, 7.715311, 3.350214, 5.286653}, which means that GPEAae attains better solution accuracy than GPEAe and the other six algorithms. GPEAae uses lower computational overhead (19980 FEs) than all the compared algorithms except MBA (6300 FEs). This indicates that GPEAae reaches the optimal function value with competitive computational overhead.
Case 6 (Gear Train Design Problem): The components of this problem are illustrated in Fig. 9. The algorithms applied to this problem include GPEAae, ALO, the artificial bee colony algorithm (ABC), MBA, GPEAe, and CSA; Table 12 shows the comparison results. It can be observed that all six algorithms reach the same optimal function value, equal to 2.7E-12. With respect to FEs and Std, GPEAae performs moderately, better than MBA and CSA but worse than GPEAe, ABC, and ALO. That is to say, although the computational overhead of GPEAae is not outstanding on this problem, it obtains good solution accuracy.
Case 7 (Cantilever Beam Design Problem): The overall schematic of this problem is illustrated in Fig. 10. Table 13 presents the statistical results obtained by GPEAae, GPEAe, and six other algorithms, including the moth-flame optimization algorithm (MFO) [43], the method of moving asymptotes (MMA) [44], the multi-verse optimizer (MVO) [45], symbiotic organisms search (SOS) [46], generalized convex approximation (GCA) [44], and CSA. The optimal function value for this problem is acquired by both GPEAae and GPEAe with the same f(X) = 1.339957, which means that GPEAae and GPEAe attain better solution accuracy than the other six algorithms. At the same time, GPEAae and GPEAe use the same 19920 FEs (smaller than the 125000 FEs of CSA and larger than the 15000 FEs of SOS). This indicates that GPEAae and GPEAe reach the same optimal solution with the same computational overhead. It can therefore be concluded that both GPEAae and GPEAe show satisfactory solution accuracy and convergence rate on this problem.
Case 8 (Heat Exchanger Design Problem): As another constrained engineering minimization test problem, the heat exchanger design problem was taken from Deb [47]. As Table 14 shows, the performance of GPEAae and GPEAe is compared with that of five other algorithms from the literature, namely, harmony search (HS) [48], genetic algorithms (GAs) [47], the changing range genetic algorithm (CRGA) [49], proposed harmony search (PHS) [27], and improving proposed harmony search (IPHS) [27]. The optimal function value obtained by GPEAae is f(X) = 7049.254884.

Case 9 (Tubular Column Design Problem): The overall schematic of this problem is illustrated in Fig. 11. The statistical results of GPEAae and GPEAe are compared with those of CSA, cuckoo search with particle swarm optimization (CS-PSO) [50], and cuckoo search with genetic algorithm (CS-GA) [50]. As shown in Table 15, both GPEAae and GPEAe reach the same optimal function value f(X) = 1.724851, which is the smallest. This means that GPEAae and GPEAe perform equally well in terms of solution accuracy. As for computational overhead, the FEs of GPEAae is 18300, worse only than the 15000 FEs of CS. This indicates that GPEAae has competitive computational overhead and solution accuracy on this problem.

3) STATISTICAL ANALYSES OF ENGINEERING RESULTS
The optimal feasible solutions and FEs of all nine engineering design problems above are recorded in Table 16. As Table 16 shows, GPEAae obtains the optimal solutions of eight of the engineering design problems. In particular, GPEAae has the lowest FEs on the three-bar truss and pressure vessel design problems, which means that GPEAae achieves competitive solution accuracy with the minimum computational overhead. When solving the remaining seven problems, the FEs of GPEAae are also competitive with those of the other compared algorithms. For the tension/compression spring and gear train design problems, the feasible solutions obtained by GPEAae and GPEAe are the same as those of most of the other compared algorithms. Although GPEAe performs nearly as well as GPEAae on six of the nine engineering design problems, its feasible solutions are worse than those of GPEAae on the speed reducer, three-bar truss and heat exchanger design problems. All in all, considering the overall performance on solution accuracy and computational overhead, GPEAae is extremely competitive in solving these nine practical optimization problems.

C. DISCUSSION ON EXPERIMENTAL RESULTS
Taking the above statistical results on the CEC2014 benchmark functions and the nine practical engineering problems into consideration together, the search behavior of GPEAae is discussed from three aspects as follows.
• Discussion on the solution accuracy. As shown above, the Best values reveal the solution accuracy an algorithm obtains when solving the objective functions (the CEC2014 benchmark functions and the practical engineering problems). The smaller the Best value an algorithm obtains, the closer its solution accuracy comes to the theoretical optimal solution. From Table 1 to Table 4, compared with GPEAe, PSO, and DE, the average ranking of GPEAae's Best values ranks first on both the 10- and 30-dimensional CEC2014 benchmark functions.
Furthermore, the GPEAae also acquires the current optimal solutions of eight engineering design problems.
• Discussion on the convergence rate. The FEs values are recorded to reflect the computational overhead of the compared algorithms; a small computational overhead also means a fast convergence rate. In solving the nine engineering design problems, two FEs values of GPEAae rank first, six rank second, and the remaining one ranks third. Moreover, the convergence curves show the variation of the error values as the iterations increase, and the trend of the curves also reflects the convergence rate. As illustrated in Figs. 2 and 3, GPEAae has competitive performance in convergence rate compared with GPEAe, PSO, and DE.
• Discussion on the algorithmic robustness. The robustness of an algorithm is reflected by the Worst, Mean and Std values; the smaller these values are, the higher the robustness of the algorithm. As shown in Table 1 to Table 4, the Worst, Mean and Std values of GPEAae are competitive on the CEC2014 benchmark functions, while they are mediocre when solving the nine engineering design problems. Compared with GPEAe, GPEAae has better robustness, but it still needs to be further improved.
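The "average ranking" of Best values used in the discussion above can be computed as in the following sketch. The Best values here are purely illustrative (hypothetical numbers, not the paper's Tables 1-4); ties receive the average of their ranks, as is conventional for this kind of statistic.

```python
import numpy as np

# Hypothetical Best values (rows: benchmark functions, columns: algorithms).
best = np.array([
    [1.2e-3, 4.5e-1, 3.3e-2, 8.0e-3],   # f1
    [5.0e+0, 2.0e+1, 7.5e+0, 6.1e+0],   # f2
    [3.1e-6, 9.9e-4, 4.2e-5, 3.1e-6],   # f3 (tie between two algorithms)
])
algos = ["GPEAae", "GPEAe", "PSO", "DE"]

# Rank each function's Best values (1 = smallest = most accurate),
# averaging ranks over tied entries, then average down the columns.
ranks = np.empty_like(best)
for i, row in enumerate(best):
    r = np.empty(len(row))
    r[np.argsort(row)] = np.arange(1, len(row) + 1)
    for v in np.unique(row):        # assign tied values their mean rank
        mask = row == v
        r[mask] = r[mask].mean()
    ranks[i] = r
avg_rank = ranks.mean(axis=0)       # average ranking per algorithm
print(dict(zip(algos, avg_rank)))
```

With these illustrative numbers, the first column ranks first on every function, so its average ranking is the smallest, which is exactly the property the discussion reports for GPEAae.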
The experimental results demonstrate that GPEAae performs well on both benchmark functions and practical applications, and shows clear advantages over GPEAe.

V. CONCLUDING REMARKS
A new grey prediction evolution algorithm, abbreviated as GPEAae, is proposed in this paper. The paper first develops an accelerated first-order inverse accumulating generation operation (1-AIAGO), and then creates an accelerated even grey model (AEGM(1,1)) based on the 1-AIAGO. Finally, GPEAae regards the population sequence as a time series and employs the AEGM(1,1) as its reproduction operator to generate offspring. During the iterations, GPEAae superimposes and amplifies the weak effect produced by the proposed 1-AIAGO, which replaces a predicted value with the corresponding original value. Since the 1-AIAGO computes one fewer predicted value in each iteration, the computational complexity of GPEAae is lower than that of GPEAe. In the CEC2014 benchmark function experiments, GPEAae is tested and compared with GPEAe, PSO, and DE. The experimental results indicate that the comprehensive performance of GPEAae is significantly better than that of GPEAe, PSO and DE. Specifically, GPEAae outperforms GPEAe in both solution accuracy and convergence rate. In the nine engineering constrained design experiments, GPEAae reaches all the current best feasible solutions, and has competitive computational overhead compared with GPEAe and the state-of-the-art algorithms.
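The 1-AIAGO idea summarized above can be sketched in Python. This is a minimal illustration, not the authors' Matlab implementation: the function name `egm11_forecast` and the test sequence are hypothetical, and the only change for the accelerated variant is subtracting the known accumulated value instead of its model prediction in the inverse accumulation step.

```python
import numpy as np

def egm11_forecast(x0, accelerated=True):
    """Forecast the next value of sequence x0 with the even grey model EGM(1,1).

    accelerated=True sketches the 1-AIAGO: the known accumulated value x1[-1]
    replaces its model prediction, so one fewer predicted value is computed.
    accelerated=False uses the standard 1-IAGO.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                        # 1-AGO: accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # even (mean) background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # fit dx1/dt + a*x1 = b
    # time-response function: x1_hat(k) approximates x1 at position k+1
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    if accelerated:
        # 1-AIAGO: subtract the ORIGINAL accumulated value (no extra prediction)
        return x1_hat(n) - x1[-1]
    # standard 1-IAGO: subtract the PREDICTED accumulated value
    return x1_hat(n) - x1_hat(n - 1)
```

For a near-exponential input such as `[1.0, 1.1, 1.21, 1.331]`, both variants forecast a next value close to the true continuation, while the accelerated variant evaluates the time-response function once less per forecast, which is the source of the complexity reduction claimed for GPEAae.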
Following the improvement ideas of GPEAae, other effective grey models, such as the grey Bernoulli model [17] and the fractional grey model [51], could also be utilized to construct more GPEA variants in the future. In addition, effective evolutionary approaches such as parameter control, operator hybridization and adaptive strategies could be employed to improve the search behavior of GPEAae. In future research, it is also promising to explore GPEAae's wider application scenarios and its suitability for various problems. With the novel AEGM(1,1) as a bridge, GPEAae can be further studied by researchers in both the optimization field and the grey system field.