A Surrogate-Assisted Many-Objective Evolutionary Algorithm using Multi-Classification and Coevolution for Expensive Optimization Problems

Surrogate-assisted evolutionary algorithms have received a surge of attention for their promising ability to solve expensive optimization problems. Existing surrogate-assisted evolutionary algorithms usually adopt regression models or binary classification models to guide the evolution of the population when solving multiobjective optimization problems. However, regression models make the algorithm increasingly computationally expensive as the number of objectives increases, while binary classification models may suffer from poor diversity, since the diversity information of solutions cannot be reflected in such models. To address this issue, this paper proposes a surrogate-assisted many-objective evolutionary algorithm that uses the cooperation of multi-classification and regression models to improve search quality while reducing computational cost. Our approach includes two parts. At the model training stage, a multi-classification model is constructed to divide the whole population into several classes to ensure diversity, and a distance regression model and an angle regression model are used to select solutions with better convergence and diversity in each class. At the evolution stage, a coevolutionary framework guides the evolution according to a new selection criterion. Experimental results verify the effectiveness of the proposed algorithm on a set of expensive test problems with up to 10 objectives.

However, most existing MOEAs are not efficient at dealing with expensive optimization problems, since they generally require thousands of fitness evaluations (FEs), and a single FE can be very time-consuming or financially expensive. For example, a single function evaluation based on computational fluid dynamics (CFD) simulations can take minutes to hours [18], which is computationally prohibitive for MOEAs.
One approach to address the above problem is to use efficient surrogates for approximation, which can effectively reduce the computational cost. Such algorithms are called surrogate-assisted evolutionary algorithms (SAEAs). In the past decades, a number of SAEAs have been developed for expensive MOPs [19][20][21][22][23][24][25][26], which are called multiobjective SAEAs (MOSAEAs). However, several challenging issues remain. First, various types of surrogate models can be used in SAEAs, such as polynomial response surface methodology [27], the Gaussian process model (also known as the Kriging model) [28], artificial neural networks [29], and support vector machines [30]; however, there are no clear rules for selecting an appropriate type of surrogate for the algorithm [31]. The second issue is what should be predicted by the surrogate [32]. At present, most SAEAs use a surrogate to approximate the objective function or fitness function, while some SAEAs use a surrogate to predict the relationship between solutions. Third, how to select an MOEA as the evolution engine according to the properties of the problem is another key issue [33]. Finally, how to select effective solutions from the current population to update the surrogate is also important.
At present, the most popular method is to use the surrogate to approximate the objective function. However, when MOSAEAs use surrogates to approximate the objective functions, the computational cost of constructing the surrogates grows as the number of objectives increases. To address this problem, binary classification surrogates have been introduced to predict the relationship between solutions, e.g., the dominance relationship [24]. In this way, the surrogate divides the newly generated population into positive and negative groups. However, since it is difficult to compare solutions that are both predicted to be in the positive group, the diversity of solutions cannot be maintained well. In this paper, we propose a new surrogate-assisted many-objective evolutionary algorithm based on a multi-classification and coevolution mechanism, called ACDEA. The algorithm integrates three effective surrogates at each generation, which ensures diversity while controlling the number of surrogates.
Our contributions mainly include:
1) A multi-surrogate cooperation mechanism is constructed. In this mechanism, a multi-classification model divides the whole population into a set of classes, which is conducive to diversity; a distance regression model predicts the convergence of candidate solutions; and an angle regression model explores the uncertain region of each class and improves diversity.
2) A coevolutionary framework is incorporated to guide the evolution toward promising solutions according to a new selection criterion.
The rest of this paper is organized as follows. The related work and our motivation are described in Section II. In Section III, the proposed many-objective SAEA, ACDEA, is introduced. Numerical experiments are detailed in Section IV. Finally, conclusions are drawn in Section V.

II. RELATED WORK AND MOTIVATION
In this section, we first present a brief description of some existing MOSAEAs, and then introduce our motivation.

A. SURROGATE-ASSISTED MOEAs
In principle, SAEAs use surrogates to approximate the objective functions or related functions [34], which can be formulated as

f(x) = f*(x) + ε(x)

where f* is the true function, f is the approximate function obtained by the surrogate, and ε(x) is the error function. A number of SAEAs have been proposed in the past decades, which can be roughly divided into two categories according to the intention of using the surrogate.
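The formulation above can be illustrated with a minimal Kriging-style (Gaussian process) surrogate written with NumPy only; f_star below is a hypothetical expensive objective for illustration, not one of the paper's test problems, and the kernel length scale is chosen arbitrarily:

```python
import numpy as np

# Minimal Kriging-style interpolation: fit on a few expensive evaluations of
# f*, then predict f(x) ~ f*(x) + eps(x) at unseen points.
def rbf(A, B, length=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length ** 2))

def f_star(x):                          # the "true" expensive function f*
    return np.sin(3.0 * x[:, 0])

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (25, 1))     # a handful of expensive evaluations
y = f_star(X)

K = rbf(X, X) + 1e-6 * np.eye(len(X))   # small nugget for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(Xq):                      # the surrogate prediction f(x)
    return rbf(Xq, X) @ alpha

y_hat = surrogate(np.array([[0.5]]))    # close to sin(1.5)
```

Once fitted, the surrogate is evaluated instead of f* during the evolutionary search, which is the source of the computational savings.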
In the first category, the surrogate is used to approximate the objective function or fitness function. For example, in the MOEA using weight aggregation and efficient global optimization (ParEGO) [19], an aggregation function is set up from randomly selected vectors at each generation, and a Kriging surrogate is created to approximate the aggregation function. In MOEA/D assisted by efficient global optimization (MOEA/D-EGO) [20], the surrogate approximates the objective function of each subproblem. In the Kriging-assisted RVEA (K-RVEA) [21], Kriging surrogates are created to approximate each objective function at each generation. This sort of SAEA employs traditional MOEAs after creating the surrogates and can lead to solutions with both excellent convergence and diversity.
In the second category, the surrogate is used as a classifier to divide the newly generated population into positive and negative groups. At present, there is little such work. For example, in the classification and Pareto domination based MOEA (CPS-MOEA) [22], the training set is divided into positive and negative groups according to nondominated sorting, and the surrogate is then created to select 'good' solutions from the newly generated population at each generation. In the classification and regression-assisted differential evolution algorithm (CRADE) [23], the classification surrogate is used to discard offspring solutions worse than their parents. In the classification-based surrogate-assisted evolutionary algorithm (CSEA) [24], the solutions in the training set are divided into a good group and a bad group according to a classification boundary consisting of several reference solutions, and an artificial neural network is then applied to predict the dominance relationship of newly generated solutions. This category of SAEAs usually needs only one surrogate per generation.
The above two categories of SAEAs have their own advantages and disadvantages. We develop our motivation from their shortcomings.

B. MOTIVATION
The number of surrogates is an important factor in the efficiency of MOSAEAs, since a growing number of surrogates increases both the difficulty of managing them and the computational cost of constructing them. Controlling the number of surrogates also provides an opportunity to use more complex surrogates.
Some MOSAEAs use a single surrogate at each generation, e.g., CSEA [24]. However, these algorithms pose a challenging diversity issue, caused by using a classifier as the surrogate. In such MOSAEAs, the surrogate divides the newly generated population into positive and negative groups, but it is difficult to further compare solutions that are both predicted to be in the positive group, which often leads to poor diversity. An example is given in Fig. 1: in the positive group, four positive solutions gather together while another positive solution lies far from them. They obviously differ in diversity, yet the surrogate predicts that they are equally good.
Many MOSAEAs construct a corresponding surrogate for each objective, with the aim of approximating the exact value of the objective, e.g., K-RVEA [21]. These algorithms can guarantee better diversity because of the diversity strategies they use, e.g., the reference-vector-guided method in K-RVEA, which divides individuals of different diversity into different categories. However, such strategies require knowing each objective value of the individuals, so the number of surrogates grows with the number of objectives. Therefore, the motivation of our algorithm is to establish a way of constructing surrogates that controls the number of surrogates while ensuring diversity. This paper proposes a MOSAEA called ACDEA, whose core idea is to use a multi-classification model to divide solutions of different diversity into different classes, which avoids establishing a surrogate for each objective, and to use two other surrogates to find promising solutions in each class. In this way, the problems described above can be addressed properly.

III. THE PROPOSED ALGORITHM

A. FRAMEWORK
In this paper, ACDEA is proposed for expensive many-objective optimization. The core concept of ACDEA is to use a multi-classification surrogate to divide solutions into several classes and to use two other surrogates to find promising solutions in each class. Fig. 2 describes the framework of the proposed ACDEA. The pseudocode of ACDEA is presented in Algorithm 1, which proceeds in the following main steps:
1) Initialization (Lines 1-5): An initial population P0 with 11d-1 solutions is generated using Latin hypercube sampling [35], where d is the dimension of the decision variables. N uniformly distributed reference vectors are generated from N uniformly distributed reference points produced by the canonical simplex-lattice design method [14]. FE, the number of solutions already evaluated by the expensive objective functions, is set to 11d-1. The solutions in P0 are copied to archives A1 and T1, which store, respectively, all solutions already evaluated by the expensive objective functions and the solutions used as training sets. The initial population P0 is used as the training set of the first-generation surrogates.
2) Creating surrogates (Lines 7-9): K reference solutions are selected from the current training set (whose members have already been evaluated by the expensive functions) according to the method described in Section III-C. Three labels of each solution in the training set, Angle, Distance, and Class (described in Section III-B), are then calculated with respect to the reference solutions. A multi-classification surrogate is created based on the label Class, and two regression surrogates are created based on the labels Angle and Distance.
3) Coevolution using surrogates (Line 10): The current population is classified into K classes by the multi-classification surrogate. In each class, offspring solutions are generated by crossover and mutation [36], the three labels of each offspring are predicted by the surrogates, and promising solutions are selected from each class based on a criterion called ACD. These operations are performed separately in each class. All promising solutions are then reassigned to the classes given by their predicted Class values and become the parents of the next generation.
4) Selection of solutions to be re-evaluated (Lines 11-12): K solutions are selected from the last-generation population Q of the coevolution based on the ACD criterion. These K solutions are re-evaluated by the expensive objective functions and added to archive A1, and the value of FE is updated.
5) Selection of the next-generation training set (Line 13): 11d-1 solutions are selected from archive A1 as the training set of the next-generation surrogates.
6) Steps 2-5 are repeated until the maximum number of FEs is reached.
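The initialization in step 1 can be sketched with SciPy's Latin hypercube sampler; the variable bounds of [-5, 5] below are an arbitrary assumption for illustration:

```python
import numpy as np
from scipy.stats import qmc

d = 10                           # number of decision variables
n_init = 11 * d - 1              # 11d - 1 initial solutions, as in step 1

sampler = qmc.LatinHypercube(d=d, seed=0)
unit = sampler.random(n=n_init)  # samples in the unit hypercube [0, 1)^d
P0 = qmc.scale(unit, [-5.0] * d, [5.0] * d)   # map to hypothetical bounds

# Latin hypercube property: every variable is stratified into n_init bins,
# each containing exactly one sample.
strata = np.floor(unit * n_init).astype(int)
```

Compared with plain uniform sampling, this stratification spreads the few affordable expensive evaluations evenly across every decision variable.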

B. CREATION OF THREE SURROGATES
In ACDEA, three surrogates are created at each generation according to three labels of the training data: Angle, Distance, and Class. The three labels are defined as follows:

1) Class
The objective space is divided into several classes, and the label Class denotes the class to which a solution is assigned. The classification method is as follows.
First, K reference solutions with both good convergence and diversity are selected from the current training set using the method described in Section III-C. Then the entire objective space is divided into K classes based on the angle to the reference solutions, and the remaining solutions in the training set are assigned to classes according to their angles to the reference solutions. An example is given in Fig. 3. In this way, each solution in the training set obtains its label Class, which is used to train a multi-classification surrogate. With this surrogate, the label Class of newly generated solutions can be predicted without expensive fitness evaluations. We use a multi-classification surrogate because finding promising solutions in each class increases diversity.

2) Distance
The label Distance represents the distance from a solution to the origin. A distance regression surrogate is created based on this label and is used to predict the label Distance of newly generated solutions. The label Distance indicates, to some extent, the convergence of a solution.

3) Angle
The label Angle represents the angle between a solution and the reference solution to which it is assigned. An angle regression surrogate is created based on this label and is used to predict the label Angle of newly generated solutions. The label Angle helps to search uncertain regions and further improve diversity, as discussed in Section III-D. For instance, consider the situation shown in Fig. 4, with two reference solutions R1 and R2 and two other solutions S1 and S2. Since the angle θ1 between solution S1 and reference solution R1 is smaller than the angles between S1 and the other reference solutions, S1 is assigned to R1, and the distance between S1 and the origin is denoted by Dic1. Therefore, the labels Class, Distance, and Angle of S1 are I, Dic1, and θ1, respectively. Similarly, II, Dic2, and θ2 are the three labels of solution S2. The multi-classification surrogate, the distance surrogate, and the angle surrogate are all built with Kriging models. Note that since the predicted value of the multi-classification model may not be an integer, it is rounded to an integer class label using Algorithm 3.
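The three label definitions above can be sketched in a few lines of NumPy; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def labels(f, refs):
    """Compute (Class, Distance, Angle) for one solution.

    f: objective vector of the solution; refs: (K, M) reference solutions.
    """
    dist = np.linalg.norm(f)                        # label Distance: ||f||
    cos = refs @ f / (np.linalg.norm(refs, axis=1) * dist)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))     # angle to each reference
    cls = int(np.argmin(angles))                    # label Class: nearest ref
    return cls, dist, angles[cls]                   # label Angle: that angle

refs = np.array([[1.0, 0.2], [0.2, 1.0]])           # two reference solutions
cls, dist, ang = labels(np.array([0.9, 0.3]), refs) # assigned to reference 0
```

Each training solution is labelled this way once the reference solutions are fixed, and the three Kriging models are then fitted to (solution, label) pairs.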

C. SELECTION OF REFERENCE SOLUTIONS
The reference solutions should have both good convergence and diversity. Good diversity helps make the classification of the search space more uniform, and good convergence enables the label Angle to better explore uncertain regions. This paper selects the reference solutions based on a criterion inspired by K-RVEA [21], which is summarized in Algorithm 2 and described as follows. First, a set of uniformly distributed reference vectors is generated. The solutions in the training set are assigned to reference vectors according to the angle between each solution and each vector. An example is shown in Fig. 5(a), where solutions S2, S3, and S4 are assigned to vector V2 since the angle between these solutions and V2 is smaller than their angles to the other vectors. Similarly, solution S1 is assigned to vector V1. A reference vector with at least one solution assigned to it is called an active reference vector.
Then, all active reference vectors are clustered into K groups, where K is the number of reference solutions we need. In this way, all solutions are divided into K groups according to their assigned active vectors. Different from the APD criterion used in K-RVEA [21], we select the solution with the shortest distance to the origin in each group as the reference solution, because the convergence of the reference solutions is more important for ACDEA, which has been validated by experiments. An example is illustrated in Fig. 5(b), where seven solutions are divided into two groups by clustering the reference vectors; the selected solution in each group is shown in red.
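A simplified sketch of this selection is shown below: solutions are grouped by the nearest of K unit reference vectors (the intermediate step of clustering active vectors into K groups is omitted here for brevity), and the solution closest to the origin in each non-empty group becomes that group's reference solution:

```python
import numpy as np

def select_references(F, V):
    """F: (n, M) objective vectors; V: (K, M) unit reference vectors."""
    cos = (F / np.linalg.norm(F, axis=1, keepdims=True)) @ V.T
    group = np.argmax(cos, axis=1)        # nearest vector = largest cosine
    refs = []
    for k in range(len(V)):
        idx = np.flatnonzero(group == k)
        if idx.size:                      # only "active" groups contribute
            best = idx[np.argmin(np.linalg.norm(F[idx], axis=1))]
            refs.append(F[best])          # shortest distance to the origin
    return np.array(refs)

V = np.array([[1.0, 0.0], [0.0, 1.0]])
F = np.array([[2.0, 0.5], [1.0, 0.4], [0.3, 1.5], [0.2, 0.9]])
refs = select_references(F, V)            # one reference per group
```

Picking the minimum-norm member of each group is what injects the convergence preference that the paper argues is more important than APD here.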

D. COEVOLUTION BY USING SURROGATES
The coevolutionary framework is used to search for promising solutions before updating the surrogates, and is described as follows. First, the current population P is divided into K classes {P1, P2, ..., PK} according to the multi-classification surrogate (if the real objective values are known, the prediction is omitted), and each subpopulation is regarded as the parent population of its class. The number of individuals in each subpopulation is denoted nclass.
In each subpopulation Pclass, an offspring population O with nclass solutions is generated by simulated binary crossover [36]. The parent and offspring populations are combined, and nclass promising solutions are selected according to the predicted labels and the criterion ACD, defined as

ACD = Dic - α · θ

where Dic and θ are the predicted values of the labels Distance and Angle, respectively. Obviously, a small Dic indicates better convergence of a solution. Next, we explain how θ promotes diversity and the search of uncertain regions.
The reference solutions have relatively better convergence in the current generation and are likely to be chosen for the next-generation training set, so regions far from the reference solutions may lack corresponding training data.
The label θ helps to select solutions far from the reference solutions, which may be located in uncertain regions; therefore, solutions with larger θ help to search uncertain regions. Moreover, a larger θ is helpful for diversity: in some scenarios, solutions far from the origin may have better convergence than those near it, so a larger θ gives solutions with good convergence but a larger distance to the origin an opportunity to be selected. The parameter α represents the importance of the label θ in ACD; we set it to 0.2 empirically.
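The selection step can be sketched as follows, under the assumption that ACD combines the two predicted labels linearly as ACD = Dic - α·θ with smaller values preferred (consistent with the discussion above: a small Dic rewards convergence, a large θ rewards under-explored regions):

```python
import numpy as np

def select_by_acd(dic, theta, n, alpha=0.2):
    """Keep the n candidates with the smallest ACD value.

    dic, theta: surrogate-predicted Distance and Angle labels; alpha = 0.2
    weights the angle term, as in the paper.
    """
    acd = dic - alpha * theta
    return np.argsort(acd)[:n]          # indices of the n smallest ACD

dic = np.array([1.0, 1.0, 2.0])
theta = np.array([0.1, 0.8, 0.1])
keep = select_by_acd(dic, theta, 2)     # candidate 1 beats candidate 0
```

With equal predicted distances, the larger predicted angle tips the ranking, which is exactly how θ steers the search toward uncertain regions.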
After the promising solutions are selected from each class, their label Class values are predicted, and the solutions are reassigned to classes and treated as the parents of the next generation. This reassignment is needed because even if two parents are in the same class, their offspring may not belong to that class.
The above process is repeated until the maximum number of iterations wmax is reached, as shown in Algorithm 4. The procedure of the coevolution is shown in Fig. 6. In this way, K solutions are evaluated by the real objective functions at each generation.

E. SELECTION OF TRAINING SET
In SAEAs, it is necessary to select some solutions to be re-evaluated at each generation. These selected solutions are useful for updating the surrogates because they expand the selection range of the training sets. The detailed selection process is as follows: when the coevolution reaches its maximum number of iterations, the last-generation population Q is divided into K classes by the multi-classification surrogate, and in each class the solution with the minimum ACD is selected; these K solutions are then re-evaluated by the real (but expensive) objective functions. Note that this selection does not use uncertainty information such as the average of the standard deviations obtained from the Kriging models, because we already select solutions with a large label θ, which helps to search unexplored regions.
As indicated in [37], the computational complexity of training a Kriging model is O(n^3), where n is the number of training data. Therefore, to reduce the training time of the surrogates, the amount of training data needs to be limited. In this paper, the maximum number of training data is set to 11d-1, where d is the number of decision variables. We use the same method as for the selection of reference solutions (Algorithm 2) to select the training set for the next-generation surrogates.

IV. EXPERIMENTAL STUDY
In this section, we conduct an experimental study to validate the performance of ACDEA in dealing with multiobjective problems. Three state-of-the-art MOSAEAs, i.e., K-RVEA [21], CSEA [24], and ParEGO [19], are compared against the proposed ACDEA. A set of multi-objective benchmark functions is selected from DTLZ [38], where the number of objectives is set to 3, 4, 6, 8, and 10, and the number of decision variables is set to 10.
In the experiments, the statistical results are obtained over 25 independent runs of each algorithm. The Wilcoxon test is employed to show the significance of differences between the compared algorithms, where "+" means the compared algorithm performs better than ACDEA, "−" means it performs worse than ACDEA, and "=" means there is no significant difference between the compared algorithm and ACDEA.
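The significance labelling can be sketched with SciPy's Wilcoxon rank-sum test; the two IGD samples below are synthetic illustrations, not measured results:

```python
import numpy as np
from scipy.stats import ranksums

# Compare 25 per-run IGD values of a hypothetical algorithm against ACDEA's.
rng = np.random.default_rng(1)
igd_acdea = rng.normal(0.10, 0.01, 25)
igd_other = rng.normal(0.15, 0.01, 25)   # clearly worse (larger IGD)

stat, p = ranksums(igd_other, igd_acdea)
if p >= 0.05:
    symbol = "="                          # no significant difference
else:
    # smaller IGD is better, so a larger median means "worse than ACDEA"
    symbol = "-" if np.median(igd_other) > np.median(igd_acdea) else "+"
```

The rank-sum variant is the usual choice here because the 25 runs of the two algorithms are unpaired samples.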
The other parameter settings are as follows: Kriging, as implemented in the DACE toolbox [39], is used as the surrogate, and all experiments are carried out on PlatEMO [40]. For ParEGO, as recommended in [19], the number of weight vectors is set to 11 when the number of objectives is 2 and to 15 when the number of objectives is 3. The maximum number of evaluations using the surrogates before updating them is set to 200 000.

A. PARAMETER SETTINGS
For K-RVEA, as recommended in [21], the parameter δ is set to 0.05N, where N is the population size. The number of iterations of RVEA using the same surrogates before they are updated is set to 20, and the number of solutions to be re-evaluated before the surrogates are updated is set to 5.
For CSEA, as recommended in [24], the maximum number gmax of uses of the surrogate before updating is set to 3000, the number of reference solutions k is set to 6, and the number of hidden neurons H is set to 10.

B. PERFORMANCE INDICATOR
In this paper, we use IGD [41] as the metric to evaluate the performance of the algorithms. In principle, IGD can evaluate both the convergence and the diversity of the obtained solutions; a smaller IGD value means a better approximation of the PF. To be specific, IGD is defined as

IGD(P*, Ω) = ( Σ_{x ∈ P*} dis(x, Ω) ) / |P*|

where P* is a set of evenly distributed reference points along the true PF, Ω is the set of achieved nondominated solutions, and dis(x, Ω) is the minimum Euclidean distance between x and the points in Ω. In this paper, the number of reference points is set to around 10000.
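The definition above translates directly into a few lines of NumPy (a toy two-objective example is used for illustration):

```python
import numpy as np

def igd(p_star, omega):
    """Mean over reference points in P* of the minimum distance to Omega."""
    d = np.linalg.norm(p_star[:, None, :] - omega[None, :, :], axis=-1)
    return d.min(axis=1).mean()          # dis(x, Omega), averaged over P*

p_star = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])   # points on true PF
omega = np.array([[0.0, 1.0], [1.0, 0.0]])                # obtained solutions
value = igd(p_star, omega)               # (0 + sqrt(0.5) + 0) / 3
```

Because the average is taken over P* rather than over Ω, a solution set that misses part of the front is penalized, which is why IGD captures diversity as well as convergence.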

C. PERFORMANCE ON DTLZ PROBLEMS
The experimental results over 25 independent runs are listed in Table I, where the best results are in bold. There are no results for ParEGO when the number of objectives is larger than four, because the authors of ParEGO limited it to at most four objectives; these entries are denoted by "NA". As shown in Table I, ACDEA achieves the best results on 16 test problems, nearly half of all test problems. These results show the competitiveness of ACDEA over the compared algorithms K-RVEA and ParEGO. In the following, we analyze the results obtained by the algorithms on each test function. For DTLZ1 and DTLZ3, which have multimodal landscapes, the statistical results obtained by the algorithms are given in Table I. For the sake of clarity, the nondominated solutions obtained by the algorithms in the run producing the median IGD value on three- and ten-objective DTLZ1 are illustrated in Fig. 7 and Fig. 8. As shown in Fig. 7, the convergence of ACDEA in the circled region is better than in the other regions, while the convergence of the other algorithms is similar over the whole region. This performance gap between ACDEA and the compared algorithms may be due to the coevolutionary framework used in ACDEA: the search space is divided into several classes by the reference solutions, and the coevolution reduces the interference between populations from different classes, so the slow convergence of one population does not affect the others.
For DTLZ2 and DTLZ4, whose PFs are similar to each other, the final nondominated solutions obtained by the compared algorithms in the run producing the median IGD value on the three-objective DTLZ2 instance are shown in Fig. 9. As shown in the figure, ACDEA has better convergence than the other algorithms, mainly because the ACD criterion used in ACDEA accelerates the convergence of the algorithm.
For DTLZ7, whose PF is discontinuous, the final nondominated solutions obtained by the compared algorithms in the run producing the median IGD value on the three-objective DTLZ7 instance are shown in Fig. 15. As shown in the figure, K-RVEA achieves the best performance, while ACDEA achieves better diversity than CSEA because of the use of the multi-classification model, which assigns individuals of different diversity to different classes.
To further demonstrate the performance of our algorithm, ACDEA is tested on WFG [42] and compared with K-RVEA, CSEA, and ParEGO. The number of objectives is set to 3, the position parameter of WFG is set to 8, and the maximum number of function evaluations is set to 250. The other parameters of the algorithms are set as above.
The IGD values obtained by the four algorithms on WFG1-WFG9 are given in Table II. As shown, ACDEA performs much better on WFG1 and WFG2 but worse on WFG3 compared with the other algorithms. Note that WFG3 has a degenerate front. Although the classification method used in ACDEA is helpful for such problems, there are still many solutions in the Pareto set of WFG3 that are far from the origin in the objective space. For ACDEA, the label Angle helps to find solutions far from the origin, but it cannot find all of them, which may lead to poor diversity and larger (worse) IGD values. WFG4-WFG9 pose several challenges in the decision space; on these problems, the algorithms show their own respective advantages, as shown in Table II.

D. INFLUENCE OF THE NUMBER OF REFERENCE SOLUTIONS
In this section, we analyze the influence of the parameter K on the performance of the proposed ACDEA in dealing with multiobjective problems. For ACDEA, the parameter K not only represents the number of classes of the population but also the number of new training data (i.e., solutions re-evaluated by the real objective functions) obtained before the surrogates are updated; this in turn affects the total number of iterations of the algorithm. The IGD values obtained by ACDEA with different K on DTLZ6 are shown in Table III. From the experimental results, we observe that the best results are obtained when K is 6 or 8. When K is too small, the objective space is divided into very few classes, so the sub-space allocated to each class is very large. Since ACDEA selects only one solution per class to be re-evaluated at each generation, the selected solution cannot represent its class well, which leads to poor diversity. Therefore, a too small K is not practical.
When K is too large, many individuals are re-evaluated at each generation, which reduces the total number of iterations. A smaller number of surrogate-updating iterations means that the surrogates cannot reach the desired accuracy. Therefore, K cannot be too large either. In this paper, we set K to 6.

E. EFFECTIVENESS OF LABEL CLASS
As described in Section III-C, in the process of selecting the reference solutions, all individuals are clustered into K classes through the reference vectors before the reference solutions are selected. However, we do not directly use this clustering result for the label Class; we use the reference solutions instead, for two reasons. First, the reference solutions are necessary for calculating the label θ, which helps to select promising individuals in each class. Second, clustering by reference vectors does not take the convergence of individuals into account, so individuals far from the Pareto front may interfere with the classification results. The reference solutions are the individuals with good convergence in each class, and classifying according to them avoids the interference of individuals with poor convergence.
To support this point, we compare the classification methods using reference vectors and reference solutions. The IGD values obtained by ACDEA with the two classification methods on DTLZ6 are shown in Table IV; all other experimental parameters are the same as those in Table I. From the experimental results, we observe that the classification method using reference solutions achieves better results.

F. RUNTIME COMPARISON
As discussed in Section II-B, constructing surrogates is a computationally expensive part of MOSAEAs. In this section, we investigate the computational cost of constructing surrogates in the compared algorithms. Specifically, we record and compare the runtime of the algorithms in the same computation environment with the same number of FEs. This is feasible because the evaluation time of the objective functions used in the experiments is very small, so the runtime of an algorithm is approximately equal to the cost of constructing its surrogates.
In most existing MOSAEAs, the surrogates approximate the exact value of the expensive objective function or fitness function, so the training time rises as the number of objectives increases. To control the training time, ACDEA constructs three surrogates per generation (before the surrogates are updated) no matter how many objectives the problem involves.
To study the computational efficiency of ACDEA, we compare it with the other SAEAs on DTLZ3 and DTLZ7 with different numbers of objectives. The detailed results are shown in Fig. 16 and Fig. 17.
It can be observed that the runtime of K-RVEA increases with the number of objectives, because K-RVEA creates a surrogate for each objective, resulting in a longer surrogate-construction time. The runtime of CSEA does not increase with the number of objectives, because CSEA uses a feedforward neural network as the surrogate and only one surrogate per generation. Encouragingly, the runtime of ACDEA is the lowest on most test problems among the compared algorithms, because only three surrogates need to be constructed and maintained at each generation, even for problems with many objectives (e.g., more than ten).
It can be seen from Fig. 16 and Fig. 17 that the runtime of ACDEA decreases as the number of objectives increases. This may be because, as the number of objectives increases, the region of the objective space assigned to each class becomes larger, which gives more opportunities to obtain a promising solution to be re-evaluated in each class at each iteration. Therefore, it becomes easier to obtain K solutions to be re-evaluated by the real expensive functions, since each class probably provides one solution, and the total number of iterations decreases.

V. CONCLUSION
In this paper, a new surrogate-assisted evolutionary algorithm called ACDEA has been proposed to deal with expensive many-objective optimization problems. ACDEA uses the guidance of reference solutions to divide the objective space into several classes, and three surrogates are constructed to predict the labels of the solutions in the population, where the multi-classification surrogate classifies the population and the other two surrogates are used to find promising solutions in each class. In addition, a coevolutionary framework is adopted to search for promising solutions with the surrogates, which reduces the bad influence of a poorly converging class on the other classes. The ACD criterion is used to evaluate and compare solutions, which not only balances convergence and diversity but also searches the uncertain region.
The effectiveness of ACDEA has been validated by comparison experiments with K-RVEA, CSEA, and ParEGO on a set of benchmark problems from DTLZ and WFG. The cost of constructing surrogates is lower than in the compared algorithms, mainly because ACDEA controls the number of surrogates at each generation. The experimental results have shown the efficiency of the proposed ACDEA.