A Decomposition Method Based on Random Objective Division for MOEA/D in Many-Objective Optimization

As the number of optimization objectives increases, the selection pressure decreases and the performance of multi-objective evolutionary algorithms gradually deteriorates. In this paper, a decomposition method based on random objective division (ROD) is proposed for MOEA/D when optimizing many-objective problems. For brevity, MOEA/D using ROD decomposition is abbreviated as MOEA/D-ROD. MOEA/D-ROD adopts a random objective partitioning method that transforms a many-objective problem into multiple multi-objective problems. Each multi-objective problem then uses its assigned decomposition method to transform itself into multiple single-objective optimization problems for collaborative optimization. Therefore, different decomposition methods can be combined at the same time to balance the diversity and convergence of the algorithm. Three sets of experiments are carried out on two sets of scalable problems, DTLZ1-4 and WFG1-9, with the number of objectives ranging over 3-8, 10 and 15. The experimental results verify the effectiveness of the proposed ROD decomposition method in solving these many-objective optimization problems.


I. INTRODUCTION
In real life, we often encounter optimization problems that have multiple, often mutually conflicting, optimization objectives. Such problems are commonly called multi-objective optimization problems (MOPs) [1]. Since improving one objective may worsen the others, we need to find a tradeoff that balances these objectives [2]-[4]. For a long time, research in the field of multi-objective optimization, whether on theory, algorithm design, or practical engineering applications, has concentrated mostly on low-dimensional problems with 2 or 3 objectives [4]-[10], and a large number of multi-objective evolutionary algorithms (MOEAs) have been proposed to solve such MOPs well [8], [11]-[15]. (The associate editor coordinating the review of this manuscript and approving it for publication was Mauro Gaggero.)
In recent years, many-objective optimization with more than 3 objectives has become a hot research area. As the number of objectives increases, the number of non-dominated solutions grows exponentially and the selection pressure decreases, so some algorithms that perform well on 2- or 3-objective problems become very inefficient on many-objective optimization problems (MaOPs). These algorithms cannot find a good balance between diversity and convergence. The most straightforward remedy is to relax the dominance relation to enhance the selection pressure [16]-[18]. Another is to preserve population diversity in Pareto-based methods [19], [20]. KnEA [21] suggests exploring the knee region of the PF to further promote convergence.
Indicator-based MOEAs, such as HypE [22] and SMS-EMOA [23], evaluate solutions by an indicator that measures both convergence and diversity. These algorithms do not suffer from the selection-pressure problem since they use indicator values to guide the search. However, they have their own issues. For example, calculating an indicator such as HV is very time consuming [24], so these algorithms become inefficient when solving many-objective optimization problems. Moreover, they prefer knee points and have difficulty producing a uniformly distributed population when the PF is not linear [25]. To address the low efficiency of HV calculation, computationally cheaper indicators such as R2 [26] and Δp [27] are used in indicator-based MOEAs [28], [29] for solving many-objective optimization problems. However, a pre-defined set of reference points or a subset of the real PF is usually required, which is itself a challenging task. Recently, some excellent algorithms, such as BiGE [30], SRA [31], and 1by1EA [32], adopt multiple simple indicators to measure convergence and diversity separately, showing significant performance improvements. However, these algorithms include some problem-dependent parameters that need to be tuned carefully.
Within another non-Pareto-based algorithmic framework, a large number of decomposition-based evolutionary optimization algorithms have been proposed to solve MaOPs [1], [2], [33]-[35]. They decompose an MaOP into subproblems and obtain an approximation of the Pareto set by solving all the subproblems simultaneously. Usually, a predefined set of well-spread reference vectors is employed to facilitate this decomposition, and a neighborhood structure is also constructed with the help of these reference vectors. A subproblem is then optimized mainly using information from its neighbor subproblems. To solve MaOPs well, one main line of recent decomposition-based MOEAs focuses on improving the diversity-management mechanism. These algorithms (e.g., NSGA-III [36], I-DBEA [33], and MOEA/D-DU [37]) measure population diversity using the distance between solutions and reference vectors. Since the reference vectors are well spread, this helps maintain population diversity in objective space. The decomposition strategy can also be used to partition the objective space into small subregions. Good solutions are emphasized by traditional methods in each subregion to maintain a balance between convergence and diversity [35]. MOEA/DD [1], RVEA [38], DEA [39], and SPEA/R [40] are representatives of this type. The set of reference vectors plays a very important role in decomposition-based many-objective evolutionary algorithms; however, how to set these reference vectors is still an open question.
MOEA/D-M2M, proposed by Liu et al. [41], is well suited to solving MaOPs. However, for degenerate MaOPs with redundant objectives and a very low-dimensional PS, the uniform, fixed weight vectors adopted in [41] waste computational resources. Many researchers have therefore improved the original MOEA/D-M2M with different mechanisms. Liu et al. [42] introduced an adaptive mechanism into the MOEA/D-M2M framework that adaptively adjusts the weight vectors of each subregion with the Max-Min method according to the distribution of the solutions found so far, enabling a region to be decomposed adaptively. Chen et al. [43] designed a new dominance relation called D-dominance, which combines the advantages of decomposition and domination; D-dominance was then employed in MOEA/D-M2M as the selection criterion for each subpopulation, and the resulting algorithm further exploits the weight vectors while reducing their design difficulty. Lin et al. [44] proposed an algorithm combining MOEA/D-M2M and ISDE+ (an indicator based on shift-based density estimation) [45], which contains information on both individual distribution and convergence. The algorithm calculates the indicator ISDE+ independently in each subregion, which effectively maintains population diversity and reduces the computational cost.
Ishibuchi et al. [46] demonstrated that the shape and the size of the PF have a large effect on the performance of the weight vector-based algorithms that perform well only when the distribution of weight vectors is consistent with the PF shape. When the PF shape is irregular, the design of weight vectors to ensure the diversity of solutions is challenging. He et al. [47] proposed a dynamical decomposition strategy for many-objective optimization. Different from existing decomposition methods, this method decomposes the objective space into subregions dynamically without employing a set of predefined reference vectors. Instead, the solutions themselves are considered as reference vectors. Thus, the performance of the proposed algorithm is less dependent on the PF shapes and remains robust especially in solving MaOPs with irregular PFs.
In this paper, a decomposition method based on random objective division (ROD) is proposed under the framework of MOEA/D to address the limitations of MOEA/D [48] in solving many-objective optimization problems. These limitations are roughly as follows. Firstly, MOEA/D cannot generate sufficiently uniform and effective weight vectors when the number of objectives exceeds 3 [37], [49]-[51]. Secondly, regardless of the shape of the Pareto front, the weight vectors in MOEA/D are generated in a very simple way. Thirdly, global and local search information is underutilized in MOEA/D because only one of several decomposition methods is applied. It is therefore urgent to conduct in-depth research on many-objective optimization problems and to further improve the performance of MOEA/D in solving them. In response to the above difficulties, the basic idea of ROD is to segment the objectives in a random way and to randomly assign a decomposition method to each segment, thereby improving the performance of MOEA/D. The main contributions of this paper are as follows: (1) Optimization objectives are divided in a random way, so a many-objective optimization problem can be transformed into several multi-objective ones. One high-dimensional optimization problem is transformed into several low-dimensional optimization problems, and the transformed problems are then solved simultaneously. This achieves an effective dimensionality-reduction effect, which reduces the difficulty of solving the problem and saves a great deal of computation. In other words, it reduces the computational cost and speeds up the solving process.
(2) After the dimensionality of a many-objective optimization problem is reduced by random objective division, each segment is randomly assigned a decomposition method. In other words, one or several decomposition methods may be applied to solve the MaOP. Combining several decomposition methods overcomes the limitation that a single decomposition method may not be suitable for every problem. Therefore, the cooperation of multiple decomposition methods can better balance convergence and diversity, and the distribution of Pareto optimal solutions becomes more uniform.
The remaining parts of this paper are organized as follows. Section II introduces the background knowledge on multi-objective optimization problems, the basic MOEA/D, and the basic decomposition approaches used in MOEA/D. Section III describes the decomposition method based on random objective division and the framework of the proposed algorithm MOEA/D-ROD. Section IV presents a series of experimental studies on MOEA/D-ROD. Finally, Section V concludes this paper.

II. RELATED WORKS
Without loss of generality, a multi-objective optimization problem with n decision variables and m objectives can be expressed as (1) [48]:

  minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T, subject to x ∈ X,    (1)

where x = (x_1, x_2, ..., x_n) ∈ X ⊂ R^n is the n-dimensional decision vector, X is the n-dimensional decision space, y = (y_1, y_2, ..., y_m) ∈ Y ⊂ R^m is the m-dimensional objective vector, and Y is the m-dimensional objective space. The objective function F consists of m mapping functions from the decision space to the objective space. When m is larger than three, Eq. (1) is often called a many-objective optimization problem.
Assume x_A, x_B ∈ X are two feasible solutions of the multi-objective optimization problem (1). We say that x_A dominates x_B, recorded as x_A ≺ x_B, if and only if

  f_i(x_A) ≤ f_i(x_B) for all i ∈ {1, ..., m}, and f_j(x_A) < f_j(x_B) for at least one j ∈ {1, ..., m}.    (2)

Suppose X is the feasible solution set. A solution x* ∈ X is called a Pareto optimal (non-dominated) solution if and only if

  ¬∃ x ∈ X : x ≺ x*.    (3)

The set of all Pareto optimal solutions is called the Pareto optimal solution set (PS):

  PS = {x* ∈ X | ¬∃ x ∈ X : x ≺ x*}.    (4)

The set of objective vectors corresponding to all Pareto optimal solutions in the PS is called the Pareto front (PF):

  PF = {F(x) | x ∈ PS}.

MOEA/D, proposed by Zhang and Li [48], successfully introduces the decomposition methods commonly used in mathematical programming into the multi-objective evolutionary domain, so that the fitness assignment and diversity-maintenance strategies of evolutionary computation for single-objective optimization can be used directly. Three commonly used decomposition methods are introduced next.
1) Weighted sum approach: The single-objective subproblem obtained by decomposing a multi-objective optimization problem with this method can be described as follows:

  minimize g^ws(x | λ) = Σ_{i=1}^{m} λ_i f_i(x), subject to x ∈ X,    (5)

where λ = (λ_1, ..., λ_m)^T is a weight vector with λ_i ≥ 0 and Σ_{i=1}^{m} λ_i = 1, and x is the variable to be optimized. By substituting different λ values into formula (5), we can obtain a set of Pareto optimal solutions.

2) Tchebycheff approach: The single-objective subproblem transformed by this method has the following form:

  minimize g^tch(x | λ, z*) = max_{1≤i≤m} { λ_i |f_i(x) − z*_i| }, subject to x ∈ X,    (6)

where z* = (z*_1, ..., z*_m)^T is the reference point with z*_i = min{ f_i(x) | x ∈ X } for i = 1, ..., m. For each Pareto optimal solution x* of the multi-objective optimization problem (1), there exists a weight vector λ such that x* is an optimal solution of problem (6), and vice versa. Therefore, we can change the value of λ to obtain the set of Pareto optimal solutions.
3) Boundary intersection approach [48]: In this approach, the single-objective subproblem is described as:

  minimize g^bi(x | λ, z*) = d, subject to F(x) = z* + d λ, x ∈ X,    (7)

where λ = (λ_1, ..., λ_m)^T is the weight vector and z* = (z*_1, ..., z*_m)^T is the reference point; their values are set as in the Tchebycheff approach. To handle the equality constraint in (7), the penalty-based boundary intersection approach (PBI) can be used in the experiments. It is described as follows:

  minimize g^pbi(x | λ, z*) = d_1 + θ d_2, subject to x ∈ X,    (8)

where

  d_1 = ||(F(x) − z*)^T λ|| / ||λ||,  d_2 = ||F(x) − (z* + d_1 λ / ||λ||)||,

and θ > 0 is a penalty parameter.
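For concreteness, the three decomposition approaches above can be written as plain scalarization functions. The following is a minimal sketch, not the paper's implementation; the function names are ours:

```python
import numpy as np

def weighted_sum(f, lam):
    """Weighted sum: g_ws(x | lam) = sum_i lam_i * f_i(x)."""
    return float(np.dot(lam, f))

def tchebycheff(f, lam, z_star):
    """Tchebycheff: g_tch(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|."""
    return float(np.max(lam * np.abs(f - z_star)))

def pbi(f, lam, z_star, theta=5.0):
    """Penalty-based boundary intersection: g_pbi = d1 + theta * d2."""
    lam_norm = np.linalg.norm(lam)
    # d1: projection of F(x) - z* onto the direction of lam
    d1 = float(np.dot(f - z_star, lam)) / lam_norm
    # d2: perpendicular distance from F(x) to the line z* + d * lam/||lam||
    d2 = float(np.linalg.norm(f - (z_star + d1 * lam / lam_norm)))
    return d1 + theta * d2
```

Each function maps an objective vector f = F(x) to a single scalar, so minimizing it over x solves one subproblem; only λ (and, for TCH/PBI, z*) changes between subproblems.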

III. THE PROPOSED MOEA/D-ROD
Many decomposition methods are used in decomposition-based MOEAs for handling many-objective optimization problems [48]. Two commonly used ones, the weighted sum approach and the Tchebycheff approach, have their own advantages and disadvantages. For example, the Tchebycheff method usually performs well on MOPs with both convex and nonconvex PFs, while the weighted sum approach is suitable only for optimization problems with convex PFs. On the other hand, the weighted sum approach converges faster than the Tchebycheff method and yields a smoother PF. Every widely used decomposition method has its own strengths and weaknesses. Therefore, combining the complementary advantages of different decomposition methods is a simple and effective strategy that allows the algorithm to obtain more solutions.
Along this line, we propose a new decomposition method based on random objective division, used in MOEA/D for solving many-objective optimization problems. The method first divides a high-dimensional many-objective problem into several low-dimensional ones by random objective division to achieve dimension reduction [52]. Then each segment is randomly assigned a decomposition method. Different decomposition methods can thus be used in combination to improve the performance of the algorithm.

A. DECOMPOSITION BASED ON RANDOM OBJECTIVE DIVISION
This paper combines the idea of randomly dividing objectives with the decomposition methods. First, we introduce the process of random objective partitioning. The random objective division strategy transforms a high-dimensional many-objective optimization problem into several low-dimensional multi-objective optimization problems, in order to reduce the difficulty of solving the problem and to exploit the decomposition methods jointly. In other words, all the objectives are divided into several segments, and the number of elements in each segment is guaranteed to be at least k [53]. We assume that the objective space is m-dimensional and that each segment contains at least 2 objectives, i.e., k is set to 2. To ensure the validity of the objective division, it can be described by the following formula:

  L_l = rand(L_{l−1} + k, m − k),

where L_l is the node resulting from the objective division and rand(p, q) is a random partition function whose two parameters p and q indicate, respectively, the starting and ending positions of the division; the result of the function lies between p and q. L_l indicates the location of the current node, and L_{l−1}, the previous node location, determines the value range of the next node location. The process of generating nodes is illustrated in Fig. 1, where each star marks a generated node position.

Secondly, we randomly assign a decomposition method to each low-dimensional multi-objective problem obtained after segmentation. This can be described by the following formula:

  Index(i) = (rand() mod τ) + 1,

where rand() represents a randomly generated integer and, for each segment i, Index(i) is the index of the decomposition method assigned to that segment. Since we use three decomposition methods, the weighted sum approach (ws), the Tchebycheff approach (tch), and the penalty-based boundary intersection approach (pbi), we set τ to 3.
In the experiments, when Index(i) is 1, we use the penalty-based boundary intersection approach (pbi); when Index(i) is 2, we use the Tchebycheff approach (tch); and when Index(i) is 3, we use the weighted sum approach (ws). Fig. 2 gives an example. Trial experiments have shown that combining the three decomposition methods in this way can improve the performance of the algorithm. After the decomposition methods are determined, the remaining work follows the MOEA/D framework.
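The division and assignment steps described above can be sketched as follows. This is one plausible reading of the two formulas (the exact semantics of rand(p, q) in the paper may differ slightly), and the function names are ours:

```python
import random

def random_objective_division(m, k=2, seed=None):
    """Split objective indices 0..m-1 into contiguous segments of size >= k."""
    rng = random.Random(seed)
    nodes = [0]
    # Keep cutting while there is room for at least two more k-sized segments.
    while m - nodes[-1] >= 2 * k:
        # Next node lies at least k past the previous one and at most m - k,
        # so every segment (including the last) keeps at least k objectives.
        nodes.append(rng.randint(nodes[-1] + k, m - k))
    nodes.append(m)
    return [list(range(nodes[i], nodes[i + 1])) for i in range(len(nodes) - 1)]

def assign_methods(segments, methods=("pbi", "tch", "ws"), seed=None):
    """Randomly assign one decomposition method to each segment (tau = 3)."""
    rng = random.Random(seed)
    return {tuple(seg): rng.choice(methods) for seg in segments}
```

For example, with m = 8 objectives the sketch may yield segments such as [0..3], [4, 5], [6, 7], each then paired with ws, tch, or pbi at random.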

B. THE FRAMEWORK OF THE PROPOSED MOEA/D-ROD
Based on the framework of MOEA/D, this paper divides the objectives of a given MaOP into several segments to achieve the effect of dimensionality reduction, and then randomly assigns a decomposition method to each segment. The specific algorithm framework is described in Algorithm 1.
In the initialization step of Algorithm 1, B(i) stores the indices of the T weight vectors nearest to λ_i. In this paper, nearness is measured by the Euclidean distance between weight vectors. Obviously, the nearest weight vector to λ_i is λ_i itself, so i ∈ B(i). If j ∈ B(i), the jth subproblem is regarded as a neighbor of the ith subproblem.
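The neighborhood construction can be sketched in a few lines; this is a minimal illustration using Euclidean distances, and the function name is ours:

```python
import numpy as np

def neighborhoods(weights, T):
    """B(i): indices of the T weight vectors nearest to lambda_i (Euclidean)."""
    W = np.asarray(weights, dtype=float)
    # Pairwise Euclidean distances between all weight vectors.
    dist = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    # Sort each row by distance and keep the T closest indices.
    return np.argsort(dist, axis=1)[:, :T]
```

Since the distance from λ_i to itself is zero, row i of the result always starts with i, matching the property i ∈ B(i).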
In the update step, in order to maintain the diversity of the population and the overall performance of the algorithm, the random objective partition is re-executed every t generations. Here t is an important regulatory factor in ROD for balancing effective exploration and efficient exploitation. The value of t is an integer between 1 and maxFES/N, where maxFES is the total number of function evaluations and N is the population size. The larger t is, the longer the interval between objective re-segmentations, and the more the algorithm tends to conduct local exploitation within each interval; the smaller t is, the more frequently the objectives are re-segmented, and the more the algorithm tends to conduct global exploration. Our trial experiments showed that values between 50 and 100 work well, so in the subsequent numerical experiments we uniformly set t to 50.
In lines 12-21 of the algorithm, C_ws, C_tch and C_pbi respectively store the objective indices using the weighted sum, Tchebycheff and PBI decomposition methods. Evolutionary operators are used to generate a new solution y from x_k and x_l. A problem-specific heuristic is then used to repair y in case y violates any constraints, so the resulting solution y is feasible and very likely to have a lower function value for the neighbors of the ith subproblem. In lines 28-32 of Algorithm 1, the objectives are divided into three parts, C_ws, C_tch and C_pbi, corresponding to the weighted sum, Tchebycheff and PBI decomposition methods. Only when the offspring individual is superior to the current individual on all three parts can the current individual be replaced by the offspring. The external population EP, which is initialized in Step 1, is updated by the newly generated solution y. If the stopping criterion is not satisfied, the algorithm moves on to the update procedure. FES and maxFES respectively represent the current and total numbers of function evaluations; when FES reaches maxFES, the algorithm terminates and outputs EP.
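The all-parts replacement condition of lines 28-32 can be sketched as below. The segment and method bookkeeping is simplified, the helper names are ours, and "superior" is approximated here as "no worse on every segment" under that segment's assigned scalarization:

```python
import numpy as np

def g(f, lam, z, method, theta=5.0):
    """Scalarize one objective segment with its assigned decomposition method."""
    f, lam, z = (np.asarray(a, float) for a in (f, lam, z))
    if method == "ws":                       # weighted sum
        return float(lam @ f)
    if method == "tch":                      # Tchebycheff
        return float(np.max(lam * np.abs(f - z)))
    ln = np.linalg.norm(lam)                 # "pbi"
    d1 = float((f - z) @ lam) / ln
    return d1 + theta * float(np.linalg.norm(f - (z + d1 * lam / ln)))

def child_replaces(f_child, f_cur, lam, z, segments, methods):
    """True only if the child is no worse on every segment under its method."""
    f_child, f_cur, lam, z = (np.asarray(a, float)
                              for a in (f_child, f_cur, lam, z))
    for seg, meth in zip(segments, methods):
        idx = np.asarray(seg)
        if g(f_child[idx], lam[idx], z[idx], meth) > g(f_cur[idx], lam[idx], z[idx], meth):
            return False
    return True
```

The per-segment check is what couples the ROD partition to the MOEA/D update: an offspring must simultaneously satisfy all three decomposition criteria before it enters the population.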

IV. EXPERIMENTS
In this section, we denote MOEA/D using the Tchebycheff decomposition approach as MOEA/D-TCH, MOEA/D using the weighted sum decomposition method as MOEA/D-WS, and MOEA/D using the PBI decomposition method as MOEA/D-PBI. We first compare MOEA/D-ROD with these three single-decomposition variants, and then with UMOEA/D [50] and NSGA-II [14], to test the overall performance of the proposed method in solving many-objective optimization problems. Finally, in order to further verify the ability of MOEA/D-ROD in solving many-objective optimization problems, we compare MOEA/D-ROD with some other excellent many-objective evolutionary algorithms, namely NSGA-III [36], MOEAD-DRA-ASTM [61] and MOEA/D-MO-CPS [62], on some relatively new test problems.

A. TEST CASES
In the experiments, multi-objective optimization problems with scalable numbers of objectives are selected for testing, since relatively few of the classic test problems are dimensionally scalable. We selected two sets of scalable test problems: the first is the DTLZ family, consisting of the four problems DTLZ1-4 [54], [55], whose characteristics are displayed in Table 1; the second is the WFG family, consisting of the nine problems WFG1-9 [56]. Each problem is tested with 3, 5, 8, 10 and 15 objectives.

B. PARAMETER SETTINGS
To ensure the fairness of comparative experiments, all algorithms use the same evolutionary operators for each test problem, and individuals use real-number coding. According to our experiments and the literature [57], [58], for the DTLZ problems the compared algorithms use SBX crossover [48], [59] and polynomial mutation as the mutation operator. 1) The setting of the population size and the weight vectors: For fairness of comparison, the population sizes of UMOEA/D [50] and NSGA-II [14] are the same as in MOEA/D [48]. The weight vectors in UMOEA/D are generated by the uniform design method. In MOEA/D, since the weight vectors λ^1, ..., λ^N are generated by the simplex-lattice design method, the population size and weight vector settings are controlled by an integer H; that is, the weight of each objective function in each weight vector is taken from {0/H, 1/H, ..., H/H}. 2) The control parameters in evolutionary operators: For fairness of comparison, the control parameters are set the same in all comparison experiments; the specific values are given in Table 2, where n is the number of variables.
3) The number of neighbors T is set the same in all experiments: T = 20. 4) In MOEA/D-PBI, the penalty parameter θ = 5 [48]. 5) We run all compared algorithms 20 times independently on each test problem and report the statistical results.
6) For all the WFG problems, when the number of objectives is 3, 5, 8, 10 and 15, the corresponding population sizes are 91, 210, 120, 220 and 135, respectively. The maximum number of evaluations for all experiments is 100,000. As designed by the developers [56], n = k + l, where n is the number of decision variables, k is the number of position-related variables and l is the number of distance-related variables. In this paper, k = M − 1 and l = 10.
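As a side note, the simplex-lattice weight vectors controlled by H, mentioned in 1) above, can be generated with a standard stars-and-bars construction. This is a sketch of the standard design, not the paper's code, and the function name is ours:

```python
from itertools import combinations

def simplex_lattice(m, H):
    """All m-dimensional weight vectors with components from {0/H, ..., H/H}
    summing to 1. The population size is N = C(H + m - 1, m - 1)."""
    vecs = []
    # Place m - 1 "bars" among H + m - 1 slots; the gap sizes give the weights.
    for cuts in combinations(range(H + m - 1), m - 1):
        prev, w = -1, []
        for c in cuts:
            w.append((c - prev - 1) / H)
            prev = c
        w.append((H + m - 1 - prev - 1) / H)
        vecs.append(w)
    return vecs
```

For m = 3 and H = 3 this yields C(5, 2) = 10 weight vectors, which is why the population size in MOEA/D is tied to the choice of H rather than set freely.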

C. PERFORMANCE METRIC
In the experiments, the inverted generational distance (IGD) [60] and the generational distance (GD) [60] are mainly used as the evaluation standards of algorithm performance.
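Both metrics, defined formally below, can be sketched in a few lines; this shows one common averaging variant of each, and the function names are ours:

```python
import numpy as np

def igd(P_star, A):
    """Mean over reference points v in P* of the min distance from v to A."""
    P, A = np.asarray(P_star, float), np.asarray(A, float)
    d = np.linalg.norm(P[:, None, :] - A[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def gd(A, P_star):
    """Mean over obtained points v in A of the min distance from v to P*."""
    A, P = np.asarray(A, float), np.asarray(P_star, float)
    d = np.linalg.norm(A[:, None, :] - P[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Note the asymmetry: IGD averages over the reference set P* (so it penalizes missing regions of the PF), while GD averages over the obtained set A (so it only measures closeness to the PF).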
Inverted generational distance (IGD): Let P* be a set of uniformly distributed points in the objective space along the PF, and let A be an approximation of the PF obtained by a multi-objective evolutionary algorithm. The inverted generational distance from P* to A is defined as:

  IGD(P*, A) = ( Σ_{v∈P*} d(v, A) ) / |P*|,

where d(v, A) is the minimum Euclidean distance between v and the points in A. If the ideal Pareto solutions in P* are sufficient to depict the complete PF, IGD can reflect both the diversity and the convergence of the obtained PF to a certain extent. To obtain a small IGD value, the obtained PF has to be close to the entire ideal PF without missing any part. Thus, the smaller the value of IGD(P*, A), the better the convergence and diversity of the obtained Pareto front.

Generational distance (GD): Let P* be a set of uniformly distributed points in the objective space along the PF, and let A be an approximation set of the PF. The generational distance from A to P* is defined as:

  GD(A, P*) = ( Σ_{v∈A} d(v, P*) ) / |A|,

where d(v, P*) is the minimum Euclidean distance between v and the points in P*. Therefore, the smaller the value of GD(A, P*), the better the convergence of the solutions obtained by the algorithm, i.e., the closer they are to the ideal Pareto front. The most ideal case is GD(A, P*) = 0, which means that all obtained solutions lie exactly on the ideal PF.

D. EXPERIMENTAL RESULTS AND ANALYSIS
The comparison results are shown in Table 3 and Table 4, which contain the median and the standard deviation of the IGD-metric and GD-metric values for each test problem with 3-8, 10 and 15 objectives. The best value in each comparison is shown in bold font. In order to easily observe the performance of the compared algorithms, we apply the t-test to the statistical performance metric values to show the advantages of MOEA/D-ROD over the other algorithms. The results are shown in the rows labeled 'ss(+/=/−)', where +/=/− indicates that MOEA/D-ROD is superior to, similar to, or inferior to the compared algorithm, respectively.
The comparison results on each test instance are summed up, and the totals are given in the last row. From the data in the tables we can conclude that, in most cases, the experimental results of MOEA/D-ROD are superior to those obtained by the other three algorithms; only in very rare cases is its performance unsatisfactory. However, the standard deviations of the performance metrics obtained by MOEA/D-PBI are the smallest of the four algorithms in most cases, which shows that PBI has the best stability among the four decomposition methods. The stability of MOEA/D-ROD is not as good as that of MOEA/D-PBI, which is caused by the random objective division: which objectives are grouped together, and which decomposition method is used, are set randomly by the algorithm on a regular basis. To maintain the diversity of the population and the overall performance of MOEA/D-ROD, the random objective partition is re-executed every 50 generations, which also introduces a certain amount of deviation. Nevertheless, based on the comparisons of the medians, we still obtain the satisfactory result that ROD performs better than the other three single decomposition methods. To show that the algorithm also has advantages in solving many-objective optimization problems, we compare MOEA/D-ROD with MOEA/D [5], NSGA-II [14], and UMOEA/D [7] on the DTLZ test problems, with all experimental parameters and operators set the same. The comparison results are shown in Table 5 and Table 6, which contain the median and the standard deviation of the IGD- and GD-metric statistical values for each test problem. The best performance is labeled in bold font.
In order to easily observe the performance of the compared algorithms, we apply the t-test to the statistical performance metric values to show the advantages of MOEA/D-ROD over the other algorithms. The results are shown in the rows labeled 'ss(+/=/−)', where +/=/− indicates that MOEA/D-ROD is superior to, similar to, or inferior to the compared algorithm, respectively. The comparison results on each test instance are summed up, and the totals are given in the last row. From the data in the tables we can observe that, in most cases, the experimental results of our proposed algorithm are superior to those obtained by the other three algorithms; in rare cases, the experimental results of UMOEA/D are better. However, it should be noted that, due to the random objective division used in the algorithm, the standard deviations of the results are not very good, and the stability of the algorithm needs to be further improved. Taking DTLZ2 as an example, when the number of objectives is 5, 8, 10 and 15, Figs. 3-6 respectively show the parallel coordinates of the non-dominated fronts obtained by MOEA/D-ROD, NSGA-II and UMOEA/D; each plot corresponds to the run whose result is closest to the average IGD value. From these four figures we can observe that MOEA/D-ROD is generally better than the other two algorithms in both convergence and diversity. In contrast, the performance of UMOEA/D is acceptable in terms of convergence, but some objective values are slightly smaller than the maximum value of 1, which means that UMOEA/D obtains many intermediate solutions. NSGA-II performs well in terms of diversity, but the results obtained by NSGA-II are far away from the PFs, because the maximum values of some objectives are much greater than 1. The solution distribution of UMOEA/D is always the worst among these three algorithms in terms of diversity. The comparison results with NSGA-III, MOEAD-DRA-ASTM and MOEA/D-MO-CPS are shown in Table 7 and Table 8.
For ease of comparison, the median and standard deviation values of all experimental data are summarized. In these two tables, the best experimental data in each comparison are shown in bold font, which facilitates observation and analysis. In order to easily observe the performance of the compared algorithms, we apply the t-test to the statistical performance metric values; the comparison results on each test instance are summed up, and the totals are given in the last row. We can see from the two tables that the median values of MOEA/D-ROD are generally smaller than those of the other algorithms, which demonstrates that, compared with NSGA-III, MOEAD-DRA-ASTM and MOEA/D-MO-CPS, MOEA/D-ROD is still competitive in solving these WFG test problems. However, the random objective division makes the standard deviations of the metrics relatively large. As we can see from Tables 7 and 8, MOEA/D-ROD shows poor stability in some cases, for example on WFG2 and WFG8; in other words, the random objective division makes the performance of the algorithm not completely stable. Generally speaking, the overall performance of MOEA/D-ROD is acceptable and competitive in solving many-objective WFG test problems, but the stability of the algorithm needs to be further improved.

V. CONCLUSION
A new decomposition method based on random objective division (ROD) is proposed in this paper for MOEA/D in solving many-objective optimization problems. The method first divides a high-dimensional many-objective problem into several low-dimensional ones by random objective division to achieve dimension reduction, and then randomly assigns a decomposition method to each segment, so that different decomposition methods cooperate during the search. In the future, we will work on improving the stability of the decomposition method, and on further improving the ability of the algorithm in solving many-objective optimization problems.
XUE LU received the B.S. degree in computer science and technology from Shandong Normal University, China, in 2017, where she is currently pursuing the master's degree with the School of Information Science and Engineering. Her research interest includes many-objective optimization.
YANYAN TAN received the Ph.D. degree in intelligent information processing from Xidian University, Xi'an, China, in 2013. She is currently a Lecturer with the School of Information Science and Engineering, Shandong Normal University, Jinan, China. Her main research interests include computational intelligence, multi-objective optimization, data analysis, and machine learning.