Multiple Populations for Multiple Objectives Framework With Bias Sorting for Many-Objective Optimization

The convergence and diversity enhancement of multiobjective evolutionary algorithms (MOEAs) to efficiently solve many-objective optimization problems (MaOPs) is an active topic in evolutionary computation. By considering the advantages of the multiple populations for multiple objectives (MPMO) framework in solving multiobjective optimization problems and even MaOPs, this article proposes an MPMO-based algorithm with a bias sorting (BS) method (termed MPMO-BS) for solving MaOPs to achieve both good convergence and diversity performance. For convergence, the BS method is applied to each population of the MPMO framework to enhance the role of nondominated sorting by biasedly paying more attention to the objective optimized by the corresponding population. This way, all the populations in the MPMO framework evolve together to promote the convergence performance on all objectives of the MaOP. For diversity, an elite learning strategy is adopted to generate locally mutated solutions, and a reference vector-based maintenance method is adopted to preserve diverse solutions. The performance of the proposed MPMO-BS algorithm is assessed on 29 widely used MaOP test problems and two real-world application problems. The experimental results show its high effectiveness and competitiveness when compared with seven state-of-the-art MOEAs for many-objective optimization.


I. INTRODUCTION
MANY promising multiobjective evolutionary algorithms (MOEAs) have been proposed in recent decades to solve multiobjective optimization problems (MOPs) [1], [2]. These MOEAs include the elitist nondominated sorting genetic algorithm (NSGA-II) [3], the improved strength Pareto evolutionary algorithm (SPEA2) [4], the indicator-based evolutionary algorithm (IBEA) [5], the MOEA based on decomposition (MOEA/D) [6], and the multiple populations for multiple objectives (MPMO)-based algorithm [7]. However, almost all these MOEAs face a challenge: the proportion of nondominated solutions in the current population becomes larger when solving problems with more than three objectives, i.e., many-objective optimization problems (MaOPs). Such enormous numbers of nondominated solutions lead to insufficient selection pressure, resulting in degraded performance of MOEAs. This phenomenon is known as dominance resistance [8]. Another challenge in dealing with MaOPs is diversity maintenance, since many existing diversity management methods may not work well in high-dimensional objective spaces. Due to the limitation of population size, it is more difficult for MOEAs to approximate the whole Pareto front (PF) when the number of objectives increases.
To tackle the challenges in convergence and diversity that MOEAs encounter when dealing with MaOPs, many approaches have been proposed. In general, the most common approaches can be divided into four categories: dominance-based, decomposition-based, indicator-based, and multi-population-based. Although they have shown good performance in solving MaOPs, some side effects have also been observed. For example, dominance-based approaches usually increase selection pressure by enlarging the dominating area, which can cause the loss of diversity [9]. For decomposition-based approaches, the distributed reference vectors in these algorithms may have an adverse effect on diversity maintenance with nonuniform or convex PFs [10], [11]. For indicator-based approaches, one well-known drawback is the high computational complexity of indicators such as hypervolume (HV) [12], especially on MaOPs. Among all the above approaches for solving MaOPs, the inherent multiple-population advantages of the MPMO framework make it very suitable to be extended for solving MaOPs with more objectives. In MPMO, a corresponding population is adopted to optimize one objective while approximating the PF. This strategy can accelerate each population approaching the PF. In every generation, different populations can communicate and share information. Compared with other approaches, the benefits of MPMO lie mainly in two aspects. First, MPMO does not face the difficulty of sorting enormous numbers of nondominated solutions caused by the many objectives in MaOPs because each population mainly addresses one objective. Second, MPMO does not need to deal with a large number of subproblems because it does not decompose the original MaOP. Nevertheless, as shown in [13], considering only one objective in each population may cause the PF margin phenomenon when dealing with MaOPs. This indicates that each population in an MPMO-based algorithm must consider both the priority of its own optimized objective and the equality among the other objectives.
In MOEAs, one of the common strategies that can reflect the equal relationship among all objectives is nondominated sorting (NDS) based on the Pareto dominance relationship. However, similar to the inefficiency of dominance-based approaches, the effectiveness of NDS deteriorates on MaOPs due to the emergence of a large number of nondominated solutions (more details are provided in Section II-C). Therefore, we propose a bias sorting (BS) method that prioritizes the corresponding population objective without ignoring the other objectives. The BS method pays more attention to convergence on the corresponding population objective to increase the selection pressure. Solutions with better values on the corresponding population objective are preferentially ranked at a lower front (i.e., a better front). To combine the advantages of both the BS and NDS methods, this paper proposes an adaptive sorting strategy. First, during the early search stage, solutions can quickly approach the PF under the guidance of BS. Second, by considering the convergence status of each population, the sorting method is adaptively switched from BS to NDS in the later search stage to further promote convergence.
Based on the above considerations, we propose an MPMO-based MOEA with BS (termed MPMO-BS) for solving MaOPs. MPMO-BS follows the principle of convergence first and diversity second [14].
For convergence, each population in MPMO-BS optimizes a specific objective, and all populations together promote the convergence performance on all objectives of the MaOP. For each population, two steps are carried out in every generation. First, solutions are ranked into several fronts via the more suitable of the two sorting methods, BS or NDS. Herein, the solutions at a lower front are better than those at a higher front. Second, for the solutions ranked at the same front, we develop an auxiliary convergence fitness (ACF) strategy to further distinguish their convergence status. Therefore, we can measure the relationship between any two solutions in each population through the above two strategies.
For diversity, we use an archive to preserve well-performing solutions. After the parallel optimization process of all populations, the nondominated solutions in these populations are moved into the archive. An elite learning strategy (ELS) is developed to help solutions jump out of possible local optima and help the archived solutions efficiently converge to the PF. After that, solutions with good diversity in the archive are preserved via predefined uniform reference vectors.
The contributions of this paper are as follows.
1) We propose the BS method, which can increase the selection pressure among solutions and accelerate convergence. Via the adaptive sorting strategy, the algorithm can adopt the suitable sorting method between BS and NDS according to the convergence status of the population.
2) We propose the ACF strategy, which can further distinguish the convergence relationship among solutions ranked at the same front and can adapt itself to different sorting methods.
3) Derived from the MPMO framework, multiple populations in MPMO-BS evolve in parallel to ensure convergence on different objectives. The introduced ELS can promote the diversity of solutions selected from all populations.
4) To facilitate communication among populations, we develop a population reallocation strategy between the archive and the populations.
The remainder of this paper is organized as follows. Section II provides the background. The details of the proposed MPMO-BS are given in Section III. Section IV presents the experimental results and a detailed analysis of MPMO-BS on MaOPs and two real-world application problems. Finally, Section V concludes this paper.

II. BACKGROUND

A. MOPs and MaOPs
Real-world problems usually involve multiple conflicting objectives that need to be considered simultaneously. Such complex problems are called MOPs. A MOP with M objectives to be minimized can be described as:

minimize F(x) = (f1(x), f2(x), …, fM(x))^T, subject to x ∈ Ω,

where Ω is the decision (variable) space and x = (x1, …, xD)^T is a candidate solution. F: Ω → R^M represents the M real-valued objective functions, and R^M denotes the objective space. When the number of objectives M in a MOP is larger than three, such a MOP is also called a MaOP.
Because of the conflicting objectives, there is no unique optimal solution of a MOP that can optimize all the objectives simultaneously. Thus, the goal of solving a MOP is to obtain a set of optimal solutions, called Pareto optimal solutions. Given two solutions x, y ∈ Ω, x is said to Pareto dominate y (denoted x ≺ y) if fi(x) ≤ fi(y) for ∀i ∈ {1, …, M} and ∃j ∈ {1, …, M} such that fj(x) < fj(y). A solution x is Pareto optimal if there does not exist another x* ∈ Ω such that x* ≺ x. All Pareto optimal solutions form the Pareto set (PS). The PF, the corresponding objective vector set of the Pareto optimal solutions, is defined as PF = {F(x) | x ∈ PS}.
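The Pareto dominance relation above translates directly into code. A minimal sketch (our own illustration for minimization, not part of the paper):

```python
def dominates(x, y):
    """True if objective vector x Pareto dominates y (minimization):
    x is no worse on every objective and strictly better on at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

print(dominates((1.0, 2.0), (2.0, 2.0)))  # better on f1, equal on f2 -> True
print(dominates((1.0, 3.0), (2.0, 2.0)))  # each better on one objective -> False
```

Note that two vectors can be mutually nondominated, which is exactly the situation that becomes dominant as M grows.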

B. Related Works
The existing approaches for tackling MaOPs can roughly be divided into four categories.
The first category focuses on dominance-based MOEAs, which increase the selection pressure among nondominated solutions by modifying the Pareto dominance relationship. Some typical recent dominance relationships include ε-dominance [15], α-dominance [16], preference order ranking [17], and θ-dominance [18]. For example, a method that can control the dominance area of solutions was proposed in [19]. A fuzzy-based Pareto dominance was proposed in [20] to distinguish Pareto nondominated solutions. Such fuzzy logic was also adopted in some new dominance relations, such as L-dominance [21] and (1-k)-dominance [22]. A grid dominance was proposed in [23] to strengthen the selection pressure among solutions through three grid-based criteria. In addition, a strengthened dominance relation (SDR) was proposed in [24] for MaOPs. A niching technique based on the angles between solutions was adopted in SDR to help maintain the solutions with the best convergence in each niche.
The second category is decomposition-based approaches. They convert an original MaOP into several single-objective optimization subproblems and use predefined reference vectors to guarantee distribution. Solutions are guided to approximate the PF along the direction of each reference vector. NSGA-III [25] is one of the representative approaches of this category. In addition, many MOEA/D variants [26]–[28] have been developed recently. For example, an MOEA/D with an effective stable matching model (MOEA/D-STM) was proposed in [29]. In [30], a Pareto adaptive scalarizing method was inserted into MOEA/D. In [31], a decomposition-based MOEA called RVEA was proposed, in which an angle-penalized distance scalarization approach balances diversity and convergence when dealing with MaOPs.
The third category is indicator-based approaches, in which indicators are used to assess the quality of solutions for selection. This category includes GDE-MOEA [32], MOMBI-II [33], and AR-MOEA [34]. In [35], a distinctive hypervolume-based MOEA called HypE was proposed, in which a Monte Carlo simulation method was adopted to replace the exact HV calculation for computational complexity reduction. However, it is worth noting that the accuracy of this approximate calculation is influenced by the number of sampling points. In [36], a utility tensor was introduced to reduce the computational cost of the hypervolume contribution. This method can improve efficiency by avoiding meaningless repetitive calculations. In [37], an inverted generational distance (IGD) [38] indicator-based MOEA called MaOEA/IGD was proposed, and a decomposition-based nadir point estimation method was adopted to facilitate the calculation of IGD.
The fourth category is multi-population-based approaches. Although many MOEAs have been developed for solving MaOPs, most of them use only one population to simultaneously optimize all objectives. Such a population needs to coordinate the optimized degree of each solution on all objectives, slowing down the speed at which the population approaches the PF. Therefore, it is difficult for one population to explore all objectives. Considering this, the MPMO framework was proposed in [7], where multiple populations are adopted to solve multiple objectives simultaneously. Generally, various optimization algorithms, such as the genetic algorithm (GA) [39]–[41], the ant colony system [42]–[45], particle swarm optimization (PSO) [7], [13], [46]–[48], and differential evolution [49]–[52], can be adopted for each population in the MPMO framework. The MPMO framework has shown promising performance in solving MOPs, indicating its potential ability to solve MaOPs. Therefore, an MPMO-based coevolutionary PSO was proposed in [13] for solving MaOPs. In [13], the authors proposed a bottleneck objective learning strategy, which can alleviate the side effect of poorly performing objectives (named bottleneck objectives) on particles approximating the PF. This strategy pays particular attention to two objectives (the bottleneck objective and the optimized objective of the corresponding population) in the process of particle updating while ignoring the influence of the other objectives. It does accelerate particles toward the PF but may make populations focus only on their optimized and bottleneck objectives. This may cause the search to fall into local optima and be harmful to diversity.
In addition to the above four categories of MOEAs, many other studies have been proposed for solving MaOPs. For instance, a hybrid algorithm that mixes the preponderance of Pareto dominance and a reference vector-based decomposition approach, called SPEA/R, was proposed in [53]. As knee points are naturally preferred by decision-makers, the information of knee points is used in [54] to promote the convergence and diversity of the population. A preference-inspired coevolutionary algorithm (PICEA-g) was proposed in [55], coevolving a family of preferences simultaneously with the population for better convergence. In [56], a MaOP was transformed into a MOP with two indicative objectives (i.e., convergence and diversity). During optimization, the population was assigned to different clusters according to the two indicative objective values, and well-performing solutions were selected from each cluster via a clustering-based sequential selection method. In [57], the authors integrated different solution selection methods into an ensemble framework to promote the quality of selected solutions. All these works show that research into MaOPs is a significant and active topic.

C. Nondominated Sorting
NDS has been widely used in MOEAs since it was proposed in [58]. It is a strategy that ranks the solutions in a population into different fronts based on their Pareto dominance relationship. Given a population P, its solutions can be ranked into v different fronts, denoted as Fi (i = 1, 2, …, v). After NDS, each solution at front Fi (i ≥ 2) is dominated by at least one solution ranked at Fi−1.
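As an illustration of how NDS ranks a population into fronts, here is a minimal sketch of the classic fast nondominated sort from NSGA-II. Note that the paper itself adopts the more efficient method of [62]; this simpler variant produces the same fronts:

```python
def dominates(x, y):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def fast_nondominated_sort(pop):
    """Return fronts F1, F2, ... as lists of indices into pop."""
    n = len(pop)
    S = [[] for _ in range(n)]   # indices of solutions dominated by i
    cnt = [0] * n                # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(pop[i], pop[j]):
                S[i].append(j)
            elif dominates(pop[j], pop[i]):
                cnt[i] += 1
        if cnt[i] == 0:          # dominated by nobody -> first front
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:  # all dominators already ranked
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]           # drop the trailing empty front
```

For example, `fast_nondominated_sort([(1, 1), (2, 2), (1, 2), (3, 0)])` places solutions 0 and 3 at F1, solution 2 at F2, and solution 1 at F3.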
Over the past few decades, a number of NDS methods have been proposed in dominance-based MOEAs. Among them, the most representative is the fast nondominated sort in NSGA-II. Since then, many approaches have been proposed to simplify NDS. For example, an arena's principle-based NDS was proposed in [59]. In [60], a dominance tree and a divide-and-conquer mechanism were used to reduce the number of redundant comparisons of dominance relations among solutions. In [61], a new M-front sorting method was proposed. By using the dominance relationship information from the last generation, M-front can reduce the best-case complexity. Moreover, an efficient nondominated sort was proposed in [62], in which a presorting strategy ensures that later-ranked solutions cannot dominate earlier-ranked ones. In this paper, we adopt the efficient nondominated sorting approach in [62] for NDS. In addition to the abovementioned methods, some other methods have also been reported, e.g., [63], [64].
NDS has been widely used in dominance-based MOEAs for solving MOPs. For MaOPs, NDS has also been used in MOEAs such as NSGA-III [25] and KnEA [54]. However, one phenomenon may be observed in NDS when solving MaOPs: most solutions in the population are ranked at similar fronts, which weakens the selection pressure among solutions. Moreover, NDS must consider the priority of each solution on every objective. As the number of objectives increases, NDS becomes increasingly ineffective, which ultimately deteriorates or even hinders the selection process. To illustrate this, we use NSGA-II, the most representative algorithm using NDS, to conduct experiments on the DTLZ2 and WFG1 test instances with different numbers of objectives. The number of sorting fronts in the initial population is recorded, and the curves of the mean results over 10 runs are shown in Fig. 1. It can be seen that the number of sorting fronts declines rapidly as M increases. In particular, when M > 5, only two or three fronts remain. This indicates that a large number of solutions are located at the same front, which makes the algorithm unable to distinguish promising solutions using only the NDS method.
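The dominance-resistance effect described above is easy to reproduce: among uniformly random objective vectors, the fraction that is mutually nondominated approaches one as M grows. A small self-contained sketch (our own illustration, with an assumed sample size of 100; not the paper's NSGA-II experiment):

```python
import random

def nondominated_fraction(m, n=100, seed=0):
    """Fraction of n uniform random points in [0,1]^m that no other point dominates."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(m)) for _ in range(n)]
    dom = lambda x, y: all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))
    nd = sum(1 for p in pts if not any(dom(q, p) for q in pts if q is not p))
    return nd / n

for m in (2, 5, 10):
    print(m, nondominated_fraction(m))  # fraction grows quickly with m
```

With more objectives it becomes ever rarer for one random point to be at least as good as another on every objective, so almost everything ends up in the first front.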

III. THE PROPOSED MPMO-BS

A. The Framework of MPMO-BS
For a MaOP with M objectives, the main idea of MPMO is to use M populations to optimize the MaOP, with each population (of size Ns) corresponding to only one objective and all populations working together to approximate the whole PF. MPMO-BS follows the MPMO framework, while the adaptive sorting strategy is used to accelerate the convergence of each population. Through adaptive sorting, solutions are ranked at different fronts to distinguish their convergence status. For solutions at the same front, the ACF is adopted to further quantify their convergence status. An external archive A of size N is used to store the nondominated solutions from all populations. Finally, all solutions in A are reassigned to the populations for information sharing. The final archive A is output once the termination condition is satisfied. The framework of MPMO-BS is shown in Fig. 2.

B. BS Method
Without loss of generality, population pi is taken as an example to describe the process of BS. The pseudocode of the BS process is given in Algorithm 1.
First, we carry out M−1 bi-objective NDS passes on objective i paired with each other objective j (j ∈ {1, …, M} and j ≠ i) for all solutions in population pi, which corresponds to the i-th objective (Lines 3-5 in Algorithm 1). Each solution then has M−1 sorting front results, and the largest one is defined as its bias front in pi (Line 6 in Algorithm 1). Notably, this sorting may cause a front chasm; that is, some solutions may be sorted to a higher front while a lower front remains unoccupied. In this situation, the bias fronts are rearranged to eliminate the chasm (Line 7 in Algorithm 1). For example, if the bias front result of pi is {1, 1, 4, 4, 5, 2, 5}, after rearrangement it becomes {1, 1, 3, 3, 4, 2, 4} as the final bias front result.
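The steps above (M−1 bi-objective sorts, taking the worst front per solution, then rearranging to remove chasms) can be sketched as follows. This is our own illustration of the described procedure, not the authors' implementation; objective indices are 0-based, and `biobj_front` uses a naive front-peeling bi-objective sort for clarity:

```python
def biobj_front(pop, a, b):
    """Front number of each solution using only objectives a and b (minimization)."""
    proj = [(s[a], s[b]) for s in pop]
    dom = lambda x, y: all(u <= v for u, v in zip(x, y)) and any(u < v for u, v in zip(x, y))
    front = [0] * len(proj)
    remaining = set(range(len(proj)))
    r = 1
    while remaining:  # peel off the current nondominated set as front r
        cur = [i for i in remaining
               if not any(dom(proj[j], proj[i]) for j in remaining if j != i)]
        for i in cur:
            front[i] = r
        remaining -= set(cur)
        r += 1
    return front

def compress(fronts):
    """Rearrange front numbers to remove chasms, e.g. [1,1,4,4,5,2,5] -> [1,1,3,3,4,2,4]."""
    rank = {f: k + 1 for k, f in enumerate(sorted(set(fronts)))}
    return [rank[f] for f in fronts]

def bias_sort(pop, i):
    """Bias front in p_i: the worst (largest) front over the M-1 bi-objective
    sorts pairing objective i with each other objective, then compressed."""
    M = len(pop[0])
    bias = [max(biobj_front(pop, i, j)[k] for j in range(M) if j != i)
            for k in range(len(pop))]
    return compress(bias)
```

On the paper's rearrangement example, `compress([1, 1, 4, 4, 5, 2, 5])` yields `[1, 1, 3, 3, 4, 2, 4]`.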
From the above description of BS, we can see that there are some similarities between BS and the one-objective sorting method (i.e., only objective i is taken into account when sorting pi). They both tend to accelerate convergence by giving priority to one objective. However, compared with the one-objective sorting method, BS can accommodate the other objectives and minimize the loss of diversity. It can be regarded as a sorting method lying between one-objective sorting and NDS, since it promotes convergence while not ignoring diversity. Herein, we present an example in a 4-objective space to illustrate the BS method and compare the differences among the BS, NDS, and one-objective sorting methods.
The front result of these solutions by the NDS method is {1, 1, 1, 1, 1}, which indicates that these solutions are nondominated with each other. At this point, it is impossible to decide which solution is better without other auxiliary operators. In other words, they are all regarded as optimal solutions in the current generation. In contrast, these solutions are ranked at {1, 2, 2, 4, 3} if the one-objective sorting method is adopted on f1. In this case, solutions a, b, and c are seen as the three best solutions. However, if we take the other objective values into account, we can see that b is severely deficient on f4 compared with the others, whereas d only performs worse on f1 and e performs moderately on all objectives. Choosing b may therefore not be a good choice if diversity is considered. Different results appear if BS is used. Fig. 3 illustrates the BS process in p1, and the bias front result is {1, 3, 2, 3, 3}. It can then be seen that in p1, which is biased toward objective f1, only solution a is regarded as the best one. It is worth noting that b, d, and e are ranked at the same front, although there are significant differences in their f1 values. This is the result of BS after comprehensively considering the values of these solutions on the other objectives.
The above example shows the advantages of BS compared with NDS and one-objective sorting. On the one hand, BS can make a more detailed rank division among solutions than NDS. This restores the sorting mechanism's capability to distinguish the merits of solutions on MaOPs. On the other hand, different from one-objective sorting, BS is not completely biased toward a single objective but takes diversity into account. Combined with the MPMO framework, BS can work efficiently to eliminate the phenomenon of dominance resistance and accelerate convergence. It is worth noting that we use bi-objective NDS in our BS, but potential performance improvement may be achieved by extending it to k-objective NDS (2 < k < M). Inevitably, the computational complexity would also increase because there would be more objective combinations in the BS process for larger k.

C. Convergence Maintenance
In MPMO-BS, multiple populations search in parallel, and each population pays attention to its own convergence. In this section, we illustrate in detail how each population maintains convergence.

1) Adaptive Sorting Strategy
During the early search stage, solutions usually have not converged well. At this time, BS is very useful in accelerating convergence since it prioritizes different objectives in different populations. However, as the search proceeds and the convergence performance improves, there is no longer a need to pay more attention to one specific objective. Instead, all objectives need to be treated equally to reduce the differences among populations. Considering this, the adaptive sorting strategy is developed here, which can adaptively switch from BS to NDS according to the convergence status of each population.
For any population pi, a trigger of the sorting type (denoted as STi) is set to determine which sorting method should be used. The trigger STi is initialized to 1, which represents the BS method. The value of STi is checked in each generation, and it is fixed to 2 once it becomes 2 in a certain generation. In other words, the NDS method is used in the current and all subsequent generations once STi = 2. Otherwise, if STi = 1, BS is used and the trigger STi will be updated.
The update of STi is based on the BS result. After BS, the number of solutions at front F1 is counted as Nopt. Then the solution proportion (SP) of F1 in pi is calculated as:

SP = Nopt / Ns,

where SP is a number in (0, 1] that numerically reflects the current population's convergence status. The larger SP is, the more optimal solutions based on BS there are, indicating a smaller difference between the sorting results of BS and NDS. Therefore, when SP is greater than a threshold θ, STi is updated to 2 so that NDS is adopted for subsequent sorting. In this paper, θ is set to 0.8, and its sensitivity is investigated in Section IV-F.
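A minimal sketch of the trigger update described above (our own illustration; `front` holds the bias front number of each solution after BS, and the switch to NDS is sticky):

```python
def update_trigger(ST, front, Ns, theta=0.8):
    """Switch from BS (ST=1) to NDS (ST=2) once the proportion of solutions
    in the first bias front exceeds theta; ST=2 is never reverted."""
    if ST == 2:
        return 2                              # NDS stays on for good
    N_opt = sum(1 for f in front if f == 1)   # size of F1 after BS
    SP = N_opt / Ns                           # solution proportion of F1
    return 2 if SP > theta else 1
```

Because the comparison is strict, SP must exceed θ (not merely reach it) before the population switches to NDS.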

2) ACF Strategy
After sorting, the solutions at the same front are nondominated with each other. The ACF strategy is proposed to further distinguish the convergence status of the solutions at the same front. Mutual evaluation [65] is a loss function that can evaluate the quality of each solution, and ACF follows the main idea of this approach.
First, the objective values of any solution x in pi are preprocessed to be positive, as follows:

f′j(x) = fj(x) − min{fj(y) | y ∈ pi} + 10^-6, j = 1, …, M,

where the constant 10^-6 prevents the value from being equal to 0. Second, considering that there are Nr solutions at the r-th front in pi, the calculation of ACF is designed according to the type of the adopted sorting method, as specified below.
If BS is adopted in pi, then for any two solutions at that front, the mutual evaluation of xm (m = 1, …, Nr) evaluated by xn (n = 1, …, Nr and n ≠ m) considers all objectives except objective i:

E(xm, xn) = max{f′j(xm) / f′j(xn) | j = 1, …, M and j ≠ i}.    (5)

Otherwise, in the case that NDS is adopted in pi, the mutual evaluation is defined over all objectives:

E(xm, xn) = max{f′j(xm) / f′j(xn) | j = 1, …, M}.    (6)

Finally, the ACF value of each solution is:

fAC(xm) = min{E(xn, xm) | n = 1, …, Nr and n ≠ m}.    (7)

ACF can be considered a quality indicator that assesses the convergence status of solutions at the same front. Solutions with larger ACF values have a better convergence status once they are ranked at the same front. Returning to the example given in Section III-B, solutions b, d, and e are ranked at the same front via BS in p1. We calculate their ACF values according to Eqs. (5) and (7), which gives fAC(b) = 1, fAC(d) = 2, and fAC(e) = 1.4. Therefore, solution d is regarded as optimal among these three solutions. At this point, through BS and ACF, the three best solutions selected from the five solutions are a, c, and d.

3) Environmental Selection
We choose Ns solutions with better convergence from each population and its offspring population in every generation. First, pi and its offspring population qi form a combined population of size 2Ns. All solutions in this combined population are ranked into several fronts via the adaptive sorting strategy. Starting from F1, the solutions with better ACF values are preserved until the number of preserved solutions reaches Ns. These preserved solutions are adopted as pi for the next generation.
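The environmental selection step can be sketched as a single sort on the pair (front, −ACF), since a lower front always wins and, within a front, a larger ACF wins (our own illustration of the described rule):

```python
def environmental_select(combined, fronts, acf, Ns):
    """Keep Ns solutions from the combined parent+offspring population:
    lower front first; within a front, larger ACF first."""
    order = sorted(range(len(combined)), key=lambda k: (fronts[k], -acf[k]))
    return [combined[k] for k in order[:Ns]]
```

For example, with fronts `[2, 1, 1, 2]` and ACF values `[0.9, 0.1, 0.5, 1.0]`, selecting 3 of the 4 solutions keeps both front-1 solutions (ACF 0.5 before 0.1) and the front-2 solution with the larger ACF.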

D. Diversity Maintenance
In MPMO-BS, archive A is used to retain the well-converged solutions from all populations. Then, solutions with good diversity are selected from A and preserved. In this section, we describe the diversity maintenance process.

1) Archive Update and ELS
Archive A is the storage of nondominated solutions from all populations. It is initialized to be empty, updated in every generation, and also regarded as the final solution set found by MPMO-BS. Algorithm 2 gives the process of archive updating. The input solution set S is copied into A if A is empty. Otherwise, each solution s ∈ S is compared with all the solutions in A. First, the solutions that are dominated by s are removed from A. Then, s is added into A if all remaining solutions in A are nondominated with s.
After storing all nondominated solutions from each population into A, the ELS proposed in [7] is adopted to help solutions jump out of possible local optima. In MPMO-BS, we randomly select several solutions from A and copy them into a temporary set. The number of selected solutions equals half of the current size of A. For each solution x = (x1, …, xD) in this temporary set, a random dimension d is chosen to be mutated by a Gaussian perturbation as:

xd = xd + (xd_max − xd_min) × Gaussian(0, σ²),

where xd_max and xd_min are the upper and lower bounds of the d-th dimension, respectively, and Gaussian(0, σ²) is a Gaussian distribution with a mean of 0 and a standard deviation of σ (σ = 0.5). Then, xd is confirmed to be within the feasible search range [xd_min, xd_max]; if not, xd is set to the corresponding bound. After the mutation, all solutions in this temporary set are evaluated, and archive A is updated by this temporary set. The update process is the same as Algorithm 2 with this temporary set as input.
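A sketch of the ELS mutation step under the stated bounds-clamping rule (our own illustration; the random dimension choice, the scaled Gaussian step, and the clamping follow the description above):

```python
import random

def elite_learning(x, lower, upper, sigma=0.5, rng=random):
    """Mutate one randomly chosen dimension of x by a Gaussian step scaled
    by the dimension's range, then clamp the result to the bounds."""
    x = list(x)
    d = rng.randrange(len(x))                       # pick one dimension at random
    x[d] += (upper[d] - lower[d]) * rng.gauss(0.0, sigma)
    x[d] = min(max(x[d], lower[d]), upper[d])       # clamp to [lower_d, upper_d]
    return x
```

Only a single dimension is perturbed per solution, which keeps the mutation local while the range-scaled step still allows occasional large jumps out of local optima.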

2) Solutions Preservation
At the end of every generation, solutions with poor diversity are deleted from A if the size of A is larger than N. In this paper, the reference-vector-based solution selection method in [13] is adopted: N solutions are selected from A based on reference vectors, which are generated by a double-layer method.
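The double-layer generation method builds on the Das–Dennis simplex-lattice design for uniform weight vectors; a sketch of the single-layer building block is shown below (our own illustration — the double-layer variant combines a boundary layer with a scaled inner layer of such vectors):

```python
from itertools import combinations
from math import comb

def simplex_lattice(M, H):
    """Das-Dennis simplex-lattice weights: all M-dimensional vectors with
    components in {0, 1/H, ..., H/H} summing to 1, via stars and bars.
    Produces C(H + M - 1, M - 1) vectors."""
    vectors = []
    for cuts in combinations(range(1, H + M), M - 1):   # bar positions
        parts = [cuts[0] - 1] + \
                [cuts[k] - cuts[k - 1] - 1 for k in range(1, M - 1)] + \
                [H + M - 1 - cuts[-1]]                  # gap sizes = numerators
        vectors.append(tuple(p / H for p in parts))
    return vectors
```

For M = 3 and H = 4 this gives C(6, 2) = 15 uniformly spaced vectors on the unit simplex, which is why the number of reference vectors is tied to binomial coefficients rather than chosen freely.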

E. Population Reallocation
The solutions preserved in A perform better than those in each population because of the ELS and the multi-population coevolution technique. Therefore, if there are enough solutions in archive A, reassigning the solutions in A to the populations can greatly improve search efficiency and information communication. Here, we develop a population reallocation strategy to achieve this purpose, by which the solutions in each population are replaced by those from A. The pseudocode of the population reallocation process is shown in Algorithm 3. First, the solutions in A are copied into a temporary set S, and all populations are set to the empty set. Then, the solution with the best value on objective i (i = 1, 2, …, M) in S is moved to pi, one objective at a time. This operation is repeated until S is empty. Note that the population reallocation strategy is carried out in a generation only when the size of archive A equals N, so that all the solutions in A can be allocated among the populations.
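The round-robin reallocation described above can be sketched as follows (our own illustration; ties on an objective are broken by the first best found):

```python
def reallocate(archive, M):
    """Round-robin reallocation: repeatedly move the solution with the best
    (smallest) value on objective i from a copy of the archive into p_i,
    cycling i over the M objectives until the copy is empty."""
    S = list(archive)
    pops = [[] for _ in range(M)]
    i = 0
    while S:
        best = min(range(len(S)), key=lambda k: S[k][i])  # best on objective i
        pops[i].append(S.pop(best))
        i = (i + 1) % M
    return pops
```

For example, with the archive `[(0, 3), (3, 0), (1, 1), (2, 2)]` and M = 2, population p1 receives `(0, 3)` and then `(1, 1)` (the best remaining on f1), while p2 receives `(3, 0)` and then `(2, 2)`.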

F. Complete MPMO-BS and Complexity Analysis
Based on the above operations, we obtain the complete MPMO-BS algorithm for MaOPs. The framework of MPMO-BS can be summarized into three parts: 1) multi-population coevolution for convergence; 2) a solution preservation archive for diversity; and 3) population reallocation for information sharing. The pseudocode of MPMO-BS is shown in Algorithm 4.
Within one generation, MPMO-BS consists of the following seven operations: adaptive sorting, ACF calculation, environmental selection, archive update, ELS, solution preservation, and population reallocation. For a MaOP with M objectives, assume the archive size is N and the size of each population is Ns.
First, we analyze the computational complexity of each operation in a single population. The adaptive sorting takes O(MNs^2) in the worst case for the combined population of size 2Ns if NDS is adopted; if BS is chosen as the sorting method, the M−1 bi-objective sorts likewise take O(MNs^2). Then, the ACF calculation takes O(Ns·logNs) time. The environmental selection takes O(Ns·logNs) for selecting solutions according to their front results and ACF values. For archive updating, the computational complexity of Line 21 of Algorithm 4 is O(NNs). Therefore, the complexity of the MPMO component with M populations is O(M(MNs^2 + Ns·logNs + Ns·logNs + NNs)). Usually N > Ns > M, so the worst-case complexity of the MPMO component can be simplified to O(M^2·Ns^2 + MNNs).
The ELS requires O(N) computations, and the archive updating in Line 24 of Algorithm 4 requires O(N^2) computations. The computational complexity of the solution preservation is O(M^2·N + M^3 + N + MN^2), which is O(MN^2). Finally, the population reallocation takes O(logN).
Considering all the above computations, the overall worst-case computational complexity of one generation of MPMO-BS is O(M²·Ns² + M·N·Ns + N + N² + M·N² + log N), which reduces to O(M²·Ns² + M·N²). In all of our simulations we use Ns ≈ N/M, so the worst-case complexity finally reduces to O(M·N²).

IV. EXPERIMENTAL STUDIES
In this section, we present the experimental study of the proposed MPMO-BS and compare it with seven state-of-the-art MOEAs: NSGA-III [25], SPEA/R [53], MaOEA/IGD [37], KnEA [54], NSGA-II/SDR [24], Mo4Ma [56], and VMEF [57]. The code for the first five compared MOEAs comes from the open platform PlatEMO [66], and the code of VMEF can be found on the homepage of its original authors. We then give an experimental analysis to show the effectiveness of each strategy in MPMO-BS.
The common parameter settings of all algorithms are given as follows. 1) Operators: Simulated binary crossover (SBX) [72] with a distribution index of 20 and polynomial mutation [73] with a distribution index of 20 are used to generate offspring. The crossover probability pc and mutation probability pm are set to 1.0 and 1/D, respectively. Other parameters of the compared algorithms are set according to their original papers for a fair comparison. Specifically, the rate of knee points (T) in KnEA on different problems is listed in Table S.I in the supplementary material. For MPMO-BS, the size of each population Ns is set to ⌈N/M⌉. Therefore, the size of each population in MPMO-BS is equal to 20, 28, and 16 when M = 8, 10, and 15, respectively. The number of reference vectors in MPMO-BS is set to N.
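For concreteness, the two variation operators can be sketched as follows (a minimal real-coded sketch assuming decision variables scaled to [0, 1]; full implementations such as those in PlatEMO also apply per-variable crossover probabilities and more careful boundary handling, which this sketch simplifies):

```python
import random

def sbx(x1, x2, eta=20.0, pc=1.0):
    """Simulated binary crossover on two real-coded parents in [0, 1]^D."""
    c1, c2 = list(x1), list(x2)
    if random.random() <= pc:
        for j in range(len(x1)):
            u = random.random()
            beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 \
                else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
            c1[j] = 0.5 * ((1 + beta) * x1[j] + (1 - beta) * x2[j])
            c2[j] = 0.5 * ((1 - beta) * x1[j] + (1 + beta) * x2[j])
            c1[j] = min(max(c1[j], 0.0), 1.0)  # clip to the box bounds
            c2[j] = min(max(c2[j], 0.0), 1.0)
    return c1, c2

def polynomial_mutation(x, eta=20.0, pm=None):
    """Polynomial mutation; pm defaults to 1/D as in the experiments."""
    D = len(x)
    pm = 1.0 / D if pm is None else pm
    y = list(x)
    for j in range(D):
        if random.random() < pm:
            u = random.random()
            delta = (2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5 \
                else 1 - (2 * (1 - u)) ** (1 / (eta + 1))
            y[j] = min(max(y[j] + delta, 0.0), 1.0)  # unit-box bounds
    return y
```

A distribution index (eta) of 20 keeps offspring close to their parents, which is the standard setting used here for both operators.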

B. Performance Metrics
IGD and HV are two widely used metrics for MOEAs since they consider both diversity and convergence when evaluating the obtained solutions. A smaller IGD value typically indicates better performance of a solution set; conversely, a smaller HV value indicates worse performance.
IGD: Let P* be a well-distributed reference point set sampled on the true PF. For an approximate solution set P obtained by the optimization algorithm, the IGD is defined as

IGD(P, P*) = (1 / |P*|) * Σ_{v ∈ P*} min_{x ∈ P} ed(v, F(x)),

where ed(v, F(x)) is the Euclidean distance between v and F(x).
Herein, the number of reference points in P * is set as 10000.
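The IGD computation itself is straightforward; a minimal sketch under the definition above (pure Python, with objective vectors as tuples):

```python
import math

def igd(ref_set, obj_set):
    """IGD(P, P*): average over all reference points v in P* of the
    Euclidean distance from v to the nearest obtained point F(x)."""
    def ed(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(ed(v, f) for f in obj_set) for v in ref_set) / len(ref_set)
```

A solution set that covers P* exactly yields IGD = 0, while a set collapsed onto part of the PF is penalized by every distant reference point, which is why IGD reflects both convergence and diversity.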
HV: Let z = (z1, …, zm)^T be a reference point that is dominated by all obtained solutions in the objective space. The HV metric of a solution set P is defined as

HV(P) = VOL( ∪_{x ∈ P} [f1(x), z1] × … × [fm(x), zm] ),

where VOL(·) is the Lebesgue measure. In our experiments, z is set to (1.1, …, 1.1)^T for all test instances. Note that each objective value of the obtained solutions is normalized to [0, 1] according to the nadir point of the true PF. In addition, Monte Carlo simulation is used in place of exact HV computation because of its computational complexity; each HV estimation uses 1,000,000 sampling points.
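The Monte Carlo HV estimation mentioned above can be sketched as follows (assuming minimization and objectives already normalized, as in the experimental setup; the sample count is a parameter, and the paper's experiments use 1,000,000):

```python
import random

def hv_monte_carlo(obj_set, z, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume of a (normalized,
    minimization) solution set with respect to reference point z: the
    fraction of points sampled in the box [0, z] that are dominated by
    at least one solution, scaled by the box volume."""
    rng = random.Random(seed)
    m = len(z)
    box_vol = 1.0
    for zi in z:
        box_vol *= zi
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(0, zi) for zi in z]
        # a sample counts if some solution dominates it componentwise
        if any(all(f[j] <= p[j] for j in range(m)) for f in obj_set):
            hits += 1
    return box_vol * hits / n_samples
```

The estimate's standard error shrinks as 1/sqrt(n_samples), which is why a large sample count is used when exact HV computation is too expensive in high dimensions.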

1) Results on DTLZ and WFG
Due to space limitations, only the statistical results in terms of IGD on DTLZ1-7 and WFG1-6 are given in Table II, where the best result is marked in boldface and the second-best result is italicized. The complete experimental results can be found in the supplementary material: the statistical results in terms of IGD and HV on the DTLZ and WFG problems are given in Tables S.II and S.III, respectively. For all test instances, Wilcoxon's rank-sum test [74] at a 5% significance level is conducted to assess the significance of the differences between MPMO-BS and the compared algorithms. The symbols '+', '-', and '=' indicate that the metric value of MPMO-BS is significantly better than, significantly worse than, and statistically similar to that of the corresponding compared algorithm, respectively.
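For reference, the two-sided rank-sum comparison of two algorithms' per-run metric values can be sketched with the normal approximation and midranks for ties (a self-contained sketch; library routines such as SciPy's scipy.stats.ranksums additionally provide exact small-sample handling that this sketch omits):

```python
import math

def ranksum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, midranks
    for ties), as used to compare the per-run metric values of two
    algorithms. Returns (z statistic, two-sided p value)."""
    combined = sorted((v, 0 if k < len(a) else 1)
                      for k, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                 # assign midranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        mid = (i + j) / 2 + 1                # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2              # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p value
    return z, p
```

A p value below 0.05 then yields the '+' or '-' symbol depending on which algorithm has the better mean metric value, and '=' otherwise.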
From the IGD results in Table II, MPMO-BS performs significantly better than the seven compared MOEAs on most instances. When considering the best and second-best results together, MPMO-BS achieves promising performance on 26 out of 39 instances, more than any compared algorithm. Moreover, the significance test results show that MPMO-BS is superior to NSGA-III, SPEA/R, MaOEA/IGD, KnEA, Mo4Ma, VMEF, and NSGA-II/SDR on 29, 31, 35, 24, 29, 31, and 26 instances, respectively.
For DTLZ1-3, MPMO-BS and NSGA-II/SDR perform better than the other compared algorithms. MPMO-BS outperforms NSGA-II/SDR on most DTLZ1-3 instances but underperforms it on 15-objective DTLZ3. This may be due to the multimodal characteristic of DTLZ3, which has many local optimal fronts: as different populations in MPMO-BS focus on different objectives, they may be drawn into different local optimal fronts, making it difficult for MPMO-BS to completely escape the local optima through the ELS. Nevertheless, MPMO-BS outperforms the other six MOEAs on DTLZ3. DTLZ4 is a nonuniform problem, which also challenges the diversity maintenance of MPMO-BS; on this problem, MPMO-BS only outperforms NSGA-II/SDR. DTLZ5 and DTLZ6 are dimension-reduction problems, and MPMO-BS obtains the best or second-best IGD results on these instances, whereas the other compared MOEAs perform poorly on both, showing that they cannot handle redundant information well in degenerate cases. DTLZ7 is a test problem with a disconnected PF, on which KnEA obtains the best IGD results; however, MPMO-BS still outperforms most of the compared algorithms.
WFG1 is designed to examine whether an algorithm can handle a problem with flat bias and a mixed PF shape. MPMO-BS and KnEA perform significantly better than the other six MOEAs on it. WFG2 has a disconnected PF and examines the ability to handle nonseparable variable dependencies. MPMO-BS does not perform best on this problem, possibly because some PF fragments lie in the middle of the objective space, making it difficult for MPMO-BS to choose a suitable population to converge to these fragments; the disconnectedness of the PF also makes it difficult for MPMO-BS to share information among populations. Nevertheless, MPMO-BS generally outperforms most of the compared algorithms on these two problems, with the second-best performance on the 8- and 10-objective WFG1 instances and the best performance on 8-objective WFG2. Through the statistical test results on both IGD and HV, MPMO-BS shows excellent performance on the DTLZ and WFG test problems.
To better understand MPMO-BS, the distribution of the solutions obtained by each algorithm with the median IGD value on 15-objective DTLZ1 is shown in Fig. 4, from which it can be seen that SPEA/R, KnEA, Mo4Ma, and VMEF cannot approximate the PF well. The inferior solutions of SPEA/R may result from falling into local optima. The main idea of KnEA is to use knee points to enhance the search in MaOPs, but DTLZ1 has a linear hyperplane PF with no knee points, so KnEA is ill-suited to DTLZ1. NSGA-III, MaOEA/IGD, and NSGA-II/SDR converge well to the true PF but perform poorly in terms of diversity. In addition, it is worth noting that NSGA-II/SDR approximates only some parts (especially the center) of the PF and is biased toward a few objectives, such as f2 and f5. This may be because the PF of DTLZ1 is a hyperplane, while the dominance area of SDR resembles a curve bending outward; such a dominance area has a greater probability of eliminating boundary solutions, concentrating the obtained solutions in the PF center. In contrast, the solutions obtained by MPMO-BS approximate the whole PF. Limited by space, the distribution of the obtained solutions with the median IGD value on 15-objective WFG3 is shown in Fig. S.1 in the supplementary material. Only the solutions obtained by MPMO-BS and Mo4Ma approximate the true PF well, although those by MPMO-BS show some deficiencies in diversity.

2) Results on Other Test Problems
To further validate the performance of MPMO-BS on problems with more complex PF shapes, comparison experiments are conducted on the MaF, DTLZ^-1, CDTLZ2, and SDTLZ1-2 problems. The significance test results in terms of IGD are summarized in Table III, and the complete statistical results in terms of IGD and HV are given in Tables S.IV and S.V in the supplementary material. From the results, we can see that MPMO-BS performs evidently better than NSGA-III, SPEA/R, and MaOEA/IGD and shows strong competitiveness against KnEA, Mo4Ma, VMEF, and NSGA-II/SDR. MaF1 and MaF4 are two inverted DTLZ variants. MPMO-BS does not perform best on these two problems but is still promising. The PF shapes of these problems are inverted, which means that a population tends to focus on one boundary point when optimizing only one objective. This poses a significant challenge for MPMO-BS in diversity maintenance on such problems. MaF2 is a DTLZ2 variant with greater convergence difficulty, and MPMO-BS performs better than all the others except KnEA on this problem. MaF3, MaF5, and CDTLZ2 are three problems with convex PF shapes. MPMO-BS achieves the best or second-best performance on all three, indicating that it is capable of handling problems with convex PFs.
Fig. S.2 in the supplementary material gives the distribution of the obtained solutions on the 15-objective MaF5 instance, where only MPMO-BS, Mo4Ma, and VMEF obtain solutions with good performance in both convergence and diversity. Similar to DTLZ5, MaF6 examines whether an algorithm can deal with a degenerated PF. From Tables S.IV and S.V, we can see that MPMO-BS performs worse than Mo4Ma and NSGA-II/SDR on these two metrics. Moreover, we further verify the performance of MPMO-BS on the DTLZ^-1 problems with inverted triangular PF shapes. The results in Tables S.IV and S.V show that although MPMO-BS may not perform particularly well on some instances (e.g., the DTLZ4^-1 instances), it is still competitive among all the compared algorithms. Specifically, MPMO-BS beats NSGA-III, SPEA/R, MaOEA/IGD, Mo4Ma, and VMEF on most DTLZ1-3^-1 instances. Overall, the performance of MPMO-BS on DTLZ^-1 is generally promising, although the advantage is not as significant as on DTLZ. The reasons may be twofold: 1) MPMO-BS uses reference vectors to maintain diversity, and problems with inverted PF shapes are very challenging for reference vector-based algorithms in diversity maintenance (more analysis can be found in [71]); and 2) DTLZ^-1 can be viewed as a DTLZ version with maximized objectives, so the gap between different objectives gradually widens with convergence, which makes it more difficult to share information among different populations and poses a challenge for the ELS to improve diversity. On these instances, the compared algorithms have poor diversity performance to varying degrees, whereas MPMO-BS is competitive in both convergence and diversity. In summary, MPMO-BS can be applied to MaOPs with different characteristics, and the statistical results of the comparison experiments show its high competitiveness and robustness.

D. Contribution of Archive
In this section, we discuss the contribution of the archive adopted in MPMO-BS. We design a variant of MPMO-BS (named noArch) and conduct comparison experiments between it and MPMO-BS. noArch replaces archive A with the set of populations P after environmental selection in each generation. In other words, all operations in Algorithm 4 related to A, e.g., the ELS in Line 23, the solution preservation in Line 26, and the population reallocation in Line 29, are performed on P instead.
Table S.VI in the supplementary material shows the mean and standard deviation of the IGD values given by noArch and MPMO-BS on the DTLZ and MaF test suites, where the best result is marked in boldface. MPMO-BS performs better than noArch on almost all instances, confirming that adopting the archive significantly improves the performance of MPMO-BS. The reason for this difference is that the archive preserves the best solutions found over all generations so far, which is equivalent to an elitism strategy. The diversity of the solution set obtained by MPMO-BS is also much better than that obtained by noArch, showing the role of the archive in promoting diversity.

E. Analysis of Each Strategy
In this section, the contribution of each strategy in MPMO-BS is examined. The adaptive sorting, BS, ACF, ELS, and population reallocation strategies are the core of MPMO-BS. Accordingly, we design six variants with different combinations of these strategies:
 1objS: adopts the one-objective sorting method, and a random selection strategy replaces the ACF strategy.
 noBS: adopts only the NDS, and a random selection strategy replaces the ACF strategy.
 noAS: adopts only the BS, i.e., the adaptive sorting strategy is abandoned; in addition, a random selection strategy replaces the ACF strategy.
 noACF: a random selection strategy replaces the ACF strategy in MPMO-BS.
 noELS: MPMO-BS without the ELS.
 noReA: MPMO-BS without the population reallocation strategy.
The variants are compared with MPMO-BS on the DTLZ and MaF test suites. The detailed IGD and HV values are presented in Tables S.VII and S.VIII in the supplementary material. In this paper, only the significance test results in terms of the IGD metric are summarized in Table IV, based on Wilcoxon's rank-sum test at a 5% significance level. From Table IV, it can be seen that MPMO-BS outperforms all its variants. Specifically, 1objS, noBS, noAS, noACF, noELS, and noReA are beaten by MPMO-BS on 17, 14, 17, 11, 18, and 16 of the 21 DTLZ instances, and on 14, 10, 12, 4, 11, and 12 of the 18 MaF instances, respectively. These comparison results show that the BS, adaptive sorting, ELS, and population reallocation strategies play significant roles in improving the performance of MPMO-BS. On the one hand, the comparison among 1objS, noBS, and noAS shows that noAS performs between 1objS and noBS, confirming our earlier positioning of BS as a sorting method between one-objective sorting and NDS. On the other hand, the comparison among noBS, noAS, and noACF shows that the ACF strategy does not make a large difference but slightly improves the performance of MPMO-BS.
Here, we choose the 15-objective DTLZ1 instance and compare the performance of these variants on it. Fig. S.8 in the supplementary material shows the distributions of the obtained solution sets with the median IGD values for MPMO-BS and its variants. In Fig. S.8, MPMO-BS is the best-performing algorithm in terms of both convergence and diversity. In contrast, noBS performs poorly in both convergence and diversity, whereas 1objS and noAS are deficient only in diversity. This further confirms the role of BS and one-objective sorting in convergence maintenance. The comparison between 1objS and noAS also shows that the solutions obtained by 1objS favor some objectives more severely, resulting in worse diversity performance. In addition, noACF performs better than both noBS and noAS but slightly worse than MPMO-BS. Solutions obtained by noELS and noReA have serious deficiencies in diversity, suggesting that the ELS and population reallocation strategies play important roles in diversity maintenance for MPMO-BS.

1) Analysis of Adaptive Sorting Strategy
BS is a strategy that accelerates convergence. The comparison between noBS and noAS shows that BS alone cannot greatly improve performance, since BS and NDS are complementary. The adaptive sorting strategy balances the two sorting methods and improves the performance of MPMO-BS by switching between BS and NDS; their cooperation brings the performance of MPMO-BS to its best.
To illustrate this, we plot the mean IGD curves of 1objS, noBS, noAS, and noACF on 10-objective DTLZ1 in Fig. 5(a). The IGD of 1objS is always the worst, since IGD comprehensively considers convergence and diversity, whereas the one-objective sorting method excels only in convergence and ignores diversity maintenance. In addition, the IGD values of the two variants with BS (noAS and noACF) drop faster than that of noBS at the early search stage. More specifically, noAS and noACF reduce the IGD value to approximately 0.14 by 50,000 fitness evaluations, while the IGD value of noBS is still above 0.16 at that point. This illustrates that BS accelerates convergence in the search process. Meanwhile, noBS gradually narrows the IGD gap with noAS and eventually surpasses it. However, noACF is almost always ahead in IGD, showing that NDS becomes more advantageous than BS as the population approximates the true PF. In other words, noACF, the variant adopting the adaptive sorting strategy, can choose the better sorting method appropriately to promote search capability. The curve comparison in Fig. 5(a) further confirms that the adaptive sorting strategy with BS is one of the keys to the high performance of MPMO-BS.

2) Analysis of ACF Strategy
ACF is an auxiliary strategy that further distinguishes the convergence status among solutions at the same sorting front to increase the selection pressure. To investigate the influence of ACF on the performance of MPMO-BS, we compare MPMO-BS with its variant noACF on 10-objective DTLZ6. Note that only the set of all populations P after environmental selection is taken as the output at the end of every generation; however, archive A is not removed, which differs from the variant noArch mentioned above. This is done to eliminate the influence of other strategies on the results as much as possible and to explain the role of ACF in MPMO-BS more clearly. Fig. 5(b) shows the mean IGD curves of the outputs of MPMO-BS and noACF. In Fig. 5(b), the IGD curves of both MPMO-BS and noACF decrease rapidly to below 1 within very few evaluations, but the curve of MPMO-BS drops faster, showing that MPMO-BS converges faster than noACF because of the ACF strategy. In addition, the IGD curve of noACF fluctuates considerably compared with that of MPMO-BS. This is because, in noACF, solutions are selected randomly rather than by their ACF values when the second condition of environmental selection is triggered, and this random selection may retain solutions with poor convergence. Through the IGD curve comparison, we can see that ACF promotes the performance of MPMO-BS, although its effect is slight.

F. Sensitivity Analysis of Parameter Settings

1) Sensitivity Analysis of ELS
ELS was first designed to help jump out of local optima, and it has also shown its effectiveness in diversity maintenance. According to our previous research experience in [75], the effectiveness of ELS is affected by the value of σ. In this subsection, we investigate the sensitivity of the parameter σ in ELS, varying it from 0.1 to 1. Because of space limitations, Fig. 6 only shows the IGD curves for different σ settings on DTLZ1 and WFG6 with different numbers of objectives, averaged over 20 independent runs. Note that a log100(·) transformation is applied to the IGD values on the WFG6 instances for easy visual comparison. The performance of MPMO-BS is heavily affected by the value of σ on DTLZ1, whereas it is only slightly affected on WFG6, indicating that the sensitivity of σ is not fixed across optimization problems. However, σ values that are too large or too small are inappropriate for ELS, since the IGD curves are concave. In Fig. 6(a), the optimal setting of σ grows from 0.2 to 0.5 as the number of objectives increases. In Fig. 6(b), there is no obvious correlation between the optimal setting of σ and the number of objectives; the optimal σ value is always approximately 0.4. More sensitivity results on other problems are given in Fig. S.9 in the supplementary material. From the comparisons in Fig. 6 and Fig. S.9, we conclude that σ in ELS can be set between 0.2 and 0.7, and a larger σ seems appropriate for problems with more objectives. In this paper, we set σ to 0.5.
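The ELS operator itself follows [75] and is not restated in this section; one common form, perturbing a single randomly chosen dimension of an elite solution with a Gaussian whose standard deviation σ is scaled by the variable range, can be sketched as follows (the single-dimension Gaussian form and the [0, 1] bounds are assumptions of this sketch, not details confirmed here):

```python
import random

def elite_learning(elite, sigma=0.5, lower=0.0, upper=1.0, rng=random):
    """Sketch of an elite learning strategy (ELS): mutate one randomly
    chosen dimension of an elite solution with Gaussian noise of
    standard deviation sigma, scaled by the variable range, and repair
    the result to the bounds."""
    child = list(elite)
    d = rng.randrange(len(child))                # one dimension to mutate
    child[d] += (upper - lower) * rng.gauss(0.0, sigma)
    child[d] = min(max(child[d], lower), upper)  # repair to the bounds
    return child
```

Larger σ produces coarser jumps, which is consistent with the observation above that σ values that are too large or too small degrade performance.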

2) Sensitivity Analysis of Threshold θ
The threshold θ is the trigger that determines the switch from BS to NDS in the adaptive sorting strategy. In this subsection, we investigate the sensitivity of θ, varying it from 0.6 to 0.9. Fig. 7 shows the IGD curves for different θ settings on DTLZ1 and WFG6 with different numbers of objectives, averaged over 20 independent runs. Fig. 7 shows that the performance of MPMO-BS is not particularly sensitive to the value of θ on these problems: all IGD curves are concave with little curvature, and a suitable setting range for θ is between 0.7 and 0.85. Similar behavior can be seen on other problems such as DTLZ4 and WFG8 (Fig. S.10 in the supplementary material). However, the performance of MPMO-BS is very sensitive to θ on some problems, such as MaF6 shown in Fig. S.10(c), where the suitable setting range for θ is between 0.8 and 0.85. Considering the above analysis, we set θ to 0.8 in this paper.

G. Experiments on Real-world Problems
In this subsection, we investigate the performance of MPMO-BS on real-world MaOPs. Two application problems derived from the real world, water resource planning and car cab design, are adopted herein; both were proposed in [76]. Some properties of these two problems and the parameter settings for all compared algorithms are given in Table S.IX, and the statistical results in terms of HV are given in Table V. Note that since the true PF is unknown, before the HV calculation, the nondominated solutions obtained by all algorithms over 30 independent runs are combined into one set, and the final solutions of each algorithm are normalized by the ideal and nadir points of this set. We first set the reference point z to (1.1, …, 1.1)^T for the HV calculation, as in the above experiments. To make the experimental results more comprehensive, we also use the reference point setting method proposed in [77], according to which z is set to (1.25, …, 1.25)^T and (4/3, …, 4/3)^T on water resource planning and car cab design, respectively. The IGD metric is not adopted since the true PFs of these two real-world problems are unknown.
From Table V, we can see that MPMO-BS obtains the best and third-best HV results on the water resource planning and car cab design problems, respectively. These results show that our proposed algorithm has the potential to solve real-world application problems.

V. CONCLUSION

This paper focuses on the MPMO framework and proposes a multi-population coevolutionary MOEA for many-objective optimization. The proposed MPMO-BS algorithm uses different sorting methods to boost population convergence depending on the convergence status: the BS method is employed to accelerate convergence at the early search stage, while the algorithm switches to the NDS method at the later search stage to treat all objectives equally. For solutions at the same sorting front, the ACF strategy further distinguishes their convergence status. In environmental selection, well-converged solutions are preserved according to two conditions: 1) belonging to a lower sorting front and 2) having a larger ACF value.
In addition, with the assistance of the MPMO framework and the ELS in diversity maintenance, MPMO-BS can retain solutions with good convergence and diversity performance. The performance of MPMO-BS is further improved by incorporating a population reallocation strategy, which strengthens communication and information sharing among populations.
To assess the performance of MPMO-BS, experiments and comparisons have been conducted on 29 test problems with 8, 10, and 15 objectives. The results confirm the effectiveness and robustness of MPMO-BS in solving MaOPs. The results on the water resource planning and car cab design problems also show its effectiveness in solving real-world application problems.

Fig. 1. Curves of the mean number of sorting fronts in the initial population versus M.

This article has been accepted for publication in IEEE Transactions on Evolutionary Computation. This is the author's version, which has not been fully edited, and content may change prior to final publication. Citation information: DOI 10.1109/TEVC.2022.3212058. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Fig. S.3 in the supplementary material gives the distribution of the obtained solutions on the 15-objective MaF6 instance. As seen there, many solutions obtained by VMEF and NSGA-II/SDR still do not converge to the PF, while the solutions obtained by MPMO-BS have converged well but with poor diversity. SDTLZ1-2 are two problems with scaled PF shapes, and MPMO-BS obtains promising performance on most instances of these two problems. Fig. S.4 in the supplementary material gives the distribution of the obtained solutions on the 15-objective SDTLZ1 instance.
Fig. S.5 and S.6 in the supplementary material give the distributions of the obtained solutions on the 8-objective DTLZ1^-1 and DTLZ2^-1 instances.

Fig. 4. Distribution of the obtained solutions with the median IGD value for the eight algorithms ((a) MPMO-BS, (b) NSGA-III, (c) SPEA/R, (d) MaOEA/IGD, (e) KnEA, (f) Mo4Ma, (g) VMEF, (h) NSGA-II/SDR) on DTLZ1 with 15 objectives.
Fig. S.7 in the supplementary material plots the solution sets obtained by noArch and MPMO-BS on 10-objective DTLZ2.

Fig. 6. IGD values of MPMO-BS with different settings for σ, averaged over 20 independent runs.

Fig. 7. IGD values of MPMO-BS with different settings for θ, averaged over 20 independent runs.

Fig. 8 plots the distribution of the solutions obtained by MPMO-BS with the median HV value on these problems when z = (1.1, …, 1.1)^T; those obtained by the other algorithms are given in Fig. S.11 and S.12 in the supplementary material.
Fig. 8. Distribution of the obtained solutions by MPMO-BS with the median HV value (z = (1.1, …, 1.1)^T) on two real-world application problems: (a) water resource planning and (b) car cab design.

TABLE II. MEAN AND STANDARD DEVIATION OF THE IGD VALUES OBTAINED BY THE EIGHT ALGORITHMS ON THE DTLZ1-7 AND WFG1-6 PROBLEMS

TABLE III. SIGNIFICANCE TEST BETWEEN MPMO-BS AND THE OTHER SEVEN MOEAS IN TERMS OF IGD ON THE MAF, DTLZ^-1, CDTLZ2, AND SDTLZ1-2 PROBLEMS

TABLE IV. SIGNIFICANCE TEST BETWEEN MPMO-BS AND ITS VARIANTS IN TERMS OF IGD ON THE DTLZ AND MAF PROBLEMS