Many-Objective Job-Shop Scheduling: A Multiple Populations for Multiple Objectives-Based Genetic Algorithm Approach

The job-shop scheduling problem (JSSP) is a challenging scheduling and optimization problem in industry and engineering, as it relates to the work efficiency and operational costs of factories. The completion time of all jobs is the most commonly considered optimization objective in the existing work. However, factories focus on both time and cost objectives, including completion time, total tardiness, advance time, production cost, and machine loss. Therefore, this article proposes, for the first time, a many-objective JSSP that considers all five of these objectives, making the model more practical in reflecting the various demands of factories. To optimize these five objectives simultaneously, a novel multiple populations for multiple objectives (MPMO) framework-based genetic algorithm (GA) approach, called MPMOGA, is proposed. First, MPMOGA employs five populations to optimize the five objectives, respectively. Second, to avoid each population focusing only on its corresponding single objective, an archive sharing technique (AST) is proposed to store the elite solutions collected from the five populations so that the populations can obtain optimization information about the other objectives from the archive. This way, MPMOGA can approximate different parts of the entire Pareto front (PF). Third, an archive update strategy (AUS) is proposed to further improve the quality of the solutions in the archive. Test instances from two widely used test sets are adopted to evaluate the performance of MPMOGA. The experimental results show that MPMOGA outperforms the compared state-of-the-art algorithms on most of the test instances.


I. INTRODUCTION
JOB-SHOP scheduling problems (JSSPs) exist in various industrial and engineering management fields, such as printed circuit board production [1], garment manufacturing supply chains [2], and cloud computing [3]. In JSSPs, there is a set of jobs with the same number of procedures to be processed, and an industrial factory should determine the process orders of the procedures of all jobs on the available machines to achieve certain objectives. Take Fig. 1 as an example to illustrate the details of JSSPs. There are four jobs (i.e., J1, J2, J3, and J4) and two machines (i.e., M1 and M2). Each job contains two procedures and each procedure is denoted by Oji, where j represents the job it belongs to and i represents the process ranking. Note that the procedures of a job should be processed according to the ranking; for example, O12 must be processed after O11 is finished. Fig. 1(a) shows a procedure-to-machine allocation scheme that is static and fixed because each procedure, depending on its function, must be processed on a specific machine. The quality of the schedules can be evaluated by completion time, total tardiness, and some other objectives. In summary, JSSPs require algorithms to determine the process order of procedures on each machine for optimizing certain objectives under various constraints.
In the literature, JSSPs have been extensively studied and the related research can be mainly classified into the following three categories.
The first category is the single-objective JSSP model. Most of the single-objective JSSP models set the completion time as the optimization objective. Inspired by the connections between scheduling problems and mixed graph colorings [4], Al-Anzi et al. [5] converted the JSSP into a coloring problem in graph theory and solved the completion-time-minimization JSSP by searching for the optimal coloring of a special mixed graph. Tabu search (TS) has shown its feasibility in solving single-objective JSSPs because of its strong local search ability [6]. However, the lack of diverse solutions weakens the performance of TS. Therefore, a hybrid algorithm based on particle swarm optimization and TS was proposed to enhance the global search ability and further reduce the completion time [7]. Besides, Peng et al. [8] combined TS with a path relinking algorithm. Recently, Viana et al. [9] proposed a genetic algorithm (GA) with perturbation functions to avoid premature convergence.
The second category is the multiobjective JSSP model. An energy-efficient model that optimizes both the completion time and the total energy consumption is adopted in [10] and [11]. To solve the energy-efficient model, a green GA (GGA) [10] and a multiobjective artificial bee colony algorithm [11] were proposed. Specifically, in GGA, each solution has a strength value to measure its performance on the two objectives. Nguyen et al. [12] built a three-objective dynamic JSSP model that considers completion time, total weighted tardiness, and mean absolute percentage error. Note that the mean absolute percentage error objective is related to the dynamic feature and measures the error in flowtime estimation. A diversified multiobjective cooperative coevolution algorithm was proposed to solve this three-objective dynamic JSSP model and shows better performance than some classic multiobjective evolutionary algorithms, including NSGA-II [13] and SPEA2 [14]. Meng et al. [15] constructed a model that aims at minimizing completion time, average flow time, and machine idle time simultaneously, and designed a dual-population particle swarm optimization algorithm for the model.
The third category is the many-objective JSSP (MaJSSP) model with more than three objectives. Specifically, a four-objective model (i.e., mean flow time, maximum flow time, mean weighted tardiness, and maximum weighted tardiness) is constructed in [16]-[18]. To solve this model, a genetic programming-based nondominated sorting GA approach [16], a reference point adaptation method [17], and a fitness-based selection technique [18] were proposed.
As seen from the literature, most of the existing work considers only time-related objectives, regardless of whether it concerns single-objective, multiobjective, or even many-objective JSSP models. For example, in the multi/many-objective JSSP models, the flow time and tardiness are both calculated from the completion time. Moreover, the mean and maximum flow time may not strongly conflict, and the same applies to the mean and maximum weighted tardiness. However, in industry and engineering management, cost is also a major concern besides time.
To alleviate the limitations of the existing models, we additionally consider the production cost and machine loss, which can better meet the demands of factories. Hence, an MaJSSP model is proposed, which contains five optimization objectives: 1) completion time; 2) total tardiness; 3) advance time; 4) production cost; and 5) machine loss. Considering these five objectives to construct the MaJSSP is an innovation in JSSP model research, which makes the model more practical in industry and engineering management. Since time and cost are conflicting objectives in the MaJSSP [19]-[21], the algorithm design should deal with the difficulty of balancing the optimization of time and cost.
To efficiently solve the proposed MaJSSP, the multiple populations for multiple objectives (MPMO) framework [22], which has shown excellent performance on both multiobjective optimization [23], [24] and many-objective optimization [25], is adopted. Evolutionary computation algorithms are promising optimizers for various optimization problems [26]-[29]. In particular, the GA has shown powerful efficiency in solving discrete optimization problems [30], [31]. Therefore, the GA optimizer is embedded in the MPMO framework, generating a new MPMOGA approach to solve the MaJSSP. Under the MPMO framework, MPMOGA employs five populations to optimize the five objectives, respectively. During the evolutionary process, an archive is constructed to store the elite solutions found by the five populations. An archive sharing technique (AST) is designed to prevent the populations from focusing only on their own optimization objectives and to achieve the coevolution of different populations. Moreover, an archive update strategy (AUS) is developed to enhance the quality of the solutions in the archive.
The contributions of this article are as follows.
1) To the best of our knowledge, this article is the first to comprehensively consider the completion time, total tardiness, advance time, production cost, and machine loss in constructing an MaJSSP model. The proposed MaJSSP model considers not only the time-related objectives but also the production cost and machine loss. Thus, our MaJSSP model can reflect the various demands of factories in a more practical way.
2) To effectively solve the MaJSSP, the MPMOGA is proposed, which is based on the MPMO framework. In addition, an AST and an AUS are incorporated into MPMOGA to enhance search efficiency.
3) The experiments are conducted on two widely used test sets (i.e., the FT test set [32] and the LA test set [33]). Compared to four state-of-the-art algorithms, including three many-objective evolutionary algorithms (MaOEAs) and a JSSP algorithm, our proposed MPMOGA approach not only finds more diverse solutions distributed in the objective space but also obtains better-converged solutions on each objective.

The remainder of this article is organized as follows. Section II introduces the basic concepts of the many-objective optimization problem (MaOP) and the proposed MaJSSP model. Then, detailed descriptions of the proposed MPMOGA approach are given in Section III. In Section IV, the experimental results are presented and analyzed, and the advantages of MPMOGA in solving the MaJSSP are also discussed. Finally, we draw a conclusion in Section V.

A. MaOP
In recent years, MaOPs have attracted wide attention [34]-[36]. In general, a minimization MaOP can be formulated as

min F(x) = (f_1(x), f_2(x), ..., f_K(x))    (1)

where x is a decision vector that represents a solution of the MaOP, F(x) represents the fitness value of x on each objective, and K denotes the number of optimization objectives, which is larger than three in MaOPs. Usually, one solution cannot be optimal on every objective due to the conflict among the objectives [37]. Therefore, the Pareto dominance rule is used to compare two solutions in MaOPs. As shown in (2), the solution x_1 dominates (i.e., is better than) the solution x_2 if x_1 performs better than x_2 on at least one objective and does not perform worse than x_2 on any objective

x_1 ≺ x_2 ⟺ ∀k ∈ {1, ..., K}: f_k(x_1) ≤ f_k(x_2) and ∃k ∈ {1, ..., K}: f_k(x_1) < f_k(x_2).    (2)

The solutions that are not dominated by any other solution are called nondominated solutions. The Pareto set (PS) stores all the nondominated solutions, and the Pareto front (PF) stores the corresponding fitness of the nondominated solutions in the PS.
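The dominance rule in (2) can be sketched in a few lines of Python (a minimal illustration for the minimization case; the function name is ours, not from the paper):

```python
def dominates(f1, f2):
    """Pareto dominance (minimization): f1 dominates f2 if it is no worse
    on every objective and strictly better on at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

For example, (3, 5) dominates (4, 5), while (3, 6) and (4, 5) are mutually nondominated.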

B. MaJSSP Model
In MaJSSP, the number of jobs is denoted by J and each job contains M procedures of different types.The M procedures of a job should be processed on M respective machines, and the processing time of each procedure is predefined and fixed.The task of MaJSSP is to determine the process order of procedures on each machine while satisfying the following constraints.
Constraint 1: All jobs are released at time 0.
Constraint 2: One machine can process only one procedure at a time.
Constraint 3: No interruption is allowed during the processing of a machine. That is, a machine cannot process another procedure until it finishes the current one.
Constraint 4: The procedures of a job should be processed sequentially according to a predetermined ranking.
Constraint 5: Each procedure can only be processed once.
The MaJSSP considers completion time, total tardiness, advance time, production cost, and machine loss as the five optimization objectives, which are formulated as follows:

min F(x) = (f_1(x), f_2(x), f_3(x), f_4(x), f_5(x))    (3)

f_1 = max_{j ∈ {1,...,J}} C_j    (4)

f_2 = Σ_{j=1}^{J} max(C_j − D_j, 0)    (5)

f_3 = Σ_{j=1}^{J} max(D_j − C_j, 0)    (6)

f_4 = Σ_{m=1}^{M} (work_time_m × wpc + sleep_time_m × spc)    (7)

f_5 = Σ_{m=1}^{M} count_m    (8)

where f_1 represents the maximum completion time of all jobs and C_j represents the completion time of job j. f_2 represents the total tardiness and D_j denotes the due date of job j. If C_j is larger than D_j, max(C_j − D_j, 0) = C_j − D_j is viewed as the penalty, yielding a larger f_2. Otherwise, job j is completed before the due date and max(C_j − D_j, 0) equals 0. f_3 represents the total advance time of the jobs that finish before the due date.
To save energy, machines switch to the sleeping mode when idle, and when a new procedure arrives, they turn back to the working mode. f_4 calculates the production cost generated by each machine during the working and sleeping modes, where wpc and spc represent the unit production costs of the working and sleeping modes, respectively. f_5 measures the machine loss. count_m counts the number of times that machine m changes from the sleeping mode to the working mode; a larger count_m causes more machine loss on m. It is worth mentioning that the machines are in the shutdown mode at the beginning, that is, the count does not increase when a machine turns to the working mode for the first time. The previous work mainly considers time-related objectives, but cost is also a major concern since it directly influences the profit of companies. Therefore, to better meet the requirements of factories, our proposed MaJSSP model considers both time-related and cost-related objectives. Specifically, f_1, f_2, and f_3 describe the work efficiency of industrial factories from the aspect of time, including the completion time, total tardiness, and advance time. f_4 and f_5 estimate the operational costs of industrial factories in terms of production cost and maintenance cost. Note that higher machine loss induces higher maintenance costs. Obviously, smaller values of these five objectives are preferred. In this article, we develop an MaOEA, called MPMOGA, to efficiently solve our proposed MaJSSP.
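As an illustration of (4)-(8), the following Python sketch computes the five objective values from per-job completion times and per-machine statistics (all names are ours; how work_time, sleep_time, and count are accumulated while decoding a schedule is omitted here):

```python
def majssp_objectives(C, D, work_time, sleep_time, count, wpc=4.0, spc=2.0):
    """Compute the five MaJSSP objectives from per-job completion times C,
    due dates D, and per-machine working/sleeping statistics."""
    f1 = max(C)                                    # (4) completion time (makespan)
    f2 = sum(max(c - d, 0) for c, d in zip(C, D))  # (5) total tardiness
    f3 = sum(max(d - c, 0) for c, d in zip(C, D))  # (6) total advance time
    f4 = sum(w * wpc + s * spc                     # (7) production cost
             for w, s in zip(work_time, sleep_time))
    f5 = sum(count)                                # (8) machine loss (wake-up count)
    return (f1, f2, f3, f4, f5)
```

For instance, with C = [8, 10], D = [9, 9], work_time = [5, 6], sleep_time = [2, 0], and count = [1, 0], the objective vector is (10, 1, 1, 48.0, 1).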

III. PROPOSED ALGORITHM
To solve the MaJSSP, we propose an MPMOGA approach that incorporates the MPMO framework with the GA. The MPMO framework helps the algorithm optimize each objective sufficiently, which has been validated in [22]-[25]. In MPMOGA, an archive is constructed to store the elite solutions found during the evolutionary process. Besides, the AST is proposed in the crossover operation to achieve coevolution among all the populations. To further improve the quality of the elite solutions in the archive, the AUS is developed. The detailed procedures of MPMOGA are described as follows.

A. Solution Encoding
MPMOGA adopts an operation-based encoding method [38] that uses a sequence of J × M dimensions to represent a schedule. The variable j ∈ [1, J] identifies the J different jobs, and each variable j appears M times in a solution to denote the M procedures of job j. Fig. 2 further illustrates the operation-based encoding method through an example.
Given an instance with J = 2, M = 3, job 1 = [(1, 3), (2, 2), (3, 2)], and job 2 = [(1, 2), (3, 1), (2, 4)], each job has three procedures and each procedure is labeled with (m, pt), which means that this procedure must be processed on machine m for pt units of processing time. For example, in job 1, the first procedure (i.e., O11) needs to be processed on machine 1 with a processing time of 3, the second procedure (i.e., O12) on machine 2 with a processing time of 2, and the third procedure (i.e., O13) on machine 3 with a processing time of 2.
A feasible schedule x in this instance can be encoded as [1, 2, 2, 1, 2, 1] to represent a process order of the procedures on each machine, as shown in Fig. 2(a). The genes encoded with 1 represent the procedures of job 1, while the genes encoded with 2 represent the procedures of job 2. Since 1 appears for the first time in the first dimension of x, it denotes the first procedure of job 1 (i.e., O11). In the fourth dimension, 1 appears for the second time, which denotes the second procedure of job 1 (i.e., O12). Accordingly, the six dimensions in x can be decoded into a complete process order of the procedures. Based on the process order and processing time, the Gantt chart of x is obtained, as shown in Fig. 2(b).
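The decoding described above can be sketched as a simple list-scheduling simulation in Python (a hypothetical helper, not the authors' implementation): each gene is mapped to the next unprocessed procedure of its job, which starts as soon as both its job and its machine are free.

```python
def decode(x, jobs):
    """Decode an operation-based encoding x into a schedule.

    jobs[j-1] is a list of (machine, processing_time) tuples in procedure
    order; jobs and machines are numbered from 1 in the encoding.
    Returns a list of (job, procedure, machine, start, end) tuples.
    """
    n_jobs = len(jobs)
    op_index = {j: 0 for j in range(1, n_jobs + 1)}   # next procedure of each job
    job_ready = {j: 0 for j in range(1, n_jobs + 1)}  # earliest start time per job
    machine_ready = {}                                # earliest start time per machine
    schedule = []
    for j in x:
        m, pt = jobs[j - 1][op_index[j]]
        start = max(job_ready[j], machine_ready.get(m, 0))
        end = start + pt
        schedule.append((j, op_index[j] + 1, m, start, end))
        job_ready[j] = end
        machine_ready[m] = end
        op_index[j] += 1
    return schedule
```

For the Fig. 2 instance, decode([1, 2, 2, 1, 2, 1], jobs) yields completion times of 8 for job 1 and 10 for job 2 under this greedy interpretation, giving a makespan of 10.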

B. MPMO Framework
Under the MPMO framework, five populations are employed to optimize the five objectives, respectively. The performance of a solution in each population is evaluated by its corresponding optimization objective rather than by the Pareto dominance rule, as the condition of Pareto dominance is hard to satisfy in MaOPs and may thus slow down the evolutionary process [39]. Note that the GA-based operators (i.e., the crossover operator and the mutation operator) are adopted as the optimizer in each population to generate offspring, since the GA is suitable for solving discrete optimization problems. More details of the GA-based operators are given in Section III-C. As the MPMO framework conducts a deep search on each objective, the AST is proposed to avoid each population fully focusing on one objective and to lead the coevolution among the different populations, as described in detail in Section III-D.

C. Crossover and Mutation Operators
The precedence operation crossover (POX) proposed in [40] can retain the good characteristics of the parents and always generates legal offspring in solving JSSPs [41]. Hence, the POX is adopted in the MPMOGA as the genetic crossover operator. The procedure of POX is as follows.
Step 1: The job set is divided into two nonempty subsets JS 1 and JS 2 randomly.
Step 2: The genes of parent p 1 that belong to JS 1 are copied to offspring c 1 with the same positions, and the genes of p 2 that belong to JS 1 are copied to offspring c 2 also with the same positions.
Step 3: The genes of parent p1 that belong to JS2 are copied to offspring c2 in the same order, and the genes of p2 that belong to JS2 are copied to offspring c1 also in the same order.

Take Fig. 3 as an example. Given J = 3, M = 3, parent p1 = [2, 1, 3, 3, 2, 2, 3, 1, 1], and parent p2 = [1, 2, 1, 3, 2, 3, 2, 3, 1], suppose the two nonempty subsets are randomly generated as JS1 = {2} and JS2 = {1, 3}. The genes of p1 that are marked in red (i.e., the 1st, 5th, and 6th dimensions) belong to JS1, so they are copied to c1 and their locations remain the same as in p1. Similarly, the genes of p2 that are marked in blue (i.e., the 2nd, 5th, and 7th dimensions) are copied to c2 with the same positions. Afterward, the remaining genes in p1 that belong to JS2 are copied to c2 in the order of [1, 3, 3, 3, 1, 1], while the genes in p2 that belong to JS2 are filled into c1 in the order of [1, 1, 3, 3, 3, 1]. Finally, the two offspring c1 and c2 are generated.

The insertion mutation (IM) operator [42] is adopted in the MPMOGA. First, two dimensions a and b (a ≤ b) are randomly selected from [1, J × M]. Then, the genes from dimension a + 1 to dimension b are all shifted one place to the left, and the gene in dimension a is moved to dimension b. Note that no gene will change if a equals b. Fig. 4 shows an example of the IM operator, where a = 4 and b = 8. Specifically, the genes marked in red (i.e., the 5th, 6th, 7th, and 8th dimensions) of p are all shifted one place to the left, and the gene marked in blue is moved to location b (i.e., the 8th dimension). Then, a mutated solution c is generated.
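The two operators can be sketched as follows (a minimal Python version, with names of our choosing): for POX, each offspring keeps the JS1 genes of one parent in place and fills the remaining positions with the other parent's JS2 genes in order.

```python
def pox(p1, p2, js1):
    """Precedence operation crossover (POX): genes whose job is in js1 keep
    their positions from one parent; the remaining positions are filled with
    the other parent's genes (jobs not in js1) in their original order."""
    def make_child(keeper, filler):
        child = [g if g in js1 else None for g in keeper]
        rest = iter(g for g in filler if g not in js1)
        return [g if g is not None else next(rest) for g in child]
    return make_child(p1, p2), make_child(p2, p1)

def insertion_mutation(p, a, b):
    """IM operator (1-indexed, a <= b): shift genes a+1..b one place to the
    left and move the gene at position a to position b."""
    c = p[:]
    gene = c.pop(a - 1)
    c.insert(b - 1, gene)
    return c
```

Applied to the Fig. 3 example with JS1 = {2}, pox returns c1 = [2, 1, 1, 3, 2, 2, 3, 3, 1] and c2 = [1, 2, 3, 3, 2, 3, 2, 1, 1], consistent with the fill orders stated above.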

D. AST
The archive, denoted by A, is constructed in MPMOGA to store the elite solutions found during the evolutionary process. Note that A is initialized with all the randomly generated solutions in the five populations, without duplication. After that, the AST is developed to exchange information among the populations and to assist the crossover operator, which helps achieve efficient population coevolution.
The AST is designed to assist the crossover operator, that is, the POX in Algorithm 1. Specifically, for the kth (k ∈ [1, 5]) population, each solution in population k is selected as p1 and a solution in A is randomly chosen as p2 (lines 4-6). In each crossover operation, a random value r in [0, 1] is generated. If r is less than or equal to the crossover probability Prc, the POX operator is conducted on p1 and p2 to generate offspring c1 and c2, and the one with the best performance on the kth objective among p1, c1, and c2 is preserved into the new population (lines 8-11). Otherwise, the better one between p1 and p2 is preserved (lines 12-14). Therefore, the AST in MPMOGA not only retains the good characteristics on the corresponding optimization objective but also effectively utilizes the information on the other four objectives.
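This selection loop can be summarized by the following Python sketch (names and the injected crossover function are ours; fit_k returns a solution's value on the kth objective, smaller being better):

```python
import random

def ast_crossover(pop_k, archive, fit_k, crossover, pr_c, rng=random):
    """Sketch of the AST-assisted crossover (Algorithm 1) for population k.

    crossover(p1, p2) returns two offspring (e.g., POX); each parent p1 from
    the population is paired with a randomly chosen archive solution p2."""
    new_pop = []
    for p1 in pop_k:
        p2 = rng.choice(archive)                  # share information via the archive
        if rng.random() <= pr_c:
            c1, c2 = crossover(p1, p2)
            best = min((p1, c1, c2), key=fit_k)   # best of parent and offspring on k
        else:
            best = min((p1, p2), key=fit_k)       # otherwise keep the better parent
        new_pop.append(best)
    return new_pop
```

Injecting the crossover and fitness functions keeps the sketch testable with deterministic stand-ins for the random choices.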

E. AUS
In MPMOGA, the AUS is proposed to update A according to the newly generated populations, so as to generate solutions with better performance. The pseudocode of the AUS is shown in Algorithm 2. As the number of nondominated solutions increases during the evolution, it is recommended to define a threshold for the archive's size [43]. Therefore, in MPMOGA, the maximal size of the archive is set to NA. In each generation, all solutions in the current five populations are added to an empty set S (lines 2-4), all elite solutions in A are also added to S (line 5), and each elite solution in A is perturbed and then added to S (lines 6-10). The perturbation operation on A is implemented based on the IM operator introduced in Section III-C. Based on this local perturbation, the AUS mainly improves the convergence of the elite solutions, thus enhancing the exploitation ability of the MPMOGA. Since the solutions gathered from the five populations and the archive may have duplications, a duplication elimination operation is performed on S to obtain S′; this not only saves storage but also helps preserve more diverse solutions in the archive. If the size of S′ is less than or equal to the predefined size NA, all the solutions in S′ are copied to A; otherwise, the nondominated sorting approach and diversity preservation method in [13] are executed on S′ to select NA elite solutions into A (lines 12-17). The AUS can enhance the quality of the solutions in the archive, so as to help the AST better lead the coevolutionary process among the populations. Besides, a detailed experimental analysis of the AUS is presented in Section IV-D.
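A simplified sketch of the AUS is given below. It is not the authors' implementation: the final truncation uses only a plain nondominated-first ordering as a stand-in for the full nondominated sorting and diversity preservation of [13], and perturb(x) stands for the IM-based local perturbation.

```python
def aus_update(populations, archive, perturb, na, fitness):
    """Sketch of the AUS (Algorithm 2): gather populations, elites, and
    perturbed elites; deduplicate; truncate to at most na solutions,
    preferring nondominated ones (minimization)."""
    def dominates(u, v):
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

    s = [x for pop in populations for x in pop]         # current five populations
    s += archive                                        # current elite solutions
    s += [perturb(x) for x in archive]                  # locally perturbed elites
    s_prime = [list(x) for x in {tuple(x) for x in s}]  # duplication elimination
    if len(s_prime) <= na:
        return s_prime
    fits = {tuple(x): fitness(x) for x in s_prime}
    nondom = [x for x in s_prime
              if not any(dominates(fits[tuple(y)], fits[tuple(x)]) for y in s_prime)]
    ranked = nondom + [x for x in s_prime if x not in nondom]
    return ranked[:na]
```

Here the dominated solutions are dropped first; a faithful implementation would additionally break ties within a front by the crowding-distance-style diversity measure of [13].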

F. Complete Procedures of MPMOGA
The mechanism of our proposed MPMOGA is shown in Fig. 5. Initially, five populations are initialized to optimize the five optimization objectives, respectively. In each generation, first, the current population Popk (k ∈ [1, 5]) cooperates with A to implement POX for generating crossover offspring; the one with the best performance on the kth objective is preserved to form the crossover population Pop′k. Second, each solution pi in Pop′k may generate a mutated solution ci with mutation probability Prm based on the IM operator; between pi and ci, the solution with the smaller fitness on the corresponding optimization objective is preserved into the mutated population Pop″k. Third, Popk is updated by Pop″k. Fourth, the AUS is executed based on the newly updated populations.
In summary, the MPMO framework helps each population optimize its corresponding objective sufficiently. To avoid each population concentrating only on its corresponding optimization objective and performing poorly on the others, the AST is developed to let the populations share information through the archive. In addition, the AUS aims to improve the quality of the elite solutions, so as to better lead the evolution of all the populations. Therefore, the MPMOGA mechanism can approximate the true PF of the MaJSSP in a cooperative coevolutionary way.
The flowchart of MPMOGA is shown in Fig. 6. First, five populations of equal size are randomly initialized by the operation-based encoding method. Then A is initialized with all the solutions in the five populations. During the iteration process, the five populations focus on optimizing the five objectives, respectively. The POX-based AST and the IM operator are executed to update each population, and then A is updated by the AUS. The evolutionary process is repeated until the termination condition is met. The elite solutions in A are the final output of MPMOGA.

G. Computational Complexity of One Generation of MPMOGA
Let NP be the population size (i.e., the total number of solutions in the five populations), D the dimension of each solution, and NA the maximum size of the archive. Note that NA is set equal to NP in MPMOGA. The computational complexity of one generation of MPMOGA is determined by the genetic operators that generate well-performed offspring and the AUS that enhances the quality of the elite solutions.
When generating the offspring, the crossover operation, which generates NP crossover offspring with D dimensions, requires O(NP × D) computations. To select the solutions with the best performance, the newly generated solutions must be evaluated, which consumes O(NP) computations. Besides, the complexity of the mutation operation is O(NP × D) and the complexity of evaluating the mutated solutions is O(NP). Therefore, the overall computational complexity of generating offspring by the POX-based AST and the IM operator is O(2NP × D + 2NP).
In the AUS, first, the local perturbation and solution evaluation require O(NA × D + NA) computations. Second, the duplication elimination operation requires O(|S|^2) computations due to the pairwise comparisons. Note that the maximum size of S is NP + 2NA and NA equals NP; thus, O(|S|^2) reduces to O(NP^2). Third, the complexity of the nondominated sorting approach and the diversity preservation method is O(NP^2) according to [13]. Hence, the overall computational complexity of the AUS is O(NA × D + NA + 2NP^2).
To sum up, the overall complexity of MPMOGA in the genetic operators and the AUS is O(3NP × D + 3NP + 2NP^2), where NA is replaced by NP. After simplification, the computational complexity of one generation of MPMOGA is O(NP^2 + NP × D).

A. Experimental Design
Two widely used test sets (i.e., the FT test set [32] and the LA test set [33]) are adopted in the experiments. Specifically, three typical test instances of the FT test set (i.e., FT06, FT10, and FT20) and all 40 test instances of the LA test set (i.e., LA01-LA40) are selected to evaluate the algorithms, as their dimension sizes vary from 36 (i.e., J = 6 and M = 6) to 300 (i.e., J = 30 and M = 10). According to [16], the due date D_j of job j is calculated as

D_j = t × Σ_{i=1}^{M} pt_ij    (9)

where t is a coefficient set to 1.8 and pt_ij represents the processing time required by the ith procedure of job j.
The unit production cost of the working mode, wpc, and the unit production cost of the sleeping mode, spc, in (7) are set to 4.0 and 2.0, respectively. The experimental results of our proposed algorithm are compared to those of three effective MaOEAs and a GGA [10] proposed for JSSPs. Specifically, the three MaOEAs are: 1) the Pareto-based nondominated sorting GA III (NSGA-III) [44]; 2) the decomposition-based multiobjective EA with a distance-based update strategy (MOEA/D-DU) [45]; and 3) the indicator-based stochastic ranking algorithm (SRA) [46]. Since the MaJSSP is a discrete optimization problem and the crossover and mutation operators of these three MaOEAs were originally designed for continuous problems, their crossover and mutation operators are replaced with POX and IM, respectively, to make them capable of solving the MaJSSP. Note that their other techniques remain the same as in the corresponding references. Besides, the GGA is adopted for comparison without any modification.
In MPMOGA, the crossover probability Prc is set to 0.9, the mutation probability Prm is set to 0.1, the population size NP is set to 210, and the maximum size of the archive NA is also set to NP (i.e., 210). Note that each population has 210/5 = 42 solutions. For the competitor algorithms, the settings of Prc, Prm, and NP are the same as those in MPMOGA, except that the Prc of GGA is set to 1.0 as suggested by its authors. Besides, the settings of the other parameters are consistent with the original algorithms.
The termination condition for all the tested algorithms is the maximum running time. For the test instances whose dimension sizes are smaller than or equal to 100 (i.e., J × M ≤ 100), the maximum running time is 10 s. For the test instances with dimension sizes in the range of (100, 200], the maximum running time is 15 s. For the test instances with dimension sizes larger than 200, the maximum running time is 20 s. The algorithms tested in this article are all implemented in C++ and the experiments are performed on a PC with an Intel Core i5-7400 CPU at 3.00 GHz and 8.00-GB RAM. For a fair comparison, all algorithms conduct 20 independent runs.

The C-metric [47], inverted generational distance (IGD) [48], and hypervolume (HV) [47] are used as performance metrics to evaluate all the algorithms. The C-metric compares the dominance relationship between two solution sets U and V and is calculated as

C(U, V) = |{v ∈ V | ∃u ∈ U : u dominates or equals v}| / |V|    (10)

Specifically, C(U, V) represents the percentage of the solutions in V that are dominated by or equal to at least one solution in U. C(U, V) equals 1 if each solution in V is dominated by or equal to at least one solution in U. If the solutions in V are not dominated by or equal to any solution in U, then C(U, V) is 0. Note that the sum of C(U, V) and C(V, U) does not necessarily equal 1.
C(U, V) > C(V, U) means that the overall quality of the solutions in U is better than that in V. In the experiments, all the nondominated solutions obtained in the 20 runs are collected to compute the C-metric on each test instance. Before calculating IGD and HV, the PFs obtained by the algorithms and the true PF are normalized as

norm_f_k(•) = (f_k(•) − Z_min,k) / (Z_max,k − Z_min,k)    (11)

where f_k(•) represents the fitness value on the kth objective, norm_f_k(•) represents the normalized fitness value on the kth objective, and Z_max,k and Z_min,k represent the maximum and minimum values of the true PF on the kth objective, respectively. Since the true PF is unknown in real-world optimization problems, the nondominated solutions among the final solutions obtained by all the algorithms in the 20 runs are treated as the true PS, and the objectives of this true PS are treated as the true PF.
After that, IGD is formulated as

IGD(T, R) = (1/|T|) × Σ_{v ∈ T} dist(v, R)    (12)

where T represents the true PF and R is the PF obtained by the algorithm. dist(v, R) represents the minimum Euclidean distance between v in T and the objective vectors in R in the normalized objective space. A smaller IGD indicates better performance of the algorithm in terms of convergence and diversity. Besides, HV measures the volume of the region bounded by a nadir point and the normalized PF obtained by the algorithm. A larger HV indicates better performance. In this article, the widely used Monte Carlo sampling method is adopted to calculate the HV in MaOPs. Specifically, the nadir point is set to (1.1, 1.1, 1.1, 1.1, 1.1), and 100 000 sampling points are randomly generated in [0.0, 1.1]^5.
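Equations (11) and (12) together amount to the following Python sketch (function and parameter names are ours): each point is min-max normalized per objective, then the IGD is the mean distance from each reference point to its nearest approximation point.

```python
import math

def igd(true_pf, approx_pf, z_min, z_max):
    """Inverted generational distance between a reference front true_pf and
    an approximation approx_pf, computed in the normalized objective space
    defined by per-objective bounds z_min and z_max."""
    def normalize(point):
        return [(f - lo) / (hi - lo) for f, lo, hi in zip(point, z_min, z_max)]
    t = [normalize(p) for p in true_pf]
    r = [normalize(p) for p in approx_pf]
    def dist(v, front):  # minimum Euclidean distance from v to the front
        return min(math.dist(v, u) for u in front)
    return sum(dist(v, r) for v in t) / len(t)
```

For example, with reference points (0, 0) and (2, 2), the approximation {(0, 0)}, and bounds (0, 0)-(2, 2), the normalized distances are 0 and √2, so the IGD is √2/2.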
Note that both the IGD and HV metrics are calculated over the 20 independent runs, and both the average and the standard deviation over the 20 runs are presented. On each test instance, the Wilcoxon rank-sum test with a significance level of 0.05 is performed to detect whether there is a significant difference between the IGD (or HV) results of two algorithms. The symbols "+," "≈," and "−" denote that our proposed MPMOGA approach performs significantly better than, not significantly different from, and significantly worse than the competitor algorithms, respectively.

B. Experimental Results With Competitor Algorithms
The solutions obtained by MPMOGA are compared to those of the competitor algorithms from three aspects. The first aspect is the C-metric, which compares the dominance relationship of the two solution sets obtained by two algorithms. The second aspect is the IGD and HV metrics, which comprehensively evaluate an algorithm's ability to balance convergence and diversity. The third aspect is the best objective values obtained during evolution, which reveal the capability of the algorithm to optimize each objective.
1) C-Metric: Table I shows the comparisons between MPMOGA and the four competitor algorithms on the C-metric. The better one between C(U, V) and C(V, U) is bolded. In the last row of Table I, we count the numbers of test instances on which the C(MPMOGA, -) value is larger and smaller than the C(-, MPMOGA) value, respectively. According to Table I, the C(MPMOGA, -) values are larger than the C(-, MPMOGA) values on all 43 test instances compared to the three MaOEAs. In addition, C(MPMOGA, GGA) is significantly better than C(GGA, MPMOGA) on 42 test instances and slightly worse on the remaining one. The above experimental results show that MPMOGA significantly outperforms the four competitor algorithms on the C-metric. Specifically, most of the C(MPMOGA, -) values are larger than 0.7, which indicates that most of the solutions obtained by the competitor algorithms are dominated by the solutions of MPMOGA. Besides, for SRA and GGA, the C(MPMOGA, -) values equal 1 on more than half of the test instances, which means that each solution obtained by these two algorithms is dominated by or at least equal to one solution of MPMOGA. In summary, the C-metric results illustrate that MPMOGA can obtain solutions of good quality.
2) IGD and HV Metrics: The IGD results of all algorithms are listed in Table II. Note that the average and standard deviation results are presented in the form avg (std). The best result on each test instance is bolded. It can be seen from Table II that MPMOGA performs the best on all test instances. According to the Wilcoxon rank-sum test, MPMOGA is significantly better than NSGA-III, MOEA/D-DU, SRA, and GGA on 40, 42, 40, and 41 test instances, respectively, and is not significantly different from them on the remaining 3, 1, 3, and 2 test instances.
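For reference, IGD averages the distance from each point of the (sampled) true PF to its nearest obtained solution, so smaller values indicate better convergence and spread. A minimal sketch (our own naming):

```python
import math

def igd(reference_pf, obtained_pf):
    """Inverted generational distance: mean Euclidean distance from each
    reference (true PF) point to its nearest obtained point.
    Smaller is better; it penalizes both poor convergence and gaps in coverage."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in obtained_pf)
               for r in reference_pf) / len(reference_pf)
```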
Table S.I in the supplementary material shows the HV results of all tested algorithms. We can see that MPMOGA performs significantly better than NSGA-III, MOEA/D-DU, SRA, and GGA on 43, 42, 43, and 39 test instances, respectively. GGA performs better than MPMOGA on only one test instance, while the other three competitor algorithms do not perform better than MPMOGA on any test instance. Note that on the LA34 instance, MPMOGA obtains the best average HV, but owing to the large standard deviation, the Wilcoxon rank-sum test indicates that MPMOGA performs worse than GGA.
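HV measures the volume of objective space dominated by a solution set relative to a reference point, so larger values are better. Exact HV computation in five dimensions is involved; a simple Monte Carlo estimate (our own sketch, assuming minimized objectives with the ideal point at the origin) conveys the idea:

```python
import random

def hv_montecarlo(points, ref, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `points`
    with respect to reference point `ref` (all objectives minimized).
    Samples the box [0, ref] uniformly and counts dominated samples."""
    rng = random.Random(seed)
    dims = len(ref)
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(0.0, ref[d]) for d in range(dims)]
        # x contributes if some solution dominates it (is <= in every dim)
        if any(all(p[d] <= x[d] for d in range(dims)) for p in points):
            hits += 1
    vol = 1.0
    for d in range(dims):
        vol *= ref[d]  # volume of the sampling box
    return vol * hits / n_samples
```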
Specifically, NSGA-III and MOEA/D-DU have shown their superiority in solving the MaOPs in benchmark suites. The shapes of the true PFs in MaOP benchmarks are uniform and regular, so the uniformly generated reference points in these two algorithms can provide efficient guidance. However, the shapes of the true PFs in MaJSSPs may not be uniform and regular, resulting in the unsatisfactory performance of NSGA-III and MOEA/D-DU on the IGD and HV metrics. Besides, the objectives in MaJSSP have different scales and ranges, which may lead to inaccurate calculation of the convergence indicator, diversity indicator, and density estimator, thus degrading the performance of SRA and GGA. Compared with these four competitor algorithms, the advantages of MPMOGA are as follows. First, MPMOGA is a reference-point-free algorithm. Instead of using reference points to guide the evolution, MPMOGA employs multiple populations and an archive to solve MaJSSP efficiently in a cooperative coevolutionary way. Second, MPMOGA is not affected by the different objective scales: under the MPMO framework, the solutions in each population can be distinguished by their corresponding optimization objective. To sum up, MPMOGA uses five populations to optimize five different objectives and constructs an archive to achieve coevolution among the populations, which is beneficial for solving MaJSSP.
To further compare the performance of all tested algorithms, the normalized true PF and the normalized PFs obtained by the different algorithms on the test instances are plotted. Figs. 7-10 present the normalized PFs obtained by all tested algorithms on the FT06, LA14, LA26, and LA38 test instances, where subfigure (a) is the normalized true PF.
Based on the normalization, the true PF is mapped into the range [0, 1] on every objective. As for the PFs obtained by the tested algorithms, a normalized objective value larger than 1 means the algorithm has not fully converged to the true PF, and the further the normalized objectives exceed 1, the worse the convergence. Besides, a distribution more similar to that of the true PF represents better diversity. Concretely, on the FT06 and LA14 test instances, although the normalized objectives of the four competitor algorithms do not exceed 1, these algorithms only converge to a subregion of the true PF. On the contrary, MPMOGA achieves the results closest to the true PF. Owing to the large problem sizes of the LA26 and LA38 test instances, all tested algorithms fail to converge to the true PF. Among them, MPMOGA approximates the true PF best, while the other algorithms perform poorly on both convergence and diversity. In addition, MPMOGA obtains the minimum or near-minimum value on most objectives of the LA26 and LA38 test instances, while the other algorithms fail to do so. These experimental results show the effectiveness of MPMOGA in solving MaJSSP.
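The normalization used for these plots can be sketched as follows: each objective is rescaled by the minimum and maximum of the true PF, so the true PF falls in [0, 1] and obtained values above 1 expose incomplete convergence (the function name is ours):

```python
def normalize_pf(pf, true_pf):
    """Normalize each objective of `pf` by the per-objective min/max of the
    true PF. The true PF then lies in [0, 1]^m; an obtained point with a
    coordinate > 1 has not converged to the true PF on that objective."""
    m = len(true_pf[0])
    lo = [min(p[d] for p in true_pf) for d in range(m)]
    hi = [max(p[d] for p in true_pf) for d in range(m)]
    return [tuple((p[d] - lo[d]) / (hi[d] - lo[d]) for d in range(m))
            for p in pf]
```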
3) Capability of Optimizing Each Objective: Table III shows the overall performance of the algorithms in terms of optimizing each objective. To save space, the detailed results of the best objective values obtained by the algorithms are listed in Table S.II in the supplementary material. In Table III, "Total times of obtaining the best objectives" is the number of times an algorithm obtains the best objective values over the 43 test instances, "Average rank" is the average rank of an algorithm over the 43 test instances, and "Total times of the first rank" records the number of first ranks obtained by an algorithm over the 43 test instances. As shown in Table III, MPMOGA obtains the smallest objective values 164 times in total, which is significantly better than the four competitor algorithms. In addition, MPMOGA achieves the best average rank, that is, 1.12. Besides, MPMOGA ranks first on 38 out of the 43 test instances, which is significantly superior to the other competitor algorithms. The experimental results in Table III and Table S.II in the supplementary material illustrate the great capability of MPMOGA to optimize each objective.

C. Analysis of Schedules Obtained by MPMOGA
To further discuss the features of MPMOGA in solving MaJSSP, we select five schedules from the output of MPMOGA for detailed analysis. The Gantt charts of the five schedules obtained by the MPMOGA approach on the FT06 test instance are plotted in Fig. 11. Note that FT06 is a test instance with J = 6 and M = 6. In the figure, blocks with different colors denote different jobs, the label j-i in each block represents the ith procedure of job j, and the length of each block indicates the corresponding processing time. It can be seen from Fig. 11(a) that schedule S1 is optimal on the first objective, with a maximum completion time of 55 (i.e., the completion time of both job 1 and job 5). Schedule S2 achieves the best performance on the second objective. According to (9), the due date of each job on FT06 can be calculated. In particular, job 2, job 3, job 4, job 5, and job 6 are finished 16.6, 10.2, 24, 16, and 19 time units before their due dates, respectively, yielding the best advance time. The best value of the fourth objective is obtained by schedule S4. Since the work time of each machine is fixed, the production cost is determined by the total sleep time of the machines. The total machine sleep time of S4 (i.e., 87) is the smallest among the five schedules, whereas the sleep times of the other four schedules are 102, 107, 131, and 153, respectively. Thus, schedule S4 has the lowest production cost. The machine loss of schedule S5 is only 4, which is optimal on the fifth objective. In conclusion, the five schedules obtained by MPMOGA attain the minimum value on the five objectives, respectively. Each population in MPMOGA focuses on optimizing one objective while learning information about the remaining four objectives from the other populations through the AST. Consequently, the obtained schedules are well-converged and well-distributed, which can meet the various demands of factories.
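The sleep-time comparison for the fourth objective can be reproduced mechanically: given each machine's busy intervals in a schedule, the idle ("sleep") time is the horizon minus the busy time, summed over machines. A minimal sketch (our own naming; the interval representation is an assumption):

```python
def machine_idle_time(schedule, horizon=None):
    """Total idle ('sleep') time over all machines.
    `schedule` maps each machine to a list of non-overlapping (start, end)
    busy intervals; `horizon` defaults to the makespan of the schedule."""
    if horizon is None:
        horizon = max(end for ops in schedule.values() for _, end in ops)
    idle = 0
    for ops in schedule.values():
        busy = sum(end - start for start, end in ops)
        idle += horizon - busy
    return idle
```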

D. Validation of Perturbation in Archive
To investigate the effectiveness of the perturbation (i.e., the IM operator) in the AUS, we design a variant algorithm with no perturbation (called MPMOGA-np) and a variant algorithm with swap mutation (called MPMOGA-sm) for comparison in this section. Note that swap mutation is a simple perturbation operation that randomly selects two dimensions of a solution and exchanges their genes. The C-metric, IGD, and HV results between MPMOGA and its variant algorithms are listed in Tables IV and V and Table S.III in the supplementary material, respectively, where the best results are bolded. As for the IGD results, it can be seen from Table V that MPMOGA is superior to MPMOGA-np and MPMOGA-sm on 38 and 37 test instances, and performs not significantly differently from them on the remaining five and six test instances, respectively.
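Both perturbations operate on the permutation encoding: swap mutation exchanges the genes at two random positions, while insert mutation (the IM operator) removes a gene from one position and reinserts it at another. A minimal sketch of the two operators (function names are ours):

```python
import random

def swap_mutation(chrom, rng=random):
    """Pick two random positions and exchange their genes."""
    c = list(chrom)
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def insert_mutation(chrom, rng=random):
    """Remove the gene at one random position and reinsert it at another,
    shifting the genes in between (a larger neighborhood move than swap)."""
    c = list(chrom)
    i, j = rng.sample(range(len(c)), 2)
    c.insert(j, c.pop(i))
    return c
```

Both operators preserve the multiset of genes, so a feasible operation-based chromosome stays feasible after mutation.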
From the HV results shown in Table S.III in the supplementary material, we can see that MPMOGA performs significantly better than MPMOGA-np and MPMOGA-sm on 40 and 39 test instances, respectively, and does not perform worse than these two variant algorithms on any test instance.
To sum up, the C-metric results, IGD results, and HV results show that the perturbation (i.e., the IM operator) in the AUS can enhance both the convergence and diversity of the solutions, thus improving the overall performance of MPMOGA.

V. CONCLUSION
In this article, an MaJSSP model with five objectives covering completion time, total tardiness, advance time, production cost, and machine loss was proposed. To solve this model efficiently, a novel algorithm based on the MPMO framework, called MPMOGA, was developed. Specifically, the MPMO framework uses different populations to optimize different objectives simultaneously. Besides, the AST was put forward within MPMOGA to guide the coevolution of all the populations, and the AUS was designed to enhance the quality of the elite solutions in the archive.
Two widely used test sets were adopted to evaluate the performance of the proposed MPMOGA. The experimental results show that MPMOGA performs well not only in terms of three widely used metrics (i.e., the C-metric, IGD, and HV) but also in obtaining the best values on each objective. According to the experimental analysis, the MPMO framework is shown to be an efficient way to deal with MaJSSP. In addition, we investigated the contribution of the perturbation in the AUS to improving the overall performance of MPMOGA.

Fig. 1. Example of a JSSP. (a) Procedure-to-machine allocation scheme. (b) Schedule that determines the process order of procedures on each machine.

Index Terms—Archive sharing technique (AST), archive update strategy (AUS), genetic algorithm (GA), many-objective job-shop scheduling problem (MaJSSP), many-objective optimization, multiple populations for multiple objectives (MPMO).

Fig. 7. On the FT06 test instance, (a) normalized true PF, (b) normalized PF obtained by MPMOGA in an independent run, (c) normalized PF obtained by NSGA-III in an independent run, (d) normalized PF obtained by MOEA/D-DU in an independent run, (e) normalized PF obtained by SRA in an independent run, and (f) normalized PF obtained by GGA in an independent run.

Fig. 9. On the LA26 test instance, (a) normalized true PF, (b) normalized PF obtained by MPMOGA in an independent run, (c) normalized PF obtained by NSGA-III in an independent run, (d) normalized PF obtained by MOEA/D-DU in an independent run, (e) normalized PF obtained by SRA in an independent run, and (f) normalized PF obtained by GGA in an independent run.
According to Table IV, the C(MPMOGA, MPMOGA-np) values are larger than the C(MPMOGA-np, MPMOGA) values on all 43 test instances; the C(MPMOGA, MPMOGA-sm) values are larger than the C(MPMOGA-sm, MPMOGA) values on 42 test instances and smaller on only one test instance. Specifically, most C(MPMOGA, -) values are close to 1 while most C(-, MPMOGA) values are close to 0, which shows that the IM operator in the AUS can efficiently improve the quality of the solutions.
Si-Chen Liu (Student Member, IEEE) received the B.S. degree in computer science and technology from the South China University of Technology, Guangzhou, China, in 2020, where she is currently pursuing the M.S. degree in computer science and technology with the School of Computer Science and Engineering. Her research interests mainly include multiobjective optimization, many-objective optimization, and their applications in real-world problems.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

be processed on a specific machine. Concretely, O11, O22, O31, and O42 should be processed on M1, while O12, O21, O32, and O41 should be processed on M2. The key issue in JSSPs is to arrange the process orders of the procedures on both M1 and M2. A feasible schedule is shown in Fig. 1(b), in which the process order of M1 is O31, O22, O11, and O42, and the process order of M2 is O21, O12, O41, and O32. Note that a machine can only process one procedure at a time.
x represent procedures O11, O21, O22, O12, O23, and O13, respectively. The corresponding machines for processing these six procedures are M1, M1, M3, M2, M2, and M3, which can be obtained from the data of job 1 and job 2. Hence, the process order on each machine is determined by x, that is, M1 first processes O11 and then O21, M2 first processes O12 and then O23, and M3 first processes O22 and then O13.
Pop_k (populations generated by crossover, k ∈ [1, 5]). Blocks with different colors denote different jobs, the label j-i in each block represents the procedure Oji, and the length of each block indicates the corresponding processing time. The Gantt chart is explained as follows. First, M1 processes O11 from the beginning to time 3, as all jobs are released at time 0. Second, due to Constraints 2 and 3 in the MaJSSP model, M1 processes O21 from time 3 to time 5. Third, although M3 is idle, O22 cannot be processed immediately due to Constraint 4. Specifically, O21 (the prior-order procedure of O22) finishes at time 5; thus, M3 starts to process O22 at time 5. The other procedures are processed according to the above rules and, finally, the Gantt chart is obtained.
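The decoding rule walked through above can be sketched as follows: scanning the operation-based chromosome left to right, each gene is the next unscheduled procedure of that job, started as early as both its machine and its job predecessor allow. In the usage below, only the durations of O11 (3 time units) and O21 (2 time units) come from the walkthrough; the remaining durations are illustrative assumptions, and the function name is ours:

```python
def decode(chromosome, proc_machine, proc_time):
    """Decode an operation-based chromosome (job indices with repetition)
    into a schedule. proc_machine[j][i] and proc_time[j][i] give the machine
    and duration of procedure O_{j,i+1}. Returns {(job, proc): (machine,
    start, end)}. Each procedure starts at the later of its machine's free
    time and its job predecessor's finish time (Constraints 2-4)."""
    next_op = {}      # next procedure index of each job
    job_ready = {}    # finish time of each job's last scheduled procedure
    mach_ready = {}   # time each machine becomes free
    schedule = {}
    for j in chromosome:
        i = next_op.get(j, 0)
        next_op[j] = i + 1
        m = proc_machine[j][i]
        start = max(job_ready.get(j, 0), mach_ready.get(m, 0))
        end = start + proc_time[j][i]
        job_ready[j] = end
        mach_ready[m] = end
        schedule[(j, i + 1)] = (m, start, end)
    return schedule

# Usage on the two-job example above: x = (O11, O21, O22, O12, O23, O13).
chrom = [1, 2, 2, 1, 2, 1]
machines = {1: ["M1", "M2", "M3"], 2: ["M1", "M3", "M2"]}
times = {1: [3, 2, 2], 2: [2, 3, 2]}  # only O11=3 and O21=2 are from the text
s = decode(chrom, machines, times)
```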

TABLE I
C-METRIC RESULTS BETWEEN MPMOGA AND THE COMPETITOR ALGORITHMS ON 43 TEST INSTANCES

TABLE II
IGD RESULTS BETWEEN MPMOGA AND THE COMPETITOR ALGORITHMS ON 43 TEST INSTANCES

TABLE III
COMPARISONS BETWEEN MPMOGA AND THE COMPETITOR ALGORITHMS IN TERMS OF OPTIMIZING EACH OBJECTIVE

TABLE IV
C-METRIC RESULTS BETWEEN MPMOGA AND ITS VARIANT ALGORITHMS ON 43 TEST INSTANCES

TABLE V
IGD RESULTS BETWEEN MPMOGA AND ITS VARIANT ALGORITHMS ON 43 TEST INSTANCES