Application Research for Multiobjective Low-Carbon Flexible Job-Shop Scheduling Problem Based on Hybrid Artificial Bee Colony Algorithm

A hybrid artificial bee colony (HABC) algorithm for solving the multiobjective low-carbon flexible job-shop scheduling problem (MLFJSP) is proposed in this paper. The HABC algorithm uses a two-layer encoding method to establish the initial population as the nectar sources for the employed bees. In the optimization process, the employed bee phase and the onlooker bee phase adopt improved crossover and mutation strategies and adaptive neighborhood search strategies to generate new nectar sources, and the greedy method is used to retain the better solutions. The scout bee update mechanism prevents the algorithm from falling into a local optimum and enhances its convergence. To prevent the loss of the optimal solution, the optimization results of each phase are saved in a Pareto archive (PA). Finally, two sets of international standard instances are used to carry out simulation experiments. The simulation results demonstrate that the HABC algorithm is an effective algorithm for solving the multiobjective low-carbon flexible job-shop scheduling problem.


I. INTRODUCTION
Nowadays, the environmental and energy crises have become worldwide problems. With the continuous development of society, energy consumption will increase in the future, and the resulting environmental problems have become increasingly serious. The industrial sector is the largest energy consumer and currently accounts for about one-half of the world's total energy consumption [1]. Manufacturing is recognized as a very important subsector of industry [2]. The literature [3] pointed out that the actual processing energy requirement is less than 15% of the total. This means that focusing solely on machines or processes to improve energy efficiency is no longer an effective way to save energy. Currently, ''low-carbon scheduling'' as a novel scheduling model has become a hot topic in the scheduling area due to the costs of increased energy consumption and environmental pollution [4]. (The associate editor coordinating the review of this manuscript and approving it for publication was Shunfeng Cheng.)
The flexible job-shop scheduling problem (FJSP) was proposed by Brucker et al. in 1990 [5]. It is an extension of the traditional job-shop scheduling problem (JSP) and is better suited to actual production environments, so the low-carbon FJSP has important theoretical significance and application value. The swarm intelligence (SI) method is an effective type of meta-heuristic method, including the ant colony optimization (ACO) algorithm, the particle swarm optimization (PSO) algorithm, the artificial bee colony (ABC) algorithm, and so on. Zhao et al. [6] constructed a mathematical model for the multi-objective job-shop scheduling problem that contains two sub-systems: the father ant colony system solves the flexible processing route decision problem, and the children ant colony system solves the sorting problem of the process task set generated by the father ant colony system. Shao et al. [7] proposed a hybrid discrete particle swarm optimization (DPSO) and simulated annealing (SA) algorithm to identify an approximation of the Pareto front for FJSP. Gao et al. [8] proposed a new two-level artificial bee colony algorithm, in which a new initialization rule for the honey bee colony is adopted and an integrated local search algorithm is proposed to improve the performance of the algorithm. Zou et al. [9] proposed a hierarchical ant-genetic-algorithm-based multiobjective intelligent algorithm for FJSP, which achieved remarkable results. (VOLUME 9, 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.)
Li et al. [10] designed an improved artificial bee colony (IABC) algorithm to solve the multi-objective low-carbon job-shop scheduling problem (MLFJSP) with a variable processing speed constraint. The optimization objectives of the MLFJSP include minimizing the makespan, the total carbon emission, and the machine loading. The results demonstrate that the IABC algorithm can achieve better performance for solving the MLFJSP. Wu et al. [11] established a mathematical model with three optimization objectives and developed a hybrid algorithm (MOEA/D-PSO) based on MOEA/D and PSO for solving FJSP; a three-layer encoding scheme is designed in that paper to improve the quality of the initial population. Dai et al. [12] proposed an enhanced genetic algorithm to solve the flexible job-shop scheduling problem with transportation constraints, and established an optimization model with the goals of minimizing energy consumption and completion time; the final comprehensive experiments verify the performance of the algorithm. Zhang et al. [13] studied a multi-objective flexible job-shop scheduling problem (MOFJSP) and established a new energy-saving scheduling model that considers makespan and total energy consumption at the same time. The model includes processing energy, idle energy, transportation energy, and on/off energy.
Sun et al. [14] researched the multi-objective flexible job-shop scheduling problem (MO-FJSP) considering various types of energy consumption. They studied six variants of the non-dominated sorting genetic algorithm-III (NSGA-III) and established an MO-FJSP mathematical model that minimizes completion time, carbon emissions, and machine load; the NSGA-III algorithm achieved effective results. Jiang et al. [15] put forward a multi-objective FJSP optimization model in which the energy consumption, makespan, processing cost, and cost-weighted processing quality were considered. Liu et al. [16] proposed a multi-objective scheduling method with reducing energy consumption as one of the objectives, and established a bi-objective model for minimizing total electricity consumption.
Nowadays, more and more scholars are shifting their research to the complex low-carbon FJSP. Lei et al. [17] proposed a two-phase meta-heuristic algorithm (TPM) based on the imperialist competitive algorithm (ICA) and variable neighborhood search (VNS). The target of TPM is to minimize completion time and total tardiness while the total energy consumption does not exceed a given threshold. Palacios et al. [18] developed a hybrid genetic tabu search algorithm for the fuzzy FJSP. Jamrus et al. [19] developed a hybrid method that integrates a particle swarm optimization algorithm with the Cauchy distribution and genetic operators (HPSO + GA) to solve the FJSP. Yuan et al. [20] proposed a new hybrid harmony search (HHS) algorithm to solve the FJSP; a discrete double-vector coding technique and a scheme that combines heuristics and random strategies to effectively initialize the population were proposed in that paper.
In this paper, we propose a hybrid algorithm combining a Pareto archive (PA) and the hybrid artificial bee colony (HABC) algorithm to solve the multiobjective low-carbon FJSP (MLFJSP). The algorithm takes the makespan, the total machine workload, and the total carbon emissions as the optimization targets. The HABC algorithm has the following four advantages: (1) The use of a two-layer encoding method combining operation sequence (OS) encoding and machine allocation (MA) encoding can effectively reduce the complexity of the algorithm and ensure that the generated chromosomes are feasible; (2) An improved multi-parent crossover (IMX) method is designed for information sharing among the employed bees. This not only allows the genes to be fully recombined, but also speeds up the convergence of the algorithm. In the crossover process, the machine with the shortest processing time is selected first to ensure the load balance of the processing machines.
(3) The use of adaptive crossover probability prevents the chromosomes from generating poor genes during the recombination, and improves the search efficiency of the algorithm.
(4) After each crossover and mutation operation, the Pareto archive (PA) is updated, which prevents the loss of the optimal solution. Four different neighborhood structures are utilized to search for the optimal solution in the PA, which not only improves the search efficiency, but also prevents the algorithm from falling into a local optimum.

II. PROBLEM DESCRIPTION
The FJSP is described as follows: there are n jobs J = {J_1, J_2, ..., J_n}, which are processed on m different machines M = {M_1, M_2, ..., M_m}. Each job contains one or more operations. A multiobjective mathematical model considering the makespan, the machine workload, and the total carbon emissions is proposed. The corresponding assumptions of the model are as follows: (1) Each operation can be processed on different machines, but the processing times on different machines are different.
(2) The loading time and unloading time of each operation on different machines are different.
(3) Each machine can only process one operation at a time.
(4) The processing sequence of the operation is given in advance.
(5) Once the processing (including loading, processing and unloading) starts, it cannot be interrupted.
(6) Special circumstances such as machine failures are not considered during processing. The indices, sets, parameters and decision variables used in this paper are shown in Table 1.
The FJSP model in this paper is described as follows, including three goals, namely the completion time, the total workload of the machines, and the total carbon emissions. These optimization goals are expressed as: (1) Completion time, also known as makespan, i.e., f_1.
(2) Total workload of the machines (TWM), i.e., f_2, the sum of the processing times of all operations on all machines.
(3) Total carbon emissions (TCE), i.e., f_3. The energy consumption of a machine changes with its processing state. The energy consumption generated by three states of the machine is considered in this paper, namely the processing energy consumption, the idle energy consumption, and the standby energy consumption. Equation (3) is used to calculate the total carbon emissions (TCE), where ε is the conversion coefficient between energy consumption and carbon emissions. In this paper, we set ε = 0.7559 [21]. The processing energy consumption comes from the energy consumed while the machine is in the processing state. Equation (4) is used to calculate the total energy consumption in the processing state. The idle energy consumption refers to the energy consumed when the machine is idle; this part mainly comes from the transmission system of the machine. The idle time of a machine equals the completion time of the last operation on the machine minus the start time of the first operation, minus the sum of the processing, loading, and unloading times of all operations on the machine. Equation (5) is used to calculate the total energy consumption in the idle state. The standby energy consumption of the machine is the sum of the energy consumed during process loading and unloading, that is, the product of the standby power and the sum of the loading and unloading times. Equation (6) is used to calculate the total energy consumption in the standby state.
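As a concrete illustration of the energy accounting in Equations (3)–(6), the following sketch computes the three energy terms and the resulting TCE for a set of machines. The data layout (dictionaries with start, end, proc_time, load_time, unload_time, and power fields) and the hour/watt/kWh units are assumptions for illustration, not the paper's actual data structures.

```python
# Hypothetical sketch of the TCE computation in Equations (3)-(6).
# Field names and units (hours, watts, kWh) are assumptions.

EPSILON = 0.7559  # conversion coefficient between energy and carbon emissions [21]

def machine_energy(ops, idle_power, standby_power):
    """Energy (kWh) of one machine, given its scheduled operations."""
    if not ops:
        return 0.0
    # Processing energy: processing power times processing time (Eq. 4).
    e_proc = sum(op["proc_time"] * op["proc_power"] / 1000.0 for op in ops)
    # Idle time: span from first start to last completion, minus the
    # processing, loading and unloading time of all operations (Eq. 5).
    span = max(op["end"] for op in ops) - min(op["start"] for op in ops)
    busy = sum(op["proc_time"] + op["load_time"] + op["unload_time"] for op in ops)
    e_idle = max(0.0, span - busy) * idle_power / 1000.0
    # Standby energy: (loading + unloading time) times standby power (Eq. 6).
    e_standby = sum(op["load_time"] + op["unload_time"] for op in ops) * standby_power / 1000.0
    return e_proc + e_idle + e_standby

def total_carbon_emissions(machines):
    """TCE = epsilon * total energy consumption over all machines (Eq. 3)."""
    return EPSILON * sum(
        machine_energy(m["ops"], m["idle_power"], m["standby_power"]) for m in machines
    )
```

For a single operation occupying a machine with no gaps, only the processing and standby terms contribute, which matches the decomposition described above.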
The constraints of the problem are shown in Equations (7) to (10): Equation (7) ensures that the operations belonging to the same job satisfy the precedence constraints; Equation (8) shows that one machine must be selected from the set of available machines for each operation; Equation (9) indicates that the completion time of the last operation on machine k is not greater than the total completion time.

III. HABC ALGORITHM TO SOLVE MLFJSP
A. ENCODING AND DECODING
The most frequently used encoding method for FJSP is two-layer encoding [22], [23]. Every individual in the population is a solution of the FJSP, and it consists of two parts. The first part is the operation sequence (OS), with length L_1, which uses the operation-based coding of the general JSP: each gene is directly encoded by a job number, and the order in which the job numbers appear represents the processing order of the operations of the jobs. The jth occurrence of job number i represents the jth operation of job i, and the number of occurrences of each job number equals the total number of operations of that job, so every generated solution is a feasible schedule. The second part is the machine allocation (MA), with length L_2: the value of each gene is the number of an optional processing machine for its corresponding operation. The two parts have the same length, that is, L_1 = L_2. The MA is generated by the method of global sequential encoding (GSE); Table 2 gives a 3 × 4 P-FJSP instance as an example.
The insertion greedy decoding method is adopted in this paper. The process is as follows: decode in the order of the OS, and process each operation on one of its optional machines as early as possible; decode sequentially in this way until all operations are arranged at their earliest possible positions. If the processing times are the same, the machine with less energy consumption is selected first to obtain the corresponding MA order. Due to the precedence constraints between operations of the same job, there may be idle time between operations on a machine. We use the greedy method of shifting operations to the left as much as possible. Given a time interval [S_{i,j,k}, C_{i,j,k}], S_{i,j,k} and C_{i,j,k} represent the start time and end time of operation O_{i,j} on machine k. O_{i,j} can only start after its direct predecessor O_{i,j-1} is completed, so the start time of O_{i,j} can be described as (11).
When operation O_{i,j} is allocated to machine k, we check the idle time intervals between the scheduled operations on the machine from left to right to find the earliest available time S_{i,j,k}. If there is an idle interval long enough for O_{i,j}, i.e., one that satisfies (12), then O_{i,j} can be shifted to the left. We use such a left-shift decoding scheme and assign each operation to its designated machine from left to right according to the operation sequence.
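The left-shift insertion test of Equations (11) and (12) can be sketched as follows; the function name and the interval representation (sorted busy intervals per machine) are illustrative assumptions, not the paper's implementation.

```python
def earliest_start(intervals, ready, dur):
    """Left-shift insertion sketch (Eqs. 11-12): earliest feasible start time
    for an operation of duration `dur` on a machine whose busy intervals are
    given as sorted (start, end) pairs, where `ready` is the completion time
    of the job's preceding operation."""
    t = ready
    for s, e in intervals:
        if t + dur <= s:          # the idle gap before (s, e) is big enough
            return t              # left-shift: insert into this gap
        t = max(t, e)             # otherwise skip past this busy interval
    return t                      # append after the last scheduled operation
```

For example, with busy intervals [(0, 2), (5, 8)] and a job ready at time 1, a 2-hour operation fits into the idle gap [2, 5) and starts at 2 instead of being appended at 8.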

B. POPULATION INITIALIZATION
The ABC algorithm is a swarm intelligence optimization algorithm [24] that solves problems by simulating the actual honey-gathering mechanism of bees. The algorithm divides the bee colony into three categories: employed bees, onlooker bees, and scout bees. In the ABC algorithm, the employed bees use the previous nectar source information to find new nectar sources and share the nectar source information with the onlooker bees. The onlooker bees wait in the hive and find new nectar sources based on the information shared by the employed bees. The job of the scout bees is to find new and valuable nectar sources, and they search randomly near the hive. Therefore, the algorithm can be divided into four phases: the population initialization phase, the employed bee phase, the onlooker bee phase, and the scout bee phase.
In order to generate a higher-quality and more stable initial population, a method that combines random selection for the OS and GSE [25] selection for the MA is proposed to generate the initial population, based on the literature [26]–[28]. The process is as follows: (1) OS sequence: a random selection method is adopted to ensure the diversity of the population.
(2) MA sequence: a method combining GSE selection and random selection (where GSE accounts for 80 percent and random selection accounts for 20 percent) is used. In the GSE method, the machine with the shortest processing time is selected each time, so the machine with a long processing time can be avoided. Every time a machine is selected, the processing time of the machine will be adjusted to ensure the workload balance of the machine.
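The combined initialization strategy above (random OS; MA built by GSE with probability 0.8 and random choice otherwise, adjusting a running machine load to keep the workload balanced) might be sketched like this. All names and the instance representation are hypothetical.

```python
import random

def init_individual(num_ops, machine_times, p_gse=0.8):
    """One initial nectar source. `num_ops[i]` is the number of operations of
    job i; `machine_times[(i, j)]` is a dict {machine: processing time} for
    operation j of job i (these names are assumptions for illustration)."""
    # OS: each job number repeated once per operation, randomly shuffled.
    os_seq = [i for i, n in enumerate(num_ops) for _ in range(n)]
    random.shuffle(os_seq)
    # MA in fixed operation order; GSE keeps a running load per machine and
    # picks the machine minimizing (processing time + current load).
    load = {}
    ma_seq = []
    for i, n in enumerate(num_ops):
        for j in range(n):
            options = machine_times[(i, j)]
            if random.random() < p_gse:
                k = min(options, key=lambda m: options[m] + load.get(m, 0.0))
            else:
                k = random.choice(list(options))
            load[k] = load.get(k, 0.0) + options[k]   # adjust the machine's load
            ma_seq.append(k)
    return os_seq, ma_seq
```

Setting p_gse = 1.0 makes the MA part deterministic, which is convenient for checking that the load adjustment steers later operations away from already-busy machines.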
In the employed bee phase, different crossover and mutation operators are developed to increase the diversity of the population. In the onlooker bee phase, an effective adaptive variable neighborhood search method is proposed to improve the search ability of the ABC algorithm. In order to reduce the blindness of the scout bees, a strategy of generating new nectar sources from the PA during the scout bee phase is proposed.

C. EMPLOYED BEE PHASE
The target of the employed bee phase is to find a better nectar source near the existing nectar source. In order to improve the efficiency of the algorithm, we combine the characteristics of the MLFJSP and introduce the following crossover and mutation operators.

1) CROSSOVER
According to the encoding method described in subsection A of this chapter, the crossover operations of the OS and the MA are performed separately.
(1) Crossover of OS. Based on the literature [29], an improved multi-parent crossover (IMX) approach is proposed. The procedure of IMX is as follows: Step 1: Take n parent chromosomes from the population and record them as P_1, P_2, ..., P_n (n is the total number of jobs); Step 2: Take out each operation of the ith job in P_i in sequence, copy it to the offspring chromosome C_i, and keep the gene positions unchanged (i = 1, 2, ..., n); Step 3: Copy the jobs other than the (i+1)th job in P_i to the offspring C_{i+1} in sequence, where i = 1, 2, ..., n−1, and finally copy the jobs other than the first job in P_n to the offspring C_1 according to the sequence of their operations. The sequence of the operations remains unchanged during copying, which ensures that the chromosomes are feasible schedules after the crossover operation.
(2) Crossover of MA. The crossover of the MA follows the crossover of the OS: each MA gene moves together with its corresponding operation.
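The IMX steps above can be sketched on OS chromosomes represented as flat lists of job numbers; imx_crossover and its donor-selection detail (child i+1 takes its fixed job from P_{i+1} and the remaining genes from P_i, wrapping around for C_1) are an illustrative reading of Steps 1–3, not the authors' code.

```python
def imx_crossover(parents):
    """Sketch of improved multi-parent crossover (IMX) on OS chromosomes.
    Jobs are numbered 1..n and `parents` is a list of n OS lists over the
    same multiset of job numbers. Child i fixes job i's genes in place from
    parent i; the remaining genes come from the previous parent, in their
    original relative order, so every child stays a feasible schedule."""
    n = len(parents)
    children = []
    for c in range(n):
        fixed_job = c + 1                   # job whose positions are preserved
        child = [g if g == fixed_job else None for g in parents[c]]
        donor = parents[c - 1]              # previous parent; c == 0 wraps to P_n
        fill = iter(g for g in donor if g != fixed_job)
        children.append([g if g is not None else next(fill) for g in child])
    return children
```

Because each parent contains every job number the correct number of times, the filled genes always fit, and each child is again a valid OS.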

2) MUTATION
(1) Mutation of OS. Randomly select a chromosome and select a gene g. Since the operations have processing order constraints, the positions of the predecessor operation g_f and the successor operation g_s of g should be determined first. Randomly select a position between these two positions to insert operation g, which ensures that the generated schedule is a feasible solution.
(2) Mutation of MA. The mutation of the MA adopts the method of selecting the minimum processing time; the execution process is as follows: Step 1: Randomly generate an integer r, r ∈ [1, M], where M is the total number of operations; Step 2: Randomly select r positions from the MA of the chromosome; Step 3: For the operation corresponding to each selected position, select the machine with the shortest processing time from the set of available machines as the replacement. If the original MA gene is already the machine with the minimum time, select the machine with the second shortest machining time.
An example of the MA mutation is shown in Figure 1, where the mutated genes are shown in bold and underlined.
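The three MA-mutation steps can be sketched as follows, assuming each MA position carries a dictionary of candidate machines and their processing times; the names are hypothetical.

```python
import random

def mutate_ma(ma, op_options):
    """MA mutation sketch: pick r random positions and replace each machine by
    the one with the shortest processing time, or by the second shortest if it
    is already the minimum. `op_options[p]` is a dict {machine: time} for the
    operation at MA position p (an assumed representation)."""
    ma = list(ma)                           # do not mutate the caller's list
    total = len(ma)
    r = random.randint(1, total)            # Step 1: r in [1, M]
    for p in random.sample(range(total), r):  # Step 2: r distinct positions
        ranked = sorted(op_options[p], key=lambda m: op_options[p][m])
        if len(ranked) > 1 and ma[p] == ranked[0]:
            ma[p] = ranked[1]               # Step 3: already fastest -> second fastest
        else:
            ma[p] = ranked[0]               # Step 3: take the fastest machine
    return ma
```

Note that every selected position changes: either to the fastest machine, or away from it to the second fastest, so the operator never produces the identical chromosome.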

3) ADAPTIVE CROSSOVER PROBABILITY
The literature [30] pointed out that a larger crossover probability p_c (0.5 < p_c < 1.0) and a smaller mutation probability p_m (0.001 < p_m < 0.05) are necessary for the success of a genetic algorithm. The larger p_c makes the gene recombination more thorough, and the smaller p_m is used to increase the diversity of the population and prevent falling into a local optimum. In order to fully recombine the genes of the chromosomes, this paper adopts the adaptive crossover probability in (13), where f is the smaller fitness value of the two chromosomes involved in the crossover operation, and f_min and f̄ denote the minimum and the average fitness value of the population, respectively. In order to ensure that all solutions better than or equal to f_min can participate in the crossover operation, the value of k_1 is assigned to 1. As the fitness value tends to f_min, the crossover probability decreases; when the fitness value equals f_min, the crossover probability is 0. p_m is a randomly generated number between 0.001 and 0.005.
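Since Equation (13) itself is not reproduced here, the following is only one plausible form consistent with the description (p_c vanishes at f_min and is capped at k_1 = 1 for below-average solutions); treat it as an assumption, not the paper's exact formula.

```python
def adaptive_pc(f, f_min, f_avg, k1=1.0):
    """One plausible form of the adaptive crossover probability (Eq. 13):
    p_c = k1 * (f - f_min) / (f_avg - f_min), capped at k1, where f is the
    smaller fitness of the two parents (minimization). p_c -> 0 as f -> f_min
    and p_c = 0 when f == f_min, matching the textual description."""
    if f_avg == f_min:            # degenerate population: all fitnesses equal
        return k1
    return min(k1, k1 * (f - f_min) / (f_avg - f_min))
```

Under this form, the best chromosome in the population is never disrupted by crossover, while solutions at or worse than the average cross over with the full probability k_1.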

D. ONLOOKER BEE PHASE
1) NEIGHBORHOOD STRUCTURE DESIGN
The main task of the onlooker bees is to receive the nectar source information shared by the employed bees. The onlooker bee in the classic ABC algorithm searches randomly, so there is no guarantee that a better nectar source will be found. An adaptive variable neighborhood search method is proposed in this paper, which makes the algorithm adaptively select the neighborhood with high search efficiency during the search process. This ensures that the algorithm moves towards a better nectar source and strengthens the local search capability of the algorithm. The paper proposes the following four different neighborhood structures.
(1) VNS1 operates on the MA. The process of VNS1 is similar to the mutation process of the MA, except that when a machine is replaced, VNS1 does not choose the one with the shortest processing time but selects one at random.
(2) VNS2 operates on the OS. The process of VNS2 is as follows: Step 1: Randomly select two jobs J_1 and J_2 such that the number of operations of J_1 is not greater than that of J_2, and record the positions of J_1 and J_2, respectively; Step 2: From left to right, place each operation of J_1 in sequence at a position of an operation of J_2, and place each operation of J_2 in the remaining positions.
For the OS of the chromosome: 3 2 1 2 3 1 1. Randomly generate two jobs J 1 and J 2 : e.g. 1 and 3. Since the number of operations of job 3 is less than the number of operations of job 1, set J 1 = 3 and J 2 = 1. After exchanging operations of J 1 and J 2 , the new OS is: 1 2 3 2 1 3 1.
(3) VNS3 operates on the OS. The method of VNS3 is as follows: Step 1: Randomly select a gene segment t whose length is less than the total number of operations; Step 2: Reverse the sequence of the gene segment and reinsert it at the original position.
(4) VNS4 operates on the OS. The method of VNS4 is as follows: Step 1: Randomly select a gene segment t whose length is less than the total number of operations; Step 2: After randomly changing the order of the gene segment t, reinsert it at the original position.

E. SCOUT BEE PHASE
In the classic ABC algorithm, if a nectar source has not been updated after limit generations, it is discarded, the scout bee mode is activated to generate a new nectar source, and the optimization continues. In this paper, in order to increase the diversity of the population, if a nectar source is not updated after limit generations, we use the following method to generate a new nectar source during the scout bee phase: Step 1: Randomly select a non-dominated solution S from the PA; perform crossover and mutation between S and an optimal solution in the PA to generate a new solution S_t; Step 2: If S_t is better than S, S_t replaces S. Through this process, the search behavior of the entire ABC algorithm is enriched, and the premature convergence of the algorithm can be avoided to a certain extent.
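The segment-based neighborhood moves VNS3 and VNS4 described in subsection D might be sketched as follows. Both preserve the multiset of job numbers, so the resulting OS is always a feasible schedule; for simplicity the random segment here may span the whole sequence, a slight relaxation of the stated length constraint.

```python
import random

def vns3(os_seq):
    """VNS3 sketch: reverse a randomly chosen gene segment of the OS."""
    seq = list(os_seq)
    i, j = sorted(random.sample(range(len(seq) + 1), 2))  # segment [i, j)
    seq[i:j] = reversed(seq[i:j])
    return seq

def vns4(os_seq):
    """VNS4 sketch: randomly reorder a randomly chosen gene segment of the OS."""
    seq = list(os_seq)
    i, j = sorted(random.sample(range(len(seq) + 1), 2))  # segment [i, j)
    segment = seq[i:j]
    random.shuffle(segment)
    seq[i:j] = segment
    return seq
```

Feasibility follows from the occurrence-based OS encoding: any reordering of the job numbers is still interpreted as a valid operation sequence.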

F. MULTIOBJECTIVE PROBLEM
Each chromosome in the population needs the following two values: (1) the Pareto rank i_rank of each solution (computed with the fast non-dominated sorting method proposed by Deb et al. [31] in 2002); (2) the Crowding Distance D_i of each solution, calculated using Equations (14) and (15). If two chromosomes are at the same Pareto level, the chromosome with the larger Crowding Distance is preferred.
Here, i and j represent two different chromosomes, f_s and f_t represent different objective values of each chromosome, n is the number of solutions, and d_{i,j} is the distance between solutions with the same Pareto ranking.
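Since Equations (14) and (15) are not reproduced here, the sketch below uses the standard NSGA-II crowding distance of Deb et al. [31], which the text points to; it is an assumed stand-in for the paper's exact formulas.

```python
def crowding_distance(objs):
    """NSGA-II-style crowding distance for one front (Deb et al. [31]).
    `objs[i]` is the tuple of objective values of solution i; boundary
    solutions on each objective receive infinite distance."""
    n = len(objs)
    dist = [0.0] * n
    if n == 0:
        return dist
    for t in range(len(objs[0])):           # one pass per objective f_t
        order = sorted(range(n), key=lambda i: objs[i][t])
        f_lo, f_hi = objs[order[0]][t], objs[order[-1]][t]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_hi == f_lo:
            continue                        # all solutions equal on f_t
        for a in range(1, n - 1):
            i = order[a]
            # normalized gap between the two neighbors along objective t
            dist[i] += (objs[order[a + 1]][t] - objs[order[a - 1]][t]) / (f_hi - f_lo)
    return dist
```

Boundary solutions are always kept (infinite distance), and interior solutions in sparse regions of the front receive larger values, which is why the larger distance is preferred within one Pareto level.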

IV. HABC ALGORITHM FLOW
Based on the above analysis and discussion, the HABC algorithm for solving the MLFJSP is as follows: Step 1: Initialize the parameters of the algorithm: determine the population size P and the size of the PA (one third of P); determine the number of employed bees (N_eb), onlooker bees (N_ob), and scout bees (N_sb); set the number of iterations T, the number of neighborhood searches VT, the control number limit, and other parameters.
Step 2: Initialization. Encode and calculate the fitness value of each individual. Calculate the Pareto optimal solution set of the current population using the fast non-dominated sorting method. Compute the Pareto ranking and Crowding Distance of each chromosome according to the method described in subsection F of Chapter III, and save the best 20 percent of the chromosomes in the population into the Pareto archive (PA).
Step 3: Determine whether the following convergence criteria are satisfied. If either of them is met, the optimization process ends; go to Step 7. Otherwise, go to Step 4.
(1) The number of iterations reaches a given upper bound T; in this paper, T = 10 × n × m. (2) No new optimal solution has appeared for h consecutive generations (in this algorithm, h = 10).
Step 4 (Employed bee phase): Step 4.1: Crossover. Perform crossover according to the steps described in subsection C of Chapter III, and calculate the Pareto optimal solution set and update the PA.
Step 4.2: Mutation. Perform mutation according to subsection C of Chapter III, update the PA.
Step 5 (Onlooker bee phase): Perform neighborhood search in the PA, the detailed description of the neighborhood search process is described in subsection D of Chapter III.
Step 6 (Scout bee phase): If the nectar source is not updated after the limit generation, the algorithm enters the scout bee phase, and the scout bees update mechanism is performed according to the method in section E.
Step 7: Output the Pareto optimal solution, and end. The overall flow chart of the HABC algorithm is shown in Figure 2.
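The overall flow of Steps 1–7 can be summarized as a skeleton in which the phases are injected as callables; this is a structural sketch of the loop, not the authors' implementation, and testing stagnation by comparing archive snapshots is a simplifying assumption.

```python
def habc_loop(pop, evaluate, employed, onlooker, scout, update_pa, T, h):
    """Skeleton of the HABC flow. `employed`, `onlooker`, `scout` and
    `update_pa` are stand-ins for the phases described in Chapter III;
    `evaluate` maps a solution to its fitness."""
    pa = update_pa([], pop, evaluate)          # Step 2: seed the Pareto archive
    stale = 0
    for _ in range(T):                         # Step 3: stop on either criterion
        if stale >= h:                         # no archive change for h generations
            break
        pop = employed(pop)                    # Step 4: crossover and mutation
        pa_new = update_pa(pa, pop, evaluate)
        pop = onlooker(pop, pa_new)            # Step 5: neighborhood search in the PA
        pa_new = update_pa(pa_new, pop, evaluate)
        pop = scout(pop, pa_new)               # Step 6: scout bee update mechanism
        stale = stale + 1 if pa_new == pa else 0
        pa = pa_new
    return pa                                  # Step 7: output the Pareto set
```

With trivial stand-in operators (e.g., a decrementing "employed bee" on a scalar objective), the loop drives the archive toward the optimum and then halts on the stagnation criterion.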

V. SIMULATION AND ANALYSIS
In order to verify the performance of the HABC algorithm, the standard Kacem instances were used in the simulations. The algorithm was written in Python (Python 3.7.3) and run on a MacBook Pro with an Apple M1 processor and 8 GB RAM. The parameters in the following simulation experiments were obtained through the well-known Taguchi test method [32] across multiple test designs. Due to the length of the paper, we do not describe them in more detail here.

A. METRICS FOR ALGORITHM EVALUATION
Three comparison metrics are applied to evaluate the performance of the algorithms in this paper [33], [34].
Mean ideal distance (MID): this metric calculates the closeness between each computed Pareto solution (f_{1,i}, f_{2,i}) and the ideal point (f_{1,best}, f_{2,best}) according to (16), where f_{1,best} and f_{2,best} are the ideal values of each fitness function, and n is the number of non-dominated solutions. The lower the MID, the better the algorithm.

Diversification Metric (DM):
This metric measures the extension or spread of the solution set of each algorithm. The metric is computed according to (17), where n is the number of objective functions and f_{j,i} is the jth objective value of the ith solution. A higher value of this metric is preferable.
Spacing metric (SM): this metric is adopted to compute the uniformity of the distribution of the Pareto solution set. It can be determined by (18), where n is the number of solutions. The lower the SM, the better the algorithm.
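Since Equations (16)–(18) are not reproduced here, the sketch below uses common textbook forms of the three metrics; they are assumed stand-ins for the paper's exact definitions (the ideal point is passed in explicitly, and SM is computed over consecutive sorted solutions).

```python
import math

def mid(front, ideal):
    """Mean ideal distance: average Euclidean distance of the Pareto
    solutions to the ideal point (a common form of Eq. 16; lower is better)."""
    return sum(math.dist(f, ideal) for f in front) / len(front)

def dm(front):
    """Diversification metric: spread of the front over each objective
    (a common form of Eq. 17; higher is better)."""
    spans = (max(f[j] for f in front) - min(f[j] for f in front)
             for j in range(len(front[0])))
    return math.sqrt(sum(s * s for s in spans))

def sm(front):
    """Spacing metric: mean absolute deviation of consecutive-solution
    distances from their mean (a common form of Eq. 18; lower is better)."""
    pts = sorted(front)
    d = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    dbar = sum(d) / len(d)
    return sum(abs(x - dbar) for x in d) / len(d)
```

For a perfectly evenly spaced front, the spacing metric is zero, which matches the intuition that lower SM means a more uniform distribution.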

B. SIMULATION 1) KACEM INSTANCES WITHOUT ENERGY CONSUMPTION
In order to verify the validity of the HABC algorithm, we selected four classical Kacem [26] instances: the 4 × 5, 8 × 8, 10 × 10, and 15 × 10 instances. For this set of instances, the parameters of the HABC algorithm are set as follows: the number of iterations is 4 × n × m; the population size is 80; the size of the PA is a quarter of the population, that is, 20; and limit is 10. The number of employed bees (N_eb) is 40, the number of onlooker bees (N_ob) is 40, and the number of scout bees (N_sb) is 4. The simulation results are compared with the classic GA algorithm and with the results obtained by several excellent algorithms: a hybrid multiobjective evolution method (H-MOEA) proposed by Xiong et al. [35], a particle swarm optimization algorithm based on variable neighborhood search (MOPSO-VNS) proposed by Huang et al. [36], the Newton-based heuristic algorithm (NBHA) proposed by Miguel et al. [37], and the variable neighborhood weed algorithm (VIWO) proposed by Cao et al. [38]. The data are all taken from the original literature, and the comparison of results is shown in Table 3; ''-'' indicates that the data is not given in the literature. Figure 3 is the Gantt chart of the Kacem 4 × 5 instance, and Figure 4 is the Gantt chart of the Kacem 10 × 10 instance. The performance of each algorithm is evaluated by calculating the MID, DM, and SM metrics of the five compared algorithms. Table 4 compares, for the four Kacem instances, the metric values obtained by the HABC algorithm and the four existing algorithms. The comparison shows that the HABC Pareto optimal solutions are better than those of the other algorithms. As shown in Table 4, for the 4 × 5 instance HABC performs well according to the spacing metric, SM = 0.1 (the lower, the better). For the 8 × 8 instance, the HABC algorithm obtained the smallest mean ideal distance, MID = 2.439 (the lower, the better).
Although the spacing metric SM = 0.0795 is not the smallest of the five algorithms involved in the comparison, it is smaller than those of the NBHA and VIWO algorithms. In Table 4, the minimum MID value, the maximum DM value, and the minimum SM value obtained among the compared algorithms are displayed in bold.

2) KACEM 8 × 8 INSTANCE WITH ENERGY CONSUMPTION
We use the Kacem 8 × 8 instance for this simulation experiment. A random method is used to obtain the loading and unloading times of the operations, and the simulation data are shown in Figure 6. The test parameters are selected as follows: the number of iterations is 2 × n × m; the population size is 100; the size of the PA is a quarter of the population, that is, 25; and limit = 10. The number of employed bees (N_eb) is 50, the number of onlooker bees (N_ob) is 50, and the number of scout bees (N_sb) is 5. Figure 6 lists the loading time, processing time, unloading time, and processing energy consumption of 8 jobs on 8 machines. In this paper, the unit of power is the watt (W), the unit of processing time is the hour (h), and the unit of energy consumption is the kilowatt-hour (kWh). For example, ''380/220'' in the third row and third column of the table means that the loading power and unloading power of the machine are both 380 watts, and the idle power is 220 watts. The ''0.7/5/0.9/1047'' in the fourth row and third column means that the loading time is 0.7 hours, the processing time is 5 hours, the unloading time is 0.9 hours, and the processing power is 1047 watts.
The HABC algorithm is run on the Kacem 8 × 8 instance, and the Gantt chart of the solution with the shortest makespan is shown in Figure 5. The makespan is 22.7 hours, the total energy consumption is 85.92 kWh, and the carbon emission equals 64.95. The corresponding disjunctive graph is shown in Figure 7. S_G and T_G in the disjunctive graph indicate the beginning and the end, respectively. The circles indicate the operations, the solid lines indicate the processing sequence constraints of the operations, the dashed lines indicate the processing sequence constraints of the machines, and the bold solid lines indicate the critical path; the length of the critical path is the makespan of the chromosome. The Gantt chart of the solution with minimum energy consumption obtained by the HABC algorithm is shown in Figure 8. The makespan of this solution is 23.1 hours, but its energy consumption, 83.58 kWh, is the minimum. Figure 9 is a comparison chart of machine workload, which shows the processing time, standby time, and idle time of each machine, and Figure 10 is a comparison chart of machine energy consumption. In Figures 9 and 10, the orange part is the workload during machine processing, the purple part is the standby time of the machine (the time of operation loading or unloading), and the green part is the idle time of the machine. It can be seen from the figures that among the 8 machines, machine M_1 produced 11.8 hours of idle time and 2.6 kWh of idle energy consumption; M_2 produced 5.9 hours of idle time and 1 kWh of idle energy consumption; M_3 and M_6 produced shorter standby times; and M_4, M_5, M_6, and M_7 produced no idle time and no idle energy consumption. Figure 11 shows a comparison chart of the processing energy consumption and standby energy consumption of each machine.

3) BRDATA INSTANCES
In order to further demonstrate the effectiveness of the HABC algorithm, we simulate the international BRdata set. The BRdata set contains 10 instances, ranging from 10 jobs and 6 machines to 20 jobs and 15 machines. The simulation results of the HABC algorithm on the BRdata instances are compared with those of the classic GA algorithm and the IABC algorithm proposed by Li et al. [10]. The number of iterations and the population size are the same for the three algorithms, namely 60 iterations and a population size of 80. The GA algorithm randomly generates the initial population; its crossover and mutation operators use the method in Chapter III Section C, with a crossover probability drawn randomly from 0.6-0.8 and a mutation probability drawn randomly from 0.01-0.5. The parameters of the IABC algorithm are taken from [39]. The other parameters of the HABC algorithm are the same as those in Chapter V Section B. The Gantt chart of the solution with the minimum energy consumption obtained by the HABC algorithm is shown in Figure 12, and Figure 13 shows the disjunctive graph of the Pareto optimal solution of the MK01 instance. The makespan is 65.2 h, the total energy consumption is 227.29 kW·h, and the carbon emission equals 171.81.
It can be seen from the data in Table 5 that the solutions obtained by the HABC algorithm on the MK01 and MK03 instances dominate those obtained by the GA algorithm and the IABC algorithm. For the MK02 instance, although the IABC algorithm achieves the same makespan as the HABC algorithm, it generates more carbon emissions. From the above analysis, it can be seen that the HABC algorithm proposed in this paper is feasible and effective for solving the multiobjective low-carbon flexible job-shop scheduling problem.
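The dominance relation used in this comparison can be stated compactly: with all three objectives (makespan, machine workload, carbon emission) minimized, solution a dominates b if a is no worse in every objective and strictly better in at least one. A minimal sketch, with the second objective vector being a hypothetical competitor rather than a value from Table 5:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a <= b everywhere, a < b somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

habc = (65.2, 227.29, 171.81)   # MK01 values reported above
other = (65.2, 230.00, 175.00)  # hypothetical competing solution
print(dominates(habc, other))   # True: equal makespan, better on the rest
```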

VI. CONCLUSION
This paper proposes a new HABC algorithm for the MLFJSP. The MLFJSP mainly considers three optimization objectives, namely makespan, machine workload, and total carbon emissions. In the HABC algorithm, the initial population adopts a two-layer coding method that combines operation-sequence coding with machine-assignment coding, which not only ensures the load balance of the machines but also reduces the load of bottleneck machines. In the employed bee phase, improved genetic operations (crossover and mutation) are used to speed up the search and enhance the diversity of the population; in the onlooker bee phase, using the nectar-source information shared by the employed bees, an adaptive neighborhood search method is adopted to search effectively near the current best solutions and prevent the loss of the optimal solution.
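The two-layer coding summarized above can be sketched as a pair of vectors: an operation-sequence layer (each job id repeated once per operation, in processing order) and a machine-assignment layer (one machine index per operation). This is an illustrative sketch under the simplifying assumption that every machine is eligible for every operation; the paper's encoding additionally balances machine loads.

```python
import random

def random_chromosome(ops_per_job, n_machines):
    """ops_per_job: {job_id: number_of_operations}.
    Returns (operation-sequence layer, machine-assignment layer)."""
    op_seq = [j for j, k in ops_per_job.items() for _ in range(k)]
    random.shuffle(op_seq)                    # layer 1: operation order
    total_ops = sum(ops_per_job.values())
    machines = [random.randrange(n_machines)  # layer 2: machine per operation
                for _ in range(total_ops)]
    return op_seq, machines

# 3 jobs with 2, 3, and 2 operations on 4 machines.
seq, mach = random_chromosome({0: 2, 1: 3, 2: 2}, 4)
print(seq, mach)
```

The k-th occurrence of job j in the first layer denotes operation k of job j, so any permutation of the sequence layer stays feasible with respect to the precedence constraints.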
The HABC algorithm takes advantage of the information sharing between bee colonies, compensates for shortcomings of genetic algorithms such as poor robustness and premature convergence, and prevents the search from falling into a local optimum while protecting the diversity of the population and accelerating the convergence speed. The final simulation results show that the HABC algorithm can effectively solve the complex multiobjective low-carbon FJSP.
The MLFJSP solved in this paper considers only the three objectives of makespan, total machine workload, and total carbon emissions, while the carbon emissions account only for the energy consumed in the loading, processing, and unloading of operations. However, real enterprises face complex production conditions and disturbances from various uncontrollable factors, such as stability measures, machine stoppages, and order changes. How to address problems closer to the actual workshop situation and extend the FJSP model, for example by changing or adding objectives (such as minimum delay, minimum production cost, and minimum failure rate), is the direction of our future research.