Dynamic Programming Algorithms for Two-Machine Hybrid Flow-Shop Scheduling With a Given Job Sequence and Deadline

In a “Shared Manufacturing” environment, orders are processed in a given job sequence based on the time of receipt of the orders. This paper studies the problem of scheduling two-task jobs in a two-machine hybrid flow-shop subject to a given job sequence, a setting that arises in the production of electronic circuits under shared manufacturing. Each job has two tasks: the first is a flexible task, which can be processed on either of the two machines, and the second is a preassigned task, which can only be processed on the second machine after the first task is finished. Each job has a processing deadline, and three objective functions related to deadlines are considered. The problem is shown to be NP-hard in the ordinary sense for each of the three objective functions; a dynamic programming algorithm (DPA) is presented for each case and its time complexity is given. Computational experiments show the relationship between the running time of the DPA and the problem parameters, and demonstrate the advantages of the DPA in dealing with this problem compared with a branch-and-bound algorithm and an iterated greedy algorithm.


I. INTRODUCTION
A two-machine hybrid flow-shop problem is a type of scheduling problem that arises widely in CNC machining, the production of electronic circuits [1], computer graphics processing [2], [3] and health care systems [4]. For example, the production of an electronic circuit usually needs two procedures. The first procedure usually requires low precision and can be processed by a low-level machine, while the second procedure requires a high-level machine for more precise processing. These high-level machines often utilize detachable tool magazines that allow for off-line setups, so, if necessary, a high-level machine can also process the first procedure by switching to low-precision tools. That is, the first procedure can be processed by either the low-level machine or the high-level machine with the same processing time, but the second procedure can only be processed by the high-level machine. Once a machine starts to process an electronic circuit, the operation must not be interrupted, otherwise the product will be scrapped; hence pre-emption is not allowed in this production scenario.

(The associate editor coordinating the review of this manuscript and approving it for publication was Muhammad Zakarya.)
In a ''Shared Manufacturing'' environment, manufacturing platforms arrange the order-processing sequence based on the importance of the customers or the time of receipt of the orders, i.e., the processing order is given in advance according to some principle. Therefore, in a shared manufacturing environment, the electronic circuit manufacturing problem can be described as two-machine hybrid flow-shop scheduling with a given job sequence.
This two-machine hybrid flow-shop problem can be described as follows. A set of n jobs J = {J_1, J_2, ..., J_n} is processed in a two-stage two-machine flow-shop consisting of machine M_1 at stage 1 and machine M_2 at stage 2. Each job J_i has two tasks A_i and B_i. The first task A_i is a flexible task, which can be processed on either machine M_1 or machine M_2 with the same processing time a_i; the second task B_i is a preassigned task, which can only be processed on machine M_2 for b_i time units and must start after A_i is finished. Each job J_i has a deadline d_i and should be completed before its deadline if possible. If some jobs miss their deadlines, the objective function value related to the deadlines increases, which should be avoided as far as possible. All tasks and machines are available at time 0. Pre-emption is not allowed, i.e., once a task starts being processed on a machine, it must be finished before any other task can be processed on that machine. All tasks are processed in index order (the given-job-sequence constraint), i.e., if tasks A_i and A_j are both assigned to machine M_1 and i < j, then A_i is processed before A_j (the processing sequence of tasks on machine M_2 also meets this requirement). Consequently, all jobs are completed in index order from 1 to n on machine M_2. The goal is to minimize one of the following three objective functions: the maximum lateness (L_max), the total weighted tardiness (Σ w_i T_i) or the weighted number of tardy jobs (Σ w_i U_i).
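As a concrete illustration of the three objectives, they can all be evaluated from a vector of job completion times. The following sketch (the function and variable names are ours, not the paper's) computes L_max, Σ w_i T_i and Σ w_i U_i for given completion times C_i, deadlines d_i and weights w_i:

```python
# Hypothetical helper: evaluate the three deadline-related objectives from
# completion times C, deadlines d and weights w (all equal-length lists).
def objectives(C, d, w):
    lateness = [c - dd for c, dd in zip(C, d)]               # L_i = C_i - d_i
    L_max = max(lateness)                                    # maximum lateness
    total_wT = sum(wi * max(0, li) for wi, li in zip(w, lateness))  # sum w_i T_i
    total_wU = sum(wi for wi, li in zip(w, lateness) if li > 0)     # sum w_i U_i
    return L_max, total_wT, total_wU

print(objectives([3, 5, 6], [2, 6, 5], [1, 2, 3]))  # -> (1, 4, 4)
```

Note that a negative L_max is possible (all jobs early), while the tardiness-based objectives are bounded below by zero.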
In this paper, we consider a two-stage two-machine hybrid flow-shop problem with a given job sequence, which applies to the production of electronic circuits in a shared manufacturing environment. For each of the three objectives, the computational complexity is analysed and a pseudo-polynomial-time dynamic programming algorithm (denoted DPA) is designed. In the three-field notation, problem (1) is a hybrid Flow-Shop problem with a Fixed job sequence whose objective is to minimize the maximum Lateness, so it is denoted FSFL. Similarly, problems (2) and (3) are denoted FSFT and FSFU in the following discussion. The rest of this paper is arranged as follows. Section II presents a brief review of the literature. In Section III, we introduce the basic notation and the characteristics of the optimal solutions, and then prove that the three problems are all NP-hard in the ordinary sense. In Section IV, we present the DPA for problem FSFL.
In Section V we give the DPAs for problem FSFT and FSFU.
Computational experiments are carried out and the results are analyzed in Section VI. Finally, we conclude the paper and suggest future research topics in Section VII.

II. LITERATURE REVIEW
If the presumption of a given job sequence is not considered and the objective is to minimize the maximum completion time (makespan), the problem was first proposed by Wei and He [3] in 2005, who called it the Semi-Hybrid Flow-Shop problem (denoted SHFS). They showed that the problem is NP-hard, and gave a pseudo-polynomial-time algorithm and a polynomial-time approximation algorithm with a worst-case ratio of 2. Wei and Jiang [5] then gave an improved polynomial-time approximation algorithm with a worst-case ratio of 8/5. Later, Wei et al. [6] presented constant-time solution algorithms for the cases with identical jobs and analysed the relationship between the hybrid benefits and the performance difference between the two machines. Research on the makespan objective of this problem is thus relatively mature, but results on other objective functions have not yet been reported. Other typical models for two-stage hybrid flow-shop problems include the following: Vairaktarakis and Lee [1] discussed the problem in which both tasks can be processed on any machine, and gave an approximation algorithm with a worst-case ratio of 1.618; Tan et al. [7] considered a flexible flow-shop scheduling problem with batch processing machines at each stage, and gave an iterative stage-based decomposition approach to solve it; Feng et al. [8] studied a two-stage hybrid flow-shop with uncertain processing times and gave a heuristic algorithm for their problem; Ahonen and Alvarenga [9] proposed a new two-stage hybrid flow-shop problem in which a job's processing time depends on its starting time, and used simulated annealing and tabu search to solve it; Hidri et al. [10] addressed a two-machine hybrid flow-shop scheduling problem with transportation times between the two machines, and presented a heuristic based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times; Zhang et al. [11] considered a hybrid flow-shop problem with four batching machines, and used clustering and a genetic algorithm to compute good solutions. For multi-stage hybrid flow-shop problems, Jiang and Zhang [12] recently investigated an energy-oriented scheduling problem deriving from the hybrid flow shop with limited buffers, and developed an efficient multi-objective optimization algorithm under the framework of the decomposition-based multi-objective evolutionary algorithm. However, none of the above studies considers the presumption of a given job sequence.
In recent years, with the rise of intelligent manufacturing modes such as shared manufacturing and cloud manufacturing, research under the presumption of a given job sequence has become very meaningful. Another important industrial application of the given-job-sequence setting is the scheduling of bar-coding operations in inventory or stock control systems [13]. The earliest scheduling problem with a given-job-sequence constraint was proposed by Shafransky and Strusevich [14]. They studied an open shop problem to minimize makespan with a given job sequence, proved the problem is strongly NP-hard and gave an approximation algorithm with a worst-case ratio of 5/4. Afterwards, Liaw et al. [15] studied the same problem with the objective changed to minimizing the total completion time; they presented a heuristic and a branch-and-bound algorithm for this problem. Lässig et al. [16] introduced the given-job-sequence constraint into the common due-date scheduling problem, and presented a linear-time algorithm for it. Cheref et al. [17] considered an integrated production and outbound delivery scheduling problem with a given job sequence, showed this problem is NP-hard, and gave polynomial-time algorithms for some particular cases. Later, Cheng et al. [18] considered server scheduling on parallel dedicated machines with fixed job sequences to minimize the makespan. They designed a polynomial-time algorithm for the two-machine case, proved the problem is strongly NP-hard when the number of machines is arbitrary, and designed two heuristic algorithms for the case where the number of machines is arbitrary and all loading times are unit.

VOLUME 8, 2020
As can be seen from the above, two-machine hybrid flow-shop scheduling with a given job sequence and deadlines has not yet been investigated with any exact or heuristic method in the literature. Hence, the DPA presented in this article provides a feasible method for this problem. The computational experiments show that the DPA has a clear running-time advantage over the branch-and-bound algorithm, and improves the accuracy of the results by more than 30% compared with the iterated greedy algorithm.

III. SYMBOLIC HYPOTHESIS, STRUCTURE OF SOLUTIONS AND COMPUTATIONAL COMPLEXITY
In this section, we first give the basic notation needed in the following sections. We then analyse the properties of the optimal solutions of SHFS with a given job sequence. Finally, we show that the problems studied in this paper are all NP-hard.

A. NOTATION
The notations used in the rest of this paper are listed in Table 1.

B. THE STRUCTURAL CHARACTERISTICS OF THE OPTIMAL SCHEDULE OF THE PROBLEMS
An optimal schedule is a scheduling scheme that minimizes the objective function of the problem. Whether the objective is the maximum lateness, the total weighted tardiness or the weighted number of tardy jobs, it is easy to see that there is an optimal schedule satisfying the following properties.
Proposition 3.1: There is an optimal schedule in which machine M_1 has no idle time from time 0 to the end of processing.
Proof: Suppose that there is an optimal schedule φ in which machine M_1 has idle time between some successively processed tasks. From schedule φ, we construct another schedule ϕ: all tasks on machine M_1 are processed as early as possible, in the same order as in φ, so that all idle time is eliminated; the tasks on machine M_2 are processed exactly as in φ. For the jobs in V_1, the first tasks on machine M_1 finish in ϕ no later than in φ, and the second tasks on machine M_2 start at the same time in ϕ and φ. Hence the second task of each job in V_1 still starts after its first task is finished in ϕ, so ϕ is feasible. Since the tasks on machine M_2 are processed exactly as in φ, the completion time of each job in ϕ equals that in φ. Therefore the maximum lateness, the total weighted tardiness and the weighted number of tardy jobs in ϕ are all the same as in φ. Since φ is optimal, ϕ is also optimal, and there is no idle time on machine M_1 in schedule ϕ. So Proposition 3.1 holds.
Proposition 3.2: There exists an optimal schedule in which idle time on machine M_2 appears only before the second tasks of some jobs in V_1, and nowhere else.
Proof: Suppose that there is an optimal schedule φ in which machine M_2 has idle time before the tasks of some jobs in V_2. Using an idea similar to the proof of Proposition 3.1, we construct a schedule ϕ: the tasks on machine M_1 and the tasks of the jobs in V_1 on machine M_2 are processed exactly as in φ; the tasks of the jobs in V_2 on machine M_2 are processed as early as possible, in the same order as in φ, so that all idle time before them is eliminated. Since the jobs in V_1 are processed as in φ, the second task of each job in V_1 still starts after its first task is finished, so ϕ is feasible. For V_1, the completion time of each job in ϕ equals that in φ; for V_2, the completion time of each job in ϕ is less than or equal to that in φ. Therefore the maximum lateness, the total weighted tardiness and the weighted number of tardy jobs in ϕ are no more than those in φ. Since φ is optimal, ϕ is also optimal, and there is no idle time before the tasks of the jobs in V_2 on machine M_2 in ϕ. So Proposition 3.2 holds.

Proposition 3.3: There exists an optimal schedule in which the task A_1 of the first job J_1 is processed on machine M_2, and the task A_n of the last job J_n is processed on machine M_1.
Proof: Suppose that there is an optimal schedule φ in which the task A_1 of the first job J_1 is processed on machine M_1. From φ, we construct another schedule ϕ. First, change the processing mode of job J_1, i.e., process A_1 and B_1 together on machine M_2. In φ, since B_1 cannot be processed until A_1 is completed and B_1 is the first task processed on machine M_2, there is an idle period of length a_1 before task B_1 on machine M_2. So, in ϕ, we can process A_1 in this idle period before task B_1 on machine M_2. It is easy to see that the completion time of J_1 is unchanged in ϕ. All other jobs are then processed in the same way and at the same times as in φ. Clearly, the completion times of all jobs are the same in φ and ϕ, so the maximum lateness, the total weighted tardiness and the weighted number of tardy jobs in ϕ are the same as in φ. Since φ is optimal, ϕ is also optimal. This establishes the first part of the proposition; the second part follows in a similar way.
According to Propositions 3.1, 3.2 and 3.3, there must be an optimal schedule of the form shown in Fig. 1: the task A_1 is processed on machine M_2, the task A_n is processed on machine M_1, and runs of continuously processed tasks are separated by the idle periods that occur on machine M_2 before the second tasks of some jobs in V_1. A run of continuously processed tasks between two idle periods, containing no idle time itself, is called a ''Continuous Block''. In Section IV and Section V, continuous blocks will help us design the DPAs for the problems studied in this paper. In the following, we therefore only need to search for an optimal solution among the feasible solutions that satisfy the above three properties.
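The three propositions pin down the whole schedule once the assignment of the flexible tasks to the machines is fixed: process tasks in index order, keep M_1 busy, and start every task as early as possible. A small brute-force baseline (our own illustration, not the paper's DPA; all names are ours) enumerates every assignment and simulates this normalized schedule:

```python
from itertools import product

# Simulate the earliest-start schedule implied by Propositions 3.1-3.2 for a
# fixed assignment of the flexible tasks: assign[i] is 1 (A_i on M1) or 2
# (A_i on M2). Returns the completion time C_i of every job.
def simulate(assign, a, b):
    t1 = t2 = 0                          # current finishing times of M1, M2
    C = []
    for i, m in enumerate(assign):
        if m == 1:                       # A_i on M1 (M1 never idles)
            t1 += a[i]
            finish_A = t1
        else:                            # A_i on M2
            t2 += a[i]
            finish_A = t2
        t2 = max(t2, finish_A) + b[i]    # B_i on M2, only after A_i finishes
        C.append(t2)
    return C

# Exponential-time baseline for FSFL: try all 2^n assignments.
def brute_force_Lmax(a, b, d):
    return min(max(c - dd for c, dd in zip(simulate(s, a, b), d))
               for s in product((1, 2), repeat=len(a)))

print(brute_force_Lmax([2, 1, 2], [1, 2, 1], [4, 6, 8]))  # -> -1
```

Such a baseline is only usable for tiny instances, but it is handy for cross-checking a DPA implementation on small random tests.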

C. COMPUTATIONAL COMPLEXITY ANALYSIS
Now, we use a polynomial-time Turing reduction to prove that FSFL, FSFT and FSFU are all NP-hard.
Theorem 3.4: Problem FSFL is NP-hard.
Proof: First, we present an instance of the Partition problem, a known NP-hard problem, denoted Instance I: given a set of integers S = {s_1, s_2, ..., s_n} and the integer bound s = (1/2) Σ_{i=1}^{n} s_i, is there a partition S_1, S_2 of S such that S = S_1 ∪ S_2, S_1 ∩ S_2 = ∅ and Σ_{s_i ∈ S_1} s_i = Σ_{s_i ∈ S_2} s_i = s? We create an instance of FSFL with n + 2 jobs J_0, J_1, ..., J_{n+1}, denoted as Instance II. Let the deadline of job J_i be d_i = 0 (i = 0, 1, ..., n, n + 1). If all jobs are processed in index order, is there a feasible schedule whose maximum lateness is L_max = 2s + nε?
Next, we prove that the solutions of Instance I and Instance II can be derived from each other. Let S_1 and S_2 be a partition of S in Instance I. A feasible schedule is constructed as follows. Let
all tasks be processed in index order, with A_i processed on machine M_1 if J_i ∈ V_1 and A_j processed on machine M_2 if J_j ∈ V_2. The resulting maximum lateness is 2s + nε, so this schedule is a solution of Instance II, as depicted in Fig. 2. Assume now that there is a feasible schedule whose maximum lateness is exactly 2s + nε. Since the deadlines of all jobs are 0, the maximum lateness of this feasible schedule equals its makespan, so the makespan is also 2s + nε. Since the sum of the processing loads of all jobs is 4s + 2nε, this feasible schedule is also an optimal schedule and no idle time is allowed on either machine. By Proposition 3.3, without loss of generality, let φ be an optimal schedule of Instance II with maximum lateness L_max = 2s + nε in which task A_0 of the first job J_0 is processed on machine M_2 and task A_{n+1} of the last job J_{n+1} is processed on machine M_1. Let V_1 be the job subset {J_i | A_i is processed on M_1 in φ} and V_2 be the job subset {J_i | A_i is processed on M_2 in φ}. Since the sum of the processing loads of all jobs is 4s + 2nε and the maximum lateness is 2s + nε, the loads of the two machines are both 2s + nε, i.e., the load of machine M_1 is Σ_{J_i ∈ V_1} a_i = 2s + nε and the load of machine M_2 is also 2s + nε. Since the Partition problem is a known NP-hard problem and the reduction takes polynomial time, problem FSFL is NP-hard.
Corollary 3.5: By simply letting ε = 0 in the proof of Theorem 3.4, problem FSFL remains NP-hard even if b_i = 0 for i = 1, 2, ..., n.

Theorem 3.6: Problems FSFT and FSFU are both NP-hard.

Proof: From Instance I, we create an instance of FSFT, denoted Instance III, and an instance of FSFU, denoted Instance IV.
Instance III: The job set V = {J_0, J_1, J_2, ..., J_n, J_{n+1}} is defined as in the proof of Theorem 3.4, but let the deadline d_i = 2s + nε and the weight w_i = 1 for i = 0, 1, 2, ..., n, n + 1. If all jobs are processed in index order, is there a schedule with total weighted tardiness Σ w_i T_i = 0?

Instance IV: Replace Σ w_i T_i = 0 in Instance III with Σ w_i U_i = 0.

By techniques similar to the proof of Theorem 3.4, it can be shown that the solutions of Instance I and Instance III (respectively Instance IV) can be derived from each other. So FSFT and FSFU are both NP-hard.
Since we give pseudo-polynomial-time algorithms for these problems in the following sections, FSFL, FSFT and FSFU are all NP-hard in the ordinary sense.
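The "ordinary sense" classification is no accident: the reduction is from Partition, and Partition itself admits a textbook pseudo-polynomial subset-sum DP, so hardness disappears once processing times are bounded by a polynomial. The following sketch (our own illustration, standard technique) decides Partition in O(n · ΣS) time:

```python
# Textbook subset-sum DP for the Partition problem: decide whether S splits
# into two subsets of equal sum. Runs in O(n * sum(S)) time -- pseudo-
# polynomial, mirroring why FSFL/FSFT/FSFU are NP-hard only in the ordinary
# sense and still admit pseudo-polynomial DPAs.
def has_partition(S):
    total = sum(S)
    if total % 2:                 # odd total: no equal split possible
        return False
    half = total // 2
    reachable = {0}               # subset sums reachable with a prefix of S
    for s in S:
        reachable |= {r + s for r in reachable if r + s <= half}
    return half in reachable

print(has_partition([3, 1, 1, 2, 2, 1]))  # -> True  (3+1+1 = 2+2+1 = 5)
print(has_partition([2, 2, 3]))           # -> False (odd total)
```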

IV. A DYNAMIC PROGRAMMING ALGORITHM FOR FSFL
According to the structure of the optimal schedule obtained from Propositions 3.1 and 3.2, the optimal schedule is composed of several continuous blocks and the idle periods between them, as shown in Fig. 1. The DPA we design therefore consists of two stages: the first stage constructs the optimal continuous blocks, and the second stage concatenates optimal continuous blocks, separated by idle periods, into an optimal schedule for the whole problem.

A. CONSTRUCT THE OPTIMAL CONTINUOUS BLOCK
First, a strict definition of a continuous block of FSFL is given as follows.
Definition 4.1: A five-element Continuous Block in state (m, i, j, h, l) is a subschedule for jobs J_i, J_{i+1}, ..., J_j satisfying the conditions illustrated in Fig. 3. A continuous block (m, i, j, h, l) can be obtained in one of two ways: by adding a job J_j ∈ V_1 to a continuous block (m, i, j − 1, h′, l′) (see Fig. 4a), or by adding a job J_j ∈ V_2 to a continuous block (m, i, j − 1, h′, l′) (see Fig. 4b). Let f(m, i, j, h, l) be the maximum lateness of the optimal continuous block composed of the jobs J_i, J_{i+1}, ..., J_j. For i < j, according to the definition of the continuous block (m, i, j, h, l), the value interval of the gap l is [l̲, l̄]. The ranges of the other parameters are obvious: m ∈ {1, 2}, 1 ≤ i < j ≤ n and 0 ≤ h ≤ Σ_{t=i}^{j} a_t. We now present the DPA for f(m, i, j, h, l), denoted DPA CB(L).
Case 1: A_j is processed on M_1 as shown in Fig. 4a.
Case 2: A j is processed on M 2 as shown in Fig. 4b.
The initial conditions in DPA CB(L) are obviously valid. The recursions are analysed as follows. For a feasible combination of m, i, j, h, l, the derivation of f(m, i, j, h, l) considers the two possible assignments of the flexible task of the last job J_j. In Case 1, task A_j is processed on machine M_1 as shown in Fig. 4a. We have h′ + a_j = h and l′ + b_j − a_j = l. Thus h′ = h − a_j and l′ = l + a_j − b_j, as given in Equation (4), subject to the condition l′ ≥ a_j, i.e., l ≥ b_j, which is satisfied anyway according to the value range of l. In Case 2, task A_j is processed on machine M_2 as shown in Fig. 4b. We have h′ = h and l′ + a_j + b_j = l, i.e., h′ = h and l′ = l − a_j − b_j, as given in Equation (5), subject to the condition l′ = l − a_j − b_j ≥ 0, i.e., l ≥ a_j + b_j.

B. COMPLETE DYNAMIC PROGRAMMING ALGORITHM
After the continuous blocks are constructed, a complete schedule can be generated by concatenating appropriate optimal continuous blocks in a backward recursion. According to Propositions 3.1 and 3.2, every two adjacent optimal continuous blocks are separated by an idle period on machine M_2. Let us first define the partial schedule set (m, i), which represents the set of all partial schedules of the job subset {J_i, J_{i+1}, ..., J_n} in which the first job J_i is processed on machine M_m. Denote by g(m, i) the minimum of the maximum lateness among all partial schedules in set (m, i). A DPA for calculating g(m, i) is given as follows. For ease of narration, we set up a dummy job J_{n+1} with a_{n+1} = b_{n+1} = +∞ beforehand.
DPA Sch(L). Initial conditions and recursions (A_{j+1} can only be processed on M_1, as shown in Fig. 5) are applied for each m, i satisfying m ∈ {1, 2}, 1 ≤ i ≤ n; the goal is min L_max = min_{m∈{1,2}} {g(m, 1)}. The initial conditions and the goal are apparently true. The recursions are analysed as follows. In the recursions of Sch(L), when the optimal continuous block (m, i, j, h, l) is given, a partial schedule in (m, i) can be composed of the block (m, i, j, h, l) followed by a partial schedule in (m′, j + 1). Since there is an idle period between them, according to Proposition 3.2 we must have m′ = 1, i.e., task A_{j+1} is processed on M_1 (see Fig. 5). It is easy to see that g(m, i) = min_{j,h,l} max{ f(m, i, j, h, l), h + g(1, j + 1) }. Since there is an idle period between jobs J_j and J_{j+1}, we must have l < a_{j+1}. So Equation (6) holds.
Next, we give the time complexity of DPA Sch(L).
Theorem 4.2: Problem FSFL is solvable in O(n²(Σ_{i=1}^{n} a_i)²) time, so problem FSFL is NP-hard in the ordinary sense.
Proof: To calculate the optimal continuous blocks, we need to search the variable l from l̲ to l̄, the variables i, j from 1 to n, the variable h from 0 to Σ_{i=1}^{n} a_i and the variable m from 1 to 2. Hence it takes O(n²(Σ_{i=1}^{n} a_i)²) time to calculate all optimal continuous blocks. Once all optimal continuous blocks are given, there are O(n) states in the calculation of the optimal schedule, each of which takes at most O(n(Σ_{i=1}^{n} a_i)²) time due to the loops over all possible subscripts of the min operator. So the running time for calculating the optimal schedule is also within O(n²(Σ_{i=1}^{n} a_i)²). Thus problem FSFL can be solved in O(n²(Σ_{i=1}^{n} a_i)²) time, and problem FSFL is NP-hard in the ordinary sense.
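To make the pseudo-polynomial flavour tangible, here is a compact alternative DP for FSFL (our own sketch, not the paper's two-stage block-based DPA): scan the jobs in the given order and keep, for each reachable prefix state (h = load of M_1, t2 = finishing time of M_2), the smallest maximum lateness so far. The number of distinct states is bounded by products of processing-time sums, matching the spirit of Theorem 4.2:

```python
# Forward pseudo-polynomial DP for FSFL (illustrative alternative to the
# paper's DPA). State: (h, t2) after a prefix of jobs, where h is the load of
# M1 (no idle, Proposition 3.1) and t2 the finishing time of M2; value: the
# smallest maximum lateness achievable for that prefix and state.
def dp_Lmax(a, b, d):
    INF = float("inf")
    states = {(0, 0): -INF}              # empty prefix: L_max = -infinity
    for ai, bi, di in zip(a, b, d):
        nxt = {}
        for (h, t2), L in states.items():
            # Option 1: A_i on M1; B_i on M2 after both A_i and prior M2 work.
            k1 = (h + ai, max(t2, h + ai) + bi)
            # Option 2: A_i on M2, immediately followed by B_i on M2.
            k2 = (h, t2 + ai + bi)
            for key in (k1, k2):
                val = max(L, key[1] - di)        # job completes at key[1]
                if val < nxt.get(key, INF):
                    nxt[key] = val
        states = nxt
    return min(states.values())

print(dp_Lmax([2, 1, 2], [1, 2, 1], [4, 6, 8]))  # -> -1
```

The transitions mirror the two cases of DPA CB(L) (A_i on M_1 versus on M_2), but without the explicit block decomposition; on the three-job example above it agrees with exhaustive enumeration.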

V. DYNAMIC PROGRAMMING ALGORITHMS FOR PROBLEM FSFT AND FSFU
Next, we use techniques similar to those in Section IV to design DPAs for problems FSFT and FSFU. The tardiness of the jobs within the continuous blocks and partial schedules created by the procedures for the maximum lateness cannot be determined directly, because it depends on when the block starts. We therefore determine the optimal objective value for the total weighted tardiness or the weighted number of tardy jobs within the continuous blocks and partial schedules subject to the condition that the first job starts at a specified time point. To this end, we introduce an extra parameter s into the continuous blocks and partial schedules: the start time of the continuous block or partial schedule, i.e., the length of the interval from time 0 to the start of its first task.
Definition 5.1: A six-element Continuous Block in state (m, i, j, h, l, s) is a subschedule for jobs J_i, J_{i+1}, ..., J_j satisfying the conditions illustrated in Fig. 6. A continuous block (m, i, j, h, l, s) can be obtained in one of two ways: by adding a job J_j ∈ V_1 to a continuous block (m, i, j − 1, h′, l′, s′) (see Fig. 6a), or by adding a job J_j ∈ V_2 to a continuous block (m, i, j − 1, h′, l′, s′) (see Fig. 6b). Let f(m, i, j, h, l, s) be the total weighted tardiness of the optimal continuous block composed of the jobs J_i, J_{i+1}, ..., J_j. To facilitate notation, we denote by T_i(C) = max{0, C − d_i} the tardiness of job J_i completing at time C in some continuous block. For ease of narration, we set up a dummy job J_0 with a_0 = b_0 = 0 for the following procedures.
DPA CB(T). Initial conditions: given for any s ∈ [0, Σ_{t=0}^{i−1} a_t]. Recursions: for any s ∈ [0, Σ_{t=0}^{j−1} a_t], two cases are distinguished.
Case 1: A_j is processed on M_1 as shown in Fig. 6a.
Case 2: A j is processed on M 2 as shown in Fig. 6b.
Next, we use techniques similar to those in Section IV to construct the optimal partial schedules for problem FSFT. Let us first define the partial schedule set (m, i, s), which represents the set of all partial schedules of the job subset {J_i, J_{i+1}, ..., J_n} in which the first job J_i starts at time s on machine M_m. Denote by g(m, i, s) the minimum total weighted tardiness among all partial schedules in set (m, i, s). For ease of narration, we again set up a dummy job J_{n+1} with a_{n+1} = b_{n+1} = +∞ beforehand. A DPA for calculating g(m, i, s), denoted DPA Sch(T), is given as follows.
Recursions (A_{j+1} can only be processed on M_1, analogously to Fig. 5). Regarding the time complexity of DPA Sch(T), an analysis similar to that in Section IV easily yields the following theorem.
Theorem 5.2: Problem FSFT is solvable in O(n²(Σ_{i=1}^{n} a_i)³) time, so it is also NP-hard in the ordinary sense.
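The same forward-DP idea sketched for FSFL adapts directly to the tardiness-based objectives (again our own illustration, not DPA CB(T)/Sch(T)): the state value becomes an accumulated penalty, and swapping the per-job penalty function switches between Σ w_i T_i and Σ w_i U_i, exactly as the paper does by replacing T_i(C) with U_i(C):

```python
# Forward pseudo-polynomial DP for FSFT/FSFU (illustrative sketch). State:
# (h, t2) after a prefix of jobs; value: minimum accumulated penalty. The
# penalty callback receives (C, d_i, w_i) for the job just completed.
def dp_total_penalty(a, b, d, w, penalty):
    INF = float("inf")
    states = {(0, 0): 0}                 # empty prefix: zero penalty
    for ai, bi, di, wi in zip(a, b, d, w):
        nxt = {}
        for (h, t2), P in states.items():
            for key in ((h + ai, max(t2, h + ai) + bi),   # A_i on M1
                        (h, t2 + ai + bi)):               # A_i on M2
                val = P + penalty(key[1], di, wi)         # job completes at key[1]
                if val < nxt.get(key, INF):
                    nxt[key] = val
        states = nxt
    return min(states.values())

wT = lambda C, d, w: w * max(0, C - d)    # weighted tardiness  w_i * T_i(C)
wU = lambda C, d, w: w if C > d else 0    # weighted tardy flag w_i * U_i(C)

print(dp_total_penalty([2, 1, 2], [1, 2, 1], [2, 6, 5], [1, 2, 3], wT))  # -> 4
```

Because the objective accumulates over jobs rather than taking a maximum, no extra start-time parameter is needed in this simplified prefix formulation; the paper's block-based DPAs require the parameter s precisely because blocks are built in isolation from the schedule's origin.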
The above two procedures for problem FSFT can easily be adapted to problem FSFU with the same time complexity by replacing the function T_i(C) with U_i(C) in Equations (8), (9) and (10) of DPA CB(T), where U_i(C) = 1 if C > d_i and U_i(C) = 0 otherwise. Hence problem FSFU is also solvable in O(n²(Σ_{i=1}^{n} a_i)³) time and is NP-hard in the ordinary sense.

VI. COMPUTATIONAL EXPERIMENTS

A. TIME COMPLEXITY ANALYSIS OF OUR ALGORITHM
In Sections III-V, we presented the computational complexity of the three scheduling problems considered in this paper, their DPAs and the theoretical time complexities of the algorithms. The detailed results are summarized in Table 2.
To demonstrate the practical performance of the DPAs, we conducted computational experiments for problem FSFT, whose objective is to minimize the total weighted tardiness. The computational experiments were implemented in MATLAB R2017b on a notebook computer equipped with an Intel Core i7 5500U CPU, 8 GB RAM and the Windows 10 64-bit operating system. The weight w_i of each job and the processing times were generated randomly; the average running times are reported in Table 3, where the leftmost column represents the number of jobs n and the top row represents the sum of the processing times of all flexible tasks, Σ_{i=1}^{n} a_i. From Table 3, even when the number of jobs reaches 200 and Σ_{i=1}^{n} a_i reaches 500, the average running time stays below 138 s; when Σ_{i=1}^{n} a_i reaches 1000, the running time stays below 567 s. This shows that the actual running time of the DPA is acceptable even when the parameters are large.
Next, according to the data in Table 3, we first analyse how the running time changes with Σ_{i=1}^{n} a_i for different numbers of jobs. For n equal to 20, 100 and 200, the change of running time is shown in Fig. 8. Clearly, the larger the number of jobs, the greater the slope of the curve, i.e., the growth rate of the running time accelerates as Σ_{i=1}^{n} a_i increases. Then, again from Table 3, we consider how the running time changes with the number of jobs n for different values of Σ_{i=1}^{n} a_i. For Σ_{i=1}^{n} a_i equal to 100, 500 and 1000, the change of running time is shown in Fig. 9. The larger Σ_{i=1}^{n} a_i is, the faster the running time grows as the number of jobs increases.

B. COMPARISON WITH TRADITIONAL ALGORITHMS
Because no other algorithms specifically designed for this problem have been reported, we mainly analyse the effectiveness of this DPA by comparing it with other common algorithms that perform well on similar hybrid flow-shop problems.
In the existing research, two main kinds of algorithms are used to solve hybrid flow-shop problems similar to the one considered in this paper: exact algorithms with exponential time complexity, such as enumeration and branch-and-bound; and heuristic algorithms that give approximate solutions in polynomial time, such as ant colony algorithms and greedy algorithms [19]. A review of the literature shows that branch-and-bound is a commonly used and effective method for obtaining exact solutions of hybrid flow-shop problems [10], [20]-[22], and, among heuristics, many effective algorithms for hybrid flow-shop problems are based on the iterated greedy idea [23]-[25]. We therefore again take problem FSFT as an example to compare the effectiveness of the DPA (given in this paper), a branch-and-bound algorithm (denoted B&B, based on Lee and Kim [22]) and an iterated greedy algorithm (denoted IG, based on Wang and Wang [24]) on the two-machine hybrid flow-shop problem considered in this paper. The worst-case time complexities of the three algorithms for problem FSFT are as follows: the DPA is an exact pseudo-polynomial-time algorithm with time complexity O(n²(Σ_{i=1}^{n} a_i)³); B&B is an exact exponential algorithm with time complexity O(n·2^n) [22]; IG is a polynomial-time approximation algorithm with time complexity O(n²) [24]. IG thus has the advantage in worst-case time complexity, B&B looks the worst, and the DPA lies between them. However, taking into account both the accuracy of the solutions and the actual running times, the relative performance of the three algorithms still needs to be verified by the following computational experiments.
The software and hardware environment of these computational experiments was the same as in the previous subsection. Considering that the time complexities of B&B and IG are independent of Σ_{i=1}^{n} a_i, in order to compare under the same standard, the computational experiments in this subsection were no longer classified according to Σ_{i=1}^{n} a_i as in the previous subsection; we only grouped the experiments by the number of jobs. The specific experimental settings were as follows: the weight w_i, the processing time a_i of the first task, the processing time b_i of the second task and the deadline d_i were generated as uniformly distributed random numbers within the intervals [0, 1], [0, 10], [0, 10] and [0, 1000], respectively. The first experiment concerned small-scale instances with 10 ≤ n ≤ 50 in steps of 5, and the second concerned large-scale instances with 100 ≤ n ≤ 250 in steps of 50. We generated 20 random test instances for each n. Table 4 shows the results of the two kinds of experiments.
From Table 4 we see that even when the number of jobs is only 50 (not a large value), most of the instances cannot be completed by the B&B algorithm within 1200 seconds. Clearly, B&B is suitable only for small-scale instances; for large-scale instances its running time is prohibitive.
For small-scale instances, the average running times of DPA, B&B and IG are shown in Fig. 10. The running time of B&B grows with the number of jobs n much faster than that of the other two algorithms; when the number of jobs exceeds 45, B&B runs too long to be practical. The difference in running time between DPA and IG is not significant for small-scale instances.
For large-scale instances, the average running times of DPA and IG are shown in Fig. 11. Compared with IG, DPA's disadvantage in running time is obvious and widens as the number of jobs n increases. However, the average running times of DPA remain within an acceptable range even for large-scale instances.
Although IG has an advantage in running time, it yields only an approximate solution, whereas DPA and B&B give the exact solution. The Average Relative Percentage Deviation [26] (denoted ARPD) is commonly used to measure the quality of a heuristic. Since the optimal solution is available from B&B and DPA, we define ARPD = (C_A − C*)/C* × 100%, where C_A is the objective value produced by algorithm A and C* is the optimal value. We use ARPD to measure the approximation degree of IG: the closer the ARPD of IG is to 0, the better its approximation. Using the results of the small-scale and large-scale experiments, we obtain the ARPD of IG listed in Table 5. The ARPD of IG for all instances is shown in Fig. 12: it is basically stable between 0.3 and 0.6 once the number of jobs exceeds 25, and decreases slightly as the number of jobs increases.
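The ARPD computation above can be sketched as follows (a minimal illustration; the function and argument names are ours):

```python
def arpd(heuristic_values, optimal_values):
    """Average Relative Percentage Deviation of a heuristic.

    heuristic_values: objective values returned by the heuristic (here IG)
    optimal_values:   exact optima (here from DPA or B&B), same instances
    """
    deviations = [(c_a - c_star) / c_star * 100.0
                  for c_a, c_star in zip(heuristic_values, optimal_values)]
    return sum(deviations) / len(deviations)
```

For example, if the heuristic returns 110 and 105 on two instances whose optima are both 100, the ARPD is (10% + 5%) / 2 = 7.5%.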

C. SUMMARY OF ALGORITHM COMPARISON
Considering both running time and the accuracy of the results, we conclude: (1) When the number of jobs is less than 15, the differences in running time among the three algorithms have little practical effect; DPA and B&B give the exact solution and therefore have a clear advantage in accuracy. (2) For small-scale instances with more than 15 jobs, DPA has an obvious running-time advantage over B&B and an obvious accuracy advantage over IG, while its running time differs little from that of IG. (3) For large-scale instances, although DPA is slower than IG, its running time is still acceptable (average running time below 350 seconds) and its solutions are more than 30% better in accuracy; B&B, however, requires too much running time to be used in practice.

VII. CONCLUSIONS
This paper discusses a two-stage two-machine hybrid flow-shop problem, which is widely applicable in shared manufacturing, cloud manufacturing and bar-coding operations in inventory or stock control systems. We consider three objective functions with respect to deadlines: minimizing the maximum lateness (L_max), the total weighted tardiness (∑ w_i T_i) and the weighted number of tardy jobs (∑ w_i U_i). First, we prove that all three problems are NP-hard in the ordinary sense. Then a pseudo-polynomial time DPA is designed for each objective and its time complexity is analysed. Finally, the computational experiments show that when the number of jobs is large, the growth rate of the running time accelerates with increasing ∑_{i=1}^n a_i, and when ∑_{i=1}^n a_i is large, it accelerates with increasing n. The experiments also show that DPA has an obvious running-time advantage over B&B and a more-than-30% accuracy advantage over IG.
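As a concrete illustration of the pseudo-polynomial DP idea, here is a minimal sketch for the L_max objective under our reading of the model (the state encoding and names are ours; this is not the paper's exact algorithm):

```python
def schedule_lmax(jobs):
    """Minimize the maximum lateness L_max in the two-machine hybrid
    flow-shop with a fixed job sequence (illustrative sketch).

    jobs: list of (a_i, b_i, d_i).  Task 1 (time a_i) may run on M1 or M2;
    task 2 (time b_i) runs on M2 after task 1 finishes.
    """
    # DP state: (T1, T2) = current completion times of M1 and M2, mapped to
    # the smallest L_max achievable so far.  T1 ranges over subset sums of
    # the a_i, which is what makes the DP pseudo-polynomial.
    states = {(0, 0): float("-inf")}
    for a, b, d in jobs:
        nxt = {}
        for (t1, t2), lmax in states.items():
            # Option A: first task on M1, then second task on M2.
            c = max(t1 + a, t2) + b
            key, val = (t1 + a, c), max(lmax, c - d)
            if val < nxt.get(key, float("inf")):
                nxt[key] = val
            # Option B: both tasks on M2, back to back.
            c = t2 + a + b
            key, val = (t1, c), max(lmax, c - d)
            if val < nxt.get(key, float("inf")):
                nxt[key] = val
        states = nxt
    return min(states.values())
```

For example, schedule_lmax([(2, 3, 5), (1, 2, 7), (4, 1, 9)]) returns 0: the best schedule finishes every job no later than its deadline, and job 1 (which needs at least a_1 + b_1 = 5 time units) finishes exactly at its deadline.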
For future research, we could design efficient polynomial-time approximation algorithms for these problems and compare their efficiency and effectiveness with those of the DPAs in this paper.