An Improved Evolution Strategy Hybridization With Simulated Annealing for Permutation Flow Shop Scheduling Problems

The Flow Shop Scheduling Problem (FSSP) has significant applications in industry and has therefore been extensively addressed in the literature using different optimization techniques. The current research investigates the Permutation Flow Shop Scheduling Problem (PFSSP) to minimize makespan using a Hybrid Evolution Strategy (HESSA). Initially, a global search of the solution space is performed using an Improved Evolution Strategy (I.E.S.); the solution is then improved by exploiting the local search abilities of Simulated Annealing (S.A.). I.E.S. thoroughly exploits the solution space using a reproduction operator in which four offspring are generated from one parent. A double swap mutation is used to guide the search to more promising areas in less computational time, and the mutation rate is varied for fine-tuning of results. The best solution of the I.E.S. acts as a seed for S.A., which further improves the results by exploring better neighborhood solutions. In S.A., insertion mutation is used, and the cooling parameter and acceptance-rejection criteria induce randomness in the algorithm. The proposed HESSA algorithm is tested on the well-known NP-hard benchmark problems of Taillard (120 instances), and its performance is compared with well-known techniques from the literature. Experimental results indicate that the proposed HESSA algorithm finds fifty-four upper bounds for the Taillard instances, while thirty-eight results are further improved.


I. INTRODUCTION
In a flow shop production environment, machines are arranged in series, and the product moves from one machine to the next in a fixed sequence [1]. In FSSP, when the processing sequence is the same on all machines, the problem is termed Permutation Flow Shop Scheduling (PFSSP). It has a wide range of applications in industry, e.g., the automobile, pharmaceutical, fertilizer, and food industries, and several researchers have addressed it in the literature. The FSSP was first proposed by Johnson to minimize makespan. Since then, makespan has been the most used objective for optimizing PFSSP in the literature (Pinedo [1]).
Makespan is the total time required to complete all the jobs on all the machines [2]. For the current world's dynamic environment, the makespan criterion is considered the most relevant for PFSSP [3]. PFSSP is regarded as a complex problem (Yenisey and Yagmahan [4]) and is NP-hard (Garey and Johnson [6]).

II. LITERATURE REVIEW
PFSSP is addressed in the literature using different optimization techniques, including Exact methods, Heuristics, and Meta-heuristics. Numerous researchers used exact methods to solve flow shop problems. Initially, Schrage [7] applied branch and bound (B&B) to minimize the 2-machines flow shop problem's mean completion time.
Improvement heuristics have also been proposed (e.g., Tzeng [29]; Ye, Li [30]); these improved some of the already developed heuristics by incorporating problem-specific knowledge. Moreover, PFSSP has also been addressed by composite heuristics (e.g., Benavides and Ritt [26]; Ribas, Companys [31]; Lin, Wang [32]), which combine different heuristics to solve PFSSP. Composite heuristics have yielded much better results than constructive and improvement heuristics. However, most heuristics in the literature are independent of a time limit and stop after a predefined number of steps, which can cause them to become trapped in local optima.
Over the past years, significant research has been carried out on combining various meta-heuristics so that the valuable features of each are used to get the desired results. A good option is to combine a global search technique with a local search technique to fine-tune results. In this research, I.E.S. is combined with S.A. to minimize the makespan of PFSSP. I.E.S. performs best for global search; however, it sometimes gets stuck around local minima. Hence, I.E.S. is combined with S.A., as S.A. avoids local minima and finds the best solution available in its neighborhood. S.A. was first used by Kirkpatrick, Gelatt [58] to solve the traveling salesman problem. S.A. is a stochastic local search method inspired by nature. In annealing, metals are cooled slowly to form uniform crystallization, whereas fast cooling leads to poor crystallization; the search for a global minimum in S.A. mimics this cooling method. S.A. starts from a random solution and then finds the best solution available in its neighborhood. E.S. is a type of evolutionary algorithm that mimics natural evolution to solve optimization problems [59]. E.S. was developed in Germany by Rechenberg in the late 1960s and operates with a population of size (µ +, λ), where µ stands for the number of parents and λ represents the number of offspring. Rechenberg [60] completed the first dissertation in the field of E.S. and used rectangular corridor and hypersphere models for the approximate analysis of the (1 + 1)-E.S. with Gaussian mutation. E.S. is an iterative process that uses a population of individual solutions to search the solution space [61]. Each individual represents a possible solution to the optimization problem. E.S. was developed for numerical optimization problems and is widely used for its efficiency and robustness.
The performance of E.S. depends mostly on the adjustment of its internal parameters, e.g., the mutation strength [61]. In E.S., all parents can be chosen to produce offspring, as there is no requirement that the parents involved be different; there are no mating selection criteria. In the literature, different reproduction operators have been used in E.S., i.e., (1 + 1), (1 + 4), (1 + 9), and (1 + 16) [62], in which one parent produces 1, 4, 9, and 16 offspring, respectively.
E.S. has been used in flow shop problems of limited size. For example, de Siqueira, Souza [56] applied E.S. to hybrid flow shop problems to minimize makespan, considering 50 jobs and eight machines. They used a random N.E.H. heuristic and the Iterated Greedy Search (I.G.S.) meta-heuristic to create the initial population of solutions. Khurshid, Maqsood [57] used a Hybrid Evolution Strategy for robust PFSSP to minimize makespan. Khurshid, Maqsood [63] used a fast E.S. algorithm to solve the Carlier and Reeves benchmark PFSSPs and validated the algorithm on a battery manufacturing case from industry. In addition to flow shop problems, E.S. has also been used in the evolutionary design of digital circuits (Miller [64]), forecasting foreign currency exchange rates (Rehman, Khan [65]), and for feedforward and recurrent networks (Mahsal Khan, Masood Ahmad [66]). However, few researchers have used E.S. to solve PFSSP instances of large size. Furthermore, E.S. performs better than other metaheuristics, including G.A. (Costa and Oliveira [67]), and is used in the current research to solve the considered PFSSP. In Table 1, various techniques used for solving PFSSP are summarized.
In this paper, an I.E.S. algorithm is hybridized with S.A. to minimize makespan for PFSSP. I.E.S. is recommended for global search; however, it tends to get trapped in local minima after a few iterations. Hence, to use the salient feature of a local search technique, it is hybridized with S.A. S.A. avoids local minima by accepting new solutions in its neighborhood even if they are inferior to the previous solution. Combining both algorithms gives improved results for PFSSP.
The following section reports the problem statement, which provides the assumptions used in PFSSP. Next, the methodology is presented, which explains the proposed improvements over E.S. and S.A. Computational experiments and results are shown in Section 4, and the final section reports the conclusions and recommendations.

III. PROBLEM STATEMENT
PFSSP can be formulated as follows. Flow shop scheduling involves n jobs processed on m machines in the sequence in which the machines are arranged in the shop. The processing time of job J_i on machine M_j is given as P_i,j. Each machine executes only one job at a time, and every job is processed in the same machine order. The goal is to find an optimum sequence so that the makespan (C_max) is minimized. Processing times are known in advance and are non-negative with fixed values. The assumptions used in the current problem, the objective function, and the constraints are as follows.
• At any time, one and only one job is operated by a machine.
• Anticipation is not permissible, all jobs are independent, and any job can be started as first.
• Machine downtime is ignored, and machines are continuously available.
• The Job processing sequence is the same for each machine.
• The setup time is incorporated into the machine processing times.
For n jobs and m machines, the makespan can be calculated using Eq. 1-Eq. 4. Minimization of makespan is the most common objective for PFSSP as it directly correlates with the maximum utilization of machines [2]. This research aims to reduce makespan for PFSSP using a Hybrid E.S. In this research, the 120 Taillard PFSSP instances, comprising 12 different problem sets ranging from 20 jobs and five machines to 500 jobs and 20 machines, are solved using the proposed technique.
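The completion-time recursion behind equations of the form Eq. 1-Eq. 4 is, in its standard statement for a permutation flow shop, C(i, j) = max(C(i−1, j), C(i, j−1)) + P(π(i), j), with the makespan being C(n, m). A minimal Python sketch under that assumption (the function name and data layout are illustrative, not the paper's code):

```python
def makespan(sequence, p):
    # p[job][machine]: known, fixed processing times; sequence: a job permutation.
    n, m = len(sequence), len(p[0])
    C = [[0] * m for _ in range(n)]  # C[i][j]: completion of i-th sequenced job on machine j
    for i, job in enumerate(sequence):
        for j in range(m):
            prev_job = C[i - 1][j] if i > 0 else 0      # machine j frees up
            prev_machine = C[i][j - 1] if j > 0 else 0  # job leaves machine j-1
            C[i][j] = max(prev_job, prev_machine) + p[job][j]
    return C[n - 1][m - 1]  # C_max
```

For a toy 2-job, 2-machine instance with p = [[3, 2], [1, 4]], the sequence [1, 0] yields a makespan of 7 versus 9 for [0, 1], illustrating why the job order matters.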

A. INTRODUCTION TO ES
Evolution strategy imitates the principle of natural evolution to solve parameter optimization problems. E.S. was introduced by [60]. E.S. depends on a collective learning model gathered from natural evolution and the principles of reproduction, recombination, mutation, and selection. During the search for an optimum, E.S. tries to adapt its strategy parameters using a collective self-learning mechanism. In E.S., strong emphasis is placed on mutation to create offspring. For faster results, mutation parameters are changed during the execution of the program. In evolution strategy, floating-point representation is used, and mutation is the main variation operator. Initially, experiments were performed with one descendant and one ancestor per generation, and mutation was done by subtracting two numbers drawn from a binomial distribution. The offspring replaced the ancestor if it was found to be better. After the arrival of computers, this two-membered or (1 + 1)-E.S. technique was complemented by multi-membered versions with recombination, in which, within one cycle, parents create offspring. Two or more parents may be involved in the recombination step, with two extreme forms known as intermediate and discrete recombination. In intermediate recombination, the averages of the parental variable values are passed to the new offspring, while discrete recombination selects each component from one parent at random.
The basic steps of E.S. are as follows:
Step 1: Initialization
Step 2: Reproduction
Step 3: Recombination
Step 4: Mutation
Step 5: Selection
Step 6: Termination
For reproduction, Siqueira (2013) used (1 + 1)-E.S.: from one parent, one offspring was generated, and the selection pool consisted of two entities. Although less computational time is consumed by the (1 + 1)-E.S. reproduction operator, the selection pool is too small to exploit the solution space thoroughly; therefore, ample iterations are required to find the optimum solution.
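The steps above can be sketched as a single loop over a permutation encoding. This is a schematic outline under assumed names (a plain single-swap mutation stands in for the operators detailed later, and recombination reduces to copying), not the paper's implementation:

```python
import random
import time

def evolution_strategy(fitness, n_jobs, n_offspring=4, budget_s=0.5):
    parent = random.sample(range(n_jobs), n_jobs)        # Step 1: initialization
    deadline = time.time() + budget_s
    while time.time() < deadline:                        # Step 6: termination (time budget)
        pool = [parent]
        for _ in range(n_offspring):                     # Step 2: reproduction, (1 + 4)
            child = parent[:]                            # Step 3: recombination (copy here)
            i, j = random.sample(range(n_jobs), 2)       # Step 4: mutation (single swap)
            child[i], child[j] = child[j], child[i]
            pool.append(child)
        parent = min(pool, key=fitness)                  # Step 5: (mu + lambda) selection
    return parent
```

With `fitness` set to the makespan of the permutation, this skeleton reduces to a time-budgeted (1 + 4) search of the schedule space.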
The mutation rate used by Siqueira (2013) is kept constant; however, to increase genetic variation in the population and improve results in fewer iterations, a variable mutation rate should be used. Siqueira (2013) applied E.S. to HFFL problems with up to 50 jobs and up to eight machines. E.S. should be tested on complex benchmark PFSSPs (i.e., the Taillard, Vallada, and Carlier and Reeves flow shop problems) to validate its performance.

C. THE PROPOSED I.E.S. ALGORITHM
In this proposed I.E.S., the following improvements have been made as compared to Siqueira (2013).
• To thoroughly exploit the solution space, (1 + 4)-E.S. has been used instead of (1 + 1)-E.S. Four offspring are generated from one parent, so the selection pool consists of five entities: one parent and four offspring.
• For maximum exploitation of the solution space in minimum computational time, a double swap mutation is used.
• Initially, a high mutation rate is used; the mutation rate is then varied to avoid local minima and fine-tune results. Variation in the mutation rate is a crucial advantage of E.S.
• To test I.E.S. on complex problems, it has been applied to the Taillard problems, with the number of jobs ranging from 20 to 500 and the number of machines ranging from 5 to 20 (the Taillard problems are the most complex benchmark flow shop problems available in the literature). Pseudocode for the proposed I.E.S. is shown in Figure 2. The flowchart for HESSA is shown in graphical form in Figure 5.

1) SELECTION OPERATOR (PARENT)
The parent population is randomly generated. For a problem with five jobs, a randomly generated parent is as follows:
Parent: 2 1 4 3 5

2) REPRODUCTION OPERATOR
Siqueira (2013) used (1 + 1)-E.S. for reproduction; although (1 + 1)-E.S. is fast, the solution space is not thoroughly exploited. To overcome this problem, (1 + 4)-E.S. is used in this paper, as it explores more of the solution space and finds better results from small- to large-sized problems. The reproduction operator selects the parents that take part in the generation of offspring. From one parent, four offspring are generated randomly, as shown in Figure 3. Other reproduction operators, i.e., (1 + 5), (1 + 9), and (1 + 16), can be used; however, they would take ample computational time to solve complex scheduling problems.

3) RECOMBINATION OPERATOR
The recombination operator brings similarities between parents and their offspring. Recombination by itself has no benefit; however, it is useful when combined with high mutation strength and selection. Mutation is mandatory for evolutionary progress and the production of new offspring; however, most mutations are harmful, and the selection operator must select suitable mutants. Recombination then extracts common features, i.e., the similarities among these selected individuals, and reduces the uncorrelated parts; hence, the chosen similarities are the most beneficial ones. Discrete recombination is used in this research. In discrete recombination, variable values of individuals are exchanged: each parent shares its variables with the offspring with equal probability, and this is done randomly.

4) DOUBLE SWAP MUTATION OPERATOR
The mutation operator is the most important operator of E.S. besides the selection and reproduction operators, as it introduces genetic variation into the population. In E.S., mutation operators are problem-dependent, and their accurate design is essential. The double swap mutation operator is used in this research; the procedure of the double swap mutation operator (with a 40% mutation rate) is illustrated in Figure 4. The position of gene 1 is interchanged with gene 5, while the position of gene 2 is interchanged with gene 4 simultaneously. Double swap mutation takes less time and guides the solution to more promising areas. The mutation rate varies after a specified time interval to reduce genetic variation with an increasing number of iterations and to fine-tune the results. A variable mutation rate increases the chances of attaining the best results in minimum computational time and prevents the algorithm from becoming trapped in local minima. The mutation rate depends on the size of the problem: for large-size problems, a low mutation rate is used; otherwise, the mutation operator becomes a random search operator. The Taillard 120 benchmark problems can be divided into five categories depending on the number of jobs, i.e., 20, 50, 100, 200, and 500. For each category, a specific mutation rate is used depending on the computational time. The mutation rate against the number of jobs is given in Table 2.
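A sketch of the double swap move, assuming the two swapped pairs are chosen at random (Figure 4 illustrates the specific pairs 1-5 and 2-4); gating the move with the category's mutation rate is our reading of Table 2, and the names are illustrative:

```python
import random

def double_swap(sequence, rate=0.4):
    child = sequence[:]
    if random.random() < rate:                            # apply with the mutation rate
        a, b, c, d = random.sample(range(len(child)), 4)  # four distinct positions
        child[a], child[b] = child[b], child[a]           # first swap
        child[c], child[d] = child[d], child[c]           # second, simultaneous swap
    return child
```

Both swaps only exchange positions, so every offspring remains a feasible permutation and no repair step is needed.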
5) SELECTION OPERATOR
In (µ + λ)-E.S., both parents and offspring are considered in the selection pool; (µ + λ)-selection is recommended for combinatorial optimization problems. In (µ, λ)-E.S., only offspring are considered in the selection pool, while parents die out of it; (µ, λ)-selection is recommended for real-valued parameter optimization.
In this paper, the (µ + λ) selection scheme is used, as it guides solutions to promising areas. Since four offspring are generated from one parent, the selection pool consists of five entities. The parent can survive for many generations unless replaced by a better offspring.
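For the (1 + 4) case, the survivor selection above amounts to keeping the best of five schedules. A minimal sketch, with `fitness` assumed to be the makespan function:

```python
def select_survivor(parent, offspring, fitness):
    # (mu + lambda): the parent competes with its offspring in one pool.
    pool = [parent] + offspring            # (1 + 4): five entities in total
    return min(pool, key=fitness)          # the parent survives unless a child is better
```

Under (µ, λ) selection, the pool would instead be `offspring` alone, so the parent would die out each generation regardless of quality.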

6) TERMINATION
Commonly used termination criteria are the maximum number of iterations, computational time, and fitness value. The stopping criterion used in HESSA is the maximum computational time, set at n²/2 × 10 ms for each instance. Hence, the whole algorithm, consisting of I.E.S. and S.A., is run for n²/2 × 10 ms.
D. SIMULATED ANNEALING
S.A. is a local search procedure originating from material science and was initially used as a simulation model of the annealing process of solids. S.A. does not guarantee an optimal solution; however, it will find a better neighborhood solution. At each iteration, S.A. searches within the neighborhood and evaluates the possible candidate solutions. Based on the acceptance-rejection criteria, the candidate solution is either accepted or rejected, and the correct choice of these criteria has a significant effect on the performance of the S.A. algorithm. The main considerations in designing an S.A. algorithm are i) schedule representation, ii) neighborhood design, iii) searching within the neighborhood, and iv) acceptance-rejection criteria. In S.A., a probabilistic procedure is used for the acceptance-rejection criteria.
Several iterations are performed in S.A. At iteration k, the best-known schedule is termed S_o, while the current schedule is termed S_k; G(S_o) and G(S_k) are the corresponding objective values, and G(S_o) is also termed the aspiration criterion. The S.A. algorithm moves from one schedule to another in search of an optimal schedule. At iteration k, a new schedule is searched in the neighborhood of S_k. A candidate schedule S_c is selected either randomly or through a genetic operator.
A move is made if G(S_c) < G(S_k), and S_k+1 = S_c is set. If G(S_c) < G(S_o), then S_o is set equal to S_c. If G(S_c) ≥ G(S_k), a move is allowed with probability P(S_k, S_c): a random number µ_k between 0 and 1 is generated and compared with this probability; if µ_k ≤ P(S_k, S_c), then S_k+1 = S_c is set, else S_k+1 = S_k. β_k is the cooling parameter (analogous to the temperature in the annealing process). Its initial value is between 0.9 and 0.95, and it reduces as the number of iterations increases.
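The paper does not spell out the form of P(S_k, S_c). A common Metropolis-style choice consistent with a decreasing cooling parameter β_k is P = exp((G(S_k) − G(S_c)) / β_k), sketched here under that assumption:

```python
import math
import random

def accept_move(g_current, g_candidate, beta_k):
    if g_candidate < g_current:            # improving move: always accepted
        return True
    # Worsening move: accept with probability P(S_k, S_c); as beta_k cools,
    # this probability shrinks and non-improving moves become rare.
    p = math.exp((g_current - g_candidate) / beta_k)
    return random.random() <= p
```

With this rule, early iterations (large β_k) accept many worsening moves and so escape local minima, while late iterations behave almost greedily.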
Unlike E.S., in S.A. a worsening move is allowed, giving S.A. a chance to escape local minima and find a better solution in a later search. As β_k reduces with the number of iterations, the acceptance probability of a non-improving move becomes minimal as the number of iterations approaches its limit. If a neighbor is much worse, the acceptance probability makes such a move unlikely. Several stopping criteria can be used in S.A., e.g., the number of iterations, a target objective function value being met, or no improvement being observed for a specific interval. In this S.A., computational time is used.
The best solution of I.E.S. is used as a seed for the S.A. algorithm, which uses it as the initial schedule and then finds candidate schedules in its neighborhood. The mutation, cooling parameter, and acceptance-rejection criteria induce randomness into the solution search procedure. Insertion mutation is used in S.A., while the cooling parameter varies between 0.95 and 0.6. The pseudocode for the S.A. algorithm is shown in Figure 6.
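The insertion move used in the S.A. phase can be sketched as removing one job and reinserting it at another position (the function name and random choices are illustrative):

```python
import random

def insertion_mutation(sequence):
    child = sequence[:]
    src = random.randrange(len(child))
    job = child.pop(src)                       # remove one job from the schedule
    dst = random.randrange(len(child) + 1)
    child.insert(dst, job)                     # reinsert it at a random position
    return child
```

Like the double swap, insertion always yields a feasible permutation, so candidate schedules never need repair.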

A. EXPERIMENTAL SETUP
The algorithm is coded in MATLAB and run on a Core™ i5 with 2.6 GHz and 4 GB memory, and tested on the Taillard [71] benchmark PFSSP. The Taillard PFSSP is among the most challenging combinatorial optimization problems; a cushion for improvement is still available, as for instances with more than 100 jobs the optimal schedule is still unknown in most cases. The Taillard instance data is taken from the OR-Library. The benchmark set contains 120 different problems divided into 12 groups, with each group containing ten instances, with machines ranging from 5 to 20 and jobs ranging from 20 to 500. For each instance, Taillard used a seed. Computational time was used as the termination criterion, and each instance was run for n²/2 × 10 ms.

B. COMPARISON OF RESULTS
Results of the empirical tests for the suggested HESSA are reported and compared with computational results from the algorithms of Zobolas, Tarantilis [72], Chen, Huang [73], Marinakis and Marinaki [74], and Abdel-Basset, Manogaran [54]. Zobolas, Tarantilis [72] suggested a hybrid meta-heuristic (NEGAVNS) combining a greedy randomized constructive heuristic, a genetic algorithm, a variable neighborhood search, and a path relinking technique. Starting from a small-sized neighborhood, the neighborhood size increases with each iteration up to a limit at which all solutions are included in it. The algorithm utilizes the exploration ability of the global neighborhood technique and the exploitation ability of the local neighborhood technique. The algorithm was coded in Fortran 90 and tested on the Taillard [71] benchmark instances, and the termination criteria for instances with 20, 50, 100, 200, and 500 jobs were 60 s, 120 s, 180 s, 300 s, and 500 s, respectively.
Chen, Huang [73] proposed a revised discrete particle swarm optimization algorithm (RDPSO) for PFSSP to minimize makespan. A new particle swarm learning strategy is introduced in the RDPSO algorithm to guide the search toward the personal and global best solutions. A new filtered local search is applied to avoid premature convergence; it guides the search to new solution areas and avoids already reviewed regions. The algorithm was tested on the Taillard [71] benchmark instances on a P.C. with an Intel Pentium IV at 2.6 GHz. The termination criterion was 1000 iterations for all instances, and the population size was set at 60. Abdel-Basset, Manogaran [54] proposed a hybrid algorithm (H.W.A.) that combined a whale optimization algorithm with a local search strategy to minimize makespan in PFSSP. The largest rank value rule is used to obtain the discrete search space required by the algorithm. Swap mutation improves the diversity of the candidate solutions, and an insert-reversed block operation is incorporated into the algorithm to escape local optima. The performance of the initial solution was improved by using the N.E.H. heuristic [20]. The algorithm was coded in Java and run on a Core™ i5-3317U with 1.7 GHz and 4 GB RAM, and was tested on the Taillard [71], Carlier [75], Reeves [76], and Heller [77] benchmark instances.
For a fair comparison of HESSA with NEGAVNS, PSOENT, RDPSO, and H.W.A., the termination criterion for all these algorithms was computational time, with the maximum set at n²/2 × 10 ms. All algorithms were tested on the same processor, i.e., a Core™ i5 with 2.6 GHz and 4 GB RAM.
The effectiveness of the suggested technique is analyzed in terms of solution quality. For each group, the quality of the algorithm was evaluated using Eq. 6.

PRD = 100 × (C − C_m) / C    (6)

where PRD is the percentage relative difference, C_m is the makespan found by the HESSA, NEGAVNS, PSOENT, RDPSO, or H.W.A. algorithm, and C is the upper bound for the Taillard instance.
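The PRD measure can be evaluated directly; a one-line sketch, with the sign convention following the text (a positive PRD means the algorithm's makespan beat the upper bound C):

```python
def prd(c_m, c_upper):
    # Percentage relative difference of makespan c_m against the upper bound.
    return 100.0 * (c_upper - c_m) / c_upper
```

Group-level values are then simple averages of `prd` over the ten instances in the group (and, for HESSA, over the 30 runs per instance).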
In the case of HESSA, the values are averaged over 30 runs of each instance. We summarize the results in Table 3. A positive value of PRD shows that the result is better than C, and the best PRD values are highlighted in bold. For the first three groups, where the number of jobs is 20, the PRD values for HESSA, NEGAVNS, RDPSO, and H.W.A. are all zero, which means these algorithms found the same optimal makespan for these groups. The PRD values for PSOENT are zero for Groups 1 and 2 but −0.04 for Group 3; hence, it cannot find the optimal schedule for Group 3. In terms of overall performance, HESSA outperformed all other algorithms, as its overall PRD value is the best among them. Since the average PRD value of HESSA is the best among all algorithms, it shows the robustness of the HESSA algorithm for problems of all sizes, i.e., small, medium, and large. Table 4 shows that all algorithms found the upper bounds for the small Taillard instances except TA-07; so all algorithms can find the optimal schedules for small-size problems. For Group 5, the algorithms' performance is level, as they find the same makespan values. Afterward, with the increase in the number of jobs and machines, PSOENT and RDPSO lag behind the other algorithms. In the case of Group 6 (50 × 20), both PSOENT and RDPSO fall behind the HESSA, NEGAVNS, and H.W.A. algorithms, while H.W.A. performs better than the other algorithms in this group. In Group 8, HESSA improves the solution for seven instances and performs better than the other algorithms.

C. NEW UPPER BOUNDS AND IMPROVED SOLUTION FOR TAILLARD INSTANCES
The main feature of the HESSA algorithm is its robustness and effectiveness in solving problems of all sizes, as it has found upper bounds for small instances, i.e., 20 × 5 (TA-01), and even for large instances, i.e., 200 × 10 (TA-95). Moreover, it has improved solutions for medium- to large-sized problems (highlighted in bold and underlined in Table 4). Figure 7 compares the makespan values of all algorithms for instances TA 01-30, and it appears that almost all algorithms found the upper bound for the small-sized Taillard instances. For instance, in TA-07 (20 × 5), the makespan value for NEGAVNS, PSOENT, RDPSO, and H.W.A. as well as HESSA is 1239, while the upper bound is 1234; this is currently the only problem among the first 30 Taillard instances whose upper bound has not been reached. Figure 8 compares the makespan values for the next 30 instances, i.e., TA 31-60. From Figure 8, it is apparent that HESSA provides a lower makespan than NEGAVNS, PSOENT, RDPSO, and H.W.A., and the results of PSOENT and RDPSO are inferior to those of NEGAVNS, H.W.A., and HESSA. Figure 9 compares the makespan values for the next 30 instances, i.e., TA 61-90. From Figure 9, it is apparent that HESSA provides the best results for all the instances compared to NEGAVNS, PSOENT, RDPSO, and H.W.A.; the results of PSOENT and RDPSO are much inferior to those of the other algorithms. Figure 10 compares the makespan values for the last 30 instances, i.e., TA 91-120. From Figure 10, it is apparent that HESSA performs best for all these Taillard instances. In Table 5, a comparison is made between the makespan calculated with the with-seed and without-seed HESSA algorithm for the twelve different groups of Taillard problems. The value of % Diff C_max for each instance is calculated using Eq. 7.
%Diff C_max = 100 × (C_seed − C_w/seed) / C_seed    (7)

where C_seed is the makespan calculated using the Taillard seed and C_w/seed is the makespan calculated without using the Taillard seed. The results show that the with-seed HESSA algorithm exploits more of the solution space and finds better makespan values than the without-seed HESSA algorithm. The % Diff C_max values depict the algorithm's performance: a negative value shows that the with-seed algorithm has a better makespan value than the without-seed algorithm, as shown in Table 5. Makespan values are calculated at two computational-time termination criteria, i.e., n²/2 ms and n²/2 × 10 ms. For the first termination criterion (n²/2 ms), all twelve % Diff C_max values are negative. For the second termination criterion (n²/2 × 10 ms), all twelve % Diff C_max values are also negative. Hence, even with increased computational time, the with-seed HESSA algorithm remains better than the without-seed HESSA algorithm, as it explores more of the solution space. Therefore, the with-seed HESSA algorithm should be used to start from a fixed starting point and then improve the solution, which helps the algorithm yield better results. All the above results confirm that the proposed HESSA has outperformed the NEGAVNS, PSOENT, RDPSO, and H.W.A. algorithms in terms of makespan values. Also, with-seed HESSA performs better than without-seed HESSA; hence, it is recommended for solving complex problems. HESSA has proven to be a robust technique, as it has solved small-, medium-, and large-size problems. Since HESSA has proven its robustness and efficiency, it should be applied to real-life problems from industry to validate its effectiveness.

VI. CONCLUSION AND RECOMMENDATIONS
In this paper, a Hybrid Evolution Strategy (HESSA) is proposed to minimize makespan for PFSSP, and the results are validated on the Taillard benchmark PFSSP. In HESSA, an Improved Evolution Strategy is combined with simulated annealing to find optimal schedules for PFSSP, and the program was coded in MATLAB. To avoid trapping of I.E.S. in local minima and to fine-tune the results, it is hybridized with S.A. The hybridization ensures that exploitation of the solution space and exploration of neighbors can be carried out simultaneously. In I.E.S., double swap mutation is used to save computational time, and the mutation rate is varied to find better schedules, while in S.A., an insertion mutation is used and the cooling parameter is gradually reduced for fine-tuning the results. The results obtained from the proposed approach in terms of PRD and makespan values are compared with the NEGAVNS, PSOENT, RDPSO, and H.W.A. algorithms. The results suggest that the HESSA algorithm is a robust technique and is equally applicable to small-, medium-, and large-size problems, as new upper bounds are found in 54 instances, while improved makespan values are found for 38 instances.
Since HESSA has been applied to this scheduling problem for the first time, the following work can be performed in the future. For large-size problems (i.e., jobs ranging from 200 to 500 and machines up to 20), ample computational time is required; hence, a quad swap mutation operator could be investigated to reduce the computational time. Makespan minimization has been the performance measure in this research; different performance measures, e.g., tardiness or maximum utilization of the machines, can be implemented using HESSA in future studies. By developing a multi-objective HESSA, this technique can be applied to multi-objective PFSSPs. Additionally, this technique should be applied to a real-life case from industry to validate its practical implementation.