Greedy Adaptive Search: A New Approach for Large-Scale Irregular Packing Problems in the Fabric Industry

Two-dimensional irregular packing problems are important in the fabric industry. Under several restrictions, fabric packing problems require placing a given set of parts within a fixed-width rectangular sheet while minimizing the length of sheet used. In textile production, fabric packing problems are usually large-scale and time-limited: the total number of parts is large, and a high-utilization solution must be computed within several minutes. However, few existing works address large-scale packing problems. In this paper, we propose a greedy adaptive search algorithm built on a new evaluation function and a new restricted local search strategy. Starting from a given initial sequence of parts, the algorithm iteratively searches for the best-fit part among the next several parts in the sequence and places it on the sheet. Moreover, we employ a two-stage heuristic to search over the possible sequences for a good initial sequence with high utilization. Numerical examples include large-scale industrial instances as well as large-scale instances generated from benchmarks. The tests show that our algorithm outperforms existing state-of-the-art solvers on large-scale packing problems, demonstrating its potential for large-scale packing in industrial production.


I. INTRODUCTION
In the fabric industry, solving 2-dimensional irregular packing problems is a critical operation in the cutting process. The goal of these problems is to place a given set of 2-dimensional parts within a rectangular sheet of fixed width and to find the solution with the best utilization of the sheet while no part overlaps with another. The utilization of the packing result matters both economically and environmentally. In fabric cutting problems, the total number of parts is large, while the time limit is strict: normally, a solution should be generated within several minutes. In this paper, we focus on constructing an efficient algorithm for solving large-scale fabric packing problems. In general, large-scale fabric cutting problems have the following properties:
• the contour of a part is not necessarily convex and may have a large number of vertices;
• a minimum distance among parts is required;
• each part can only be rotated by a finite set of angles, including {0°, 90°, 180°, 270°};
• there may be flaws in the fabric;
• the number of parts is large.

For irregular packing problems, a few programming-based approaches have been proposed. These approaches compute a solution by solving certain mathematical models of the packing problem, including mixed-integer programming (MIP) models with a linear objective function and mixed-integer constraints [2], [15], [22], [32], [40], [42], nonlinear programming models with a nonlinear objective function [14], [15], [29], [30], [44], and constraint-based programming models in which the packing problem is described by constraints [12], [38], [39]. Interested readers are referred to the references in the survey [33]. However, since the irregular packing problem is NP-hard [23], it might not be possible to compute an optimal solution within the time limit; for example, computing the optimal solution for 5 parts can take more than 6 hours [33]. Moreover, since these exact mathematical models usually involve a considerable number of variables and complex constraints [31], [33], generating a high-utilization solution with commercial solvers is difficult, especially for large-scale packing problems.
As illustrated in [21], meta-heuristics and hybrid algorithms have been widely applied to this problem as powerful optimization tools. Sato et al. [41] summarized that these approaches can be roughly divided into two categories: searching over the sequence [5], [6], [9], [13], [16], [26], [35], [45] and searching over the layout [21], [29], [41]. The main difference resides in how the final solution is generated. In the searching-over-the-sequence approach, the final solution is represented by a sequence of parts, whereas in the latter approach the solution is represented directly by the positions of the items [7].
This difference directly impacts the search strategy. In the searching-over-the-sequence approach, the parts are placed sequentially into the sheet without overlap. Consequently, this approach applies heuristic algorithms that search for a high-utilization sequence in the following steps: 1) determine a placement sequence for the parts, which can be done randomly or by sorting the parts according to some measure, e.g., the area or perimeter of the polygons [35]; 2) place the parts according to some evaluation function. Typically, a part is placed along the contour of the stencils already placed; some algorithms also allow a hole-filling strategy, i.e., the part is placed in the holes formed by previously placed parts [19], [24]. The most popular placement rule is the bottom-left policy [41]. In addition, Bennell and Song [8] proposed several attributes considering mutual fitness under the bottom-left policy. With a specific evaluation function, the challenge is to determine the sequence of parts. Pinheiro et al. [36] adopted a random-key genetic algorithm to determine the sequence and rotation simultaneously. Moreover, Burke et al. [10] modified the greedy bottom-left layout construction by discretizing the horizontal search and applied a hill-climbing tabu search. However, the bottom-left strategy usually yields solutions with low utilization, and the beam search strategy proposed by Bennell and Song [8] lacks efficiency, especially for large-scale packing problems.
The searching-over-the-layout approach searches for a solution starting from an initial solution, which is usually generated by a searching-over-the-sequence approach. Starting from this initial solution, the goal is to minimize the overlap within a sheet of fixed length and height; the approach yields a feasible solution if and only if the corresponding overlap vanishes. As a result, parts move freely in this approach, and a separation method is usually employed to minimize overlap. The no-fit polygon is used to determine the mutual overlap among the parts; it was adopted by Bennell and Dowsland [6] to generate valid layouts using a tabu search heuristic. Gomes and Oliveira [25] hybridized compaction and separation algorithms with a simulated annealing algorithm.
In large-scale cases, searching-over-the-layout approaches have a large continuous search space, which becomes an obstacle to constructing efficient solvers [41]. Therefore, it is important to design an efficient searching-over-the-sequence algorithm that generates a high-utilization solution within the time limit. In addition, an efficient searching-over-the-sequence algorithm can provide a good initial solution for searching-over-the-layout approaches.
For large-scale packing problems, related work is limited. Most existing algorithms are tested on specific small-scale or medium-scale cases, where the parts have simple contours. The numerical performance of existing algorithms on large-scale fabric packing problems is unknown.
In this paper, based on the searching-over-the-sequence approach, we propose a greedy adaptive search (GAS) method for large-scale fabric packing problems. We construct an evaluation function whose weights are dynamically adjusted to achieve high fitness among parts while retaining the robustness of the algorithm. To reduce the search space and increase the one-pass utilization, inspired by the work of Bennell and Song [8], we propose a greedy search technique in which we search for the best-fit part among the next α consecutive parts and then place it on the sheet. Preliminary numerical experiments demonstrate that our algorithm outperforms an existing open-source solver on large-scale packing problems. In addition, on large-scale fabric packing problems constructed from the ESICUP dataset, our algorithm takes less CPU time and achieves utilization comparable to some existing state-of-the-art algorithms, which are usually based on the searching-over-the-layout approach.
The rest of this paper is organized as follows. Section 2 collects the preliminaries, including the preprocessing and the computation of the no-fit polygon. Section 3 presents the detailed algorithm and, for cases where a higher utilization rate is required, a two-stage heuristic algorithm. Numerical experiments are reported in Section 4, and the last section draws a brief conclusion.

II. PROBLEM DEFINITION
This section gives a precise definition of the irregular packing problem in the fabric industry. We have a list of parts $P = (P_1, P_2, \ldots, P_n)$, a list of their allowable orientations $O = (O_1, O_2, \ldots, O_n)$ and their reference points $\{r_1, \ldots, r_n\}$. The sheet is a rectangle $C(W, L)$ with fixed width $W$ and arbitrary length $L$, and our goal is to minimize $L$. There may be flaws $F = (F_1, F_2, \ldots, F_m)$ on the sheet. Let $d(A, B)$ denote the distance between the sets $A$ and $B$, and let $\partial C(W, L)$ denote the boundary of the container $C(W, L)$; the required minimum spacings among parts, between parts and the sheet boundary, and between parts and flaws are denoted $d_1$, $d_2$ and $d_3$, respectively (see Section II-B). We denote polygon $P_i \in P$ rotated by $\theta_i \in O_i$ as $P_i^{\theta_i}$,
$$P_i^{\theta_i} := \{(\hat{u}\cos\theta_i + \hat{v}\sin\theta_i,\; -\hat{u}\sin\theta_i + \hat{v}\cos\theta_i) \mid (\hat{u},\hat{v}) \in P_i\},$$
which may be written as $P_i$ for simplicity when the orientation is $0^\circ$. We describe translations of polygons by Minkowski sums. Let $x_i = (x_{i1}, x_{i2})$ be the translation of polygon $i$ and $w_i$ the new position of its reference point, so that $x_i = w_i - r_i^{\theta_i}$. Thus, the polygon placed at $x_i$ and rotated by $\theta_i$ can be represented as
$$P_i^{x_i,\theta_i} := P_i^{\theta_i} \oplus \{x_i\} = \{p + x_i \mid p \in P_i^{\theta_i}\}.$$
A solution to this problem is described by a set of translation vectors $(x_1, x_2, \ldots, x_n)$ and a set of orientations $(o_1, o_2, \ldots, o_n)$. Let $L^*$ be the length of the sheet used for nesting, which is the decision variable to be minimized. Therefore, we define the packing problem in the fabric industry as follows:
$$\begin{aligned} \min_{L^*,\,x,\,\theta}\;& L^* \\ \text{s.t.}\;& d(P_i^{x_i,\theta_i}, P_j^{x_j,\theta_j}) \ge d_1, && i \ne j,\\ & d(P_i^{x_i,\theta_i}, \partial C(W, L^*)) \ge d_2, && i = 1,\ldots,n,\\ & d(P_i^{x_i,\theta_i}, F_k) \ge d_3, && i = 1,\ldots,n,\; k = 1,\ldots,m,\\ & P_i^{x_i,\theta_i} \subseteq C(W, L^*),\; \theta_i \in O_i, && i = 1,\ldots,n. \end{aligned} \tag{1}$$
As described in the introduction, this program is difficult to solve because of its discreteness and nonconvexity, especially in the large-scale case. To solve the packing problem efficiently, we propose a novel algorithm based on a greedy strategy with adaptively adjusted parameters. The details of the proposed algorithm are presented in Section IV.
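To make the placement operation concrete, the short Python sketch below applies the rotation formula above and the translation (Minkowski sum with the singleton {x_i}) to a list of vertices. The function name and the convention that the reference point is the origin of the part's local coordinate system are our own choices for illustration.

```python
import math

def place_polygon(vertices, theta_deg, x):
    """Rotate a polygon by theta (degrees, using the paper's rotation
    convention) and translate it by the vector x = (x1, x2).

    `vertices` is a list of (u, v) tuples; the reference point is assumed
    to coincide with the origin of the part's local coordinates.
    """
    theta = math.radians(theta_deg)
    c, s = math.cos(theta), math.sin(theta)
    placed = []
    for u, v in vertices:
        # P_i^{theta_i}: (u cos t + v sin t, -u sin t + v cos t)
        ru, rv = u * c + v * s, -u * s + v * c
        # Minkowski sum with the singleton {x}: a plain translation
        placed.append((ru + x[0], rv + x[1]))
    return placed

# Example: a unit square rotated by 90 degrees and shifted by (5, 2)
print(place_polygon([(0, 0), (1, 0), (1, 1), (0, 1)], 90, (5, 2)))
```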

A. SIMPLIFY THE PARTS
In this problem, the number of vertices of a part can be very large, which leads to complex geometric computations and numerical inefficiency. It is therefore important to reduce the number of vertices by simplifying the boundaries of the parts, for which we introduce two efficient approaches.

1) RAMER-DOUGLAS-PEUCKER ALGORITHM
Given a curve composed of line segments, the Ramer-Douglas-Peucker algorithm [18], [37] is a popular method for finding a similar curve with fewer points. With a maximum simplification error δ, the algorithm first selects the two ends A and B of the curve and then finds the point C that is farthest from the line segment AB. If the distance from C to segment AB is less than δ, the curve between A and B is replaced by the segment AB and all other points on it are removed; otherwise, the algorithm recursively simplifies the curve between A and C and the curve between C and B. In brief, the Ramer-Douglas-Peucker algorithm approximates runs of successive vertices by line segments.
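The following is a minimal Python sketch of the recursion described above; it uses the point-to-segment distance, as in the description, and the function name and data layout are illustrative.

```python
def rdp(points, delta):
    """Ramer-Douglas-Peucker simplification of an open polyline.

    `points` is a list of (x, y) tuples; `delta` is the maximum allowed
    deviation of the simplified curve from the original one.
    """
    if len(points) < 3:
        return list(points)
    (ax, ay), (bx, by) = points[0], points[-1]

    def dist(p):
        # distance from point p to the segment AB
        px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return ((px - ax - t * dx) ** 2 + (py - ay - t * dy) ** 2) ** 0.5

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda pair: pair[1])
    if dmax < delta:
        # every interior point is within delta: keep only the endpoints
        return [points[0], points[-1]]
    # otherwise split at the farthest point C and recurse on both halves
    left = rdp(points[:idx + 1], delta)
    right = rdp(points[idx:], delta)
    return left[:-1] + right
```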
However, such an approximation changes the boundary of the parts, which introduces errors when calculating their mutual distances. These errors can lead to violations of the minimum-distance constraint among parts. To avoid such violations, we need to expand the parts, which reduces the overall utilization of the generated results. Hence, we complement this step with a more careful simplification that further reduces the number of vertices.

2) CLEAN THE CONCAVE BASED ON AREA
The main purpose of this algorithm is to simplify vertices that form small concave features on the boundary of a part, called locally tiny concave structures in the following. A locally tiny concave structure consists of three consecutive vertices E, F, G such that the area of triangle EFG is less than δ_s and, if a line is drawn between E and G, F lies in the interior of the newly formed contour. Fig. 2 provides an illustrative example of such a structure.
With a fixed tolerance δ_s, in each iteration the algorithm searches for a locally tiny concave structure among the vertices of the given contour. If such vertices E, F, G are found, we remove vertex F and the edges EF and FG, and add the new edge EG to the contour. The algorithm stops when no locally tiny concave structure remains in the simplified contour.
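A possible implementation of this cleaning step is sketched below. It assumes a counter-clockwise contour and replaces the "F lies inside the new contour" test with an equivalent reflex-vertex check via the sign of the cross product; the names and the loop structure are illustrative rather than the paper's exact procedure.

```python
def clean_tiny_concave(contour, delta_s):
    """Remove 'locally tiny concave structures': a vertex F between E and G
    is dropped when triangle EFG has area below delta_s and F is a reflex
    (concave) vertex of the counter-clockwise contour.
    """
    pts = list(contour)
    changed = True
    while changed and len(pts) > 3:
        changed = False
        for i in range(len(pts)):
            ex, ey = pts[i - 1]                 # E (previous vertex)
            fx, fy = pts[i]                     # F (candidate for removal)
            gx, gy = pts[(i + 1) % len(pts)]    # G (next vertex)
            cross = (fx - ex) * (gy - ey) - (fy - ey) * (gx - ex)
            area = abs(cross) / 2.0             # area of triangle EFG
            if cross < 0 and area < delta_s:    # reflex vertex, tiny triangle
                del pts[i]                      # replace E-F-G by the edge E-G
                changed = True
                break
    return pts
```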

B. PREPROCESSING WITH THE MINKOWSKI SUM
1) GENERATING NO-FIT POLYGON (NFP)
The no-fit polygon (NFP) and inner-fit polygon (IFP) were first proposed by Art Jr [3] and are applied in detecting the overlap between two parts. The NFP/IFP is a polygon that defines the legal placement of one part relative to another part or to the sheet. More precisely, each part has a reference point, which can be any point inside or outside the part. Given two parts P_i and P_j, the NFP of P_i and P_j is the set of points such that, if the reference point of P_j is placed there, the two parts overlap. In the rest of the paper, we denote the no-fit polygon between parts P_i and P_j by NFP_ij. The boundary of NFP_ij and its exterior are the feasible regions where part P_j can be placed without overlapping part P_i. Similarly, with a selected reference point for a part P_i, its IFP is the set of points such that, if the reference point is placed there, P_i lies inside the sheet. In the rest of the paper, we denote the inner-fit polygon of part i by IFP_i.
To compute the NFP and IFP for all parts, several approaches have been proposed in the literature, such as the sliding method [11], the Minkowski sum [7], and a combination of region splitting and the Minkowski sum [1]. Assume parts P_i and P_j have $e_i$ and $e_j$ edges, respectively; then the computational complexity of computing NFP_ij is up to $O(e_i^2 e_j^2)$ [4], [20]. In this paper, we apply the Minkowski sum to generate the pairwise NFPs among the parts. The Minkowski-sum formulation of the NFP of parts P_i and P_j with reference points $r_i$, $r_j$ is
$$\mathrm{NFP}(P_i, P_j) := P_i \oplus (-P_j), \qquad -P_j := \{-p \mid p \in P_j\}.$$
The no-fit polygon has the following properties:
• P_i and P_j overlap if and only if $r_j$ lies in the interior of $\mathrm{NFP}(P_i, P_j)$;
• P_i and P_j touch if and only if $r_j \in \partial\mathrm{NFP}(P_i, P_j)$.

2) OFFSET TO KEEP SPACING
As described in the introduction, a minimum distance among parts, denoted $d_1$, is required in irregular fabric packing problems. To guarantee that the minimum distance among the parts is no less than $d_1$, we dilate the contour of each part; when the dilated parts do not intersect, the minimum distance between any two placed parts is at least $d_1$. For part i, the enlarged part $\tilde{P}_i^{x_i,\theta_i}$ is obtained by taking the Minkowski sum of its original contour $P_i^{x_i,\theta_i}$ with a disc:
$$\tilde{P}_i^{x_i,\theta_i} := P_i^{x_i,\theta_i} \oplus B_{d_1/2},$$
where $B_{d_1/2}$ denotes the disc with radius $d_1/2$. Based on the enlarged parts, the first constraint in (1) reduces to the requirement that the enlarged parts do not overlap. In addition, a minimum distance between the parts and the boundary of the sheet is also required. In this case, we shrink the boundary of the sheet by $d_2 - d_1/2$ and denote the shrunk sheet by $\tilde{C}(W, L)$. With this notation, the second constraint in (1) can be reformulated as the requirement that the enlarged parts lie inside $\tilde{C}(W, L)$. Additionally, the minimum distance between the parts and the flaws should be no less than $d_3$. Similarly, we enlarge the contours of the flaws by $d_3 - d_1/2$ and denote them by $\tilde{F}_i$. Thus, the third constraint in (1) can be reshaped as the non-overlapping of $\tilde{F}_i$ and $\tilde{P}_j^{x_j,\theta_j}$.
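As an illustration of this offset step, the sketch below dilates each part by d_1/2 and shrinks the sheet by d_2 − d_1/2; the use of the shapely library and its buffer operation is our own choice and is not prescribed by the paper.

```python
from shapely.geometry import Polygon, box

def dilate_parts_and_shrink_sheet(parts, W, L, d1, d2):
    """Offset step sketched for Section II-B.2: every part is dilated by
    d1/2 so that non-overlap of the dilated parts guarantees a pairwise
    spacing of at least d1, and the sheet is shrunk by d2 - d1/2 so that
    dilated parts staying inside the shrunk sheet keep a distance of at
    least d2 from the sheet boundary.
    """
    dilated = [Polygon(p).buffer(d1 / 2.0) for p in parts]
    shrunk_sheet = box(0.0, 0.0, L, W).buffer(-(d2 - d1 / 2.0))
    return dilated, shrunk_sheet

# Example: one 10x10 part, 2 mm spacing between parts, 3 mm to the border
parts = [[(0, 0), (10, 0), (10, 10), (0, 10)]]
dilated, sheet = dilate_parts_and_shrink_sheet(parts, W=100, L=200, d1=2, d2=3)
print(dilated[0].area, sheet.bounds)
```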

IV. ONE-PASS PACKING ALGORITHM
Let $\Pi$ contain the indexes of all placed parts; then, the legal placement region for part i is
$$W(\tilde{P}_i^{x_i,\theta_i}) := \mathrm{IFP}_i \setminus \Big( \bigcup_{j \in \Pi} \mathrm{NFP}(\tilde{P}_j^{x_j,\theta_j}, \tilde{P}_i) \;\cup\; \bigcup_{k} \mathrm{NFP}(\tilde{F}_k, \tilde{P}_i) \Big),$$
where $\mathrm{IFP}_i$ denotes the inner-fit polygon of $\tilde{P}_i^{x_i,\theta_i}$ for the sheet $\tilde{C}(W, L)$ with flaws $\tilde{F}$.
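The feasible region above can be assembled from precomputed geometries, for instance as in the following shapely-based sketch (the library choice and the function name are ours, not the paper's); candidate placements are then taken from the vertices of the returned region, as used in Section IV-B.

```python
from shapely.ops import unary_union

def feasible_region(ifp, nfps_of_placed, nfps_of_flaws):
    """Legal-placement region for one part: start from its inner-fit polygon
    for the (shrunk) sheet and subtract the no-fit polygons induced by the
    already placed parts and by the flaws.  All inputs are shapely
    geometries; the result may be empty or have several components.
    """
    blocked = unary_union(list(nfps_of_placed) + list(nfps_of_flaws))
    return ifp.difference(blocked)
```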
The heuristic algorithm TOPOS (''Técnicas de Optimização para o Posicionamento de Figuras Irregulares'') proposed by Oliveira et al. [35] has two essential concepts that differ from the conventional bottom-left approach: how to select the next part and how to place the selected part. To select the next part, as well as its position and orientation, Oliveira et al. [35] defined two heuristic strategies: local search and initial sort. The TOPOS algorithm defines evaluation criteria that score every unplaced part under every possible placement and rotation. It then performs a local search over all unplaced parts, selecting the part with the smallest score and choosing the position and rotation attaining that score. The framework of the TOPOS algorithm can be stated as follows: 1) compute the score of all unplaced parts over all possible positions and rotations; 2) choose the part, together with the corresponding position and rotation, with the smallest score, and place it on the sheet; 3) return to step 1 until all parts are placed. In each iteration of the TOPOS algorithm, the part to be placed is selected from all unplaced parts, which leads to expensive geometric computations when scoring all unplaced parts, especially in large-scale cases. In addition, although TOPOS selects the best-fit part from all unplaced parts, as discussed in Section IV-B, the pursuit of a local optimum may lead to a low-utilization solution.
Based on searching-over-the-sequence approaches and the TOPOS algorithm, we propose an algorithm for large-scale fabric packing problems, where the solution is generated by repeatedly selecting a new part and adding it to the partial solution of already placed parts. In our algorithm, we propose several improved evaluation criteria to score the unplaced parts. In addition, to balance local and global optimality, we restrict the depth of the greedy search and propose an improved local search strategy that uses information from previous rounds to adjust the current score. Moreover, inspired by the initial sorting strategy in [8], we propose a new choice of initial sorting for our algorithm.

A. EVALUATION FUNCTION
In this subsection, we present the evaluation criterion used to score a placement $\tilde{P}_i^{x_i,\theta_i}$, which we denote by $s(\tilde{P}_i^{x_i,\theta_i})$ in the rest of the paper. As described in [8], [35], we select the following attributes as the basis of our criteria for determining the next piece and its position: the length of the placed layout $L_0$ and the area of overlap between the dilated enclosures $S_0$. When part $P_i$ is selected and placed at $u_i$ with rotation angle $\theta_i$, the length of the placed layout $L_0$ is the length occupied by all placed parts together with $\tilde{P}_i^{u_i,\theta_i}$. In addition, each placed part is associated with a dilated enclosure, and the overlap region for a selected part $P_k$ with translation $x_k$ and rotation $\theta_k$ is the intersection of its dilated enclosure with the dilated enclosures of the already placed parts; $S_0$ denotes the area of this overlap. Fig. 5 illustrates the overlap among parts $P_i$, $P_j$ and $P_k$: parts $P_i$ and $P_j$ are already placed on the sheet, and part $P_k$ is the selected one. The grey regions denote the original contours of the three parts, the regions enclosed by dashed lines denote their dilations, and the black region denotes the overlapping region $S_0$.
The first criterion is the length of the placed layout, as proposed by Oliveira et al. [35] and Bennell and Song [8], which aims to reduce the total length of the used sheet and is therefore equivalent to improving the utilization of the final result. However, using $L_0$ directly in the evaluation causes an imbalance among different parts: with an absolute length measure, the algorithm tends to leave the long pieces to its final stage, undermining the global utilization of the final solution. Hence, the attribute must be a relative measure of the piece length or area. To balance the magnitude of this measure across pieces, in the first criterion we normalize the length measure of part $\tilde{P}_i^{x_i,\theta_i}$ by its length $l_i$. In addition, the area of overlap between enclosures $S_0$ is an essential factor in the evaluation function, as it forces parts to be placed close to each other and thus improves the total utilization. However, if we directly maximize the total overlap area to reward the largest overlap, the algorithm prefers to choose large parts and place them first. To balance the choice among parts, we adjust the overlap measure of part $\tilde{P}_i^{x_i,\theta_i}$ by its perimeter $q_i$ as a penalty. Based on these two attributes, part $P_i$ placed on the sheet at position $x_i$ with rotation $\theta_i$ at the $k$-th round is scored by combining the two measures. It is worth mentioning that the importance of the two attributes differs across the stages of the algorithm: at the beginning, when only a few parts are placed on the sheet, placing the selected part where it fits well is more important than minimizing the total length. As a result, our algorithm first estimates a utilization rate $\gamma_1$ for the solution and the corresponding length
$$\tilde{L} = \frac{\sum_{i=1}^{n} \mathrm{Area}(P_i)}{\gamma_1 W},$$
and then computes the score accordingly, shifting the weight between the two attributes as the current length approaches $\tilde{L}$.
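Since the exact combined score of Eq. (14) is not reproduced here, the following sketch only illustrates, under our own assumptions about the weighting, how the length term normalized by l_i, the overlap term adjusted by q_i, and the estimated length L̃ might be combined into a single score to be minimized.

```python
def score(L0, S0, part_length, part_perimeter, L_est, gamma2):
    """Hypothetical combination of the two attributes described above; the
    actual Eq. (14) may differ.  L0: layout length after the tentative
    placement; S0: overlap area between dilated enclosures; L_est: the
    estimated final length sum(Area(P_i)) / (gamma1 * W).  Smaller scores
    are better, so the fit (overlap) term is subtracted.
    """
    length_term = L0 / max(part_length, 1e-9)      # relative length measure
    fit_term = S0 / max(part_perimeter, 1e-9)      # perimeter-adjusted overlap
    # early in the packing (L0 << L_est) the fit term should dominate;
    # near the estimated length, the length term takes over
    w = min(1.0, L0 / max(L_est, 1e-9))
    return w * length_term - gamma2 * (1.0 - w) * fit_term
```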

B. RESTRICTED LOCAL SEARCH WITH DEPTH
In the TOPOS algorithm, the local search heuristic does not pack the parts following a predetermined order. Instead, the next part is selected greedily from all available unpacked parts according to the given evaluation criteria [8], [35]. However, for large-scale packing problems with hundreds of parts, repeatedly evaluating all unplaced parts is costly. Therefore, in our algorithm, we only search for the best part among the next α consecutive parts, where α is the search depth. We call this procedure searching in the pool in the rest of the paper; its behavior is displayed in the example in Fig. 6. As illustrated in our numerical experiments, limiting the search depth makes the placement efficient to compute, and in most cases this strategy generates better results than the classical TOPOS algorithm. We denote by Score(P_i, k) the smallest score of part P_i at the k-th round. More precisely, let $W_{i,k}$ denote the set of possible placements of part P_i at the k-th round; then
$$\mathrm{Score}(P_i, k) := \min_{x_i \in W_{i,k},\, \theta_i \in O_i} s(\tilde{P}_i^{x_i,\theta_i}).$$
When P_i is not in the pool at the k-th round, we set Score(P_i, k) = +∞. Since the set of all possible placements is infinite, in our algorithm we choose $W_{i,k}$ as the set of vertices of $W(\tilde{P}_i^{x_i,\theta_i})$. Although selecting the best-fit part yields a partial solution with high utilization in the early stage, there may still be irregular parts that do not fit well with the others. In the TOPOS algorithm, such irregular parts are never selected until all other parts have been placed, so TOPOS generates a poor placement in its final stage, which results in poor utilization.
To solve this problem, we introduce two factors to adjust the final score and select the part with the smallest score.
Two rules balance the preference among the unplaced parts: 1) the longer part P_i stays in the pool, the more likely it is to be selected in the next evaluation; 2) when the score of part P_i is considerably smaller than its largest score so far, it is likely to be selected in the current evaluation. For the irregular parts, these two rules guarantee that they are not backlogged until the final stage of the algorithm. If a part stays in the pool for a long time, it is evidently difficult to fit it well with the placed parts, so we add a penalty to its score to force its selection within the next few iterations. In this paper, t(P_i, k) denotes the number of rounds that P_i has remained in the pool at the k-th iteration.
In addition, when an irregular part makes considerable progress in its score, we consider that it has found a relatively well-suited position at this moment, and the part is therefore preferred in the current selection. This factor is defined in (15) by comparing the current score Score(P_i, k) with the largest score of the part over the previous rounds, max_{j ≤ k} Score(P_i, j). Combining these factors, we obtain the final evaluation score in (16).
Based on FinalScore, given parameters γ_1, γ_2, γ_3, γ_4 and depth α, the final solution is computed by Algorithm 1. We also provide a flowchart in Fig. 7 to illustrate the algorithm. It is worth mentioning that the best-fit part corresponds to the smallest score.
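The sketch below illustrates the overall selection loop with a pool of depth α. The exact penalty and improvement terms of (15)-(16) are not reproduced; only the two balancing rules are modeled, so the linear form of the final score used here is an assumption, as is the interface of the evaluate() callback.

```python
def gas_one_pass(sequence, alpha, evaluate, gamma3, gamma4):
    """Minimal sketch of the restricted local search with depth alpha.
    `evaluate(part, placed)` is assumed to return the smallest raw score of
    `part` over its candidate positions/rotations given the current layout.
    """
    pending = list(sequence)
    placed, rounds_in_pool, worst_score = [], {}, {}
    while pending:
        pool = pending[:alpha]                      # only the next alpha parts
        best_part, best_final = None, float("inf")
        for part in pool:
            s = evaluate(part, placed)
            worst_score[part] = max(worst_score.get(part, s), s)
            t = rounds_in_pool.get(part, 0)
            improve = s / worst_score[part] if worst_score[part] else 1.0
            # assumed penalty form: waiting longer and improving markedly
            # both lower the final score and favor selection
            final = s - gamma3 * t + gamma4 * improve
            if final < best_final:
                best_part, best_final = part, final
        placed.append(best_part)
        pending.remove(best_part)
        for part in pool:
            if part is not best_part:
                rounds_in_pool[part] = rounds_in_pool.get(part, 0) + 1
    return placed
```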

C. INITIAL SEQUENCE
Our algorithm starts from a predefined sequence and searches for a solution with high utilization. To enhance the performance of the local search, we select the predefined sequence in descending order according to one of the following measures (a sketch of these sort keys is given after the list):
• area of the polygon.
• perimeter of the polygon.
• area of the smallest rectangle containing the polygon.
• area of the polygon · (1 − irregularity), where the irregularity is the area of the polygon divided by the area of the smallest rectangle containing the polygon.
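For illustration, the sort keys above can be computed as follows. We use shapely, and the axis-aligned envelope stands in for the enclosing rectangle (the paper's smallest enclosing rectangle may be rotated), so this is an approximation of the last two measures.

```python
from shapely.geometry import Polygon

def initial_sequence(parts, rule="area"):
    """Build the predefined descending-order input sequence.

    `parts` is a list of vertex lists; `rule` selects one of the four
    measures described in Section IV-C.
    """
    def key(vertices):
        poly = Polygon(vertices)
        rect_area = poly.envelope.area           # axis-aligned bounding box
        if rule == "area":
            return poly.area
        if rule == "perimeter":
            return poly.length
        if rule == "rectangle":
            return rect_area
        # 'area * (1 - irregularity)' with irregularity = area / rect_area
        return poly.area * (1.0 - poly.area / rect_area)

    return sorted(parts, key=key, reverse=True)
```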

D. TWO-STAGE HEURISTIC
In our algorithm, if we wish to compute a solution with higher utilization, we can search over the parameters γ_1, γ_2, γ_3, γ_4, α as well as over the input sequences. However, since the number of possible sequences equals n!, searching directly over the sequences is costly and inefficient. As described in Algorithm 1, in each iteration our algorithm chooses the best-fit part among the next α consecutive parts, which can be regarded as a modification of the predefined sequence. Consequently, adjusting the parameters yields a different evaluation function and hence a different insertion order of the parts. Since our algorithm involves only 5 parameters, searching for better parameters in such a low-dimensional space is more efficient than searching over all possible predefined sequences. As a result, we propose a two-stage search strategy: we first search for good parameters and then, with the parameters fixed, search over the predefined sequences for better utilization.
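A minimal sketch of the two-stage idea follows. The paper's Algorithm 2 is a genetic algorithm (tournament selection, arithmetic/PMX crossover, Gaussian/inversion mutation); to keep the sketch short, we substitute simple random perturbation for the GA operators, so this illustrates the two-stage structure only. The run_gas callback, the parameter ranges, and the starting values are assumptions.

```python
import random
import time

def two_stage_search(base_sequence, run_gas, param_generations=10, time_limit=1200.0):
    """Stage 1 tunes (gamma1..gamma4, alpha) with the sequence fixed; stage 2
    perturbs the sequence with the best parameters fixed.  `run_gas(seq,
    params)` is assumed to run one GAS pass and return its utilization.
    """
    best_params = (0.8, 1.0, 0.1, 0.1, 5)                 # arbitrary starting guess
    best_seq = list(base_sequence)
    best_util = run_gas(best_seq, best_params)

    # Stage 1: parameter search (the paper uses a GA here).
    for _ in range(param_generations):
        gammas = tuple(max(0.01, g + random.gauss(0.0, 0.1)) for g in best_params[:4])
        alpha = max(1, best_params[4] + random.choice((-1, 0, 1)))
        cand = gammas + (alpha,)
        util = run_gas(best_seq, cand)
        if util > best_util:
            best_params, best_util = cand, util

    # Stage 2: sequence search under a wall-clock time limit (in seconds).
    start = time.time()
    while time.time() - start < time_limit:
        seq = best_seq[:]
        i, j = sorted(random.sample(range(len(seq)), 2))
        seq[i:j + 1] = reversed(seq[i:j + 1])             # inversion move
        util = run_gas(seq, best_params)
        if util > best_util:
            best_seq, best_util = seq, util

    return best_seq, best_params, best_util
```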

V. COMPUTATIONAL RESULTS
Two groups of instances were tested using the GAS: data provided by the Alibaba Cloud and some of the benchmark instances. The data provided by the Alibaba Cloud are real data from industrial production; shirts, trousers and other parts are combined for packing, and the numbers of both parts and vertices are relatively large. The classic benchmark instances cover various materials, and we chose fabric material, including shirts and trousers, to test the GAS. For each instance, independent of the group, the algorithm was executed ten times. The tests were run on an Intel(R) Xeon(R) Silver 4110 CPU @ 2.1 GHz with 32 cores and 394 GB of memory.

[Algorithm 2 (two-stage genetic algorithm), excerpt: in the first stage, individuals are selected by tournament selection, crossed by arithmetic crossover and mutated by Gaussian mutation until the stopping criterion, and the parameters γ*_1, γ*_2, γ*_3, γ*_4, α* with the highest fitness are kept; in the second stage, an initial population of sequences {seq_1, ..., seq_M} is generated by mutation and crossover, the fitness of each sequence is computed by GAS(seq_i, γ*_1, γ*_2, γ*_3, γ*_4, α*) for i = 1, ..., M, and the sequences are selected by tournament selection, crossed by partially matched crossover and mutated by inversion mutation.]

A. TEST INSTANCES
In Table 1, we summarize some basic characteristics of the test instances; all cases admit the orientations 0° and 180°.
Data 1-4 are large-scale industrial datasets collected by the Alibaba Cloud and can be retrieved from their website. These datasets have over 250 parts each, with more than 70 vertices per part on average, showing that the problems are large-scale with complex part contours. As illustrated in Table 1, the two preprocessing algorithms proposed in Section 2.1 significantly reduce the average number of vertices per part. Moreover, a certain cutting gap must be reserved among the parts, and some of the sheets have flaws in them, which must be avoided during packing.
The other two datasets were generated from benchmarks in the ESICUP datasets. Since our algorithm is dedicated to large-scale fabric packing problems, we select the fabric packing datasets 'shirts' and 'trousers' from the ESICUP datasets. Because these two datasets are small-scale, we generate large-scale instances by copying the shapes of these two instances four times.

B. TEST RESULTS
In our numerical examples, GAS + GA denotes our greedy adaptive search modified by the genetic algorithm. We tested the performance of both GAS and GAS + GA. The GAS algorithm is a fast deterministic algorithm, so it only needs to be run once. Based on GAS, GAS + GA is a nondeterministic modification; therefore, we present both the best utilization and the average utilization in Tables 2 and 3. In our numerical experiments, the mutation rate in GAS + GA equals 0.3, the crossover rate equals 1 and the tournament size is 2. In addition, since the search space of the sequences is much larger than the search space of the parameters, we choose the population sizes N = 16 and M = 64 in Algorithm 2. Additionally, the maximum number of generations for the first stage is 10, while the second stage of Algorithm 2 stops when its runtime exceeds the time limit.
Data 1-4 were made public recently, so no paper has comparable test results on these instances. Since the algorithm code in previous papers is not open-source, we compared our results with those obtained by running the open-source software DeepNest [17], shown in Table 2. The final results of GAS + GA on Data 1-4 are presented in Fig. 8-11. From Table 2, we can conclude that GAS + GA is more stable and efficient than DeepNest. On Data 1-4, the utilization achieved by our algorithm was 3%-4% higher than the best value obtained by DeepNest. Furthermore, we ran GAS + GA with two different evaluation functions to illustrate the benefit of the restricted local search with depth, as presented in Section IV-B: one uses (16) as its evaluation function, and the other uses (16) with fixed α = 1. It is worth mentioning that when we fix α = 1, both γ_3 and γ_4 are inactive and set to 0, since they are designed to adjust the placing order in the pool. As can be seen from Table 2, the restricted local search with depth indeed improves the utilization and delivers better packing results.
Benchmark instances have been tested extensively in the literature, but few articles have tested the enlarged (multiplied) cases. Hu et al. [27] proposed a fast algorithm and tested it on 'shirts*4'; Elkeran [21] proposed a guided cuckoo search and obtained a best layout utilization of 88.96% for 'shirts'; Sato et al. [41] applied the raster penetration map and achieved a best result of 90.06% for 'trousers'. Moreover, the latter two algorithms are based on the searching-over-the-layout approach, which lacks scalability and efficiency for large-scale packing problems, as described in the introduction. Compared with the algorithms above, as shown in Table 3, our algorithm shows advantages both in operational efficiency and in utilization on large-scale problems. Additionally, we present the final packing results in Fig. 12 and Fig. 13.

C. COMPUTATIONAL PERFORMANCE DISCUSSION
In 2-dimensional irregular packing problems, most existing approaches have been tested on the ESICUP dataset. Most cases in the ESICUP dataset are small-scale or medium-scale and have simple shapes. In this paper, however, we focus on large-scale packing problems, where a piece of cloth is usually 50 to 200 metres long. Additionally, due to the lack of open-source code, we compared our algorithm with DeepNest, a state-of-the-art open-source solver based on the bottom-left strategy and improved by deep learning [17]. In our previous numerical examples, we enlarged the benchmark instances by copying, ran GAS + GA, and efficiently achieved high-utilization results. In addition, we ran GAS on the data provided by the Alibaba Cloud and achieved high-utilization results, which were 3%-4% better than those of DeepNest.
However, the lack of open-source code makes it difficult to assess the quality of our algorithm against other approaches, which were tested in different running environments and with different computational capacities. To strengthen the comparison, a computational performance study of GAS + GA was conducted, and the numerical results are presented in this subsection. During the execution of GAS + GA on all test examples, the utilization of the solutions was sampled every 60 seconds. In addition, we ran the first stage of GAS + GA for 10 generations and then terminated the second stage when the running time exceeded 1,200 seconds. This procedure is equivalent to setting stricter time limits on GAS + GA. Since datasets 1-4 have not been tested in existing works, we present the evolution of the utilization rate in Fig. 14 to illustrate the detailed performance of GAS + GA. In addition, because trousers*4 and shirts*4 have not been tested by existing efficient approaches, we present the difference between GAS + GA and the best-in-literature utilization rates for trousers and shirts in the ESICUP dataset; this difference is shown in Fig. 15.
The testing results in Fig. 14 and Fig. 15 show similar tendencies across all test examples. It can be noted that the utilization increases more rapidly during the first stage of GAS + GA, which coincides with our discussion in Section IV-B and illustrates the high efficiency of our two-stage algorithm. This provides further evidence that a reduced time limit does not greatly impact the results for these cases.

VI. CONCLUSION
The greedy adaptive search algorithm (GAS) is designed to solve large-scale fabric packing problems. Inspired by TOPOS, we construct a dynamically adjusted evaluation function as well as a restricted local search strategy. In addition, we propose a two-stage genetic algorithm that searches for both proper parameters and input sequences.
In our algorithm, overlap is detected by the no-fit polygon, which is computed in parallel in the preprocessing step. Because the contours of the parts in the fabric packing problem can have a large number of vertices, we introduce two algorithms to simplify the contours and thus accelerate the geometric computations in the preprocessing stage. As illustrated in Table 1, the proposed approaches reduce the number of vertices by approximately 50%.
We then tested two sets of instances using the proposed GAS + GA. We first compared GAS + GA with GAS + GA (α = 1), GAS and DeepNest on our test problems. The results illustrate that GAS + GA is more efficient and stable, and that the proposed restricted local search with depth yields higher utilization than DeepNest, even in one pass of the algorithm. In addition, tests using benchmark cases show that GAS + GA yields competitive solutions, producing the best solutions in the literature on our test instances, which were generated from shirts and trousers in the ESICUP dataset.
In conclusion, on large-scale packing problems, our algorithm is significantly better than existing open-source software and published approaches. The GAS contributes both a new evaluation function and the idea of penalties in the restricted local search. The results show the potential of applying GAS + GA to large-scale packing problems in the fabric industry, and we will continue to develop efficient algorithms for large-scale packing problems.