Tabu Search-Based Heuristic Solver for General Integer Linear Programming Problems

This paper presents a tabu search-based heuristic solver for general integer linear programming (ILP) problems as a dependable alternative to branch-and-bound (B&B) solvers. It aims to expand the range of ILP instances for which optimization practitioners can obtain reasonable solutions using either B&B or heuristic solvers. The challenge in developing such a heuristic solver is ensuring both good performance and versatility. The proposed solver addresses this difficulty by incorporating several performance-improving techniques for search behavior and computation speed while maintaining versatility. These techniques include instance size reduction, neighborhood filtering, and accelerated solution evaluation. The proposed algorithm was implemented in C++17 and made available as an open-source solver that accepts ILP instances formulated for B&B solvers without modification. In numerical experiments with a 120-second time limit, the proposed solver found better feasible solutions than existing open-source B&B solvers for 17 out of the 82 feasible pure integer instances from the MIPLIB 2017 Benchmark Set, including instances with over 100,000 variables. An ablation study verified that the incorporated techniques were essential to the performance.


I. INTRODUCTION
This paper presents a tabu search-based heuristic solver for general integer linear programming (ILP) problems. ILP problems are a significant class of combinatorial optimization problems with diverse applications, including crew scheduling [1], timetabling [2], fault diagnosis [3], and resource allocation [4]. Although branch-and-bound (B&B) algorithms have been extensively studied to derive exact optimal solutions [5], [6], [7], [8], there is a demand for obtaining high-quality feasible solutions quickly, even without a guarantee of optimality. Efficient heuristics have been proposed for subsets of ILP problems [9], [10], [11], [12]; however, studies on heuristics for general ILP problems remain limited.
The purpose of this study is to develop a practical open-source heuristic solver for general ILP problems as a dependable alternative when B&B solvers fail to provide satisfactory solutions. Note that this study aims not to compete with B&B solvers but to expand the range of ILP instances for which optimization practitioners can obtain reasonable solutions using either B&B or heuristic solvers. Here, "practical" refers to the ability to handle ILP instances formulated for B&B solvers without any modification and to the capability of finding better feasible solutions than B&B solvers for some instances. The challenge in developing such a heuristic solver is ensuring both good performance and versatility. This paper proposes an algorithm and solver based on tabu search [13]. The proposed solver addresses this difficulty by incorporating several performance-improving techniques for search behavior and computation speed while maintaining applicability to broad ILP instances. These techniques include instance size reduction, neighborhood filtering, and accelerated solution evaluation. Although most of these techniques are well known, this paper contributes the design of a solver that integrates them. This paper also provides comprehensive performance evaluation results of the proposed solver compared with existing B&B solvers.
The limitations of this study must be mentioned. Owing to budget constraints, the performance of the proposed solver was compared with that of open-source B&B solvers, although commercial solvers are generally more powerful. To compensate for this deficiency and strengthen the evaluation baseline, the comparison was made with the best-performing tested B&B solver for each instance. In addition, despite the proposed solver being designed for practical applications, it has not yet been verified with instances from real-life problems, because the evaluation in this paper primarily focuses on performance for various instances.
The remainder of this paper is organized as follows. Section II reviews the related work and clarifies the position of this study. Section III introduces the notation, ILP problem formulation, and tabu search with a prototype algorithm. Section IV presents the basic design of the tabu search algorithm for ILP problems. Section V details the performance-improving techniques, and Section VI presents the complete proposed algorithm. Section VII evaluates the performance of the proposed algorithm-based solver using MIPLIB 2017 instances [14]. This section also presents an ablation study to clarify the contribution of each technique to the performance. Section VIII states the achievements of this paper and future work.

II. RELATED WORK
A. HEURISTICS FOR ILP PROBLEMS
Finding high-quality feasible solutions is crucial for B&B solvers, and several heuristics have been proposed, including feasibility pump [15], [16], [17], relaxation induced neighborhood search [18], and relaxation enforced neighborhood search [19]. Nevertheless, they require solving the linear programming (LP) relaxation, which is computationally expensive. Feasibility jump has been proposed as an LP-free heuristic and has improved a B&B solver [20].
Local search algorithms without LP relaxation have been proposed for set partitioning and covering problems, which are subsets of ILP problems [9], [10], [11], [12]. The algorithm proposed in [12] outperformed commercial solvers for large-scale set partitioning problems with more than two million variables, indicating that heuristic approaches are practical for large-scale ILP instances.
Although there are few studies on heuristics applicable to the general ILP problem, weighted tabu search has been proposed for constraint satisfaction problems, a problem class that encompasses ILP [21], [22]. Weighted tabu search has demonstrated remarkable performance in general assignment problems, nurse rostering, and timetabling.

B. METAHEURISTICS
Tabu search is one of the leading metaheuristics, which are frameworks of heuristics for combinatorial optimization problems independent of a specific problem. Metaheuristics include simulated annealing [23], genetic algorithms [24], guided local search [25], and iterated local search [26], in addition to tabu search. Each metaheuristic is characterized by a diversification strategy to prevent being trapped at a specific local optimum and to search for better solutions. Tabu search prevents revisiting of recently explored solutions. Simulated annealing probabilistically allows a move to a worse solution. Genetic algorithms generate various solutions through the stochastic mutation of solution components and crossover of multiple solutions. Diversification strategies are not limited to those related to solution moves. Guided local search modifies the evaluation function to facilitate escape from local optima. Iterated local search introduces diversity by perturbation of obtained local optima.
The concept of metaheuristics has been extended to continuous optimization, and various algorithms have been proposed, including particle swarm optimization [36], differential evolution [37], and cuckoo search [38]. They have been applied to complex problems in which a simple formula cannot describe the objective function, including feature selection in machine learning [39] and reconfiguration of electrical power systems [40]. Some studies have extended continuous metaheuristics to nonlinear integer programming problems, but most have dealt with low-dimensional problems with up to 30 variables [41], [42], [43].

C. OPEN-SOURCE HEURISTIC SOLVERS
Open-source heuristic solvers for general combinatorial optimization include OptaPlanner [44] and METSlib [45], [46]. OptaPlanner is a constraint satisfaction solver implemented in Java for various combinatorial problems such as layout planning, nurse rostering, vehicle routing, timetabling, and cloud balancing [47], [48]. METSlib is a tabu search-based optimization framework implemented in C++. They have the advantage of being applicable to broader combinatorial optimization problems that are not limited to ILP. However, using these solvers requires coding problems according to their architecture, which places a heavy burden on their use as alternatives to B&B solvers.

D. POSITION OF THIS STUDY
The algorithm proposed in this paper is a specialized version of weighted tabu search tailored for ILP problems. Although the proposed solver does not involve algorithmic innovation, it is tuned as a practical heuristic solver for ILP problems. Compared to previous studies, the proposed solver is differentiated in the following points:
• It can address general ILP problems, not limited to any specific structural form.
• It does not employ LP relaxation, allowing fast search even for large instances.
• It directly accepts Mathematical Programming System (MPS) files, a standard instance format used by commercial and open-source B&B solvers [49].
These features allow users to apply the proposed solver to ILP instances immediately when B&B solvers fail to obtain satisfactory solutions, thereby expanding the range of ILP instances for which optimization practitioners can obtain reasonable solutions.
It should be noted that the MIPLIB series [14], [50], benchmark sets for mixed ILP problems, is an indispensable foundation for this study. The MIPLIB series provides extensive instances with detailed statistical information, such as the classification and breakdown of the constraints that compose the instances. These data allow the development of a general-purpose solver for various instances.

III. PRELIMINARY
A. NOTATION
The symbols R and Z denote the sets of real numbers and integers, respectively. Bold symbols denote column vectors, and the transposition of a vector is written, for example, as x⊤. The symbol 0 denotes a vector for which all components are zero. The symbol e_j denotes the standard unit vector in which only the value of the j-th component is one.

B. ILP PROBLEMS
An N-dimensional ILP problem under M_E equality and M_L inequality constraints can be formulated as follows:

(ILP) minimize z(x) := c⊤x subject to a_i⊤x = b_i (i ∈ I_E), a_i⊤x ≤ b_i (i ∈ I_L), l ≤ x ≤ u, x ∈ Z^N,

where c ∈ R^N is the cost vector, l, u ∈ Z^N (l ≤ u) are the lower and upper bound vectors of x, respectively, I_E := {1, …, M_E} is the index set of equality constraints, I_L := {M_E + 1, …, M_E + M_L} is the index set of inequality constraints, and the tuples of (a_i, b_i) define the constraints. The value of the objective function is denoted by z(x) := c⊤x, and the values of the constraint functions are denoted by g_i(x) := a_i⊤x − b_i (i ∈ I_E ∪ I_L). Let J := {1, …, N} and I := I_E ∪ I_L denote the index sets of the variables and constraints, respectively. Let I_viol(x) denote the index set of constraints with violations at the solution x, and let its subsets restricted to the equality and inequality constraints be defined accordingly.

Algorithm 1 TabuSearch (Prototype)
Input: initial solution x_0, tabu tenure T
Output: incumbent solution (x*, f*)
1: T ← ∅
2: x ← x_0
3: (x*, f*) ← (x, f(x))
4: while termination condition is not satisfied do
5:   x′_1 ← arg min_{x′ ∈ N(x)} f(x′)
6:   x′_2 ← arg min_{x′ ∈ N(x) \ T} f_eval(x′)
7:   if f(x′_1) < f* then
8:     x ← x′_1; (x*, f*) ← (x′_1, f(x′_1))
9:   else
10:    x ← x′_2
11:  end if
12:  update T with x and T
13: end while
14: return (x*, f*)

C. TABU SEARCH
Tabu search is a metaheuristic that extends local search algorithms using a tabu mechanism for search diversification [13]. At each iteration, a tabu search algorithm evaluates all neighboring solutions of the current solution and then selects the best solution not recorded in the tabu list as the next solution. The tabu list is updated based on the search history of the previous T ∈ Z iterations, where T is the tabu tenure. A technique called the aspiration criterion overrides the tabu list when the incumbent solution can be updated.
A tabu search algorithm comprises a solution space X, a neighborhood N : X → 2^X, an objective function f : X → R, and an evaluation function f_eval : X → R. The neighborhood N defines the adjacency between solutions in X. The objective function f is the metric whose value is minimized through optimization. The evaluation function f_eval scores the neighboring solutions to select the next solution. Algorithm 1 illustrates the prototype tabu search algorithm.
From a practical standpoint, tabu search has the following features suited for a general-purpose solver.
Early discovery of high-quality feasible solutions. Tabu search is based on local search with the best-move strategy, sequentially moving to the neighboring solution that improves the evaluation function value the most. This behavior is suitable for the early discovery of high-quality feasible solutions. Although this strategy requires longer computation times to evaluate all neighboring solutions, significant acceleration is possible using the techniques presented in Sections V-E and V-F.
Minimal user-tuning parameters. General-purpose solvers should have few user-tuning parameters to reduce user burden. Tabu search is user-friendly in this respect, as it involves only one parameter, the tabu tenure.
Flexibility of computational resources. A tabu search algorithm can find good local optima even with limited time and provides better solutions with longer computation time. Moreover, its performance can be enhanced by simply increasing the computing resources. In particular, multicore CPUs can parallelize and accelerate the evaluation of neighboring solutions.
Tabu search was employed as the core algorithm of the proposed solver owing to the aforementioned features. However, the core algorithm can be replaced with another metaheuristic if it satisfies these features. Indeed, the proposed solver employs techniques to improve the search behavior, which diminishes some characteristics of the original tabu search.

IV. TABU SEARCH FOR ILP PROBLEMS
A. SOLUTION SPACE AND NEIGHBORHOOD
Tabu search cannot handle constraints directly, and it is impractical to restrict the solution space exclusively to feasible solutions. Therefore, we define the solution space as all integer solutions within the bounds, including feasible and infeasible solutions.
Let x denote the current solution, and N(x) the neighboring solutions of x. We introduce M(x), the set of moves from the current solution to its neighboring solutions. Because M(x) is more expedient than N(x) in the algorithm design, we refer to M(x) as the neighborhood of x instead of N(x).
We consider multiple neighborhood types: flip, shift, and jump. The flip neighborhood is for binary variables and comprises moves that flip the value of a single component of x. The shift and jump neighborhoods are for nonbinary variables. The shift neighborhood comprises moves that increment/decrement the value of a single component of x within its range, and the jump neighborhood comprises moves that jump the value of a single component of x by half of the margin toward the bounds. The flip, shift, and jump neighborhoods, denoted by M_flip, M_shift, and M_jump, are defined as follows:

M_flip(x, Ĵ) := {(1 − 2x_j) e_j | j ∈ Ĵ ∩ J_bin},
M_shift(x, Ĵ) := {e_j | j ∈ Ĵ \ J_bin, x_j < u_j} ∪ {−e_j | j ∈ Ĵ \ J_bin, x_j > l_j},
M_jump(x, Ĵ) := {⌊(u_j − x_j)/2⌋ e_j | j ∈ Ĵ \ J_bin, x_j < u_j} ∪ {−⌊(x_j − l_j)/2⌋ e_j | j ∈ Ĵ \ J_bin, x_j > l_j},

where J_bin ⊆ J denotes the index set of binary variables, and Ĵ ⊆ J is introduced to filter the neighboring solutions to be evaluated and is discussed further in Section V-E.
For the following discussion, we define J_Δx := {j ∈ J | Δx_j ≠ 0} and I_Δx := {i ∈ I | a_ij ≠ 0 for some j ∈ J_Δx}, which are the index sets of the variables and constraints sensitive to a move Δx, respectively.

B. OBJECTIVE AND EVALUATION FUNCTIONS
We transform the original problem (ILP) into an augmented problem (ILP′) in which the objective function is penalized for constraint violations. Let us denote I^+ := I_E ∪ I_L and I^− := I_E as aliases. By associating these index sets, we introduce the violation functions g_i^+(x) := max(g_i(x), 0) (i ∈ I^+) and g_i^−(x) := max(−g_i(x), 0) (i ∈ I^−) and their corresponding penalty weights (w^+, w^−). Then, we define the penalty function p(x; w^+, w^−) := Σ_{i∈I^+} w_i^+ g_i^+(x) + Σ_{i∈I^−} w_i^− g_i^−(x) and the objective function f(x; w^+, w^−) := z(x) + p(x; w^+, w^−). Based on the objective function, we define the evaluation function f_eval as the objective function plus a small tie-breaking term determined by n := (n_1, …, n_N)⊤, the counters of the variables' updates. The last term serves as a tie-breaker that gives preference to variables with fewer updates [22]. In the definition of f_eval, the penalty weights are denoted by (v^+, v^−), indicating that they are adaptively adjusted and can differ from (w^+, w^−).

V. PERFORMANCE-IMPROVING TECHNIQUES
This section introduces the techniques used to improve the performance of a tabu search algorithm for (ILP′). The improvement perspectives are search behavior and computation speed. Table 1 lists the techniques detailed in this section and their contribution to these perspectives.

A. PENALTY WEIGHT ADJUSTMENT
Sufficiently large penalty weights (w^+, w^−) ensure that the optimal solution set of (ILP′) matches that of (ILP). Nevertheless, excessive penalty weights cause a lack of search diversity, whereas overly small penalty weights prevent finding feasible solutions. Several mechanisms have been proposed to address this trade-off by adaptively adjusting the penalty weights [11], [12], [22]. We employ a mechanism similar to that in [22]; it repeats the tabu search algorithm in subloops and adjusts the penalty weights after each subloop.
Let (w^+, w^−) and (v^+, v^−) denote the global and local penalty weights, respectively. Let x*_w and x*_v denote the global and local best solutions found thus far in terms of f(x; w^+, w^−) and f(x; v^+, v^−), respectively, and let f(x*_w) and f(x*_v) denote their corresponding objective function values.
Algorithm 2 adjusts the values of (v^+, v^−). The algorithm has two parameters: r ∈ (0, 1) is the rate used to decrease the values of (v^+, v^−), and δ is an externally specified Boolean flag that determines whether to decrease or increase them. In the complete algorithm proposed in Section VI, Algorithm 2 decreases the values of (v^+, v^−) if any of the following conditions is satisfied in the last subloop:
1) The global best solution was updated.
2) The local best solution was feasible.
3) The local best solution was not updated.
4) The global objective function value varied little.
Conditions 1) and 2) imply that there is a margin to decrease the values of (v^+, v^−). Condition 3) indicates that they are too large. For condition 4), the values of (v^+, v^−) should be reduced to avoid stagnation. If any of these conditions is satisfied, Algorithm 2 decreases the values of the local penalty weights for the constraints satisfied at the given local best solution x*_v. Maintaining the values of (v^+, v^−) for the violated constraints at x*_v enlarges the difference in penalty between the violated and satisfied constraints, facilitating the discovery of feasible solutions. If none of these conditions is satisfied, the local best solution must be infeasible. In this case, Algorithm 2 increases the values of (v^+, v^−). The updaters (v^+_*, v^−_*) computed in lines 12 and 16 can be regarded as the optimal solution components of the problem of minimizing ‖(ṽ^+, ṽ^−) − (v^+, v^−)‖_2^2 subject to f(x*_v; ṽ^+, ṽ^−) = f(x*_w; w^+, w^−). That is, the new values for (v^+, v^−) are determined such that their changes are minimized in terms of the Euclidean norm while ensuring that the local objective function value at x*_v equals the global objective function value. The computed local penalty weights are capped by the global penalty weights in lines 13 and 17 to prevent the divergence of (v^+, v^−) [22].

B. INSTANCE SIZE REDUCTION
Instance size reduction aims to remove redundant variables and constraints before optimization through inferences and is a critical technique for B&B solvers. Instance size reduction can narrow the search space and may benefit heuristics in improving the search behavior and computation speed. We employ the preprocessing and probing techniques detailed in [51] as generic methods applicable to broad instances. In addition, we consider specific methods for set partitioning and covering problems (e.g., in [52], [53], [54], [55]), considering that they have numerous applications.
The proposed solver attempts instance size reduction not only before but also during optimization with the inequality c⊤x ≤ z*, where z* denotes the objective function value of the feasible incumbent solution. For example, consider an instance with objective function x_1 + 10 x_2 + 100 x_3, where x_1, x_2, x_3 are binary variables. If we obtain a feasible solution with an objective function value of 10, we can conclude that x_3 = 0 because no better feasible solution exists when x_3 = 1.

C. SELECTION VARIABLE EXTRACTION
In ILP modeling, we frequently define constraints of the form Σ_{j∈J′} x_j = 1 (J′ ⊆ J) to select exactly one element from multiple options. In this paper, these constraints are referred to as selection constraints. We introduce an additional neighborhood specialized for selection constraints, in which a move changes the variable that takes the value of one. This neighborhood is extracted based on selection constraints that do not share variables with each other. Let Ĩ_sel denote the index set of candidate selection constraints, i.e., equality constraints of the above form over binary variables, and let I_sel ⊆ Ĩ_sel denote the mutually disjoint constraints actually extracted. Then, we define the selection neighborhood for the i-th constraint (i ∈ I_sel) as M_sel,g_i(x, Ĵ) := {e_j − e_{j′} | j ∈ J_{g_i} ∩ Ĵ, j′ ∈ J_{g_i}, x_{j′} = 1, j ≠ j′}, where J_{g_i} denotes the index set of variables included in the i-th constraint. All possible selection moves for the current solution x are given by M_sel(x, Ĵ) := ∪_{i∈I_sel} M_sel,g_i(x, Ĵ). We refer to the variables x_j (j ∈ J_sel := ∪_{i∈I_sel} J_{g_i}) as selection variables.
To keep the selection constraints satisfied throughout the optimization, the values of the selection variables must be initialized such that exactly one variable takes the value of one for each selection constraint. In addition, the values of the selection variables must be changed only by moves within the selection neighborhood; hence, the indices of the selection variables are excluded from J_bin.

D. DEPENDENT VARIABLE EXTRACTION
It is common to introduce equality constraints that virtually define dependent variables to clarify the model, as follows:

y = Σ_{j∈J′} a_j x_j + b, (11)

where a_j (j ∈ J′) and b are integer parameters, and y and x_j are integer variables. This constraint is possibly accompanied by a range constraint l_y ≤ y ≤ u_y. We can consider removing the variable y and constraint (11) by substituting y into the objective function and the other constraints, adding the new inequality constraints l_y ≤ Σ_{j∈J′} a_j x_j + b ≤ u_y. This approach requires a rule to extract the dependent variables.
We consider an integer variable as a dependent variable if it satisfies all of the following conditions:
1) The candidate dependent variable can be regarded as y in an equality constraint of the form (11).
2) For the equality constraint (11) detected in condition 1), the candidate is determined uniquely.
3) Given the candidate, only one constraint of the form (11) satisfies conditions 1) and 2).
4) The candidate does not have direct or indirect interdependency with other candidates.
Condition 1) ensures the integrity of the candidate dependent variable. Condition 2) determines the candidate uniquely for each equality constraint (11). If this condition were dropped, it would be possible to extract multiple dependent variables with interdependence from an equality constraint (11). Conditions 3) and 4) ensure the uniqueness of the expression of the candidate. Condition 4) can be verified by representing the dependency of candidates as a directed graph and extracting cycles (e.g., in [56]). Table 2 provides examples of the extraction of dependent variables.

E. NEIGHBORHOOD FILTERING
We apply the following rules to filter the neighboring solutions to accelerate the iterations:
1) If the current solution is feasible, only neighboring solutions that improve the objective function value are evaluated.
2) Otherwise, only neighboring solutions that reduce the violation of at least one constraint are evaluated [22], [57], [58], [59], [60].
These rules require the detection of the variables that can improve the solution in the aforementioned respects when their values change. For feasible cases, the index set of improvable variables can be computed as J_feasible(x) := {j ∈ J | (c_j > 0 ∧ x_j > l_j) ∨ (c_j < 0 ∧ x_j < u_j)}. For the other cases, the index set of improvable variables J_infeasible(x) comprises the variables that appear in at least one violated constraint with a sign that allows its violation to be reduced. We can derive J_feasible(x) and J_infeasible(x) from the cost vector, the constraint coefficients, and the current solution.
Search algorithms without this technique would repeatedly perform moves within feasible solutions in a narrow area, missing better feasible solutions. Moving to a solution that improves the objective function value may eventually lead to better solutions, even if it temporarily violates some constraints.

F. INCREMENTAL PENALTY EVALUATION
We employ an incremental computation scheme to accelerate the evaluation of neighboring solutions [12]. By exploiting linearity, we can compute the values of z(x + Δx) and g_i(x + Δx) (i ∈ I) in O(|J_Δx|) time as follows:

z(x + Δx) = z(x) + Σ_{j∈J_Δx} c_j Δx_j, g_i(x + Δx) = g_i(x) + Σ_{j∈J_Δx} a_ij Δx_j (i ∈ I). (15)

Using (15), we can compute the value of p(x + Δx; v^+, v^−) in O(|I_Δx||J_Δx|) time from the definitions given in Section IV-B. We further streamline the computation of p(x + Δx; v^+, v^−) by evaluating the penalty itself incrementally as

p(x + Δx; v^+, v^−) = p(x; v^+, v^−) + Σ_{i∈I^+_Δx} v_i^+ (g_i^+(x + Δx) − g_i^+(x)) + Σ_{i∈I^−_Δx} v_i^− (g_i^−(x + Δx) − g_i^−(x)), (16)

where I^+_Δx := I_Δx ∩ I^+ and I^−_Δx := I_Δx ∩ I^−. These index sets should be computed before optimization because their derivation for each evaluation can be computationally expensive. For moves within the flip, shift, and jump neighborhoods, I_Δx is invariant to changes in the solution; thus, we can compute I^+_Δx and I^−_Δx in advance. For moves within the selection neighborhood, I_Δx depends on the current solution. Considering this, we redefine I^+_Δx and I^−_Δx for selection moves over all variables in the corresponding selection constraint so that they can be computed in advance. Although this replacement increases the constraint indices to be scanned in (16), it avoids computing I^+_Δx and I^−_Δx for each evaluation, which provides a more significant advantage.
VI. PROPOSED ALGORITHM
The initial penalty weight w_0 should be set sufficiently large (e.g., w_0 = 10^7), and the coefficients α and β should be set appropriately small (e.g., α = 10^−4, β = 10^−2). The decrease rate of the penalty weights r, the tabu tenure T, and the number of tabu search iterations K affect the performance. Empirically, the setting r = 0.9, T = 10, and K = 200, obtained through trial and error, shows good performance for various instances. The termination conditions for Algorithms 4 and 5 include iteration and time limits. In addition, Algorithm 5 can be terminated if M is empty or the algorithm finds an optimal solution. In principle, the proposed algorithm cannot prove the optimality of a solution. However, if the current solution is feasible and there are no moves that improve the objective function value, the solution must be optimal. If Algorithm 5 finds an optimal solution, then Algorithm 4 can be terminated.
In Algorithm 5, the tabu mechanism is implemented using the state L_j (j ∈ J), which records the last iteration at which each variable was updated. This implementation avoids recording the solutions themselves and allows an efficient determination of whether a neighboring solution violates a tabu.

VII. EVALUATION
Numerical experiments were conducted to evaluate the performance of the proposed algorithm-based solver using the MIPLIB 2017 Benchmark Set [14]. This benchmark set has been carefully designed to include a wide variety of instances for assessing the performance of B&B solvers. The instance set is suitable for evaluating the proposed solver, which aims to complement the performance of B&B solvers for a wide range of instances. Open-source B&B solvers, including SCIP 8.0.3 [61], [62], HiGHS 1.5.1 [63], and CBC 2.10.8 [64], were selected as comparison targets. The proposed solver was compared with the best-performing B&B solver among SCIP, HiGHS, and CBC for each instance. Note that the evaluation aims not to prove the superiority of the proposed solver over B&B solvers but to verify its complementary performance to them. In other words, we expect the proposed solver to obtain better feasible solutions than B&B solvers for some instances. This section also verifies the effects of the incorporated techniques through an ablation study, which is a performance evaluation using solvers that remove each technique individually.

A. IMPLEMENTATION
The proposed algorithm was implemented in C++17 and made available as an open-source software called PRINTEMPS [65]. It provides a standalone solver that accepts MPS files so that users can immediately apply it to instances formulated for B&B solvers. The available version of PRINTEMPS was further tuned using automatic tabu tenure adjustment, a more complex penalty weight adjustment, and parallelization via OpenMP [66] to ensure stable performance for broad instances. In this section, the evaluation results for both the original and tuned versions are provided for reference; however, we focus on the results of the original version.

B. SETUP
Eighty-two pure integer instances from the MIPLIB 2017 Benchmark Set were used. Table 3 lists the parameter values for the proposed algorithm. The initial solution for each instance was determined by x_j = min(u_j, max(0, l_j)) (j ∈ J \ J_sel), and the initial values of the selection variables were set greedily to minimize the overall constraint violation. Table 4 presents the components of the computational environment. The computation time limit was set to 120 seconds for all tested solvers. The computation time for the proposed solver accounts for reading the MPS file, reducing the instance size, and extracting the selection and dependent variables.
Note that we cannot rule out the possibility that a particular parameter setting leads to a good performance by chance. Statistical tests based on the results of multiple trials are crucial to ensure the reliability of evaluation results [67], [68]. However, applying them to this evaluation was challenging due to the comparison with deterministic solvers, the absence of an intuitive method for evaluating trials in which solvers did not find feasible solutions, and the extensive computation time involved. Consequently, a sensitivity analysis was performed to evaluate the robustness of the performance against parameter value changes and substantiate the reliability of the evaluation results. Table 5 lists the parameter settings for the sensitivity analysis. The parameters r, T, and K, whose values are changed from the nominal case, were chosen because the search behavior is sensitive to their values.
The experimental cases for the ablation study correspond to the techniques described in Section V and were performed under the same instances, parameter settings, computational environment, and time limit as those in the performance evaluation. The ablated solver for incremental penalty evaluation computes the penalty function value from scratch for each neighboring solution without using (16).

C. RESULTS
Table 6 presents the performance evaluation results regarding finding high-quality feasible solutions. Table 7 shows the sensitivity analysis results. Table 8 lists the ablation study results with supplements in Tables 9 and 10. In particular, Table 9 lists the average neighborhood size and solution evaluation time for the relevant cases, and Table 10 lists the results of the instance size reduction and extraction of the dependent and selection variables. The figures and tables presented in the following discussion are based on these results.

D. DISCUSSION 1) PERFORMANCE OF THE PROPOSED SOLVER
Fig. 1 compares the performance of the proposed solver with that of the B&B solvers. We define the representative performance of the B&B solvers based on the results of the best performers among SCIP, HiGHS, and CBC for each instance. Fig. 1 verifies that the performance of the proposed solver is complementary to that of the existing B&B solvers, even though it does not compete with them in overall performance. Specifically, only the proposed solver found feasible solutions for six instances. In addition, it found better feasible solutions than the B&B solvers for 11 instances.
The proposed solver's outperformance of the B&B solvers on these 17 instances in total was not due to a particular parameter setting by chance. Table 7 indicates that varying the parameter values leads to different final solutions. Still, for each of the 17 instances, there is a parameter setting other than the nominal case under which the proposed solver outperformed the B&B solvers. Moreover, for the 16 instances except tpfb-network, the proposed solver demonstrated superior performance to the B&B solvers in at least six of the seven cases. The results indicate that the performance of the proposed solver is robust against parameter value changes; that is, it performs well under various parameter settings. In addition, the results verify that it was not a coincidence that the proposed solver outperformed the B&B solvers for these instances.
Fig. 2 depicts the distribution of the numbers of variables and constraints, highlighting the instances for which the proposed solver outperformed the B&B solvers, and Table 11 lists these instances together with the types of constraints they include [14]. Fig. 2 shows that the proposed solver has complementary performance to the B&B solvers even for large instances, including those with more than 100,000 variables. Table 11 shows that the proposed solver is not specialized for a subset of ILP problems (e.g., set partitioning problems) but performs well for instances with various constraint types. These results indicate that the proposed solver is promising as a general-purpose solver applicable to instances of diverse scales and structures.
Although these results meet expectations, there is room for further improvement. Because the proposed solver is based on local search, it performs poorly for instances in which feasible solutions are distant from each other. For example, the instance highschool1-aigio, for which the proposed solver could not find any feasible solution, includes constraints of the form ∑_{j∈J′} x_j = |J′| x′ with J′ ⊆ J, where x_j (j ∈ J′) and x′ are binary variables. Such a constraint is satisfied only when the values of the included variables are all zero or all one. For ILP instances that include such constraints, transitions between feasible solutions would be rare as long as we employ moves that change one component at a time. A possible improvement involves incorporating a neighborhood in which moves set the values of the included variables to all zero or all one. Another promising approach is to substitute x′ for x_j (j ∈ J′) in the objective function and the other constraints.

2) CONTRIBUTION OF PERFORMANCE-IMPROVING TECHNIQUES
Fig. 3 compares the performance of the original solver with that of the ablated solvers, each of which removes one performance-improving technique. Fig. 3 shows that the original solver outperformed all ablated solvers in terms of both the number of feasible solutions and the objective function values of the feasible solutions. This result confirms that all techniques are essential for the performance. Note that the ablated solver without incremental penalty evaluation obtained better solutions for the instance rococoC10-001000 than the original solver, even though the technique should not change the search behavior. This inconsistency arises from differences in the implementation of penalty evaluation; the ablated solver computes the penalty function value from scratch each time, whereas the original solver computes it incrementally based on the current solution. Due to rounding errors in floating-point arithmetic, the evaluation function values computed by these solvers might differ slightly for the same solution, leading to potential discrepancies in the solution selected in each iteration and in the final best solutions.
We now discuss the contributions of neighborhood filtering and incremental penalty evaluation. Figs. 4 and 5 show the relationship between instance size and the effects of these techniques. As quantitative measures of the instance size, we focus on the number of variables, the number of constraints, and the nonzero component density in the constraint functions of the reduced instance. Fig. 4 indicates that neighborhood filtering affects instances with many variables, many constraints, and low nonzero component density. Fig. 5 shows a similar tendency for the incremental penalty evaluation. In principle, tabu search algorithms evaluate all neighboring solutions and select the best among them as the next solution. Addressing large instances requires improving the efficiency of this process. The proposed solver achieves this by filtering neighboring solutions and accelerating their evaluation.

TABLE 11. List of instances for which the proposed solver outperformed the B&B solvers, with the types of constraints they include (computation time: 120 seconds). The constraint types are based on [14].

FIGURE 4. Relationships between the reduced instance size and the effect of neighborhood filtering. The y-axis values are from the column under ''Average Neighborhood Size'' in Table 9.

FIGURE 5. Relationships between the reduced instance size and the effect of incremental penalty evaluation. The y-axis values are from the column under ''Average Solution Evaluation Time'' in Table 9.
From the standpoint of implementing a general-purpose solver, the dominant techniques are penalty weight adjustment, incremental penalty evaluation, and neighborhood filtering. These techniques are universal; they work for any instance and thus have a high implementation priority in developing a solver. In addition, the comparison results with the ablated solvers indicate that incorporating these techniques significantly improves the performance and computation speed. The other techniques, namely instance size reduction, selection variable extraction, and dependent variable extraction, target specific instances. These are also important, since the proposed solver found feasible solutions for some instances only with these techniques enabled. However, they should be given lower implementation priority than the universal techniques.
19074 VOLUME 12, 2024

VIII. CONCLUSION
This paper presented a tabu search-based heuristic solver for general ILP problems. The proposed solver incorporates techniques for penalty weight adjustment, instance size reduction, selection variable extraction, dependent variable extraction, neighborhood filtering, and incremental penalty evaluation.
In numerical experiments with a 120-second time limit, the proposed solver found better feasible solutions than existing open-source B&B solvers for 17 out of the 82 feasible pure integer instances from the MIPLIB 2017 Benchmark Set, including instances with over 100,000 variables. The results confirm that the performance of the proposed solver is complementary to that of the existing B&B solvers. The ablation study demonstrated that the incorporated techniques were essential for achieving the desired performance.
Future work will include performance improvement. The proposed solver is unsuitable for instances in which feasible solutions are distant from each other. Possible approaches to address this drawback include automatically defining appropriate neighborhoods and transforming the instance into a form suitable for the proposed solver. Incorporating the characteristics of other metaheuristics with high diversification performance, such as genetic algorithms and simulated annealing, is also practical. Another challenge is automatic parameter tuning. The proposed solver features six parameters. Although the parameter values listed in Table 3 exhibited good performance for various instances in the numerical experiments, instance-wise parameter tuning is required to obtain even better solutions. It is not ideal for a general-purpose solver to expect users to adjust the parameter values manually for each instance. Machine learning-based parameter tuning techniques are promising approaches [69], [70].
, and the values of the other components are all zero. The notation ∥x∥ represents the Euclidean norm of x. The notation f(a; b) indicates that the parameter b shapes the function f, and the symbol a is its argument. The notation A := B indicates that the expression B defines the symbol A. The notation A ← B in pseudo-code indicates that A is initialized or updated by the value of B.
where I_x^+ := I^+ ∩ I_x and I_x^- := I^- ∩ I_x. This modification improves the time complexity for computing the value of p(x + Δx; v^+, v^-) to O(|I_x||J_x|), and in particular to O(|I_x|) for moves within the flip, shift, and jump neighborhoods. The index sets I_x^+

VI. COMPLETE ALGORITHM
Algorithms 4 and 5 illustrate the complete proposed algorithm. They omit processes that can be performed before optimization, including instance size reduction and the extraction of selection and dependent variables. Algorithm 4 iteratively runs the tabu search shown in Algorithm 5 while adjusting the local penalty weights. Except for the initial solution, the proposed algorithm includes six parameters: w_0, r, T, K, α, and β.

Algorithm 4 ProposedAlgorithm
Input: initial solution x_0, parameters w_0, r, T, K, α, β
Output: global best solution (x*_w, f*_w)
1: // Initialize the global and local penalty weights.
2: for ∀i ∈ I^+ do
3:   v_i^+ ← w_0, w_i^+ ← w_0
4: end for
5: for ∀i ∈ I^- do
...
25: if δ_global = true and x*_w is feasible then
26:   attempt instance size reduction using c^⊤ x*_w ≤ f*_w
27: end if
28: end while
29: return (x*_w, f*_w)

Algorithm 5 TabuSearch for (ILP′)
Input: initial solution x_0, objective function value of global best solution f*, global penalty weights (w^+, w^-), local penalty weights (v^+, v^-), updating counters n, parameters T, K, α
Output: global and local best solutions with supplementary information (x*_w, f*_w, x*_v, f*_v, δ_global, δ_local, F)
1: // Initialize the current and local best solutions.

FIGURE 1. Performance comparison of the proposed solver and B&B solvers.

FIGURE 2. Distribution of the numbers of variables and constraints in the reduced instances for which the proposed solver outperformed the B&B solvers.

TABLE 7. Sensitivity analysis results (computation time: 120 seconds). Numbers in bold indicate the objective function values for feasible solutions, and those in parentheses indicate the violation of constraints. Numbers with an asterisk indicate that optimality or infeasibility was proven. The ''superior to B&B solvers'' column shows the rate of the sensitivity analysis cases where the proposed solver outperformed the B&B solvers. Considering the objective of this analysis, values in this column are only listed for instances where the proposed solver derives better solutions than the B&B solvers in the nominal case, for which the instance number is highlighted with double asterisks.

TABLE 8. Ablation study results (computation time: 120 seconds). Numbers in bold indicate objective function values for feasible solutions, and those in parentheses indicate the violation of constraints. Numbers with an asterisk indicate that optimality or infeasibility was proven. Cells without numbers indicate that the ablated technique does not have an inherent effect on the performance.

FIGURE 3. Performance comparison results of the proposed solver with the ablated solvers.

TABLE 2. Examples of dependent variable extraction. ''None'' indicates that no dependent variable is extracted because of a conflict with the condition in parentheses.

TABLE 3. Parameter values for the proposed algorithm.

TABLE 4. Components of the computational environment.

TABLE 5. Parameter settings for the sensitivity analysis. The parameter values for the nominal case are identical to those in Table 3. Numbers in bold indicate that they are changed from the nominal case.

TABLE 6. Performance evaluation results (computation time: 120 seconds). Numbers in bold indicate the objective function values for feasible solutions, and those in parentheses indicate the violation of constraints. Numbers with an asterisk indicate that optimality or infeasibility was proven. Note that the CBC solver concluded that the instances k1mushroom and rain01 are infeasible, which is incorrect.

TABLE 10. Results of the instance size reduction and extraction of the dependent and selection variables. The checkmarks in the ''in Preprocess'' and ''in Optimization'' columns indicate that the bounds of the variables were updated in the corresponding phase.