An Adaptive Opposition-Based Learning Selection: The Case for Jaya Algorithm

Over the years, the Opposition-based Learning (OBL) technique has proven effective in enhancing the convergence of meta-heuristic algorithms. Because OBL can generate alternative candidate solutions in one or more opposite directions, it ensures good exploration and exploitation of the search space. In the last decade, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL. Although proven useful, much of the existing adoption of OBL into meta-heuristic algorithms has been based on a single technique. If the search space contains many peaks with potentially many local optima, relying on a single OBL technique may not be sufficiently effective. In fact, if the peaks are close together, a single OBL technique may not be able to prevent entrapment in local optima. Addressing this issue, assembling a sequence of OBL techniques into a meta-heuristic algorithm can be useful for enhancing the overall search performance. Based on a simple penalty-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach for integrating more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs and selects among them based on their individual performance. Experimental results using combinatorial testing problems as a case study demonstrate that OBL-JA is very competitive with existing work in terms of test suite size. The results also show that OBL-JA performs better than the standard Jaya Algorithm in most of the tested cases, owing to its ability to adapt its behaviour based on the current performance feedback of the search process.

that covers the t-way interaction strength. Many reported test results indicate that a t-way test suite is as good as exhaustive testing [7], [8].
Over the last 10 years, many new meta-heuristic algorithms have been developed, often disguised by new inspirations and mathematical formulations. Despite these so-called new inspirations and formulations, the fact remains the same [9]: the performance of any meta-heuristic algorithm depends on two core parts, intensification (local search) and diversification (global search). Intensification explores promising neighbouring regions in the hope of finding better solutions. Diversification, on the other hand, ensures that all regions of the search space are visited, which enables the algorithm to jump out of any local optimum [10].
More specifically, the performance of meta-heuristic algorithms is highly dependent on: a) the fine balance between intensification and diversification. Too much intensification may cause a quick loss of diversity in the population, which increases the possibility of the algorithm being trapped in a local optimum, while aggressive diversification may lead to an inefficient search and slow down the overall search performance [11]; b) the operators or components used for performing intensification and diversification, such as selection, mutation, and crossover in the Genetic Algorithm (GA), or local and global pollination in the Flower Pollination Algorithm (FPA) [12]. To enhance search performance, many researchers have turned to the Opposition-based Learning (OBL) technique [13]-[16]. The main strength of OBL is that alternative candidate solutions can be generated from one or more opposite directions, thus ensuring sufficient coverage of the search space. Recently, many OBL techniques have been established in the literature, including Standard-OBL, General-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL [13]. OBL has been integrated into many soft computing approaches such as optimization methods [14], Artificial Neural Networks (ANN) [15], Reinforcement Learning (RL) [17], and Fuzzy Systems [16], to name a few. Meta-heuristic algorithms such as GA [18], SA [19], PSO [20], Biogeography-based Optimization (BBO) [21], HS [22], Gravitational Search Optimization (GSO) [23], the Ant Colony System (ACS) [24], and the Group Search Algorithm (GSA) [25] have been known to utilize the concept of OBL to enhance their search capabilities [26]. Meanwhile, in the field of Artificial Neural Networks, OBL is used to enhance training in Backpropagation Through Time (BPTT) networks [15].
For the same purpose, OBL has also been adopted in Reinforcement Learning [17] to address the problem of delayed reward.
Although proven useful, many existing integrations of OBL into meta-heuristic algorithms have been based on a single technique. If the search space contains many peaks with potentially many local optima, relying on a single OBL technique may not be sufficiently effective. If the peaks are close together, a single OBL technique may not be able to prevent entrapment in local optima. Addressing this issue, assembling a sequence of OBL techniques into the meta-heuristic algorithm can be useful for enhancing the overall search performance. Based on a simple penalty-and-reward mechanism, the best-performing OBL is rewarded by continuing its execution in the next cycle, whilst a poorly performing one ceases its current turn. This paper presents a new adaptive approach for integrating more than one OBL technique into the Jaya Algorithm, termed OBL-JA. Unlike other adoptions of OBL, which use one type of OBL, OBL-JA uses several OBLs and selects among them based on their individual performance. The Jaya Algorithm has been chosen because it is parameter-free and easy to implement.
Moreover, mixed results show that the capability of existing t-way strategies is still limited, as no single strategy appears to be superior in all configurations considered [7], [27]. The effort to address the aforementioned shortcomings is justified through the search for a new strategy that takes the newly developed breed of meta-heuristic algorithms into account.
Given such prospects, this paper proposes a new t-way testing strategy based on an adaptive Opposition-based Learning Jaya Algorithm, called OBL-JA, for t-way test suite generation. Our contributions can be summarized as follows: • First, this paper presents a new adaptive approach to the Jaya Algorithm based on Opposition-based Learning, called OBL-JA. Unlike other OBL variants, which use only one type of OBL, the proposed approach uses several OBLs and selects among them based on current performance. By doing so, OBL-JA is able to achieve a fine balance between intensification and diversification, since it adopts a dynamic selection mechanism between different OBL operators, each of which has different capabilities.
• Second, this paper proposes a new t-way testing strategy based on OBL-JA for generating t-way test suites, adding new value to the domain of software testing. The proposed strategy is compared with different t-way testing strategies. Here, two experiments have been conducted: the first measures the percentage use of each OBL operator in OBL-JA, while the second measures the exploration and exploitation of the proposed strategy. The rest of this paper is structured in the following manner. Section 2 gives an overview of t-way testing and its theoretical background. Section 3 reviews existing strategies. Section 4 presents the design of the proposed strategy, including a detailed review of OBL and its variants. Experiments and a discussion of the results are elaborated in Section 5. Lastly, Section 6 concludes the work along with recommendations for future work.

II. OVERVIEW ON T-WAY TESTING
A. T-WAY TEST SUITE GENERATION
t-way testing is a sampling technique used for generating representative test cases for testing software/hardware systems. The idea behind t-way testing is that the tester does not have to test all input and output combinations; instead, the tester needs to meet some level of coverage such that every t-way combination of input values is covered by the test cases.
To illustrate how t-way testing can reduce the number of test cases, consider an online payment system. It allows the electronic transfer of money, in which the user has to fill out an online payment form with the required information and submit it to the merchant's website. In this illustrative example, there are six inputs or parameters to be keyed in and submitted to the merchant's website: the selected payment method, card number, name on card, expiration month and year, and card CVV, as shown in FIGURE 1. Five payment methods are supported by the system: ''Visa Card'', ''Master Card'', ''American Express'', ''Discover'', and ''PayPal''. The fields ''Name-On-Card'' and ''Card-Number'' accept one string value each, while ''Expiration-Date'' takes two input values: MM for the month and YY for the year, from 16 to 31. The card CVV parameter accepts one input value.
Ideally, testing this system requires 900 test cases (5 × 1 × 1 × 12 × 15 × 1), which exhaustively cover all combinations of the six parameters' values; however, testing all combinations, especially for a complex system, is impractical. Turning to a two-way test suite can reduce this to 180 test cases, thereby saving 80% in time, effort and cost. Based on some studies, the 180 test cases generated using two-way testing (interaction coverage t = 2) can detect 93% of software failures, while 98% of failures can be detected if three-way testing is applied. The same studies show that the rate of fault detection can reach 100% if the interaction coverage strength is between 4 and 6 [28]-[32].
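To make the arithmetic above concrete, the following sketch counts the exhaustive and two-way interaction spaces for a hypothetical parameter model mirroring the 5 × 1 × 1 × 12 × 15 × 1 figure quoted in the text (the variable names are illustrative):

```python
from itertools import combinations

# Values per parameter: method, card number, name, month (MM), year (YY), CVV.
# These counts mirror the 5 x 1 x 1 x 12 x 15 x 1 model used in the text.
params = [5, 1, 1, 12, 15, 1]

# Exhaustive testing needs one test case per element of the Cartesian product.
exhaustive = 1
for v in params:
    exhaustive *= v

# Pairwise (t = 2) testing only needs every 2-way value combination covered,
# so the number of coverage targets is the sum over all parameter pairs.
pair_targets = sum(a * b for a, b in combinations(params, 2))

# No pairwise suite can be smaller than the largest single parameter pair,
# here MM x YY = 12 x 15 = 180 -- the two-way suite size quoted in the text.
lower_bound = max(a * b for a, b in combinations(params, 2))

print(exhaustive, pair_targets, lower_bound)  # 900 414 180
```

The lower bound follows because every test case assigns exactly one (MM, YY) pair, so at least 180 test cases are needed to cover all 180 such pairs.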

B. THEORETICAL BACKGROUND
A test suite (T) is an n × m array of n rows of test cases, where each test case is a combination of parameter values. A covering array (CA) is a mathematical notation used to describe a t-way test suite [33], [34]. The notation CA(N; t, v^p) represents a uniform covering array, where p denotes the number of parameters, v denotes the number of values per parameter, and t denotes the level of interaction strength. For example, CA(18; 2, 3^13) consists of 18 rows of test cases generated from 13 parameters with three values each. If the covering array is not uniform and the parameters do not all have the same number of values, it is termed a mixed covering array (MCA). MCA(12; 3, 2^3 3^1) represents a covering array with 12 final test cases, generated for a system with three 2-valued parameters and one 3-valued parameter.
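As a sketch of what the CA notation asserts, the following hypothetical checker verifies that a candidate array covers every t-way combination of parameter values (function and variable names are our own):

```python
from itertools import combinations, product

def covers_all(tests, num_values, t):
    """Check whether `tests` covers every t-way combination of parameter
    values, i.e. whether it is a covering array CA(N; t, ...)."""
    p = len(num_values)
    for cols in combinations(range(p), t):       # every t-subset of parameters
        needed = set(product(*(range(num_values[c]) for c in cols)))
        seen = {tuple(row[c] for c in cols) for row in tests}
        if not needed <= seen:                   # some value combination missing
            return False
    return True

# Toy example: 4 rows cover all pairs of 3 binary parameters, i.e. CA(4; 2, 2^3).
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all(ca, [2, 2, 2], 2))  # True
```

Dropping any row of this toy array leaves at least one pair uncovered, which is exactly what makes 4 the minimal size here.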

III. RELATED WORK
In the domain of software testing, existing t-way testing strategies can be characterized into two main approaches: algebraic and computational [4], [35]. The algebraic approach often generates the test sets without enumerating any combinations, because the test set is constructed directly using some lightweight computations. Strategies of this approach include t-way covering arrays (CA), orthogonal Latin squares (OLS), and test configuration (TConfig). However, the limitation of this approach is that algebraic strategies are often restricted to small configurations [36], [37]. On the other hand, computational approaches generate the test suite using greedy algorithms so as to cover the maximum number of interaction combinations. Tools and strategies of this approach generate the test cases using either the One Parameter at a Time (OPT) or the One Test at a Time (OTT) approach.
OPT strategies generate a complete test set for the first t parameters, then horizontally add one parameter per iteration until all the combinations are covered. The best-known example of this approach is the in-parameter-order (IPO) strategy and its variants [38], [39].
OTT strategies iteratively generate one complete test case per iteration until all combinations of the values are covered. An example of this approach is the automatic efficient test generator (AETG) [40]. Based on the concept of AETG, various strategies have been developed, such as GTWay [41], Jenny [42], TConfig [43], and WHITCH [44].
Owing to their efficiency, many researchers have adopted meta-heuristic algorithms such as TS, SA, ACA, GA, HS, FPA, and CS for generating t-way test cases. In general, meta-heuristic based t-way strategies use the algorithm as the core implementation for generating the test suite, and most of them generate the test suite using OTT. The strategy uses the meta-heuristic algorithm to generate one test case per iteration and then adds the generated test case to the final test suite; this procedure is repeated until all combinations are covered. In the literature, we can recognize three categories of meta-heuristic based strategies. The first category uses a single meta-heuristic algorithm as the search engine for the test case. Examples of this category include SA [1], GA [1], [2], ACA [2], PSO [3], HS [4], FPA [6], the Whale Optimization Algorithm [45] and CS [5].
Based on the above review, most of the existing strategies are based on single meta-heuristic algorithms; only a few works have used hybridized or adaptive meta-heuristic algorithms. Another point worth mentioning is that most of the existing strategies rely on parameters that need to be tuned. In this research, we propose a new t-way testing strategy based on an adaptive OBL-Jaya Algorithm, which is parameter-free. The strategy adapts its OBL operator to enhance its search capabilities.

IV. PROPOSED STRATEGY
The proposed strategy can be considered as two levels of optimization: the first level uses the Jaya algorithm as the core implementation, while the second level adopts different OBL operators, including Standard-OBL, General-OBL, Quasi-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL, to generate the opposition of the current population.

A. ORIGINAL OPPOSITION-BASED LEARNING AND ITS VARIANTS
1) OPPOSITION-BASED LEARNING
In general, the idea of basic Opposition-based Learning (OBL) is that the opposite of the current solution may be better than the current solution itself. It attempts to provide a better chance of finding a solution x* from the current solution x as follows: x* = a + b - x, where a and b are the lower and upper boundaries of x.

2) GENERAL OPPOSITION-BASED LEARNING
General Opposition-based Learning (OBL-G) [53] uses the concept of basic OBL with a random weight (in the spirit of Cauchy mutation), which can help a trapped solution jump out of local minima. It is commonly defined as x* = k(a + b) - x, where k is a random number in [0, 1].

3) QUASI-OPPOSITION BASED LEARNING
Quasi-Opposition Based Learning (OBL-Q) [54] generates a random point between the two inverse solutions, i.e. between the centre point c = (a + b)/2 and the OBL point of x. OBL-Q is defined by: x^qo = rand(c, a + b - x).

4) QUASI REFLECTION OPPOSITION BASED LEARNING
Quasi Reflection Opposition based Learning (QR-OBL) [55] is an extension of Quasi-Opposition based Learning, which generates a random point between the centre point and x itself and can be defined by: x^qr = rand(c, x).

5) CURRENT OPTIMUM OPPOSITION BASED LEARNING
Another version of OBL is Current Optimum Opposition based Learning (OBL-O) [56], which uses the search information of the current best solution x_best. OBL-O is defined by: x* = 2 x_best - x.
6) CENTROID OPPOSITION BASED LEARNING
Centroid-Opposition based Learning (OBL-C) [57] replaces the current optimum in OBL-O with the centroid M of the population, so the opposite point can be computed by: x* = 2M - x.
B. ORIGINAL JAYA ALGORITHM
The Jaya Algorithm (JA) [58] is one of the recent meta-heuristic algorithms, designed for solving general optimization problems. The idea of JA is that a potential solution should move towards the best solution and away from the worst solution; thus, JA needs only the best and worst solutions to generate a new solution. For generating a new solution X'_{i,j}, the following equation is used: X'_{i,j} = X_{i,j} + r_1 (X_{best,j} - |X_{i,j}|) - r_2 (X_{worst,j} - |X_{i,j}|), where X_{i,j} is the current solution, X_{best} is the best solution, X_{worst} is the worst solution, and r_1 and r_2 are random numbers in [0, 1]. FIGURE 2 summarizes the Jaya algorithm.
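The operators reviewed in this section can be sketched for a single decision variable x in [a, b]; the formulas follow the definitions above, and the function names are our own:

```python
import random

def obl(x, a, b):
    """Standard opposition: reflect x through the interval midpoint."""
    return a + b - x

def general_obl(x, a, b, rng):
    """General OBL: opposition scaled by a random weight k in [0, 1]."""
    return rng.random() * (a + b) - x

def quasi_obl(x, a, b, rng):
    """Quasi-opposition: random point between the centre and the opposite."""
    c = (a + b) / 2
    return rng.uniform(*sorted((c, obl(x, a, b))))

def quasi_reflect(x, a, b, rng):
    """Quasi-reflection: random point between the centre and x itself."""
    c = (a + b) / 2
    return rng.uniform(*sorted((c, x)))

def centroid_obl(x, centroid):
    """Centroid opposition: reflect x through the population centroid."""
    return 2 * centroid - x

def jaya_update(x, x_best, x_worst, rng):
    """Jaya move: approach the best solution and avoid the worst one."""
    r1, r2 = rng.random(), rng.random()
    return x + r1 * (x_best - abs(x)) - r2 * (x_worst - abs(x))

rng = random.Random(0)
print(obl(2.0, 0.0, 10.0))  # 8.0
```

For x = 2 in [0, 10], the opposite is 8, quasi-opposition samples from [5, 8], and quasi-reflection from [2, 5]; the different ranges are exactly why each operator has a different exploration/exploitation character.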

C. ADAPTIVE JAYA ALGORITHM BASED ON OPPOSITION-BASED LEARNING FOR TEST SUITE GENERATION
The proposed strategy utilizes the Jaya Algorithm (JA) as its core meta-heuristic algorithm to generate an optimal t-way test suite. The OBLs are included in OBL-JA to accelerate the convergence of the search process: OBL-JA uses the OBL operators to generate an opposite population. The current population and its opposite population are evaluated simultaneously, and the selection of the OBL to be used in the next iteration is based on the obtained results. The selection mechanism used in OBL-JA can therefore be seen as a switch. In order to generate a t-way test suite, the strategy starts by generating all possible t-way interactions of the inputs, which represent the search space, and adding them to the interaction list. For instance, if t = 3, the 3-way parameter combinations for 4 inputs (i.e. A, B, C, and D) with 2 values each are ABC, ABD, ACD, and BCD. Then, for each combination, all possible 3-way interactions are generated (refer to FIGURE 4). The next step of OBL-JA is to find the smallest number of test cases that cover all those interaction possibilities. In OBL-JA, each solution represents one test case. OBL-JA generates a population of solutions using the Jaya Algorithm (JP) and its opposite population (OP) simultaneously; elite solutions from JP and OP are then selected to form the next population, as shown in FIGURE 3. The complete step for OBL-JA includes finding the optimal test case, that is, the best solution with the highest weight.
Here, the solution's weight is the number of t-combination elements x_i that can be covered by the solution; that is, the fitness function to be optimized captures the weight of a test case x as the count of the t-combinations x_i it covers. The population is iteratively subjected to the improvement process until the termination condition is met (i.e. the maximum number of improvements is reached). The best solution is then selected and added to the final test suite, and the covered interaction elements are removed from the t-combinations list. The whole process is repeated until all t-combinations are covered.
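The weight function and the penalty-and-reward switch described above can be sketched as follows; the round-robin handover is one plausible reading of the mechanism, and all names are illustrative:

```python
from itertools import combinations

def weight(test, uncovered, t=2):
    """Fitness of a candidate test case: the number of still-uncovered
    t-way interactions (column-tuple, value-tuple pairs) it covers."""
    covered = {(cols, tuple(test[c] for c in cols))
               for cols in combinations(range(len(test)), t)}
    return len(uncovered & covered)

def pick_obl(operators, current, improved):
    """Penalty-and-reward switch: an operator keeps its turn while it
    improves the best-so-far solution; otherwise the next one takes over."""
    if improved:
        return current                        # reward: continue next cycle
    return (current + 1) % len(operators)     # penalty: cede the turn

uncovered = {((0, 1), (0, 0)), ((0, 2), (1, 1))}
print(weight((0, 0, 1), uncovered))  # covers ((0, 1), (0, 0)) only -> 1
```

Each improvement cycle would thus call the currently selected OBL operator to build the opposite population, score both populations with `weight`, and feed the outcome back into `pick_obl`.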

V. EXPERIMENTS AND DISCUSSION
In order to evaluate the efficiency of the proposed strategy, the variants of OBL-based Jaya Algorithm are first evaluated against each other. Then, the proposed OBL-based Jaya Algorithm is compared with existing test generation strategies. The results are displayed using tables and graphs. The experiments were performed on a Core i7-3770 CPU @ 3.40 GHz machine running Windows 7 Professional. We tuned the Jaya Algorithm parameters based on existing studies of test suite generation. The results are depicted in TABLES 2 to 10 and FIGURES 6 to 9. For statistical significance, OBL-JA is executed twenty times, noting both the average and the best results. Each cell indicates the minimum test suite size obtained by the existing strategies. Cells marked by a star (*) denote the best test size obtained by the corresponding strategy, while cells in bold denote the best test size obtained by OBL-JA compared with the standard Jaya Algorithm. Cells marked NA denote unavailability of results in the literature, such as the results of mAETG, AETG, HHH, and CS.
A. PARAMETER ADJUSTMENT
TABLE 1 depicts the parameters adopted for the meta-heuristic strategies [4], [5], [12], [46]. Regarding OBL-JA's parameter setting, the two common parameters, population size and iteration number, are tuned in order to select optimal values that lead to the best results. For tuning these parameters, two well-known covering arrays, CA(N; 2, 4^6) and SCA(N; 3, 9) [4], have been used. First, in order to determine the optimal test suite size, we try different values for population size and iteration number.
Concerning the population size and iteration number, from the results shown in TABLE 1, TABLE 2, and FIGURE 6, it is observed that a large population size may lead to better results, while too small a value may lead to poor results. As the population size increases up to 30, the performance of OBL-JA improves; however, a very large population size (e.g., 500) does not necessarily give a better test suite size, as shown in FIGURE 6. Hence, the best results are obtained when the population size is between 30 and 100. Likewise, we observe that as the iteration number increases, the best result obtained also improves; the best results are obtained when the iteration number varies from 300 to 500.

B. ANALYZING THE BEHAVIOUR OF OBL-JA
In order to analyze the behaviour of OBL-JA and evaluate the effect of introducing the OBLs into the standard Jaya Algorithm, two experiments have been conducted. In fact, the performance of OBL-JA depends heavily on the selected OBL, which is chosen by the selection mechanism. Like other meta-heuristic algorithms, the performance of OBL-JA is dependent on the fine balance between exploration and exploitation. Too much exploitation may cause a quick loss of diversity in the population, which increases the possibility of the algorithm being trapped in a local optimum, while aggressive exploration may lead to an inefficient search and slow down the overall search performance [11]. To determine the exploration and exploitation of the proposed strategy, an experiment is conducted measuring the Hamming distance between the test cases in the population, also known as the population's diversity rate. If the distance is large, the algorithm is exploring; otherwise, it is exploiting the search space. The Hamming distance between two test cases x_t^i and x_t^{i+1} is the number of positions at which their parameter values differ. The figure shows the average distance of the population at each iteration. Besides the standard Jaya and the proposed strategy, the figure shows the diversity rates of the other variants of OBL-based Jaya algorithm, based on the results of the first covering array in TABLE 4. The figure also shows that OBL-G obtained the lowest diversity rate, which means it tends towards exploitation rather than exploration. In contrast, both OBL-QR and OBL-JA obtained the highest rates.
Comparing the standard Jaya against OBL-JA, OBL-JA allows more diverse solutions than the standard Jaya. Although it obtained the highest diversity rate among the variants of OBL-based Jaya Algorithm, OBL-JA still achieves a balance between exploration and exploitation, since its rate remains below the maximum diversity rate.
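The diversity measurement used in this experiment can be sketched as follows; averaging over all population pairs and normalising by test-case length are our assumptions:

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance between two test cases of equal length."""
    return sum(a != b for a, b in zip(x, y))

def diversity_rate(population):
    """Average pairwise Hamming distance, normalised by test-case length:
    values near 1 suggest exploration, values near 0 suggest exploitation."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    n = len(population[0])
    return sum(hamming(x, y) for x, y in pairs) / (len(pairs) * n)

pop = [(0, 0, 0), (0, 1, 1), (1, 0, 1)]
print(diversity_rate(pop))  # each pair differs in 2 of 3 positions -> 2/3
```

Tracking this rate per iteration yields exactly the kind of diversity curve the figure plots for each Jaya variant.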

C. BENCHMARKING WITH EXISTING STRATEGIES ON TEST SIZES
To evaluate its obtained solutions in terms of test suite size minimization, OBL-JA is compared with existing meta-heuristic based t-way strategies. Our experiments are divided into four sets of comparisons, covering the configurations published in [4], [5], [59], configurations with v varied from 2 to 7 as shown in TABLE 8 [7], [12], [60], and configurations where p is varied from 5 to 12. For the configurations where p varies from 5 to 12, GTWay outperforms other strategies in 4 out of 8 cell entries, while OBL-JA comes second, outperforming other strategies in 2 entries, followed by HHH and FPA with 1 entry each. In fact, GTWay uses a backtracking concept that generates almost all possible solutions using recursion; thus, the time consumption of GTWay is usually exponential or worse. Concerning the performance of OBL-JA and the Jaya Algorithm, OBL-JA is still superior to the standard Jaya Algorithm; however, the Jaya Algorithm outperforms OBL-JA in two cases, when p = 7 and p = 9.
As for the comparative experiment involving CA(N; 4, v^10) with v varied from 2 to 7 in TABLE 8, OBL-JA outperforms the existing strategies in 2 out of 6 cell entries, while GTWay, MIPOG, CS and HHH come as runners-up with only 1 best entry each. IPOG, ITCH, Jenny, PICT, TConfig, TVG, CTE-XL, PSO, and HSS perform the poorest, with no single best cell entry. In this experiment, the standard Jaya algorithm fails to outperform the proposed strategy in any case. FIGURE 9 illustrates the comparison of OBL-JA against the Jaya Algorithm for TABLE 6 to TABLE 8. The comparison shows that OBL-JA performs better than the standard Jaya Algorithm in most cases. OBL-JA is able to generate better results owing to its ability to adapt its behaviour based on the problem itself. As stated earlier, the performance of any meta-heuristic based strategy depends heavily on its exploration and exploitation. OBL-JA utilizes the search capabilities of the OBL operators: since each OBL has its own searching capability, OBL-JA is able to switch from one OBL to another based on the problem being addressed.

D. STATISTICAL ANALYSIS
In order to analyze and verify our findings, this section presents a statistical analysis. Multiple comparisons of all obtained results are conducted. Wilcoxon signed-rank tests with Bonferroni-Holm correction are used to determine whether the proposed strategy presents a statistical difference with regard to the existing strategies.
The post-hoc Wilcoxon signed-rank test is used to analyze the significance of each pair of strategies. The Wilcoxon test is a non-parametric analysis technique that can be used to compare two sets of ordinal data subjected to different conditions. The Wilcoxon test statistic is calculated and converted into a conditional probability, the P-value. A small P-value means that there is strong evidence to reject the null hypothesis H0 (i.e. there is no difference between the two strategies' results) in favour of the alternative hypothesis.
In this test, OBL-JA is compared with each existing strategy separately, to test whether there is a significant difference between the results produced by the proposed strategy and the other strategies. Here, we have two hypotheses: the null hypothesis (H0) and the alternative hypothesis (H1). H0 indicates that there is no difference between the two strategies' results, while H1 indicates that there is a difference; in other words, H1 indicates that the test size obtained using OBL-JA is smaller than that of each individual strategy.
Since we are dealing with multiple comparisons, we are more likely to face Type I errors, i.e., the rejection of a true null hypothesis [61]. To control this effect, the rejection criteria for each individual test need to be adjusted. Here, the Bonferroni-Holm correction is adopted for each comparison level: the sorted p-values are compared with adjusted alpha values [62]. TABLE 10 shows the statistical analyses for the results in TABLE 5 to TABLE 8. Strategies with one or more NA entries, such as mAETG, AETG, SA, ACA, and GA, are ignored. The statistical results show that in most comparisons the null hypothesis is rejected with a significant difference. Although no statistical difference is shown in some comparisons, such as OBL-JA vs ITCH, OBL-JA vs HSS and OBL-JA vs HHH, the positive ranks of OBL-JA are higher than its negative ranks.
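The step-down correction works as follows on a set of hypothetical p-values (in practice these would come from the pairwise Wilcoxon signed-rank tests, e.g. via scipy.stats.wilcoxon):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Bonferroni-Holm step-down procedure: compare the i-th smallest
    p-value against alpha / (m - i) and reject until the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject

# Hypothetical p-values from four pairwise strategy comparisons.
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```

Note that the plain Bonferroni threshold (alpha/m = 0.0125 here) would reject only one of these hypotheses; Holm's step-down variant is uniformly more powerful while still controlling the family-wise error rate.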

VI. CONCLUSION
In this paper, we have proposed a new adaptive strategy for t-way test suite generation based on the Jaya Algorithm (JA) and the Opposition-based Learning (OBL) concept, called the adaptive Jaya Algorithm based on Opposition-based Learning for test suite generation (OBL-JA). OBL-JA has been obtained by grafting different types of OBL operators, such as Standard-OBL, General-OBL, Quasi-OBL, Quasi Reflection-OBL, Centre-OBL and Optimal-OBL, into the standard JA strategy. OBL-JA adopts a selection mechanism between the OBLs based on their performance. Experimental results and statistical analysis show that the OBL-JA based strategy outperforms the existing t-way strategies in many cases. OBL-JA has also been compared with the standard JA in the context of t-way test suite generation; in most cases, OBL-JA performs better than the standard JA owing to its ability to adapt its behaviour based on the problem itself. Owing to these encouraging results, we plan to apply OBL-JA to global optimization problems and to explore the possibilities of constraint-based software product lines.