A Robust Multi-Objective Feature Selection Model Based on Local Neighborhood Multi-Verse Optimization

Classification tasks often include, among the large number of features to be processed in the datasets, many irrelevant and redundant ones, which can even decrease the efficiency of classifiers. Feature Selection (FS) is the most common preprocessing technique utilized to overcome the drawbacks of the high dimensionality of datasets, and it often has two conflicting objectives: the first aims to maximize the classification performance, or equivalently to reduce the error rate of the classifier, while the second is designed to minimize the number of features. However, the majority of wrapper FS techniques are developed for single-objective scenarios. The Multi-Verse Optimizer (MVO) is one of the well-regarded optimization approaches of recent years. In this paper, a binary multi-objective variant of MVO (MOMVO) is proposed to deal with feature selection tasks. The standard MOMVO suffers from local optima stagnation, so we propose an improved binary MOMVO that addresses this issue using the memory concept and the personal best of the universes. The experimental results and comparisons indicate that the proposed binary MOMVO approach can effectively eliminate irrelevant and/or redundant features and maintain a minimum classification error rate when dealing with different datasets, compared with the most popular feature selection techniques. Furthermore, experiments on 14 benchmark datasets show that the proposed approach outperforms state-of-the-art multi-objective optimization algorithms for feature selection.


I. INTRODUCTION
Data mining is the process of extracting valuable knowledge and interesting patterns embedded in different data sources (e.g., databases and data warehouses) [1]. Data mining techniques are mainly classified into supervised (e.g., classification) and unsupervised (e.g., clustering) techniques [2]. Supervised learning techniques, such as kernel extreme learning [3], k-nearest neighbors (KNN) [4], and support vector machines (SVM) [5], [6], tend to learn a model able to map a data instance to a specific category or class. Unsupervised learning techniques, such as clustering, on the other hand, infer the structure of the data without having prior knowledge about their categories or classes [7].
Classification methods have been widely used in different real-world applications such as health informatics [5], [8], medical systems [9], [10], image processing [11], [12], protein classification [13], and feature fusion [14]. The main challenge with these applications is that the datasets have become very large due to the advancements in data collection tools [15]. High-dimensional datasets may include, in addition to the valuable features, some irrelevant and redundant features that may reduce the efficiency of the learning algorithms [16]. Therefore, preprocessing and preparing the datasets has become a crucial step in determining the success or failure of the learning algorithms [17].

Dimensionality reduction (i.e., Feature Selection (FS) and Feature Extraction (FE)) is one of the most common preprocessing techniques used to overcome the challenges of high-dimensional datasets [18]-[20]. This paper focuses on FS for classification tasks, where FS methods aim to determine the most informative features in a dataset within a reasonable training time for a specific classifier, simplify the learned models, and improve the performance of searching and classification engines [21]. However, searching for the most informative features is challenging due to the large feature space, as there are 2^n possible feature subsets in a dataset with n features. A specific feature may be considered important and beneficial for the classification model, yet it might be considered redundant when combined with other features. By contrast, a feature may be classified as irrelevant when considered individually while being relevant and beneficial for the learning performance in conjunction with other features.
Therefore, for a large n, it is impractical to exhaustively evaluate all feature subsets to find the best-performing one. Under those circumstances, FS is considered an NP-hard combinatorial problem [22]-[24].
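To make the combinatorial blow-up concrete, the following sketch enumerates every non-empty feature subset of a small dataset; the helper name `all_feature_subsets` is ours and not from the paper:

```python
from itertools import combinations

def all_feature_subsets(n_features):
    """Enumerate every non-empty feature subset of an n-feature dataset."""
    subsets = []
    for r in range(1, n_features + 1):
        subsets.extend(combinations(range(n_features), r))
    return subsets

# For n features there are 2^n - 1 non-empty subsets, so exhaustive
# evaluation quickly becomes infeasible: n = 20 already yields
# 1,048,575 candidate subsets.
print(len(all_feature_subsets(4)))  # 15 subsets for just 4 features
print(2 ** 20 - 1)                  # 1048575
```

Enumerating is already slow for a few dozen features, which is why heuristic search strategies are needed.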
Different search strategies have been used for FS, such as greedy-based strategies (i.e., sequential forward selection (SFS) and sequential backward selection (SBS) [25]). However, those methods typically have high computational complexity or suffer from premature convergence problems [26]. To overcome these problems, metaheuristics have been widely applied as search strategies in FS methods [27]. Evolutionary computation (EC) algorithms are population-based metaheuristics that have been successfully applied to tackle FS problems due to their superior global search ability. The most popular EC algorithms commonly used for dealing with FS are Genetic Algorithms (GA) [28], [29] and Particle Swarm Optimization (PSO) [30].
Considering the number of objectives evaluated for a candidate solution, metaheuristics can be classified into two categories. As the names imply, single-objective methods deal with one objective, while multi-objective techniques deal with two or more objectives, which are often in conflict [31]-[34]. The critical point in EC techniques is that they manipulate a set of solutions at each iteration of the optimization process. In other words, EC can produce multiple trade-off solutions in a single run, which enables them to show good efficacy on multi-objective optimization [35].
FS methods can be categorized into two main categories based on how they evaluate the generated feature subsets [36], [37]: filter approaches consider the correlations between features and the class without involving any learning algorithm, whereas wrapper approaches consider the performance of a learning algorithm (e.g., a classifier) in the evaluation process. Wrapper FS methods try to optimize two contradictory objectives when evaluating a feature subset: obtaining the minimum number of features and the minimum classification error rate. Hence, FS problems can be treated as multi-objective problems with contradictory cost functions. Most of the existing FS methods in the literature deal with a single objective, while only a few multi-objective FS studies have been reported. Multi-Verse Optimization (MVO) is a recent swarm-based approach that has shown strong exploratory and exploitative performance in dealing with several real-life engineering and science problems [38]. The MVO algorithm was proposed by Mirjalili et al. [39] to mathematically model the multi-verse philosophy in astrophysics. However, binary problems such as feature selection normally involve a large number of variables, which requires more efficient optimization approaches. This paper proposes an efficient binary multi-objective MVO optimizer with personal best to improve the efficacy of the basic MVO and handle feature selection tasks for the first time in the literature.
In this paper, we have made the following key contributions:
• Two enhancements of multi-objective MVO are proposed. In the first approach, a binary variant using an efficient transfer function is developed. In the second approach, the personal best location and the ''local best'' are embedded in MOMVO.
• The hybrid of MOMVO and the personal best concept is proposed for the first time to solve feature selection tasks.
• The proposed approaches have been tested on fourteen real benchmark datasets with different settings and characteristics to show their efficiency for feature selection tasks.
• The efficacy and qualitative results of the proposed technique are compared to several well-regarded and state-of-the-art multi-objective optimizers in the FS field from different aspects: the multi-objective version of PSO (MOPSO), the non-dominated sorting genetic algorithm (NSGA-II), the multi-objective evolutionary algorithm based on decomposition (MOEAD), the improved strength Pareto evolutionary algorithm (SPEA2), and the Pareto envelope-based selection algorithm (PESA2).
The rest of this paper is organized as follows: Section 2 presents the review of the related works about multi-objective feature selection algorithms. Section 3 describes the preliminaries of the feature selection and multi-objective optimization, and the MVO algorithm. Section 4 presents the details of the proposed approaches. The experiments and results are presented in Section 5. Finally, Section 6 discusses the concluding remarks and future works.

II. REVIEW OF RELATED WORKS
Feature selection techniques have been widely used in different computational applications including but not limited to medical science [40]- [42], sales forecasting [43], face recognition [44], and customer churn prediction [45]. When designing a machine learning technique [46], reducing the number of features in a dataset contributes to decreasing the required learning time by removing the redundant features. Also, it enhances the performance of the employed learning technique by removing the irrelevant, misleading, and inappropriate features [25].
Many works have applied FS methods to improve machine learning models. One example is the work in [47], where the authors discussed various types of evaluation measures for feature selection. Another example investigated FS for supervised classification approaches [48], whereas [49] proposed an FS model based on Shapley-value-embedded genetic algorithms and support vector machines. The recent work [50] presented an improved feature selection method using the Harris Hawks optimizer for gene expression data, [51] introduced an efficient FS approach using the Modified Social Spider Optimization (MSSO) algorithm, and [52] investigated unsupervised feature selection for enhancing the performance of classification models. Further, the authors of [53] applied FS combined with a CGA-NN classifier to obtain the optimal solution. Various other studies have also employed FS methods in the literature, such as [54]-[58].
As FS methods tend to improve the learning performance of the algorithm (e.g., classifier) by using the minimal number of features, the use of multi-objective optimization methods to tackle the FS methods has significantly grown in recent years [59], [60]. In this section, we explain the most crucial multi-objective FS approaches.
Recently, multi-objective EC algorithms (e.g., GA and DE) have been utilized to address the FS problem. In this sense, [61] proposed a multi-objective micro GA to form an ensemble optimizer that optimizes the FS problem in addition to optimizing the neural network classifier. The non-dominated sorting-based multi-objective GA II (NSGA-II) is a GA variant that was initially proposed by [62] to solve multi-objective optimization problems. NSGA-II was used in [45] as a multi-objective FS approach, with a Decision Tree C4.5 classifier as an evaluator, to design a customer churn predictor. [63] proposed a novel multi-objective FS approach that considers both the feature weights and the number of selected features as two objectives to be achieved for a facial recognition application. Another NSGA-II-based multi-objective FS approach was proposed in [64], where the classification accuracy and the number of selected features were treated as two objectives, and the user is allowed to choose a subset from the Pareto front. NSGA-III is another variant of the multi-objective GA used as a search strategy in several FS methods. [65] proposed an improved NSGA-III with a niche preservation procedure for the multi-objective FS problem, where the number of selected features and the sum weight were used as two different objectives to be achieved in the selected subset.
Besides, more works have utilized multi-objective FS techniques; for instance, a hyperparameter tuning approach based on multi-objective FS was proposed by [66], [67] presented a survey on multi-objective FS and its applications, and [68] employed multi-objective FS with bacterial foraging optimization.
Concerning the DE algorithm, a multi-objective FS approach was proposed in [69], where maximizing the classification accuracy and minimizing the number of selected features were considered as two opposing objectives. In addition, [44] proposed a multi-objective FS approach based on the DE algorithm. The authors used their approach for Facial Expression Recognition (FER) method. Therefore, they applied the modified multi-objective DE to select the best subset of features and the support vector machine classifiers for emotion recognition accuracy.
Recently, many multi-objective PSO-based FS approaches were proposed in the literature. [70] introduced the use of a multi-objective PSO algorithm for the FS problem. In that paper, two variants of PSO were proposed to generate the Pareto front of non-dominated feature subsets. [71] proposed a multi-objective PSO FS approach, called RFPSOFS, where the features are ranked based on their frequencies in the archive set, and then they are used to guide the particles and for the archive refinement. [72] proposed an enhanced multi-objective PSO-based FS approach by employing an adaptive uniform mutation operator to enhance the exploration capability of the PSO algorithm, in addition to adopting a local learning strategy to enhance the algorithm's exploitation capability. A similar multi-objective PSO approach was proposed by [73] to optimize both the parameters of the SVM classifier and the number of selected features. In [74], an enhanced multi-objective PSO was used to search for the Pareto front feature subsets that satisfy different objectives. Another multi-objective PSO-based FS approach was proposed in [75], where the reliability and the classification accuracy were considered as two objectives to be achieved, and a bare-bones-based PSO with reinforced memory strategy and a hybrid mutation operator was used to search the Pareto-front feature subsets.
Moreover, other multi-objective swarm-based metaheuristic algorithms have been used to tackle the FS problem. For example, a multi-objective variant of Artificial Bee Colony (ABC) was used as a search strategy in a multi-objective FS method in [26] and [76]. In addition, a multi-objective version of the Gravitational Search Algorithm (GSA) was proposed and used to tackle the FS problem in [77]. Our work differs from these techniques in proposing two different versions of multi-objective MVO. In the first version, we apply a binary method utilizing a transfer-function-based approach, while in the second version, the local neighborhood space is explored and the personal best location in this space is utilized within MOMVO.

III. PRELIMINARIES

A. MULTI-OBJECTIVE OPTIMIZATION
In multi-objective (MO) problems, it is required to deal with two or more opposing objectives to obtain the best set of solutions. In MO optimization, the purpose is to optimize several conflicting objective functions to attain the optimum solutions. The mathematical formulation of a MO minimization problem is as follows:

minimize F(x) = [f_1(x), f_2(x), ..., f_k(x)]  (1)

subject to:

g_i(x) <= 0, i = 1, 2, ..., m  (2)
h_i(x) = 0, i = 1, 2, ..., p  (3)

where x denotes the vector of decision variables, f_i(x) denotes the i-th objective function of x, k shows how many functions are to be minimized, and g_i(x) and h_i(x) are the inequality and equality constraints of the intended problem.
In MO optimization, we measure the quality of solutions according to trade-offs between the considered objective functions. Let x_1 and x_2 be two solutions of the above k-objective problem; x_1 dominates x_2 when the conditions in Eq. (4) are satisfied:

f_i(x_1) <= f_i(x_2) for all i in {1, ..., k}, and f_j(x_1) < f_j(x_2) for at least one j  (4)

If no other solution dominates a given solution, it is recognized as a Pareto-optimal solution. These solutions generate a trade-off surface, which is called the Pareto front. A MO optimization technique is intended to find a set of such non-dominated solutions.
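The dominance test of Eq. (4) can be sketched in a few lines of Python; `dominates` is an illustrative helper name, and the objective vectors here (error rate, number of features) are only example values:

```python
def dominates(f1, f2):
    """Pareto dominance for minimization: f1 dominates f2 if it is no worse
    in every objective and strictly better in at least one (Eq. (4))."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

# Objective vectors of the form (classification error rate, #features):
print(dominates((0.10, 5), (0.12, 7)))  # True: better in both objectives
print(dominates((0.10, 9), (0.12, 7)))  # False: a trade-off, neither dominates
```

In the second comparison neither vector dominates the other, so both would sit on the same non-dominated front.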

B. MULTI-VERSE OPTIMIZER (MVO)
Multi-Verse Optimization (MVO) is a swarm-based approach that has shown strong exploratory and exploitative performance in dealing with several real-world engineering and science problems.
MVO algorithm was inspired to mathematically simulate the multi-verse in astrophysics [39]. This physical theory describes the role of the big bangs in forming multiple universes. It also explains that universes can interact with other peers based on hypothetical classes of holes such as white, black, and wormholes. The black and white holes can interconnect using a tunnel, indicating a transmission between paired universes. The black holes can attract other masses, while the white holes can emit other objects. Wormholes also can create tunnels for connecting paired universes in line with the time dimension. Each universe is matched with an inflation rate, which assists it in expanding over space.
These concepts inspired a population-based algorithm, called MVO, with efficient exploration and exploitation mechanisms. For this purpose, some initial random universes (search agents) are generated inside the search space. In MVO, each variable/feature in the solution vector corresponds to an object in that universe. Furthermore, each solution has an inflation rate (fitness value) to measure its quality. Like other metaheuristic methods [78], [79], MVO obtains the fitness values based on the corresponding objective function of the problem. For example, a better fitness value is assigned to a search agent when white holes are observed, whereas an inferior objective value is given to an agent if black holes are generated. Furthermore, the more communication happens between white and black holes, the more variable values of better agents are sent to poorer agents.
The core mathematical formulation of MVO is based on Eqs. (5) and (6):

x_i^j = x_k^j, if r_1 < NI(U_i)  (5)
x_i^j = x_i^j, if r_1 >= NI(U_i)  (6)

where x_i^j is the j-th object of the i-th agent (universe), r_1 indicates a random value inside (0, 1), NI(U_i) is the normalized fitness value (inflation rate) of the i-th agent (universe), and x_k^j is the j-th object of the k-th universe selected by a roulette wheel selection mechanism.
Another operation, used to provide local changes for each universe, is given as follows:

x_i^j = X_j + TDR × ((ub_j − lb_j) × r_4 + lb_j), if r_3 < 0.5 and r_2 < WEP
x_i^j = X_j − TDR × ((ub_j − lb_j) × r_4 + lb_j), if r_3 >= 0.5 and r_2 < WEP
x_i^j = x_i^j, if r_2 >= WEP

where X_j is the j-th element of the fittest universe attained so far, ub is the upper limit, lb is the lower limit, the Traveling Distance Rate (TDR) and the Wormhole Existence Probability (WEP) are coefficients, and r_2, r_3, and r_4 are random values inside (0, 1). WEP and TDR are adaptive parameters: WEP is utilized in MVO to boost the exploitation power, and TDR is used to improve exploitation in the vicinity of the best agent found so far. The adaptive rules for the WEP and TDR coefficients are calculated as follows:

WEP = min + l × ((max − min) / L)  (7)
TDR = 1 − l^(1/p) / L^(1/p)  (8)

where p shows the exploitation factor, min and max indicate the minimum and maximum WEP values, respectively, l is the current iteration, and L denotes the maximum number of iterations.
In MVO, the user first sets the maximum number of iterations and the population size. The optimization process is initialized using a set of randomly distributed universes inside the upper and lower bounds. At each iteration, variables in the universes with higher fitness values (higher inflation rates) update their locations toward the universes with lower inflation rates using white/black holes. Meanwhile, each universe undergoes random transfers of its objects through wormholes toward the fittest universe, which has the minimum fitness value. These steps are repeated until a termination condition is satisfied. MVO keeps the best agent during the iterations and employs it to guide the other universes toward the optimum. The pseudocode of MVO is presented in Algorithm 1. Note that in this algorithm, SU denotes the sorted universes, NI shows the normalized inflation rate, i denotes the black hole index, m denotes the white hole index, RWS denotes the Roulette Wheel Selection procedure, and r_1, r_2, r_3, r_4 are random numbers inside the interval (0, 1).
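As an illustration only, the interplay of the mechanisms above can be sketched in Python. The helper names (`mvo_step`, `roulette`) and the exact normalization of the inflation rates are our assumptions, not the reference implementation:

```python
import random

def mvo_step(universes, fitness, best, l, L, lb, ub,
             wep_min=0.2, wep_max=1.0, p=6.0):
    """One illustrative MVO iteration for minimization, updating in place."""
    # Adaptive coefficients, as in Eqs. (7) and (8).
    wep = wep_min + l * (wep_max - wep_min) / L
    tdr = 1 - (l ** (1 / p)) / (L ** (1 / p))

    fits = [fitness(u) for u in universes]
    worst, best_f = max(fits), min(fits)
    # Normalized inflation rates: lower fitness -> value closer to 1.
    ni = [(worst - f) / (worst - best_f + 1e-12) for f in fits]

    def roulette():
        # Fitter universes are more likely to be chosen as white holes.
        r = random.uniform(0, sum(ni))
        acc = 0.0
        for k, w in enumerate(ni):
            acc += w
            if acc >= r:
                return k
        return len(ni) - 1

    for i, u in enumerate(universes):
        for j in range(len(u)):
            if random.random() < ni[i]:
                # White/black-hole exchange with a roulette-selected universe.
                u[j] = universes[roulette()][j]
            if random.random() < wep:
                # Wormhole: a random jump in the vicinity of the best universe.
                step = tdr * ((ub - lb) * random.random() + lb)
                u[j] = best[j] + step if random.random() < 0.5 else best[j] - step
                u[j] = min(max(u[j], lb), ub)
    return universes
```

Running this repeatedly while tracking the best universe reproduces the overall loop of Algorithm 1 under the stated assumptions.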

Algorithm 1 Pseudo-Code of MVO Algorithm
Input: Total number of universes and number of iterations (L).
Output: The best universe and the corresponding inflation rate.
Generate the initial random universes x_i (i = 1, 2, ..., n), WEP, TDR, and best universe.
while (termination condition is not true) do
    Calculate the fitness of the current universes.
    for (each Universe_i) do
        Update WEP and TDR using Eqs. (7) and (8).
        for (each object j) do
            Update the universes using the update rules in Eqs. (5)-(8).
        end for
    end for
end while
Return: The best universe

In MOMVO, similarly to multi-objective Particle Swarm Optimization (MOPSO) [80] and the Pareto Archived Evolution Strategy (PAES) [81], an archive is utilized to keep the best non-dominated universes found so far. The exploration and exploitation processes in MOMVO are very similar to the core processes in MVO, in which all candidate universes are evolved based on the interaction between white holes, black holes, and wormholes [82]. Because there are several best universes, the white holes and particularly the wormholes can be selected from the archive. A leader selection scheme is utilized in MOMVO to select solutions from the archive and open tunnels between universes. For this purpose, the crowding distance between universes in the repository is measured, and a removal probability is computed as:

P_i = N_i / c  (10)

where c > 1 is a constant, which can be regarded as a strategy for fitness sharing, and N_i is the number of universes located in the neighborhood of the i-th universe. Since the archive can store only a limited number of non-dominated universes, Eq. (10) is used to assign higher chances to undesired universes (those with many neighboring agents) to be eliminated from the archive by MOMVO.
With these operators, MOMVO can store the best Pareto universes in the archive and evolve them during the iterations. The technique applies the following rules when comparing a universe against the archive and deciding on its addition:
- When a new search agent dominates any agent in the archive, the algorithm immediately replaces that agent with the new one.
- When a new search agent is dominated by at least one agent in the archive, the algorithm discards it, and it is not permitted to enter the archive.
- When a new search agent is non-dominated with respect to all agents inside the archive, the algorithm adds it to the archive.
- When the archive is full, the algorithm deletes an undesired search agent and then adds the new non-dominated agent to the archive.
In this method, the ideas of Pareto optimality and Pareto-optimal solutions are employed to compare all universes. In this algorithm, exchanging variables can happen between a universe and an archive universe, or between two non-dominated universes in the feature space. This rule can enhance the algorithm's exploration tendency, although it may also undesirably affect the algorithm's convergence behavior. To enhance the trade-off between the exploration and exploitation proclivities, an equal chance of picking an archive universe or a non-dominated universe in the feature space is assigned.
MOMVO starts the searching process using a number of random universes and approximates the true Pareto optimal front of the target problem. Every universe is associated with several objective values. Initially, the algorithm selects all the non-dominated universes and inserts them into the archive. From the first iteration, MOMVO evolves the universes using Eq. (15). Based on the rule in Eq. (15), there are equal chances of exchanging variables with an archived agent or with one of the non-dominated universes in the up-to-date swarm. The former operation deepens the intensification around the fittest Pareto-optimal universes found so far, while the latter mechanism improves the diversification of universes inside the search space. The optimization continues evolving the universes in MOMVO until a termination condition is satisfied. In addition, the coverage of universes across all objectives is enhanced by choosing universes from the less populated areas of the archive. The source code of MOMVO is publicly available.
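The four archive rules above can be condensed into a short sketch; `update_archive` is an illustrative helper, and the simple `pop(0)` eviction stands in for the crowding-based removal of Eq. (10), which we do not reproduce here:

```python
def dominates(f1, f2):
    """Pareto dominance for minimization (Eq. (4))."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def update_archive(archive, candidate, max_size=50):
    """Insert an objective vector following the MOMVO archive rules:
    discard dominated candidates, drop archive members the candidate
    dominates, and evict one member when the archive is full (here the
    first member, as a placeholder for crowding-based eviction)."""
    if any(dominates(a, candidate) for a in archive):
        return archive                       # candidate is dominated: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    if len(archive) >= max_size:
        archive.pop(0)                       # crowding-based eviction goes here
    archive.append(candidate)
    return archive

archive = []
for point in [(0.2, 6), (0.1, 8), (0.15, 7), (0.25, 5), (0.1, 9)]:
    archive = update_archive(archive, point)
print(archive)  # (0.1, 9) is dominated by (0.1, 8) and never enters
```

After the loop the archive holds only the four mutually non-dominated points.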

C. FEATURE SELECTION FOR CLASSIFICATION
A training set often includes rows, also known as objects, and columns, known as features. These rows and columns are associated with a number of specific classes called decision features. Classification is a well-studied and highly demanded task in machine learning and data mining research. According to [21], the main mission of classification is to predict the possible class of an unidentified object.
Referring to [83], redundant and irrelevant features can negatively affect the quality of classification. The core reason is that when the dataset contains more features, more instances are needed, which raises the learning time of the classifier. In addition, learning from irrelevant features decreases the accuracy of a classifier compared to the same classifier dealing only with relevant features in a more reasonable time. Furthermore, irrelevant features in the dataset can mislead the classifier and lead to the over-fitting problem. Moreover, redundant and irrelevant features in a dataset may increase the complexity of the main classifier, which can make it difficult to interpret the learning outcomes.
Feature selection approaches aim to efficiently determine the irrelevant and redundant features and eliminate them from the dataset to improve the efficacy of the main classifier in terms of the time consumed by the learning process, the accuracy of the classification results, and the clarity of the output data. As confirmed in the literature, it is very important to choose an efficient search strategy in FS methods to augment the efficacy of the learning model. By applying an efficient FS method, determining the most informative features, and eliminating the redundant ones, the dimensionality of the search space is decreased. Then, the performance and convergence rate of the learning algorithm can be boosted [84].

1) SINGLE-OBJECTIVE OPTIMIZATION FOR FS
In the literature, feature selection is usually tackled as a single-objective optimization problem whose fitness function aggregates two components: the classification accuracy and the number of selected features [85]. According to the importance of each measure, weighting factors are carefully selected by the user/practitioner before the FS process. Usually, more weight is assigned to the accuracy term than to the number of selected features: accuracy should be maximized while the second component is minimized. With such a fitness function, any single-objective optimizer can be utilized to find the minimum fitness value.
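A common form of such an aggregated fitness, sketched here under the assumption of a weighted sum with a user-chosen alpha (the function name and the example numbers are ours):

```python
def weighted_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Single-objective FS fitness (to be minimized): a weighted sum of the
    classification error rate and the selected-feature ratio.
    alpha close to 1 weights accuracy much more heavily than subset size."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# With alpha = 0.99, a small accuracy gain cannot be bought with a huge
# feature count: 5 features at 10% error beats 90 features at 9.5% error.
print(weighted_fitness(0.10, 5, 100))    # ~0.0995
print(weighted_fitness(0.095, 90, 100))  # ~0.10305
```

The single scalar hides the trade-off that the multi-objective formulation below makes explicit.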

2) MULTI-OBJECTIVE OPTIMIZATION FOR FS
FS can also be treated as a multi-objective task. In this case, the cost functions for accuracy and the number of selected features are evolved together, which allows feature sets to be assessed along several dimensions simultaneously. The multi-objective formulation can help avoid some of the pitfalls observed in fitness-based exploration and exploitation, such as convergence to sub-optimal solutions and early stagnation. In a multi-objective scenario, multiple factors can be involved in the cost function simultaneously, some of which can be more complicated than others.

D. K-NEAREST NEIGHBOR (K-NN) CLASSIFIER
The k-NN algorithm is a well-regarded non-parametric, instance-based classification technique that categorizes unlabeled instances. The k-NN method evaluates the distance between a given instance and its k nearest neighboring instances (k neighbors) [86]. The core logic behind the k-NN method is that the label assigned to an object in the feature space is probably similar to those of its nearby objects. Many distance measures have been employed in the literature; most often, the Euclidean distance is used with k-NN, which is obtained by Eq. (11):

d(P_1, P_2) = sqrt( sum_{i=1}^{n} (P_1i − P_2i)^2 )  (11)

where P_1 and P_2 are two points with n dimensions. The well-known KNN is one of the simplest and most recommended approaches for wrapper-based FS methods compared to other learning models.
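A minimal k-NN sketch with the Euclidean distance of Eq. (11); the helper names and the toy dataset are ours, and real wrapper FS would use a proper library classifier:

```python
import math
from collections import Counter

def euclidean(p1, p2):
    """Eq. (11): Euclidean distance between two n-dimensional points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def knn_predict(train_X, train_y, query, k=5):
    """Label the query by majority vote among its k nearest training points."""
    neighbors = sorted(zip(train_X, train_y),
                       key=lambda t: euclidean(t[0], query))
    return Counter(label for _, label in neighbors[:k]).most_common(1)[0][0]

X = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (4.0, 4.0), (4.2, 3.9)]
y = ["a", "a", "a", "b", "b"]
print(knn_predict(X, y, (1.1, 1.0), k=3))  # "a": nearest cluster wins
print(euclidean((0, 0), (3, 4)))           # 5.0
```

In the wrapper setting, the feature mask of a universe simply selects which coordinates of each point enter `euclidean`.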

IV. THE PROPOSED APPROACH
This section describes the proposed BMOMVO-pbest algorithm. The motivation is to introduce a new feature selection technique based on a multi-objective MVO algorithm that not only achieves good classification performance on the feature selection problem but also minimizes the number of selected features. The BMOMVO-pbest algorithm is detailed in the following subsections.

A. BINARY MOMVO FEATURE SELECTION (BMOMVO)
The MOMVO algorithm was initially intended to deal with complex problems in continuous spaces. Due to the nature of FS problems, the solutions in MOMVO must change in limited directions within the binary space (0 and 1 values). We apply transfer functions (TF) as a valid mechanism for converting the continuous version into a binary variant [87]: if a feature is selected, its element is 1; otherwise, it is 0. In this paper, we use the popular TF suggested by Kennedy and Eberhart [88] to convert the continuous MOMVO into a binary version, as in Eq. (12):

T(x_i^j(t)) = 1 / (1 + e^(−x_i^j(t)))  (12)

where x_i^j is the j-th dimension of the i-th universe, and t is the current iteration. The transfer function T is depicted in Fig. 2. Depending on the probability produced by Eq. (12), the universe in the next iteration is updated using Eq. (13):

X_i^j(t + 1) = 1, if rand < T(x_i^j(t)); 0, otherwise  (13)

where X_i^j(t + 1) is the j-th dimension of the i-th universe, and x_i^j is given by the rule in Eq. (14), in which X_j is the j-th element of the fittest universe attained so far, selected by the ranking mechanism used in the standard MOMVO.
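The binarization of Eqs. (12) and (13) can be sketched as follows; `sigmoid_tf` and `binarize` are illustrative names:

```python
import math
import random

def sigmoid_tf(x):
    """Eq. (12): the S-shaped transfer function of Kennedy and Eberhart,
    mapping a continuous value to a selection probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(universe, rng=random.random):
    """Eq. (13): each dimension becomes 1 (feature selected) with
    probability T(x), and 0 (feature dropped) otherwise."""
    return [1 if rng() < sigmoid_tf(x) else 0 for x in universe]

print(sigmoid_tf(0.0))             # 0.5: maximal uncertainty at x = 0
random.seed(1)
print(binarize([-6.0, 6.0, 0.2]))  # large |x| makes the outcome near-certain
```

Large positive values almost surely map to 1 and large negative values to 0, while values near zero keep the search stochastic.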

B. BMOMVO FEATURE SELECTION WITH PERSONAL BEST (BMOMVO-PBEST)
This subsection discusses a new approach to FS using binary multi-objective MVO to explore the Pareto front of feature subsets. The standard MVO and MOMVO use the fittest (best) universe to update the positions of all universes. This operation supports the exploration behavior of the algorithm. To add more exploitation capability to the universes, a new term is added to Eq. (14). This term, called the personal fittest, is the best position achieved by the universe itself so far; it can be viewed as the universe's memory. The new X_i^j(t + 1) is given by the rule in Eq. (15), where P_i^j is the personal best of the i-th universe. P_i^j is updated based on the dominance relationship: if the new position dominates the stored P_i^j, the new one is adopted; otherwise, the old one continues to be used.
In this approach, universes explore the space and store the best position they have achieved so far. Each universe has momentum that allows the universe to explore more areas in the search space. Furthermore, the universe is also attracted to its personal best location in its memory. The main strength is that a universe can balance its exploration and exploitation ability; moreover, diversity can be maintained.
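The dominance-based memory update described above can be sketched as follows; `update_pbest` is an illustrative helper, and the positions/objective vectors are example values only:

```python
def dominates(f1, f2):
    """Pareto dominance for minimization (Eq. (4))."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def update_pbest(pbest_pos, pbest_obj, new_pos, new_obj):
    """Keep the new position as the universe's personal best only if its
    objective vector dominates the stored one; otherwise keep the memory."""
    if dominates(new_obj, pbest_obj):
        return new_pos, new_obj
    return pbest_pos, pbest_obj

# Objectives are (classification error rate, #selected features):
pos, obj = update_pbest([1, 0, 1], (0.20, 2), [1, 1, 0], (0.15, 2))
print(obj)  # (0.15, 2): the new point dominates the old personal best
```

If the new point were merely non-dominated (a trade-off), the stored memory would be kept, preserving diversity across universes.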

C. FITNESS FORMULATION
The proposed BMOMVO-pbest uses two main objectives, namely the classification error rate (1 − accuracy) and the number of features. The first objective is the Classification Error Rate (CER), which is given by Eq. (16):

CER = (FP + FN) / (TP + TN + FP + FN)  (16)

where FP, FN, TP, and TN are the numbers of false positives, false negatives, true positives, and true negatives, respectively.

The second objective considers the number of selected features (NSF), which is given by Eq. (17):

NSF = l / A  (17)

where l is the number of selected features and A is the total number of features in the given dataset. Algorithm 3 presents the pseudo-code of the proposed BMOMVO-pbest wrapper method for FS problems.
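The two objectives of Eqs. (16) and (17) can be computed as follows; the helper names are ours, and the normalization of NSF by the total feature count is an assumption stated in the lead-in:

```python
def classification_error_rate(tp, tn, fp, fn):
    """Eq. (16): CER = 1 - accuracy = (FP + FN) / (TP + TN + FP + FN)."""
    return (fp + fn) / (tp + tn + fp + fn)

def feature_objectives(selected_mask, tp, tn, fp, fn):
    """Return the objective pair for one universe: (CER, selected ratio),
    assuming Eq. (17) normalizes the count l by the total A."""
    l = sum(selected_mask)          # number of selected features
    A = len(selected_mask)          # total number of features
    return classification_error_rate(tp, tn, fp, fn), l / A

# A universe selecting 3 of 5 features, with a confusion matrix of 100 cases:
print(feature_objectives([1, 0, 1, 1, 0], tp=40, tn=45, fp=5, fn=10))
# (0.15, 0.6)
```

These two numbers are exactly what the archive-update and dominance tests operate on during the search.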

Algorithm 3 Pseudo-Code of BMOMVO-pbest-Based Wrapper FS Approach
Input: Total number of universes (n), maximum number of iterations (L), and the dataset divided into training and testing sets.
Output: The best universe and the corresponding inflation rate.
Generate the initial random universes x_i (i = 1, 2, ..., n); initialize WEP, TDR, and the best universe
Archive = {}
Obtain the inflation rates of all universes
Update the Archive using Eq. (4)
Record the best solution (X^j)
Record the best personal solution of each universe (P_i)
while (termination condition is not met) do
    for (each universe i) do
        Calculate the two objectives CER and NSF for the current universe using Eqs. (16) and (17)
        Update the Archive using Eq. (4)
        Update the best personal solution (P_i)
        Update WEP and TDR using Eqs. (7) and (8)
        Update the position of universe i using Eq. (15)
    end for
end while
Return the Archive

A. EXPERIMENTAL SETUP
Table 1 reveals the properties of the computing system and the testing environments used. Table 2 tabulates the 14 datasets used in the comparative experiments. These datasets are publicly available in the UCI machine learning repository [89]. These well-studied datasets were chosen to cover different numbers of features, classes, and instances; such representative samples can show how the proposed MVO-based multi-objective techniques address the optimal feature subsets of FS problems. All evaluations and test plans follow fair comparison rules in deep learning [90]-[92]. The compared methods are all wrapper-based multi-objective algorithms, i.e., they require a learning method within the training stage to assess the classification efficacy of the resulting feature subset. The well-known KNN is one of the simplest and most recommended learning models used within wrapper FS techniques. In these experiments, we employed KNN with K = 5 to simplify the evaluation procedure.
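To make the wrapper evaluation concrete, the sketch below scores a 0/1 feature mask with a plain hand-rolled 5-NN classifier, matching the K = 5 setting of the experiments; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def knn_error(X_train, y_train, X_test, y_test, k=5):
    """Classification error of a plain k-NN classifier (K = 5 as in the paper)."""
    errors = 0
    for x, y in zip(X_test, y_test):
        d = np.linalg.norm(X_train - x, axis=1)        # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]           # labels of k neighbors
        vals, counts = np.unique(nearest, return_counts=True)
        errors += int(vals[np.argmax(counts)] != y)    # majority vote
    return errors / len(y_test)

def wrapper_fitness(mask, X_train, y_train, X_test, y_test):
    """Evaluate a 0/1 feature mask: returns (error rate, feature ratio)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:                                  # empty subset is invalid
        return 1.0, 0.0
    err = knn_error(X_train[:, idx], y_train, X_test[:, idx], y_test)
    return err, idx.size / len(mask)
```

Each universe's binary position is decoded into a column subset, a fresh KNN evaluation is run on that subset, and the pair (CER, NSF) feeds the archive update.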

B. DATASETS AND PARAMETER SETTINGS
As a training/testing methodology, the instances of each dataset are randomly partitioned into two sets: 70% form the training set and 30% the test set. We then utilize 5-fold cross-validation, in which the training set is divided into five equal parts. Note that the 5-fold cross-validation is performed as an internal loop of the training procedure, inside the fitness function, to assess the classification error of the nominated features on the training set. Afterwards, the selected features are evaluated on the test set to obtain the testing classification error rate.
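The 70/30 split and the five-fold partition of the training set can be sketched as follows (an illustrative outline; the paper does not specify its exact implementation):

```python
import numpy as np

def split_70_30(n_instances, rng):
    """Random 70/30 partition of instance indices into train/test sets."""
    idx = rng.permutation(n_instances)
    cut = int(0.7 * n_instances)
    return idx[:cut], idx[cut:]

def five_fold_indices(n_train):
    """Divide the training indices into five (near-)equal folds for internal CV."""
    return np.array_split(np.arange(n_train), 5)

rng = np.random.default_rng(1)
train_idx, test_idx = split_70_30(100, rng)   # 70 training / 30 testing instances
folds = five_fold_indices(len(train_idx))     # five folds of 14 instances each
```

The internal folds are used only inside the fitness function; the held-out 30% never influences the search and yields the reported test error.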
The experiments are repeated for 30 runs to minimize random effects and to test whether the results are statistically significantly different from those of the other methods. Each run uses 100 iterations as the stopping criterion and a population size of 30. For the proposed BMOMVO-based approaches, we used the same parameter settings as in [82].

Table 3 compares the average (AVG), standard deviation (STD), best (BEST), and worst (WORST) error rates of the proposed BMOMVO-pbest and BMOMVO on all datasets. As per the AVG results in Table 3, BMOMVO-pbest outperforms BMOMVO on 85.71% of the datasets. According to the STD, BEST, and WORST values, the proposed BMOMVO-pbest provides competitive and, on several datasets, better classification error rates than the basic binary BMOMVO.

C. RESULTS AND DISCUSSIONS 1) RESULTS OF BMOMVO AND BMOMVO-PBEST
Experimental results of BMOMVO and the proposed BMOMVO-pbest in tackling the BreastEW, Exactly, HeartEW, SonarEW, CongressEW, KrvskpEW, Tic-tac-toe, Vote, WineEW, Zoo, Semeion, and Leukemia cases are shown in Fig. 3. Figure 6 shows the corresponding results of the BMOMVO-pbest and BMOMVO algorithms on the GLIOMA and Nci9 datasets. In these figures, each sub-figure corresponds to one of the studied datasets. Please note that the numbers in brackets at the top of each sub-figure indicate the number of available features and the classification error obtained using all features. The horizontal axis shows the number of selected features, while the vertical axis indicates the corresponding error values. The curves show the average Pareto fronts obtained by the BMOMVO-pbest and BMOMVO algorithms over 30 independent runs. Note that, in some test cases, the conventional and improved optimizers may converge to an identical subset in several runs, producing coincident points in the plots. Consequently, although 30 results are plotted, fewer than 30 distinct points may be visible in the average Pareto fronts.
As shown in Figs. 3 and 6, BMOMVO-pbest can efficiently explore the Pareto front and find feature subsets that contain fewer features and achieve better classification rates than BMOMVO. For almost all datasets, except BreastEW and SonarEW, BMOMVO-pbest obtains two or more subsets that use fewer features and attain a lower error than the rate obtained using all features. For the KrvskpEW and Tic-tac-toe cases, the classification efficacy of both methods is very competitive, with a slight edge for the BMOMVO-pbest algorithm. Table 4 compares the computational time recorded for BMOMVO and BMOMVO-pbest on all datasets. As the results in Table 4 show, under the same conditions BMOMVO requires relatively less time than the proposed BMOMVO-pbest approach.

2) COMPARISON WITH EVOLUTIONARY MULTI-OBJECTIVE ALGORITHMS
In this section, the efficacy of the proposed approach in terms of classification error rate, number of features, and computational time is compared with other popular multi-objective techniques: the Nondominated Sorting Genetic Algorithm II (NSGAII) [93], the Strength Pareto Evolutionary Algorithm 2 (SPEA2) [94], the Pareto Envelope-based Selection Algorithm 2 (PESA2) [95], the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) [96], and MOPSO [97]. These well-studied algorithms have shown excellent performance on many multiobjective problems in the literature. Hence, we compared the performance of BMOMVO-pbest with these well-known methods on all datasets. All settings and parameters used in the experiments for the compared multi-objective techniques are taken from [93]-[97]. Table 5 compares the error rates returned by the proposed BMOMVO-pbest with those obtained by the other methods; the F-test statistic is also provided in the last row of Table 5. Experimental results of the proposed BMOMVO-pbest in solving the BreastEW, Exactly, HeartEW, SonarEW, CongressEW, KrvskpEW, Tic-tac-toe, Vote, WineEW, Zoo, Semeion, and Leukemia cases are compared to the other peers in Fig. 4. Figure 7 shows the corresponding results of all multi-objective methods on the GLIOMA and Nci9 datasets. Boxplots of the error rates are shown in Figs. 5 and 8. Note that in these figures, BMOMVO-pbest is denoted by MVOpb due to limited space.
As per the AVG results in Table 5, BMOMVO-pbest achieves better error rates on 64.28% of the datasets. The minimum F-test value of BMOMVO-pbest also supports this observation. According to the STD rates, BMOMVO-pbest shows relatively more stable performance than the other competitors in the majority of cases. A similar pattern of superiority for BMOMVO-pbest can also be observed in the BEST and WORST results. According to the F-test results in Table 5, NSGAII, SPEA2, MOPSO, PESA2, and MOEAD obtain the next overall ranks.
According to the average Pareto fronts in Figs. 4 and 7, the proposed BMOMVO-pbest can efficiently explore the Pareto front and, on most of the cases, obtain subsets that contain fewer features and show better error rates than the other peers. Inspecting the Pareto fronts of NSGAII and SPEA2, we observe that these methods show similar classification performance on most test datasets, such as BreastEW, Exactly, SonarEW, KrvskpEW, Tic-tac-toe, Vote, and WineEW. This observation is also confirmed by the identical F-test results of both algorithms in Table 5. For most of the datasets, BMOMVO-pbest obtains two or more subsets that nominate fewer features and maintain or attain a better error value than the rate obtained using all features or by the other competitors. The computational time of BMOMVO-pbest is compared to that recorded for the NSGAII, PESA2, SPEA2, MOEAD, and MOPSO approaches in Table 7. As per the records in Table 7, BMOMVO-pbest shows the fastest performance on HeartEW, Vote, WineEW, and Zoo. PESA2 is the fastest method on six datasets: Exactly, SonarEW, CongressEW, Leukemia, GLIOMA, and Nci9. The computational time results show that MOPSO has the slowest exploratory and exploitative trend compared to the other methods, including BMOMVO-pbest.
The experimental results on the 14 datasets show that the proposed BMOMVO-pbest approach can effectively eliminate irrelevant and/or redundant features while maintaining competitive classification rates compared to the other competitors. Furthermore, the proposed BMOMVO-pbest produces a set of non-dominated feature subsets that achieve lower error rates with fewer features than employing the full feature set. These comparative results emphasize the validity and enhanced efficacy of the proposed multi-objective wrapper FS model.
To measure the overall efficacy, we utilized the Wilcoxon statistical test with a 5% significance level, applied to the obtained average classification error results. The memory term in the proposed method facilitates a more coherent exchange of search information and preserves a more stable balance between the exploration and exploitation trends of the MOMVO-based FS method.
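A simplified Wilcoxon signed-rank test over paired average error rates might look like the sketch below (normal approximation, without zero-difference or tie-rank corrections); in practice a statistics library such as scipy.stats.wilcoxon would be used:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.

    Simplified for illustration: zero differences are dropped and tied
    absolute differences are ranked in sort order rather than rank-averaged.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted((abs(d), d) for d in diffs)          # rank by |difference|
    n = len(ranked)
    w_plus = sum(i + 1 for i, (_, d) in enumerate(ranked) if d > 0)
    mu = n * (n + 1) / 4.0                               # mean of W under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)  # std of W under H0
    z = (w_plus - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w_plus, p

# Hypothetical paired error rates: method A consistently lower than method B.
_, p_value = wilcoxon_signed_rank([1, 2, 3, 4, 5, 6, 7, 8],
                                  [2, 4, 6, 8, 10, 12, 14, 16])
```

A p-value below the 0.05 threshold indicates a statistically significant difference between the paired error results of the two methods.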
In addition, some of the performance merits of the proposed BMOMVO-pbest over the NSGAII, PESA2, SPEA2, MOEAD, and MOPSO techniques stem from the exploratory and exploitative advantages of the conventional MOMVO. For instance, through wormholes, some variables of the agents can be repositioned around the best universe attained so far during the optimization stages, which guarantees sufficient exploitation around the promising zones of the feature space. Furthermore, adaptive WEP values smoothly emphasize exploitation in later iterations, and adaptive TDR values increase the accuracy of exploitative moves over the iterations, while abrupt changes also help the algorithm escape local optima (LO) stagnation.

3) COMPARISON WITH CONVENTIONAL FILTER FS METHODS
In this part of the experiments, we compared the performance of the proposed BMOMVO-pbest in terms of error rates to well-established filter-based approaches [98]: correlation-based (Correlation) [99], ReliefF, InfoGain [100], and symmetrical [101]. Filter methods select features independently of the classification algorithm used. However, their main drawback is that they methodically disregard the impact of the obtained feature subset on the efficacy of the induction engine, even though the best subset depends on the biases of the induction method. In light of this, wrapper approaches employ a classifier to assess the quality of the nominated features. Regardless of the learning machine used, wrappers aim to provide a simple and effective way to tackle FS tasks.
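As an illustration of the filter paradigm, a minimal correlation-based ranking (in the spirit of the Correlation filter [99], not its exact implementation) can be sketched as:

```python
import numpy as np

def correlation_filter(X, y, top_k):
    """Rank features by |Pearson correlation| with the class label, keep top_k."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:top_k]

# Hypothetical data: feature 0 duplicates the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.array([0.0, 1.0] * 25)
X = np.column_stack([y, rng.normal(size=50)])
selected = correlation_filter(X, y, top_k=1)   # feature 0 should rank first
```

Note how no classifier appears anywhere in the scoring: this is precisely the independence from the induction engine that the paragraph above identifies as both the strength and the weakness of filter methods.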
The error rates of BMOMVO-pbest versus other filter methods are compared in Table 8.
As per the rates in Table 8, the BMOMVO-pbest optimizer beats the other filter methods on 64.28% of the datasets, whereas the ReliefF and symmetrical techniques attain the best rates on only 3 and 2 datasets, respectively. For the Tic-tac-toe case, all filter methods obtain the same error. These results indicate that the wrapper-based BMOMVO-pbest offers greater improvements in error rates than the filter-based techniques, mainly because the wrapper considers both the class labels and the feature dependencies throughout the selection of relevant subsets. Based on these results, it can be concluded that the developed MOMVO-based wrapper shows performance merits compared to other well-known wrapper methods and outperforms or competes with the studied filter methods as well.

VI. CONCLUSION AND FUTURE WORKS
In this work, a binary multi-objective variant of the Multi-Verse Optimizer (MOMVO) was proposed for the feature selection task in machine learning. The MOMVO algorithm was designed as a wrapper-based feature selection approach built on three cosmology concepts: the white hole, black hole, and wormhole. In addition, a variant of MOMVO that incorporates the personal best solution into its update rule was proposed. Unlike most evolutionary wrapper approaches, the proposed MOMVO-based approaches treat the model accuracy and the reduction in dimensionality as a multi-objective optimization problem. The results of experiments conducted on 14 benchmark datasets showed that the BMOMVO-pbest approach outperforms BMOMVO on the majority of the datasets. Moreover, BMOMVO-pbest showed superior results when compared with state-of-the-art multi-objective optimization algorithms for feature selection. In future work, we plan to employ more objectives, such as parameter optimization and fitness selection, and to apply different metaheuristic algorithms to the multi-objective problem.