An Improved Tunicate Swarm Algorithm for Global Optimization and Image Segmentation

This study integrates the tunicate swarm algorithm (TSA) with a local escaping operator (LEO) to overcome the weaknesses of the original TSA. The LEO strategy in TSA-LEO prevents search stagnation in TSA and improves the convergence rate and local search efficiency of the swarm agents. The efficiency of the proposed TSA-LEO was verified on the CEC'2017 test suite, and its performance was compared with that of seven metaheuristic algorithms (MAs). The comparisons revealed that LEO significantly helps TSA by improving the quality of its solutions and accelerating its convergence. TSA-LEO was further tested on a real-world problem, namely, image segmentation based on the objective functions of Otsu and Kapur. A set of well-known evaluation metrics was used to validate the performance and segmentation quality of the proposed TSA-LEO. The proposed TSA-LEO outperforms the other MAs in terms of fitness, peak signal-to-noise ratio, structural similarity, feature similarity, and overall segmentation results.


I. INTRODUCTION
Objective optimization problems, such as minimizing the time consumption, energy, cost, or error of a process, or maximizing its efficiency, performance, or quality, are commonly encountered in real-world applications [1]. Recently, several researchers have embraced a family of optimization algorithms called metaheuristic algorithms (MAs), and numerous optimizers have been developed for complex real-world problems. Such algorithms, which are mainly inspired by nature, randomly search the feature space to obtain an optimal solution among many candidate solutions. Among the large body of nature-inspired MAs, some are popular because they are simple, efficient, and robust in finding optimal solutions, such as moth flame optimization (MFO) [2], the whale optimization algorithm (WOA) [3], the sine cosine algorithm (SCA) [4], the seagull optimization algorithm (SOA) [5], the krill herd algorithm [6], and the barnacles mating optimizer (BMO) [7]. Moreover, the No Free Lunch theorem [8] states that no single optimization algorithm can solve all optimization problems equally well. Thus, several MAs have been developed for use in biomedicine [9], [10], bioinformatics [11], [12], cheminformatics [13], [14], feature selection [15], engineering problems [16]-[19], pattern recognition, text clustering [20], [21], and wireless sensor networks [22], [23]. However, all MAs need to balance the exploration and exploitation stages; otherwise, solutions tend to become trapped in local optima or fail to converge properly [24], [25]. Randomization during the solution-finding process can cause such problems. Hybridization of concepts from different scientific fields is therefore often necessary, especially in human-aided systems: it can combine the advantages of different algorithms to produce enhanced versions with promising performance and accuracy.
For example, the authors in [26] improved the grey wolf optimization (GWO) algorithm for engineering design problems. The enhanced version, known as I-GWO, adopts a new movement strategy called dimension-learning hunting (DLH). DLH enhances the diversity of solutions to balance the exploration and exploitation phases and avert local optima; the results confirmed the robustness of I-GWO on the CEC'2017 test suite functions. Moreover, the study in [27] boosted the WOA algorithm (one of the most well-known optimization algorithms) with two search strategies: chaotic and Gaussian mutation. The two strategies were expected to avoid local optima by balancing the exploration and exploitation phases, and the algorithm achieved promising results compared with state-of-the-art methods. Another study proposed a method that combines the salp swarm algorithm (SSA) with particle swarm optimization (PSO) to solve complex optimization problems. This method prevents local-optima trapping and unbalanced exploitation in the original SSA, and the proposed SSA-PSO outperformed competing methods in a comparison on the CEC'2005 and CEC'2017 functions. The authors of [28] integrated SCA with PSO, overcoming the drawbacks of SCA in the exploitation phase; the combined ASCA-PSO achieved good performance (high accuracy and low time complexity) on several benchmarks. The authors of [29] combined SCA with opposition-based learning (OBL), increasing the performance and improving the solutions of SCA. The superiority of the proposed SCA-OBL was demonstrated on several benchmark functions and engineering problems. An orthogonal learning strategy was hybridized with MFO to optimize its parameters [30]. This new MFO version remedies the weak searchability of the original MFO and enhances the diversity of solutions; the algorithm explores new regions in search of the agent with the best solution.
The effectiveness of the enhanced MFO was verified on the CEC'2014 test functions and several engineering problems, where it outperformed other optimization algorithms, as shown by the comparison results.
On the other hand, the development and application of vision systems have accelerated in recent years [31]-[34]. A vision system without proper image pre-processing is ill-advised, because pre-processing improves the accuracy of the results. Segmentation is a pre-processing step that facilitates the representation and analysis of images [35] and must be performed accurately in any vision application [36]. In particular, the image should be subdivided to extract only the regions carrying useful information. Segmentation methods can be parametric or non-parametric [37]. Parametric segmentation defines each class based on a probability density function; non-parametric segmentation uses specific criteria, such as variance, entropy, or error rate, to obtain the optimal thresholds that effectively separate the image. One of the most popular and promising segmentation tools, thresholding, divides the image into multiple homogeneous segments. Thresholding is widely adopted in image analysis and processing because it is easy to understand and implement [38].
Bi-level (BT) and multilevel (MT) thresholding techniques can be used to select the thresholds of a grayscale image [39]. The BT technique divides the entire image into two classes based on a single threshold, whereas the MT technique segments the image into several classes based on two or more thresholds [40], [41]. Otsu's between-class variance [42], Kapur's entropy [43], and Tsallis entropy [44] are commonly used criteria for optimizing the threshold(s). Finding the optimal thresholds that separate an image into multiple segments is a complex task, especially when the number of thresholds increases [45]-[47]. Several optimization methods have therefore been blended with classical thresholding methods to handle the complexity of multilevel thresholding problems. The tunicate swarm algorithm (TSA) is a recent, robust search method inspired by the peculiar foraging behavior of tunicates (marine invertebrates) [48].
Tunicates adopt two main strategies while searching for food: jet propulsion and swarm intelligence. Like most optimization algorithms, the original TSA obtains new solutions from previous ones: these two strategies are randomly applied to the current solutions to obtain the best solution. In some optimization cases, however, the original TSA determines the optimal solution from only a few subregions, which lowers the convergence rate and prevents full coverage of the search space (the latter problem leads to premature convergence of the TSA). These problems are common to most optimization algorithms, especially on complex and high-dimensional problems [49]. The local escaping operator (LEO) is a new mathematical approach [50] developed as a local search for generating efficient solutions; it aims to visit unseen search regions and thus escape from local optima. Moreover, operators such as pr, f1, and f2 are used to balance the exploration and exploitation phases, as shown in Eqs. (8) and (9).

MOTIVATION AND CONTRIBUTIONS:
To mitigate TSA's problems, this paper hybridizes the original TSA with an efficient operator, LEO, to address the shortcomings that the standard TSA may exhibit; the hybrid 1) evades trapping in local optima, 2) balances exploration and exploitation, and 3) improves the convergence speed. The proposed method was validated on the CEC'2017 benchmark functions, and its performance was compared with those of seven established optimization algorithms, namely MFO, WOA, SCA, SOA, BMO, chaotic TSA (CTSA), and the original TSA. It was then applied to multilevel thresholding image segmentation problems based on maximizing two objective functions, namely the Otsu and Kapur objective functions. Peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) are the three quality metrics used to evaluate the segmentation results alongside fitness. The optimization and segmentation results revealed the robustness of the proposed TSA-LEO compared with a set of well-known optimization algorithms. In summary, the major contributions of this paper are as follows:
• An efficient TSA based on LEO, called TSA-LEO, is presented.
• TSA-LEO is proposed for solving optimization and multilevel thresholding image segmentation.
• The effectiveness of TSA-LEO is assessed on the CEC'2017 suite.
• Two objective functions, Kapur's and Otsu's, are applied.
• The quality of segmentation is verified in terms of the PSNR, SSIM, and FSIM.
• The proposed method is compared with state-of-the-art algorithms.
• Extensive results show the more stable performance of the proposed TSA-LEO.
• Significant threshold results are obtained.
The remainder of this paper is arranged as follows. Section II describes the problem; Section III introduces the proposed TSA-LEO and its main procedure; Section IV discusses and analyzes the benchmark results; and in Section V, TSA-LEO is applied to thresholding-based image segmentation. Conclusions and future work are presented in Section VI.

A. TUNICATE SWARM ALGORITHM (TSA)
Kaur et al. [48] proposed a bio-inspired optimization algorithm that simulates the natural foraging process of tunicates, marine invertebrates that emit bright bioluminescence. The TSA was inspired by the distinctive behaviors of tunicates in oceans, in particular the jet propulsion and swarm intelligence of their foraging process. A mathematical model of jet propulsion is developed under three constraints: preventing conflict among the exploration agents, following the positions of the most qualified agents, and remaining near the optimal agents.

1) PREVENTING CONFLICTS AMONG THE AGENTS
To prevent inter-agent conflicts while searching for better positions, the new agent positions are calculated as

A = G / M   (1)
G = c2 + c3 − F   (2)
F = 2 · c1   (3)

where A is the vector of new agent positions, G is the gravity force, F represents the water-flow advection in the deep ocean, and c1, c2, and c3 are three random numbers in [0, 1]. The social forces between agents are stored in a vector M, represented as follows:

M = ⌊Pmin + c1 · (Pmax − Pmin)⌋   (4)

Here, Pmin = 1 and Pmax = 4 describe the initial and subordinate speeds for establishing social interactions, respectively.

2) FOLLOWING THE POSITIONS OF THE BEST AGENT
Following the current best agent is essential for reaching the optimal solution. Hence, after ensuring that no conflicts exist between neighboring agents in the swarm, the distance to the best agent is computed as

PD = | Xbest − rrand · Pp(x) |   (5)

where PD stores the distance between the food source and the search agent, Xbest is the best position, rrand is a stochastic value in the range [0, 1], and the vector Pp(x) contains the positions of the tunicates at iteration x.

3) KEEPING CLOSE TO THE OPTIMAL AGENTS
To ensure that the search agents stay close to the best agent, their positions are updated as follows:

Pp(x) = Xbest + A · PD,  if rrand ≥ 0.5
Pp(x) = Xbest − A · PD,  if rrand < 0.5   (6)

where Pp(x) contains the updated positions of the agents at iteration x relative to the best-scored position Xbest.

4) SWARMING BEHAVIOR
To model the swarming behavior of tunicates, the position of each agent is updated based on the positions of two adjacent agents:

Pp(x + 1) = ( Pp(x) + Pp(x + 1) ) / (2 + c1)   (7)

To clarify the TSA, the main steps given below illustrate the flow of the original TSA in detail.
Step 1: Initialize the first population of tunicates P p .
Step 2: Set the original value for parameters and the highest number of iterations.
Step 3: Measure the fitness value of each exploration agent.
Step 4: After calculating the fitness, the best agent is investigated in the supplied search space.
Step 5: Update the position of each exploration agent using Eq. (7).
Step 6: Return any agents that leave the search space back to its boundaries.
Step 7: Measure the fitness cost of the updated search agent. If there is a better solution than the past optimal solution, then update P p and save the best solution in X best .
Step 8: If the termination criterion is met, then the processes stop. Otherwise, iterate Steps 5-8.
Step 9: Declare the best optimal solution (X best ), which is achieved so far.
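For concreteness, the steps above can be sketched in Python. This is a minimal illustrative implementation of the original TSA based on Eqs. (1)-(7), not the authors' code; the function and parameter names are our own.

```python
import numpy as np

def tsa(objective, lb, ub, dim, n_agents=30, max_iter=100, p_min=1, p_max=4):
    """Minimal sketch of the original TSA; minimizes `objective` over [lb, ub]^dim."""
    # Steps 1-2: initialize the population and parameters
    pop = lb + np.random.rand(n_agents, dim) * (ub - lb)
    fitness = np.array([objective(p) for p in pop])
    best = np.argmin(fitness)                          # Steps 3-4: best agent
    x_best, f_best = pop[best].copy(), fitness[best]
    prev = pop.copy()                                  # previous positions
    for _ in range(max_iter):
        for i in range(n_agents):
            c1, c2, c3 = np.random.rand(3)
            F = 2.0 * c1                               # water-flow advection, Eq. (3)
            G = c2 + c3 - F                            # gravity force, Eq. (2)
            M = np.floor(p_min + c1 * (p_max - p_min)) # social forces, Eq. (4)
            A = G / M                                  # conflict-avoidance vector, Eq. (1)
            PD = np.abs(x_best - np.random.rand() * pop[i])  # distance to best, Eq. (5)
            # Jet propulsion: move relative to the best agent, Eq. (6)
            jet = x_best + A * PD if np.random.rand() >= 0.5 else x_best - A * PD
            # Swarm behavior, Eq. (7): blend with the previous position
            pop[i] = np.clip((jet + prev[i]) / (2.0 + c1), lb, ub)  # Step 6: bounds
        prev = pop.copy()
        fitness = np.array([objective(p) for p in pop])  # Step 7: re-evaluate
        if fitness.min() < f_best:
            f_best = fitness.min()
            x_best = pop[np.argmin(fitness)].copy()
    return x_best, f_best                              # Step 9: best solution
```

For example, `tsa(lambda x: float(np.sum(x ** 2)), -10.0, 10.0, 5)` would minimize the sphere function over five dimensions.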

B. LOCAL ESCAPING OPERATOR (LEO)
The LEO was proposed as a local search algorithm in [50] to enhance the ability of an optimization algorithm, namely the gradient-based optimizer (GBO), to explore new regions, which is desirable in complex real-world problems. The LEO enhances the quality of solutions by updating their positions under certain criteria. Specifically, it prevents the algorithm from becoming trapped in local optima and improves its convergence behavior. LEO generates its alternative solutions (P_LEO) with excellent performance by using several solutions: the best tunicate position Xbest, two randomly generated solutions X1mn and X2mn, two randomly chosen solutions Xmr1 and Xmr2, and a new randomly generated solution Xmk. Hence, the solution P_LEO is determined based on Eqs. (8) and (9), which can be mathematically formulated as follows. If rand < pr, then

P_LEO = Pmn + f1 · (u1 · Xbest − u2 · Xmk) + f2 · ρ1 · [ u3 · (X2mn − X1mn) + u2 · (Xmr1 − Xmr2) ] / 2,  if rand < 0.5   (8)
P_LEO = Xbest + f1 · (u1 · Xbest − u2 · Xmk) + f2 · ρ1 · [ u3 · (X2mn − X1mn) + u2 · (Xmr1 − Xmr2) ] / 2,  otherwise   (9)

Here, Pmn is the current tunicate position, Xbest is the best-scored position, pr = 0.3 is the probability of performing the LEO strategy, rand represents a random value in the range [0, 1], f1 and f2 are uniformly distributed random values in [−1, 1], Xmr1 and Xmr2 represent two random solutions chosen from the population, and X1mn and X2mn are two solutions randomly generated as shown in Eq. (10).
X1mn = LB + rand · (UB − LB),  X2mn = LB + rand · (UB − LB)   (10)

where LB and UB are the lower and upper bounds and Dim is the dimension of a solution. Moreover, n and m represent the coordinates of the solution (n = 1, 2, 3, . . . , N and m = 1, 2, 3, . . . , Dim). In addition, u1, u2, and u3 are three randomly generated variables:

u1 = L1 · 2 · rand + (1 − L1)
u2 = L1 · rand + (1 − L1)
u3 = L1 · rand + (1 − L1)

where L1 is a binary parameter (L1 = 1 if µ1 < 0.5, and 0 otherwise) and µ1 is a random number in the range [0, 1]. Moreover, ρ1 is introduced to balance the exploration and exploitation searching processes, and it can be expressed as

ρ1 = 2 · rand · α − α
α = | β · sin( 3π/2 + sin(β · 3π/2) ) |
β = βmin + (βmax − βmin) · ( 1 − (t / Max_iterations)³ )²

where βmin and βmax are set to 0.2 and 1.2, respectively, t is the current iteration, and Max_iterations is the maximum number of iterations. Parameter ρ1 changes with the sine-based coefficient α, thereby balancing the exploration and exploitation processes.
To determine the solution Xmk in Eqs. (8) and (9), the following scheme is suggested:

Xmk = xrand  if µ2 < 0.5;  Xmk = xmp  otherwise   (17)

where xrand is a new solution calculated as shown in Eq. (18), xmp is a random solution selected from the population (p ∈ {1, 2, . . . , N}), and µ2 is a random number in the range [0, 1]:

xrand = LB + rand · (UB − LB)   (18)

Eq. (17) can be simplified as follows:

Xmk = L2 · xrand + (1 − L2) · xmp

where L2 is a binary parameter with a value of 0 or 1: if µ2 is less than 0.5, the value of L2 is 1; otherwise, it is 0.
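The LEO update can be sketched as follows. This follows the GBO-style formulation described above, applied here to one randomly chosen solution; the function signature, the per-solution application, and all names are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def leo(pop, x_best, t, max_iter, lb, ub, pr=0.3, beta_min=0.2, beta_max=1.2):
    """Sketch of the local escaping operator applied to one random solution."""
    n, dim = pop.shape
    if np.random.rand() >= pr:                 # LEO fires with probability pr
        return pop
    i = np.random.randint(n)                   # solution to perturb
    # Adaptive step-size rho1 driven by the sine-based coefficient alpha
    beta = beta_min + (beta_max - beta_min) * (1 - (t / max_iter) ** 3) ** 2
    alpha = abs(beta * np.sin(1.5 * np.pi + np.sin(beta * 1.5 * np.pi)))
    rho1 = 2 * np.random.rand() * alpha - alpha
    f1, f2 = np.random.uniform(-1, 1, 2)       # uniform random in [-1, 1]
    # u1, u2, u3 switch between scaled-random and 1 via the binary flag L1
    L1 = 1.0 if np.random.rand() < 0.5 else 0.0
    u1 = L1 * 2 * np.random.rand() + (1 - L1)
    u2 = L1 * np.random.rand() + (1 - L1)
    u3 = L1 * np.random.rand() + (1 - L1)
    r1, r2 = np.random.choice(n, 2, replace=False)       # two random members
    x1, x2 = pop[np.random.randint(n)], pop[np.random.randint(n)]
    # x_k: a fresh random point or a population member, via the binary flag L2
    L2 = 1.0 if np.random.rand() < 0.5 else 0.0
    x_rand = lb + np.random.rand(dim) * (ub - lb)
    x_k = L2 * x_rand + (1 - L2) * pop[np.random.randint(n)]
    step = (f1 * (u1 * x_best - u2 * x_k)
            + f2 * rho1 * (u3 * (x2 - x1) + u2 * (pop[r1] - pop[r2])) / 2)
    base = pop[i] if np.random.rand() < 0.5 else x_best  # Eq. (8) vs Eq. (9) path
    pop[i] = np.clip(base + step, lb, ub)
    return pop
```

Note that the perturbed solution is clipped back into the bounds, mirroring the boundary handling of the host TSA loop.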

III. THE PROPOSED TSA-LEO
This section illustrates the implementation of the proposed TSA-LEO method, which improves the ability of the original TSA by allowing it to visit promising regions. LEO is specifically used to improve the best solutions of the original TSA. The TSA-LEO algorithm follows the main steps of the original TSA and employs the LEO operator to encourage the visitation of new regions. LEO improves the global search and convergence rate of the algorithm, dynamically evading stagnation in local optima. The following subsections describe the implementation of the proposed TSA-LEO in detail.

A. PRIMITIVE STEP OF TSA-LEO
The proposed TSA-LEO method, like numerous other optimization algorithms, begins by randomly initializing its parameters A, G, F, and M, as shown in Eqs. (1) to (4), respectively, and by creating the initial population Pp as shown below:

Pp = LB + rand · (UB − LB)

where Pp is the initial population and N denotes the number of random solutions (i ∈ {1, 2, . . . , N}); each solution is limited between the lower and upper boundaries (LB and UB) and has dimension Dim in the search space.
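The initialization step can be written compactly; a minimal sketch with our own naming:

```python
import numpy as np

def init_population(n, dim, lb, ub):
    """Create N random solutions, uniform within [LB, UB] in every dimension."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return lb + np.random.rand(n, dim) * (ub - lb)
```

Scalar or per-dimension bounds both work here, since NumPy broadcasts `lb` and `ub` across the population matrix.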

B. UPDATING SOLUTION SCENARIOS
The position-updating process is conducted based on two scenarios. In the first, the original TSA is executed conventionally: a new solution is generated from two agents as shown in Eq. (7), or based on the best position obtained so far using Eq. (6), and the results are saved. In the second scenario, the solution is updated using the LEO strategy to improve its efficiency. LEO distinguishes between two paths depending on a specific condition, as shown in Eqs. (8) and (9): if rand < 0.5, the first path is selected to update the solution as shown in Eq. (8); otherwise, the second path, Eq. (9), is selected to find the new solution.

C. OPTIMIZATION SCENARIOS
In each iteration, this step evaluates the vector of solutions generated in the previous phase to enhance the quality of subsequent solutions. Accordingly, TSA-LEO computes the fitness value f(Pp) of each tunicate position in the current population. The best-scoring solution Xbest is then determined, saved, and used in the updating stage.

D. TERMINATION CRITERIA
After completing the optimization scenarios and iterating until reaching the stopping criteria, the proposed TSA-LEO retrieves the optimal solution according to the best fitness. Algorithm 1 gives the pseudo code of the TSA-LEO algorithm, and a detailed flowchart is shown in Fig. 1.

Algorithm 1
The Proposed TSA-LEO Algorithm procedure TSA-LEO Initialize the first population P p randomly.
This subsection reports and estimates the computational complexity of the proposed TSA-LEO algorithm in terms of time and space complexities.

2) SPACE COMPLEXITY
Space complexity is the total amount of memory occupied by the algorithm. TSA-LEO has a space complexity of O(N × Dim), where N is the population size and Dim is the problem dimension.

IV. PERFORMANCE EVALUATION OF TSA-LEO A. PARAMETER SETTINGS
To accurately evaluate the effectiveness of the proposed TSA-LEO, the algorithm was compared against seven other algorithms, namely, MFO [2], WOA [3], SCA [4], SOA [5], BMO [7], CTSA, and the original TSA. Each method was executed 30 times for (at most) 1000 iterations, with the population size set to 30. The parameters of each algorithm were set to the values of their first-published standard versions. Table 1 lists the parameter settings of TSA-LEO.

B. DEFINITION OF CEC'17 TEST SUITE FUNCTIONS
The CEC'17 test suite was selected as the test problem because it has high complexity and is customized for global optimization. The suite contains 30 functions, but function F2 was excluded because of its instability; therefore, the benchmark used here contains 29 test functions, composed of unimodal shifted and rotated functions, multimodal shifted and rotated functions, hybrid functions, and composition functions, as shown in [51]. Fig. 2 shows the landscapes of 16 selected functions in two-dimensional space and provides an intuitive understanding of the functional differences and the nature of the problems.

C. STATISTICAL RESULTS ANALYSIS
The CEC'17 benchmark functions were employed to assess the performance of the advanced TSA-LEO. The mean and standard deviation (STD) of each run's best solutions are used to measure algorithm efficiency. Table 2 shows the best fitness on 9 functions, gaining the second rank with an overall ratio of 31% of the test functions, while the other competing algorithms fail to gain the best fitness on any test function. This means that the proposed TSA-LEO algorithm can effectively solve multimodal functions (F4 to F10), hybrid functions (F11 to F20), and composition functions (F21 to F30). Table 3 shows the rank-sum results for fitness according to the Wilcoxon rank-sum test. Applying the rank-sum test between the proposed TSA-LEO and each of the other algorithms (MFO, WOA, SCA, SOA, BMO, TSA, and CTSA) reveals a difference between all competitors and the proposed TSA-LEO. The differences are significant for 96.55% of the functions against MFO, 100% against WOA, 93.10% against SCA, 86.20% against SOA, 100% against BMO, 82.75% against TSA, and 82.75% against CTSA; this means that the proposed TSA-LEO algorithm represents a significant improvement. Moreover, based on Friedman's mean rank test results, the proposed TSA-LEO ranks first among the compared algorithms. Overall, the statistical results show that, in solving these challenging benchmarks, the proposed method was more effective than other well-known optimization methods.

D. BOXPLOT BEHAVIOR ANALYSIS
Data distribution characteristics can be displayed by boxplot analysis. Boxplots efficiently depict data distributions in quartiles: the minimum and maximum edges of the whiskers are the lowest and highest data points reached by the algorithm, the ends of the rectangles define the lower and upper quartiles, and a narrow boxplot signifies high agreement among the data. Due to space limitations, Fig. 6 illustrates only 15 functions. Figure 3 shows the qualitative analyses, illustrating the algorithms' exploitation behavior in achieving the desired results. The third pillar displays the average fitness over 100 iterations, explaining how diversified new agents assist the search for the best solution. According to the search-history pillar, the proposed TSA-LEO can find the areas with the best fitness for most functions. In terms of average fitness history, all curves are decreasing, which means that the population improves at each iteration. This constant improvement substantiates a collaborative searching behavior and supports the efficiency of the position-updating rule. Finally, the convergence curves and optimization history reveal the progress of fitness over the iterations; the decrease in the optimization history indicates that the solutions are refined during the iterations until the optimal solution is reached.

V. EXPERIMENTAL RESULTS AND ANALYSIS
This section employs the proposed TSA-LEO to solve thresholding-based image segmentation problems. In this evaluation, TSA-LEO was expected to select the thresholds that best segment a set of benchmark images by maximizing well-known thresholding objective functions, namely, those of Otsu and Kapur.

A. MULTI-THRESHOLDING IMAGE SEGMENTATION STUDIES
Image thresholding research demonstrates the efficiency of metaheuristic algorithms in this domain [35], [47], [52]. There are numerous examples of metaheuristic applications; a few prominent state-of-the-art works are given here. To tackle the problems of multi-thresholding, Upadhyay and Chhabra [53] used the crow search algorithm (CSA) to maximize Kapur's method. The proposed model was compared with a set of well-known metaheuristic algorithms, namely, PSO, DE, GWO, MFO, and CSA. The authors chose CSA because of its balance between exploration and exploitation, as well as its fewer tunable parameters. Using the most commonly used evaluation metrics, the authors contended to have achieved comparatively better results when tested on a set of benchmark images using multiple threshold values. Despite the success of this work, CSA has slow convergence. Khairuzzaman and Chaudhury [54] used GWO to produce efficient image-segmentation results while finding the optimal set of thresholds using Otsu's and Kapur's functions. GWO converged to better optimal solutions than bacterial foraging optimization (BFO) and PSO; however, the proposed algorithm also has certain disadvantages: a) its efficiency is reduced on noisy images, and b) GWO was slower than PSO in computational time. The work has a further weakness: it did not provide a comprehensive comparison with other well-known and established metaheuristic algorithms, using only PSO and BFO for comparison. To optimize threshold values for multilevel image thresholding, a modified grasshopper optimization algorithm (GOA) with Lévy flight was introduced, with Tsallis cross-entropy as the objective function [55]. The proposed model was tested on benchmark images and plant stomata.
Compared with standard GOA, WOA, flower pollination algorithm (FPA), PSO, and bat algorithm (BA), the proposed GOA variant produced better segmentation accuracy with enhanced multilevel segmentation convergence on energy-based Tsallis entropy. One limitation of this study is that it did not experiment with relatively increased thresholds for high-dimensional optimization problems.
The study in [56] used the equilibrium optimizer (EO) and Kapur's entropy as the objective function to obtain the optimal threshold values for grayscale images. To enhance the search ability, the researchers improved EO with adaptive parameters. The proposed method was evaluated using several solution-quality metrics, such as the peak signal-to-noise ratio and the structural similarity index, accuracy measures such as the mean absolute error, and the computation time for resource complexity. The proposed EO outperformed the WOA, BA, SCA, SSA, Harris hawks optimizer (HHO), CSA, and PSO techniques. The significance of this study can be gauged from the threshold levels used in the experiment: the researchers used up to 50 threshold levels. However, the proposed EO variant comparatively underperformed in terms of standard deviation values and computational time. HHO is another recent metaheuristic that has been applied in a similar domain using Otsu's and Kapur's objective functions [57]. Comparisons of the proposed method with PSO, DE, harmony search (HS), ABC, and SCA show that it produced efficient results in terms of quality, consistency, and accuracy. The results of HHO were also compared with two machine learning techniques, K-means and fuzzy IterAg, revealing that these techniques performed worst in the overall image-segmentation exercise. A limitation of this study is that it was not evaluated on color images, and the number of thresholds was set manually. Meanwhile, Díaz-Cortés et al. [58] addressed the problem of unclear regional borders in low-resolution healthcare thermography images using the dragonfly algorithm (DA). The DA technique was used to find optimal threshold values for the energy curves of thermal images for breast cancer diagnosis.
Based on Otsu's and Kapur's objective functions, the authors evaluated solution quality and found that DA outperformed GA, PSO, the runner-root algorithm, and the krill herd algorithm on a set of eight images retrieved from the DA-Breast Thermography database.
To improve optimal threshold selection in this study, the proposed TSA-LEO algorithm was integrated with Otsu's and Kapur's objective functions.

B. OTSU's OBJECTIVE FUNCTION
Otsu's method was selected because it is one of the most commonly used image-thresholding techniques; it segments an image by maximizing the between-class variance. The TSA-LEO optimizer maximizes Otsu's objective function to determine the best-fit thresholds. The objective function of Otsu considers the L intensity levels of a gray image, whose probability distribution is computed in Eq. (21). The method can also be used for RGB color images, with Otsu's method applied separately to each channel.

Ph_i = h_i / NP,  Σ_{i=0}^{L−1} Ph_i = 1   (21)

where i is an intensity level in the range (0 ≤ i ≤ L − 1), NP is the total number of pixels in the image, and h_i denotes the number of occurrences of intensity i in the image histogram. The histogram is thus normalized into a probability distribution Ph_i. Based on the probability distribution and a threshold value (th), the classes for bi-level segmentation are computed as follows:

C_1 = { Ph_1/ω_0(th), . . . , Ph_th/ω_0(th) }  and  C_2 = { Ph_{th+1}/ω_1(th), . . . , Ph_L/ω_1(th) }   (22)

where ω_0(th) and ω_1(th) are the cumulative probability distributions of C_1 and C_2, as shown in Eq. (23):

ω_0(th) = Σ_{i=1}^{th} Ph_i  and  ω_1(th) = Σ_{i=th+1}^{L} Ph_i   (23)

It is then necessary to find the average intensity levels µ_0 and µ_1 that define the classes using Eq. (24):

µ_0 = Σ_{i=1}^{th} (i · Ph_i) / ω_0(th)  and  µ_1 = Σ_{i=th+1}^{L} (i · Ph_i) / ω_1(th)   (24)

Once these values are calculated, the Otsu between-class variance σ²_B is calculated using Eq. (25):

σ²_B = σ_1 + σ_2   (25)

Notice that σ_1 and σ_2 in Eq. (25) are the variances of C_1 and C_2, which are defined as follows:

σ_1 = ω_0 (µ_0 − µ_T)²  and  σ_2 = ω_1 (µ_1 − µ_T)²   (26)

where µ_T = ω_0 µ_0 + ω_1 µ_1 and ω_0 + ω_1 = 1. Based on the values σ_1 and σ_2, Eq. (27) presents the objective function; the optimization problem therefore reduces to finding the intensity level that maximizes Eq. (27):

F_otsu(th) = max( σ²_B(th) ),  0 ≤ th ≤ L − 1   (27)

where σ²_B(th) is Otsu's variance for a given th value. Otsu's method is applied to a single component of an image; for RGB images, it is therefore necessary to separate the image into single-component images. The above bi-level method can be extended to multiple thresholds. The objective function F_otsu(th) in Eq. (27) is modified for multiple thresholds as follows:

F_otsu(TH) = max( σ²_B(TH) ),  0 ≤ th_i ≤ L − 1,  i = 1, 2, . . . , k − 1   (28)

where TH = [th_1, th_2, . . . , th_{k−1}] is a vector containing the multiple thresholds and L denotes the maximum grey level, whereas the variances are computed through Eq. (29):

σ²_B = Σ_{i=1}^{k} σ_i = Σ_{i=1}^{k} ω_i (µ_i − µ_T)²   (29)

where i represents a specific class, and ω_i and µ_i are the probability of occurrence and the mean of class i, respectively. For multilevel thresholding, these values are obtained as

ω_0(th) = Σ_{i=1}^{th_1} Ph_i,  ω_1(th) = Σ_{i=th_1+1}^{th_2} Ph_i,  . . . ,  ω_{k−1}(th) = Σ_{i=th_{k−1}+1}^{L} Ph_i   (30)

and, for the mean values,

µ_0 = Σ_{i=1}^{th_1} (i · Ph_i) / ω_0,  µ_1 = Σ_{i=th_1+1}^{th_2} (i · Ph_i) / ω_1,  . . . ,  µ_{k−1} = Σ_{i=th_{k−1}+1}^{L} (i · Ph_i) / ω_{k−1}   (31)
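The multilevel Otsu objective described above can be sketched directly from a 256-bin histogram. This is an illustrative fitness function with our own naming, suitable as the objective an optimizer such as TSA-LEO would maximize.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Multilevel Otsu between-class variance for a 256-bin histogram."""
    ph = hist / hist.sum()                               # probability Ph_i
    levels = np.arange(len(ph))
    mu_t = (levels * ph).sum()                           # global mean mu_T
    edges = [0] + [int(t) + 1 for t in sorted(thresholds)] + [len(ph)]
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):            # one term per class
        w = ph[lo:hi].sum()                              # omega_i
        if w <= 0:
            continue                                     # skip empty classes
        mu = (levels[lo:hi] * ph[lo:hi]).sum() / w       # class mean mu_i
        sigma_b += w * (mu - mu_t) ** 2                  # omega_i * (mu_i - mu_T)^2
    return sigma_b
```

On a bimodal histogram, a threshold between the two modes scores strictly higher than one placed before either mode, which is exactly the property the optimizer exploits.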

C. KAPUR's OBJECTIVE FUNCTION
Another thresholding technique used for segmentation is Kapur's method [43], which selects the optimal threshold values by maximizing the entropy. The mathematical model is described as follows:

F_kapur(th) = H_1 + H_2   (32)

where the entropies H_1 and H_2 are computed as

H_1 = − Σ_{i=1}^{th} ( Ph_i / ω_0(th) ) · ln( Ph_i / ω_0(th) )  and  H_2 = − Σ_{i=th+1}^{L} ( Ph_i / ω_1(th) ) · ln( Ph_i / ω_1(th) )   (33)

Here, Ph_i is the probability distribution of the intensity levels, obtained using Eq. (21); ω_0(th) and ω_1(th) are the probability distributions of classes C_1 and C_2; and ln(·) is the natural logarithm. As with Otsu's method, the entropy-based approach can be modified for multiple thresholds; in this case, the image is divided into k classes using k − 1 thresholds. The objective function is then modified as follows:

F_kapur(TH) = Σ_{i=1}^{k} H_i   (34)

where TH = [th_1, th_2, . . . , th_{k−1}] is a vector containing the multiple thresholds. Each entropy is computed separately with its respective th value, so Eq. (34) is expanded for k entropies. The probabilities of occurrence (ω_0, ω_1, . . . , ω_{k−1}) of the k classes are obtained using Eq. (30), and the probability distribution Ph_i using Eq. (21).
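The multilevel Kapur objective of Eq. (34) can likewise be sketched from a histogram: it sums one entropy term per class. This is an illustrative implementation with our own naming, not the authors' code.

```python
import numpy as np

def kapur_objective(hist, thresholds):
    """Multilevel Kapur entropy for a 256-bin histogram (sum of class entropies)."""
    ph = hist / hist.sum()                               # probability Ph_i
    edges = [0] + [int(t) + 1 for t in sorted(thresholds)] + [len(ph)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):            # one class per slice
        w = ph[lo:hi].sum()                              # omega_i
        if w <= 0:
            continue                                     # skip empty classes
        p = ph[lo:hi][ph[lo:hi] > 0] / w                 # normalized in-class distribution
        total += -(p * np.log(p)).sum()                  # class entropy H_i
    return total
```

For a uniform histogram, splitting at the midpoint yields two classes of 128 levels each, giving an entropy of 2·ln(128); any highly unbalanced split scores lower, which is why maximizing this objective favors informative thresholds.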
For the ease of understanding TSA-LEO implementation on image segmentation, the following steps are given in brief.
1) Read the image in grayscale.
2) Obtain the histogram of the selected image.
3) Calculate the probability distribution using Eq. (21).
4) Initialize the TSA-LEO parameters.
5) Initialize the first population of tunicates Pp with dimension Dim.
6) Evaluate the initial population using Otsu's function (F_otsu, Eq. (28)) or Kapur's function (F_kapur, Eq. (34)).
7) Calculate the parameters A, G, F, M, and PD using Eqs. (1)-(5), respectively.
8) Update the position of each agent using Eq. (6) or (7).
9) Determine the optimal position Xbest.
10) Apply the LEO strategy if rand < pr, and update Pp based on Eq. (8) if rand < 0.5, or Eq. (9) if rand ≥ 0.5.
11) Evaluate the new population and save the best results.
12) Select the tunicate with the best solution according to the objective function.
13) If the maximum number of iterations or the stopping conditions are not met, go to Step 7.
14) Segment the image using the tunicate with the best threshold values.
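The steps above can be sketched end to end. In this minimal sketch, a plain random search stands in for the TSA-LEO update loop (steps 7-13), since any maximizer fits that slot, and a compact between-class-variance fitness stands in for Eq. (28); all names are our own.

```python
import numpy as np

def between_class_variance(hist, th):
    """Minimal Otsu-style fitness (between-class variance) for this sketch."""
    ph = hist / hist.sum()                              # step 3: Ph_i
    levels = np.arange(len(ph))
    mu_t = (levels * ph).sum()                          # global mean
    edges = [0] + [int(t) + 1 for t in np.sort(th)] + [len(ph)]
    sigma = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = ph[lo:hi].sum()                             # class probability
        if w > 0:
            mu = (levels[lo:hi] * ph[lo:hi]).sum() / w  # class mean
            sigma += w * (mu - mu_t) ** 2
    return sigma

def segment(image, k, iters=300, seed=0):
    """Steps 1-14 in brief: histogram, threshold search, then pixel labeling."""
    rng = np.random.default_rng(seed)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))  # steps 1-2
    best_th, best_fit = None, -np.inf
    for _ in range(iters):                              # steps 4-13 (optimizer loop)
        th = np.sort(rng.integers(1, 255, size=k - 1))  # candidate thresholds
        fit = between_class_variance(hist, th)
        if fit > best_fit:                              # step 11: keep the best
            best_fit, best_th = fit, th
    labels = np.digitize(image, best_th, right=True)    # step 14: segment
    return best_th, labels
```

Replacing the random-search loop with the TSA-LEO update rules, or swapping the fitness for a Kapur-style entropy, leaves the surrounding pipeline unchanged.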

D. ENVIRONMENTAL SETUP
The results of the proposed TSA-LEO with the Otsu and Kapur objective functions were compared with those of MFO [2], WOA [3], SCA [4], SOA [5], BMO [7], CTSA, and the original TSA. Each algorithm was executed 35 times under the same stopping criterion (at most 350 iterations) with 50 search agents to evaluate its performance. The parameters of each algorithm were kept at the values of its standard version. All tested algorithms were programmed and run in the same experimental environment (Intel Core i5 processor, 8 GB memory, MATLAB 2013, and 64-bit Windows 8.1).

E. EVALUATION CRITERIA
Evaluating segmented images is essential for validating the performance and accuracy of any algorithm. Three measures were used to evaluate the segmentation quality: PSNR [59], SSIM [60], and FSIM [61]. The Wilcoxon rank-sum test was used to evaluate the significance of the proposed TSA-LEO, and the differences between the proposed method and the competing algorithms were assessed with Friedman's non-parametric statistical test [62], [63].

1) QUALITY METRICS
The PSNR quantifies the difference in quality between the initial and resulting images. It is defined as

PSNR = 20 \log_{10}\left(\frac{255}{RMSE}\right), \qquad RMSE = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I(i,j) - Seg(i,j)\right)^2}{M \times N}},

where RMSE is the root-mean-squared error, and I and Seg are the initial and final images, respectively, both of size M × N. The SSIM determines the similarity between the original and segmented images and is defined as

SSIM(I, Seg) = \frac{(2\mu_I \mu_{Seg} + c_1)(2\sigma_{I,Seg} + c_2)}{(\mu_I^2 + \mu_{Seg}^2 + c_1)(\sigma_I^2 + \sigma_{Seg}^2 + c_2)},

where \mu_I and \mu_{Seg} are the mean intensities of the original image I and segmented image Seg, respectively; \sigma_I and \sigma_{Seg} are their respective standard deviations; \sigma_{I,Seg} is the covariance of the original and segmented images; and c_1 and c_2 are two constants. The FSIM measures the similarity of the mapped features and mainly depends on the phase congruency (PC) and the gradient magnitude (GM). The PC is a measure applied to the features of an image, whereas the GM computes the image gradient, as traditionally done in digital image processing. The similarity between the two images is first obtained as

S_{PC} = \frac{2\,PC_1\,PC_2 + T_1}{PC_1^2 + PC_2^2 + T_1}, \qquad (38)

where T_1 is a positive constant that increases the stability of S_{PC}, and PC_1 and PC_2 are the PCs of the original and segmented images, respectively. S_G, the similarity between G_1 and G_2, is computed as:

S_G = \frac{2\,G_1\,G_2 + T_2}{G_1^2 + G_2^2 + T_2}, \qquad (39)

where G_1 and G_2 are the gradients of the original and segmented images, respectively, and T_2 is a positive constant that depends on the dynamic range of the GM values. From Eqs. (38) and (39), the combined similarity is computed as

S_L = \left[S_{PC}\right]^{\alpha} \left[S_G\right]^{\beta},

where the parameters \alpha and \beta adjust the relative importance of the PC and GM features. Note that high values of fitness, PSNR, SSIM, and FSIM indicate a high-performing algorithm.
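For instance, the PSNR defined above reduces to a few lines of Python (an illustrative helper, not the paper's code; images are assumed to be equally sized 2-D lists of grayscale intensities, and identical images, for which the RMSE is zero, are not handled):

```python
import math

def psnr(original, segmented, max_level=255):
    """PSNR = 20*log10(max_level/RMSE) between two grayscale images,
    where RMSE is the root-mean-squared pixel error over all M*N pixels.
    Hypothetical helper name; assumes the images differ (RMSE > 0)."""
    squared_error, n = 0.0, 0
    for row_i, row_s in zip(original, segmented):
        for a, b in zip(row_i, row_s):
            squared_error += (a - b) ** 2
            n += 1
    rmse = math.sqrt(squared_error / n)
    return 20 * math.log10(max_level / rmse)
```

A uniform error of 10 intensity levels, for example, gives 20·log10(25.5) ≈ 28.1 dB regardless of image size.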

2) NON-PARAMETRIC STATISTICAL TESTS
The Wilcoxon rank-sum test is a non-parametric measure that analyzes the results of pairs of methods. The null hypothesis states that the ranks of the results of the compared methods are not significantly different; the alternative hypothesis states that the results of the compared methods can be distinguished by rank. The Wilcoxon rank-sum test was calculated at the 5% significance level. The significance levels (P) and hypothesis (H) values in terms of fitness obtained with Otsu's method are shown in Table 19. If P > 0.05 or H = 0, the null hypothesis is accepted, whereas if P < 0.05 or H = 1, the alternative hypothesis is accepted.
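The rank-sum procedure above can be sketched as follows, assuming the normal approximation without tie correction (so the samples must contain no duplicate values); `rank_sum_test` is an illustrative name, not the statistics package actually used:

```python
import math
from statistics import NormalDist

def rank_sum_test(x, y, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Illustrative sketch (no tie correction). Returns (P, H): H = 1 accepts
    the alternative hypothesis at the given significance level, H = 0 keeps
    the null hypothesis.
    """
    n1, n2 = len(x), len(y)
    combined = sorted(list(x) + list(y))
    # W: sum of the 1-based ranks of the first sample in the pooled data
    W = sum(combined.index(v) + 1 for v in x)
    mu = n1 * (n1 + n2 + 1) / 2                      # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std of W under H0
    z = (W - mu) / sigma
    P = 2 * (1 - NormalDist().cdf(abs(z)))
    return P, int(P < alpha)
```

Clearly separated samples reject the null hypothesis (H = 1), while interleaved samples keep it (H = 0).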
The Friedman mean rank test is another non-parametric analysis that compares three or more matched groups. In the present study, it was applied to check the performance of the competing algorithms. The Friedman statistic determines the mean ranked value: it is used to evaluate whether the critical values reach the assigned significance level, and the null hypothesis is then accepted or rejected accordingly.
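A minimal sketch of the Friedman mean-rank computation (ignoring ties; `friedman_mean_ranks` is an illustrative name): each algorithm is ranked within every problem, and the ranks are then averaged:

```python
def friedman_mean_ranks(scores):
    """Mean Friedman rank per algorithm (illustrative sketch, no tie handling).

    scores: one row per matched problem (image/threshold case) and one
    column per algorithm; lower scores are ranked better here. Returns the
    mean rank of each algorithm over all rows.
    """
    k = len(scores[0])
    totals = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])  # best algorithm first
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / len(scores) for t in totals]
```

The algorithm with the lowest mean rank is the overall winner; the Friedman statistic is then computed from these mean ranks to judge significance.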

F. ANALYSIS OF MULTI-THRESHOLDING IMAGE SEGMENTATION RESULTS
This section reports and discusses the experimental results of applying the Otsu and Kapur objective functions described above to multilevel thresholding image-segmentation problems.

1) MULTI-THRESHOLDING SEGMENTATION EXPERIMENTS OF OTSU AND KAPUR METHODS IN TABLES AND FIGURES
Image-segmentation experiments were performed using the Otsu and Kapur methods as the objective functions in two separate experiments on a set of ten benchmark images at four thresholding levels (Level = 2, 3, 4, and 5), for a total of 40 test cases. Fig. 7 illustrates the benchmark images with their respective histograms, namely, Cameraman, Lena, Baboon, Hunter, Airplane, Pepper, Living room, Woman, Bridge, and Butterfly. Figures 12 and 13 show samples of the segmented images. Tables 15, 16, 17, and 18 (TSA-LEO with Otsu) and Tables 7, 8, 9, and 10 (TSA-LEO with Kapur) present the fitness, PSNR, SSIM, and FSIM results for the Otsu and Kapur methods, respectively. Moreover, Tables 11 and 19 show the results of the Wilcoxon rank-sum test between TSA-LEO and the other seven algorithms with the Kapur and Otsu methods. Table 20 also provides convergence curves on the Otsu and Kapur objective functions for sample test images at various thresholds for the proposed TSA-LEO and the other competing algorithms.

2) MULTI-THRESHOLDING SEGMENTATION ANALYSIS OF OTSU AND KAPUR METHODS
From the optimal thresholds selected on the basis of the Otsu and Kapur objective functions, we can conclude that the Kapur segmentation process is more decentralized and has wider coverage; for example, the optimal threshold values of the Cameraman test image at Level = 4 are 22, 59, 98, 145, and 196, as shown in Table 6, versus 36, 82, 122, 149, and 173, as shown in Table 14. Notably, the optimal thresholds for Otsu's objective function are more closely spaced than those for Kapur's, revealing that segmentation based on Kapur's objective function is better than that based on Otsu's, which is also evident from the segmented images obtained with the Otsu and Kapur objective functions in Figures 12, 13, 4, and 5. In terms of the quality metrics (PSNR, SSIM, and FSIM), the quality of the segmented images based on Kapur's objective function is better than that based on Otsu's. For example, the PSNR value of the starfish (Test 9) image at Level = 4 is 1.90E+01 in Table 8; the advantage is also visible in Tables 12 and 13, especially for higher numbers of thresholds. Generally, for a given image and the same number of threshold levels, the Kapur-based method is significantly better than the Otsu-based method for the same optimization algorithm.

3) MULTI-THRESHOLDING SEGMENTATION ANALYSIS OF TSA-LEO AND OTHER SEVEN OPTIMIZATION ALGORITHMS
Fitness results based on Kapur's objective function, shown in Table 7, confirm the superiority of TSA-LEO over the other algorithms. TSA-LEO ranked first with 21 higher cases (52.5%); MFO and WOA ranked second with 8 higher fitness cases each (20%); and CTSA ranked third with 7 higher cases (17.5%). Moreover, BMO ranked fourth with only 4 higher fitness cases (10%). None of the remaining algorithms had a higher fitness case. Regarding the PSNR results shown in Table 8, the proposed TSA-LEO ranked first with 19 higher cases (47.5%), and CTSA ranked second with 7 higher cases (17.5%). SOA ranked third with 6 higher cases (15%), WOA ranked fourth with 5 higher cases (12.5%), and MFO and BMO ranked fifth with 3 higher cases each (7.5%).
Finally, TSA took the last rank without any higher case. The proposed TSA-LEO ranked first in the SSIM results presented in Table 9, with 21 higher cases representing 52.5% of the overall test cases. SOA ranked second with 7 higher cases (17.5%), and BMO took third place with 5 higher cases (12.5%). MFO, WOA, SCA, and CTSA ranked fourth with 3 higher SSIM cases each (7.5%), and TSA again ranked last without any higher cases. Regarding the FSIM results shown in Table 10, the proposed TSA-LEO ranked first with 18 higher cases (45%). MFO ranked second with 7 higher cases (17.5%), and WOA ranked third with 5 higher cases (12.5%). CTSA ranked fourth with 4 higher cases (10%), SCA and BMO ranked fifth with only one higher case each (2.5%), and TSA gained no higher FSIM cases. Regarding the objective function of Otsu, the mean fitness results provided in Table 15 confirmed the superiority of TSA-LEO over the other algorithms: the proposed TSA-LEO ranked first with 40 higher cases (100%). Table 16 shows the mean PSNR results and confirms that BMO ranked first with 10 higher cases (25%). The proposed TSA-LEO and SCA ranked second with 9 higher cases (22.5%), MFO ranked third with 8 higher cases (20%), and CTSA ranked fourth with 4 higher cases (10%). The original TSA and SOA followed with 3 higher cases each (7.5%), and WOA ranked last with only one higher case (2.5%). Table 17 presents the mean SSIM results of the proposed TSA-LEO compared with the other algorithms. Remarkably, BMO ranked first with 15 higher cases representing 37.5% of the overall test cases, and TSA-LEO ranked second with 13 higher cases (32.5%). MFO, SCA, SOA, and CTSA ranked third with 3 higher cases each (7.5%).
WOA and TSA ranked last with only one higher case each (2.5%). Table 18 provides the mean FSIM results, which indicate that BMO ranked first with 14 higher cases (35%). TSA-LEO ranked second with 9 higher cases (22.5%), and SCA ranked third with 5 higher cases (12.5%). WOA, SOA, and CTSA ranked fourth with 4 higher cases each (10%), TSA gained two higher FSIM cases (5%), and MFO ranked last with only one higher FSIM case (2.5%). Regarding the Wilcoxon rank-sum test, Tables 11 and 19 present the P and H results in terms of fitness for the Kapur and Otsu objective functions, respectively. When the number of thresholds is small (e.g., Level = 2 or 3), the segmentation results of the algorithms are almost the same under both the Kapur and Otsu methods; for example, when Level = 2, the optimal threshold, PSNR, SSIM, and FSIM of the eight algorithms on Baboon are identical. Generally, the segmentation results reflect the superiority of the proposed TSA-LEO, especially with Kapur's method.

VI. CONCLUSION AND FUTURE WORK
This paper introduced an enhanced variant of a metaheuristic optimization algorithm, namely TSA. The TSA was hybridized with an efficient search strategy called LEO, which improves the performance, accuracy, and convergence behavior of TSA. During the solution update process, TSA competes with LEO in the proposed TSA-LEO method. The effectiveness of the proposed TSA-LEO was evaluated using the functions of the CEC'17 benchmark test suite, on which the proposed method outperformed the competing methods with respect to various statistical measures. Moreover, the proposed TSA-LEO can tackle multilevel threshold problems while seeking the optimal thresholds for image segmentation; in the thresholding experiments, it selected optimal thresholds that strengthened the segmentation process. Thus, the proposed TSA-LEO method is potentially applicable to solving complicated real-world problems.
In future work, we intend to 1) combine two or more objective functions (e.g., Otsu and Kapur) in the proposed TSA-LEO, 2) further evaluate the proposed method on different datasets, and 3) apply the proposed TSA-LEO to other complex real-world problems. The proposed approach can thus be considered an efficient and effective strategy for more complex optimization scenarios, as well as for theoretical work in the field of intelligent optimization.
ESSAM H. HOUSSEIN received the Ph.D. degree in computer science with a focus on wireless networks based on artificial intelligence in 2012. He is currently working as an Associate Professor with the Faculty of Computers and Information, Minia University, Egypt. He is also the Founder of the Computing & Artificial Intelligence Research Group (CAIRG), Egypt. He has more than 80 scientific research articles published in prestigious international journals on the topics of optimization, machine learning, image processing, and the IoT and its applications. His research interests include wireless sensor networks, the IoT, bioinformatics and biomedicine, cloud computing, soft computing, image processing, artificial intelligence, data mining, optimization, and metaheuristic techniques. He serves as a reviewer for more than 30 journals published by Elsevier, Springer, and IEEE.
BAHAA EL-DIN HELMY is currently pursuing the M.Sc. degree in computer science with the Faculty of Computers and Information, Minia University, Egypt. He is also working as a Teaching Assistant with the Faculty of Computers and Artificial Intelligence, Beni-Suef University, Egypt. He is also a member of the Computing & Artificial Intelligence Research Group (CAIRG). His research interests include image processing, segmentation, and optimization. AHMED A. ELNGAR is currently an Assistant Professor with the Department of Computer Science, Faculty of Computers and Artificial Intelligence, Beni-Suef University. His current projects include biometrics, AI-based security, image processing, AI-based control, AI-based smart grids, and Internet of Things security. He is also the Founder and Chair of the Scientific Innovation Research Group (SIRG) and a Managing Editor of the Journal of Cybersecurity and Information Management (JCIM).
DIAA SALAMA ABDELMINAAM received the Ph.D. degree in information systems from the Faculty of Computers and Information, Menofia University, Egypt, in 2015. Since 2011, he has been an Assistant Professor with the Department of Information Systems, Faculty of Computers and Information, Benha University, Egypt. He has contributed more than 40 technical papers in the areas of wireless networks, wireless network security, information security and Internet applications, cloud computing, mobile cloud computing, the Internet of Things, and machine learning, in international journals, international conferences, local journals, and local conferences. He majors in cryptography, network security, the IoT, big data, cloud computing, and deep learning.
HASSAN SHABAN received the Ph.D. degree in computer science. He currently works as a Lecturer with the Department of Computer Science, Faculty of Computers and Information, Minia University, Egypt. His research interests include wireless sensor networks, security, optimization, and metaheuristics.