Dynamic Group-Based Cooperative Optimization Algorithm

Many optimization problems from a wide range of applications have been efficiently solved using available meta-heuristic algorithms such as Particle Swarm Optimization and the Genetic Algorithm. Recently, many meta-heuristic optimization techniques have been reported in the literature. Nevertheless, there is still room for new optimization techniques and strategies since, according to the literature, no single meta-heuristic optimization algorithm can be considered the best choice for all modern optimization problems. This paper introduces a novel meta-heuristic optimization algorithm named the Dynamic Group-based Cooperative Optimization Algorithm (DGCO). The proposed algorithm is inspired by the cooperative behavior adopted by swarm individuals to achieve their global goals. DGCO has been validated and tested against twenty-three mathematical optimization problems, and the results have been verified by a comparative study with state-of-the-art optimization algorithms. The results show the high exploration capabilities of DGCO as well as its ability to avoid local optima. Moreover, the performance of DGCO has also been verified against five constrained engineering design problems. The results demonstrate the competitive performance and capabilities of DGCO with respect to well-known state-of-the-art meta-heuristic optimization algorithms. Finally, a sensitivity analysis is performed to study the effect of different parameters on the performance of the DGCO algorithm.


I. INTRODUCTION
Nowadays, real-world optimization problems are complex with high-dimensional search spaces; therefore, they are in general challenging. Heuristic optimization techniques have been applied in several fields such as engineering [1], machine learning [2], business processes [3], mechanics [4], economics [5], scheduling [6], transportation [7], integrated decision making [8], and formula estimation [9], [10]. Optimization refers to finding an acceptable optimal solution for a specific problem among many feasible ones. An optimization task is usually transformed into a search problem in a multi-dimensional space. Practically, that search refers to minimizing or maximizing an objective function that evaluates the quality of a candidate solution, which is usually denoted by a vector in the search space. Meta-heuristics are a family of approximate optimization techniques that provide acceptable solutions in a reasonable time [11]. They are adopted for solving hard and complex problems in science and engineering.
Meta-heuristic optimization algorithms have become more popular because of their simplicity and flexibility with respect to classical and exact optimization methods like Greedy Search and Local Search [12]. That simplicity is due to the fact that they are based on simple concepts that are easy to understand and implement. In general, those meta-heuristic optimization algorithms are flexible because they can be applied in various fields and applications without special changes to their design and implementation. Moreover, they are able to avoid local optima because of their stochastic nature that allows them to extensively explore the search space and avoid stagnation in local optima. In addition, meta-heuristics are derivative-free techniques, so they do not need to use derivative information of the search space compared to gradient-based algorithms. In fact, a search space in real-world problems is often very complex with an enormous number of local optima and expensive or unknown derivative information, so meta-heuristics are more suitable for optimizing those types of problems.
Local search optimization algorithms such as Tabu Search [13] and Simulated Annealing (SA) [14] use a single randomly initialized solution that is improved over the course of iterations. On the other hand, population-based meta-heuristics start the optimization process with an initial random group of individuals (a population) that represent candidate solutions to the problem. The individuals exchange information about the search space and cooperate to avoid local optima and converge towards a global goal.
In fact, the quality and effectiveness of a meta-heuristic algorithm depend on its ability to ensure an appropriate balance between exploration of the search space (diversification) and exploitation of the best solutions found (intensification) [15]. Exploration is performed by the meta-heuristic in order to discover new areas that may contain promising points in the search space. On the other hand, exploitation is the process of finding better solutions around the good solutions found so far. Most meta-heuristics perform more exploration in the initial stages to extensively explore the search space and to avoid stagnation in local optima. The main difference among meta-heuristic algorithms is the technique by which they ensure a balance between exploration and exploitation.
Evolutionary algorithms are one type of population-based meta-heuristics that are inspired by the process and mechanism of biological evolution. The most popular evolutionary algorithm is the Genetic Algorithm (GA) [16] that was proposed by John Holland and is inspired by the process of natural selection. It uses genetic operators such as selection, crossover, and mutation to evolve an initial random population throughout generations. GA starts with an initial random population. Then, for each generation, the fitness is calculated for all individuals in the population, and parents are selected for mating and creating offspring. The mutation is applied to guarantee the diversity of the population. Other popular evolutionary algorithms are Evolutionary Programming (EP) [17], Evolution Strategy (ES) [18], and Differential evolution (DE) [19].
Swarm-based algorithms are another type of meta-heuristic algorithms that are inspired by the social behavior of swarm systems in nature. Particle Swarm Optimization (PSO) [20] is the most popular swarm-based algorithm. PSO mimics the movement and interaction among individuals in a bird flock or fish school. Each candidate solution is called a particle and its movement is influenced by its own experience and the best solution found by the swarm. Ant Colony Optimization (ACO) [21] is another popular swarm-based algorithm that is inspired by the foraging behavior of some ant species. Grey Wolf Optimizer (GWO) [22] is also another interesting example of the swarm-based methods that mimic the social hierarchy and hunting behavior of grey wolves. Other examples of swarm-based algorithms are Artificial Bee Colony (ABC) [23], Firefly Algorithm [24], Cuckoo Search [25], Whale Optimization Algorithm (WOA) [26], and Bat Algorithm (BA) [27].
Unlike evolutionary algorithms and swarm-based algorithms, some meta-heuristics are inspired by the laws of physics [28]. These include the Gravitational Search Algorithm (GSA) [29], which is inspired by the Newtonian law of gravity, Simulated Annealing (SA) [14], and the Intelligent Water Drops algorithm (IWD) [30]. Table 1 lists some popular meta-heuristics along with their working principles, advantages, limitations, and potential applications. These algorithms have been selected for comparison with our proposed algorithm because of their popularity and their similar processing principles. As illustrated in Table 1, some algorithms are inspired by the laws of biological evolution while others are inspired by the foraging or hunting behaviors of animals. All of these algorithms are gradient-free, which means that they do not require any gradient information about the search space. Most of these algorithms have been extensively used to solve several optimization problems because of advantages such as ease of understanding and implementation, fast convergence, and large-scale exploration. However, most of them tend to converge prematurely due to stagnation in local optima. Others have slow convergence speed or low diversity among their candidate solutions. As a result, these algorithms have different variants that improve some features of the original algorithms, such as convergence speed, exploration behavior, and the trade-off between exploration and exploitation [12], [31], [32].

II. LITERATURE REVIEW
In real applications, the search space is generally complex and contains many local optima. This increases the likelihood of stagnation in local optima, thereby causing premature convergence of an optimization algorithm. Most optimization algorithms try to address this challenge by including techniques that increase the diversity of the population [33]. These techniques may help in avoiding local optima; however, they may degrade convergence performance. Hence, a good balance between exploration and exploitation is required to develop a performant meta-heuristic optimization algorithm [34]. This balance improves the convergence speed of the optimization algorithm and provides greater exploration of the search space with the ability to avoid local optima.
Recently, many meta-heuristic algorithms have been introduced in the literature with different techniques to tackle these common challenges. For instance, Butterfly Optimization Algorithm (BOA) [35] is a nature-inspired meta-heuristic that has been recently introduced. BOA mimics the food search and mating behavior of butterflies. BOA uses simple strategies to achieve exploration and exploitation. In BOA, the butterfly may move randomly in the search space to achieve exploration or move towards the best butterfly to achieve exploitation. The balance between exploration and exploitation depends on a switch probability. BOA has been validated on classical benchmark functions and engineering design problems. In general, BOA exhibits good performance and results.
Another example of recently introduced meta-heuristics is the Stochastic Fractal Search (SFS) that is inspired by the natural phenomenon of growth [36]. SFS applies two main processes during the optimization task: the diffusing process and the updating process. The first process ensures exploitation while the updating process increases the exploration of the search space. Moreover, SFS uses two strategies to create new particles: Levy flight and Gaussian. These strategies help increase the convergence speed of the algorithm. SFS has been tested against both constrained and unconstrained standard benchmark functions, and it showed good performance and high exploration capabilities.
The Whale Optimization Algorithm (WOA) [37] applies different strategies to achieve exploration and exploitation. Some solutions move around a randomly selected solution to improve exploration. In contrast, other solutions move towards the best solution in a spiral movement to satisfy exploitation. WOA depends on two adaptive parameters to achieve the balance between exploration and exploitation. WOA has been tested and validated against standard benchmark functions and constrained engineering design problems. Moreover, many variants of WOA have been introduced and applied to several optimization problems.
Another newly introduced optimizer is the Harris Hawks Optimization (HHO) [38] which is inspired by the hunting behavior of Harris Hawks. The exploration phase mimics the way hawks search for prey by perching in random locations and waiting to detect prey. In the exploitation phase, HHO uses four strategies to attack the prey. HHO moves from exploration to exploitation through iterations using an adaptive equation that is similar to the one used in WOA. HHO has been tested and validated against several benchmark functions and constrained engineering design problems. HHO proved to be promising and competitive.
Recently, many researchers have introduced hybrid optimization algorithms that combine the advantages of two or more optimization algorithms to overcome the limitations of a single optimization algorithm. For instance, in [39] the PSO algorithm is combined with the Sine Cosine Algorithm and levy flight approach resulting in a new hybrid optimizer called PSOSCA. The levy flight approach introduces random walks in the search space. These random walks guarantee high exploration and better avoidance of local optima. The position updating equations in the Sine Cosine Algorithm (SCA) [40] enhance the exploration and exploitation capabilities of PSO. The new hybrid PSOSCA has been validated using standard benchmark functions and real constrained engineering problems. PSOSCA showed its advantages and effectiveness against most of the PSO variants.
In [41], a new variant of the Bat Algorithm (BA) [27] called ASF-BA was introduced. ASF-BA added adaptive inertia weight to speed up the velocity rate of bats. This technique improves the diversity among the individuals in the population. Moreover, ASF-BA replaces the weak random searching method of the Bat algorithm with the Sugeno function for fuzzy search that enhances the local search ability of bats.
The new hybrid algorithm ASF-BA has been validated and tested on several benchmark functions and proved to be promising in terms of convergence speed, stability, and solution quality.
In [42], the authors tested a hybridized Genetic Algorithm and Simulated Annealing (IGASA) to overcome the limitations of GA and SA. The hybrid algorithm (IGASA) has been used for solving both low-dimensional and high-dimensional knapsack problems. The reported results proved that IGASA offers good-quality solutions. However, it is more computationally expensive than GA and SA.
According to the No Free Lunch (NFL) theorem [43], there is no meta-heuristic best suited for solving all optimization problems. This explains why certain meta-heuristics perform better on specific optimization problems and not as well on others. Consequently, new optimization algorithms are still being proposed. Our proposed optimization algorithm presents techniques to handle some of the drawbacks of existing optimization algorithms, such as slow convergence, poor balance between exploration and exploitation, and stagnation in local optima. The NFL theorem and attempts to overcome existing drawbacks have motivated us to develop the proposed optimization algorithm.
In this paper, we propose a new meta-heuristic optimization method, named the Dynamic Group-based Cooperative Optimization Algorithm (DGCO), that is inspired by the cooperative behavior of swarm individuals to achieve their global goals. DGCO attempts to strike a balance between ensuring fast convergence and avoiding stagnation in local optima. This is implemented by applying techniques that enhance exploitation performance, achieve a good balance between exploration and exploitation, enhance the exploration of the search space, and increase the diversity among the individuals of the current population. The main contribution of this paper is the introduction of DGCO as a new optimization algorithm offering new insight into solving different optimization problems. Preliminary studies show that DGCO is competitive, promising, and capable of outperforming existing swarm-based and evolutionary algorithms. Moreover, the performance of the proposed algorithm has been verified on constrained engineering design problems.
The rest of this paper is organized as follows. An introduction to DGCO is given in Section III. The experimental results of validating and testing our approach against mathematical benchmark functions and constrained engineering design problems are described in Section IV and Section V respectively. A sensitivity analysis of the parameters of DGCO and their effect on the performance is introduced in Section VI. Finally, in Section VII, some conclusions and potential future work are presented.

III. DGCO OPTIMIZATION ALGORITHM
A. MOTIVATIONS
DGCO is a dynamic group-based cooperative optimization algorithm that mimics the cooperation among individuals in a swarm to achieve a global optimization goal. In nature, creatures tend to live in groups and communities. They usually collect food and defend against enemies together in a cooperative way while exchanging roles as they carry out their tasks. They arrange themselves into sub-groups whose individuals cooperate within the same sub-group and with individuals of other sub-groups to achieve their global goal. For instance, ant colonies and bee colonies are the most popular examples of cooperation among individuals in a swarm. Each member of the swarm has a specific role in the colony. Soldier members are responsible for defending the colony, whereas worker members are responsible for finding food and feeding other members. DGCO is inspired by the fact that members of a swarm tend to be arranged into sub-groups that interchangeably perform different duties over time and cooperate to achieve their common goals. Solving an optimization problem involves two complementary sub-tasks: exploration and exploitation. Accordingly, DGCO divides individuals, also named search agents, into two sub-groups, each dedicated to one of these two sub-tasks. The number of individuals in each sub-group is dynamically controlled by DGCO. Each group applies two different techniques to complete its specific task, as described in the following sections. Furthermore, DGCO's exploration and exploitation techniques ensure a good exploration of the search space, encourage diversity, preserve convergence, and avoid stagnation in local optima. In most cooperative optimization algorithms, all individuals perform exploitation in the final iterations, which may cause stagnation in local optima. DGCO avoids this phenomenon by keeping a group of search agents performing exploration over the course of iterations.
Moreover, DGCO immediately increases the number of exploring individuals if the performance of the algorithm does not improve for three consecutive iterations.
The individuals that constitute a population represent solution candidates of an optimization problem. Each of these individuals consists of a vector of parameters. The population of candidate individuals is divided into two sub-groups: the exploration group and the exploitation group. The individuals in the exploration group mainly focus on exploring new areas of the search space for an optimal solution, whereas the individuals of the second group mainly focus on improving the quality of the actual best solution found based on an objective function. Unlike the traditional evolutionary algorithms, the individuals of DGCO that belong to the two sub-populations cooperate among themselves and exchange information and duties to ensure two goals: (1) good exploration of the search space, and (2) avoid local optima in an efficient way. DGCO keeps control of the balance between exploration and exploitation and offers an automatic mechanism to avoid the steady regions of the search space.

B. BASIC CONCEPTS AND FORMULATION
An optimization problem in science and engineering consists of finding the best solution to a problem within a set of given constraints and conditions. Such a problem can be considered as a search for an optimal solution that can be represented by a vector in a search space. In DGCO, an individual of the population, belonging to either the exploration group or the exploitation group, is represented by a vector S = (S_1, S_2, ..., S_d) ∈ R^d, where S_i represents a parameter of the system or problem to be optimized, and d is the dimension of the problem and therefore of the search space. To assess the quality of an individual at a given moment, the proposed algorithm adopts a fitness function f. The optimization process evolves through populations in search of an optimal vector S* that optimizes the fitness function, using the following steps:

1) INITIALIZATION
The algorithm starts with initial random individuals (solutions). The initial value of each component p_i in a d-dimensional vector belongs to a range [min_p, max_p] specific to the parameter represented by that component. Therefore, DGCO requires the following parameters to start the optimization process: (1) the number of solutions (the population size), (2) the dimension of each solution, (3) the lower and upper bounds of each solution component, and (4) the fitness function.
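As a concrete illustration, the initialization step can be sketched as follows. This is a minimal sketch, not the paper's code: the function name and the use of NumPy are our assumptions.

```python
import numpy as np

def initialize_population(pop_size, dim, lower, upper, rng=None):
    """Randomly initialize `pop_size` candidate solutions of dimension `dim`.

    `lower` and `upper` may be scalars or per-component bound vectors,
    matching requirement (3) in the text."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # Each row is one individual; each component stays inside its bounds.
    return lower + rng.random((pop_size, dim)) * (upper - lower)
```

A typical call for the experimental setup described later (30 individuals) would be `initialize_population(30, d, lb, ub)`.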

2) DYNAMIC GROUP-BASED COOPERATION
After initialization, a fitness value is calculated for each solution in the population. Then, the algorithm finds the best solution, that is, the one with the optimal fitness value. After that, the algorithm divides the population's individuals into two groups: the exploration group and the exploitation group. DGCO ensures the balance between exploration and exploitation by gradually changing the number of individuals in each group. Each group applies two different strategies to achieve its goal. Figure 1 illustrates the two groups of the DGCO algorithm.
For instance, some individuals in the exploitation group move towards the best solution, designated the ''leader'', while other individuals search the area around the leader. In the exploration group, an individual, with a certain probability, mutates one or more of its parameter values, while other individuals search the area around themselves for promising regions of the search space. Then, DGCO randomly exchanges individuals between the two groups to ensure a certain degree of randomness. These techniques are discussed in detail in the following sections.

C. BALANCING BETWEEN EXPLORATION AND EXPLOITATION
To ensure a balance between exploration and exploitation, DGCO dynamically changes the number of individuals in each of the population's sub-groups. Initially, the algorithm starts with a 70/30 scheme by assigning 70% of the population's individuals to the exploration group, whereas the remaining 30% are assigned to the exploitation group. Starting with a high percentage of individuals in the exploration group helps explore more promising areas of the search space in the early stages of the optimization process. The number of individuals in the exploration group decreases dynamically from 70% to 30%, whereas the number of individuals in the exploitation group increases from 30% to 70% over the course of iterations, allowing more individuals to improve their fitness values and thus raising the population's average fitness. Moreover, the algorithm applies an elitism technique to ensure convergence by keeping the leader individual in the subsequent population in case no better solution is found. In order to avoid local optima and stagnation, DGCO may increase the number of individuals in the exploration group at any iteration if the fitness of the leader does not improve significantly for three consecutive iterations. Figure 2 shows how the numbers of individuals in the exploration and exploitation groups change dynamically over the course of iterations. Figure 2(a) shows a sample convergence curve for a sample optimization problem of finding a point in a 2-dimensional space. Figure 2(b) illustrates the same process, depicting the complementary numbers of exploration and exploitation individuals over the iterations.
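The group-size schedule described above can be sketched as follows. The 70%/30% endpoints and the three-iteration stagnation reset come from the text; the linear decay between the endpoints is our assumption, as the paper does not state the exact schedule here.

```python
def exploration_count(pop_size, iteration, max_iter, stall_iters, stall_limit=3):
    """Return the number of exploring individuals at a given iteration.

    The exploration fraction decays from 70% to 30% over the run (a linear
    decay is assumed); if the leader's fitness has stalled for
    `stall_limit` consecutive iterations, the fraction is reset to 70%
    to escape local optima, as described in the paper."""
    if stall_iters >= stall_limit:
        frac = 0.7  # stagnation detected: boost exploration again
    else:
        frac = 0.7 - 0.4 * iteration / max(max_iter - 1, 1)
    return max(1, round(frac * pop_size))
```

The exploitation group simply receives the remaining `pop_size - exploration_count(...)` individuals, so the two counts stay complementary as in Figure 2(b).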

D. EXPLORATION GROUP
The exploration group is responsible not only for finding promising areas in the search space but also for avoiding stagnation in local optima. To achieve that, DGCO uses two different techniques for exploration: explore around the current individual and mutation.

1) EXPLORE SEARCH SPACE AROUND THE SOLUTION
In this technique, the individual looks for promising areas around its location in the search space. This is performed by iteratively searching for a better solution, in terms of fitness value, among its neighboring possible solutions. For this purpose, DGCO uses the following equations: where r_1 and r_2 are coefficient vectors in the intervals [0, 2] and [0, 1] respectively, t refers to the current iteration, S is the current solution vector, and D indicates the diameter of the circle in which the solution will look for promising areas.
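One plausible form of this update can be sketched as follows. The published equations are not reproduced in this text, so the exact expressions for D and the step are our assumptions, built only from the stated coefficient ranges (r_1 in [0, 2], r_2 in [0, 1]) and the description of D as a search diameter around the current solution.

```python
import numpy as np

def explore_around(S, rng=None):
    """Hypothetical 'search around the current solution' step.

    Assumed form: D is a diameter derived from the solution itself via
    the coefficient vector r1, and the individual steps anywhere inside
    that diameter, with r2 randomizing the direction and magnitude."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(S)
    r1 = 2.0 * rng.random(d)          # coefficient vector in [0, 2]
    r2 = rng.random(d)                # coefficient vector in [0, 1]
    D = np.abs(r1 * S - S)            # assumed search diameter around S
    return S + (2.0 * r2 - 1.0) * D   # candidate inside that neighborhood
```

The candidate returned here would then be accepted only if its fitness improves on the current solution, per the iterative-improvement description above.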

2) MUTATION OF THE SOLUTION
Another technique that DGCO applies for exploration is mutation. Mutation is a genetic operator used to introduce and maintain diversity in the population. It can be seen as a local random perturbation of one or more components of an individual, applied with a certain probability. It helps avoid local optima, thus preventing premature convergence. Such a perturbation acts as a jump to another promising area in the search space. In fact, mutation is one of the key factors that gives DGCO its high exploration capability.
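A minimal sketch of such a mutation operator follows. The paper fixes only the idea of a probabilistic perturbation of components; resampling a mutated component uniformly within its bounds is our assumption about the perturbation's form.

```python
import numpy as np

def mutate(S, lower, upper, mutation_rate=0.1, rng=None):
    """Hypothetical mutation operator: each component of S is replaced by a
    fresh uniform random value inside [lower, upper] with probability
    `mutation_rate` (the perturbation form is assumed, not taken from
    the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    S = np.array(S, dtype=float)              # work on a copy
    mask = rng.random(S.shape) < mutation_rate
    S[mask] = lower + rng.random(mask.sum()) * (upper - lower)
    return S
```

With a small `mutation_rate`, most components survive unchanged while the occasional resampled component provides the "jump" to a new region described above.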

E. EXPLOITATION GROUP
The exploitation group is responsible for obtaining even better solutions from good ones. At the beginning of each iteration, DGCO calculates the fitness value of all individuals and recognizes the best individual that has the best fitness value. In order to achieve exploitation, DGCO uses two different techniques, as follows:

1) MOVING TOWARDS THE BEST SOLUTION
In this technique, the individual moves towards the best solution using the following formulas: where r_3 is a random vector in the interval [0, 2] that controls the step of the movement towards the leader solution, t refers to the current iteration, S is the vector of the current solution, L is the vector of the best solution, and D indicates the distance vector.
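A plausible sketch of this movement follows. Since the published formulas are not reproduced in this text, the exact update is assumed: D is taken as the distance vector to the leader, and r_3 in [0, 2] scales the step, so an individual can land short of, on, or past the leader.

```python
import numpy as np

def move_towards_leader(S, L, rng=None):
    """Hypothetical 'move towards the best solution' step.

    r3 is the random step-size vector in [0, 2] described in the text;
    values above 1 overshoot the leader, which keeps some variation
    around it (the precise equation is our assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    r3 = 2.0 * rng.random(len(S))  # random step vector in [0, 2]
    D = L - S                      # distance vector to the leader
    return S + r3 * D
```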

2) SEARCHING AROUND THE BEST SOLUTION
The area around the best solution (leader) is most likely to be promising. Therefore, some individuals search in the area around the best solution with the hope of finding an even better solution. To achieve this, DGCO uses the following formulas: where r_4 and r_5 are random vectors in the interval [0, 1], k decreases exponentially from 2 to 0 over the course of iterations, L is the vector of the best solution, S is the current solution vector, and D indicates the diameter of the circle in which the solution will look for better solutions.
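This strategy can be sketched as follows. The exact published formulas are not reproduced in the text, so both the update form and the precise decay of k are assumptions; only the ingredients (r_4, r_5 in [0, 1], k decaying exponentially from 2 towards 0, D as a diameter around the leader) come from the description above.

```python
import numpy as np

def search_around_leader(S, L, iteration, max_iter, rng=None):
    """Hypothetical 'search around the best solution' step.

    k shrinks the search circle around the leader L over time, so late
    iterations sample points very close to the leader; an exponential
    decay 2*exp(-4t/T) is assumed for k."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(S)
    r4, r5 = rng.random(d), rng.random(d)                      # vectors in [0, 1]
    k = 2.0 * np.exp(-4.0 * iteration / max(max_iter - 1, 1))  # ~2 -> ~0
    D = np.abs(r4 * L - S)               # assumed diameter around the leader
    return L + k * (2.0 * r5 - 1.0) * D  # point inside the shrinking circle
```

Because k decays towards 0, this operator gradually turns from a broad local search into a fine refinement of the leader, matching the intended exploitation role.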

F. ELITISM OF THE BEST SOLUTION
To guarantee the solution's quality, DGCO carries the best solution over to the next iteration of the process without any modifications. The elitism strategy improves the performance of the algorithm; however, it may lead to premature convergence on multimodal functions [53], [54]. It is worth mentioning that DGCO achieves high exploration through the mutation strategy and by searching around individuals in the exploration group. This high exploration capability helps DGCO prevent premature convergence. The pseudo-code of the DGCO algorithm is presented in Algorithm 1. First, we provide DGCO with its input parameters: population size, iteration count, and mutation rate. Then, DGCO divides the individuals into two groups: the exploration group and the exploitation group. The algorithm controls the number of individuals in each group dynamically. Each group applies two different strategies to achieve its task. It is worth mentioning that DGCO randomizes the order of solutions at the end of each iteration to guarantee diversity and high exploration. For instance, a solution in the exploration group in one iteration may be a member of the exploitation group in the next iteration. The elitism strategy applied in DGCO prevents the algorithm from losing the leader individual from one iteration to the next.
The main execution steps of the proposed algorithm are the following:
• Initialization: The algorithm randomly generates a predefined number of individuals. Each individual represents a candidate solution to the problem being solved.
• Fitness Evaluation: Each candidate solution is evaluated using a fitness function that measures the quality of the solution.
• DGCO Groups: The algorithm divides the individuals of the population into two groups and then manages the number of solutions in each group dynamically at the beginning of each iteration. Over the course of iterations, the number of individuals in the exploration group decreases from 70% to 30% of the total number of individuals. On the other hand, if the fitness values of the leaders in three consecutive iterations do not change significantly, the algorithm immediately increases the number of individuals in the exploration group back to 70% of the total, with the hope of finding other promising areas in the search space and avoiding stagnation in local optima.
• Exploration / Exploitation: DGCO applies two different strategies to explore the search space in an efficient way. These strategies are search around the solution and mutation. Also, it applies two different strategies to achieve exploitation. These strategies are moving toward the best solution and searching in the area around the leader solution.
• Finally, DGCO repairs the individuals that go beyond the search space boundaries. Then, the order of individuals is changed randomly in order to exchange the roles of members of the exploration and exploitation groups. At the end of the optimization, DGCO returns the best solution.
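The end-of-iteration housekeeping in the last step can be sketched as follows. Clipping out-of-bounds individuals back to the boundary is our assumption about the repair mechanism, since the paper only says that out-of-bounds individuals are modified.

```python
import numpy as np

def repair_and_shuffle(population, lower, upper, rng=None):
    """End-of-iteration housekeeping sketch: clip out-of-bounds individuals
    back into the search space (repair method assumed), then shuffle the
    population order so that exploration/exploitation roles are
    reassigned in the next iteration."""
    rng = np.random.default_rng() if rng is None else rng
    population = np.clip(population, lower, upper)
    rng.shuffle(population)  # in-place shuffle of rows (individuals)
    return population
```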

IV. RESULTS AND DISCUSSION
In order to evaluate the performance of the proposed optimization algorithm, twenty-three standard benchmark mathematical functions have been used to find their minimum values in a specific domain of the search space. These functions are widely used in the literature for benchmarking optimization algorithms [55]. They are divided into three categories: unimodal functions, multimodal functions, and multimodal functions with fixed dimensions. Table 2 and Table 3 list the benchmark functions, where D represents the dimension of the function and f_min represents the optimum (here, minimum) value of the function. Unimodal high-dimensional functions are used to test the convergence rate of a search algorithm and are therefore suitable for benchmarking exploitation performance. Multimodal high-dimensional functions are more difficult to optimize as they have many local optima, so they are suitable for benchmarking the exploration capabilities of optimization algorithms. The fixed-dimension multimodal functions are similar to the high-dimensional ones but have a lower number of local minima. Together, some of these functions have a single optimum while others have many local optima; therefore, the suite is suitable for benchmarking the exploration, exploitation, and convergence of optimization algorithms.

A. INTRODUCTION AND EXPERIMENTAL SETUP
Because of the random initialization of the individuals in the first population of meta-heuristic algorithms, we ran the experiments 30 times for each of the considered benchmark mathematical functions. Each run consists of 500 iterations. The population size is one of the parameters of the algorithm and has been set to 30 individuals in our experiments. All individuals are initialized with random values. In order to validate our results, DGCO was compared with other well-known cooperative and competitive algorithms: Particle Swarm Optimization (PSO) [20], Differential Evolution (DE) [19], the Genetic Algorithm (GA) [16], the Whale Optimization Algorithm (WOA) [26], and the Grey Wolf Optimizer (GWO) [22]. In fact, there are many optimization algorithms in the literature. We selected these five algorithms based on two significant factors: functionality and popularity. We selected PSO because of its well-known cooperative nature. PSO has been used to solve various optimization problems over the last decade [32]. DGCO and PSO share the same principle of cooperation among individuals to achieve their global goals. On the other hand, we selected the Genetic Algorithm (GA) and Differential Evolution (DE) as examples of evolutionary algorithms inspired by the theory of natural evolution. GA is considered one of the most popular optimization algorithms in the literature. GA and DE have been applied to solve several real-world problems in different domains [44], [47]. Because DGCO has some evolutionary principles such as mutation and elitism, we decided to compare DGCO with popular evolutionary algorithms like GA and DE. Moreover, DGCO has been compared with state-of-the-art swarm-based algorithms, namely the Grey Wolf Optimizer (GWO) and the Whale Optimization Algorithm (WOA). GWO and WOA were introduced in 2014 and 2016, respectively; they have been extensively used to solve several optimization problems in different domains. They are inspired by the hunting cooperation of grey wolves and humpback whales, respectively [37], [49]. These five algorithms share some common principles with DGCO; therefore, they are suitable for comparison with our proposed algorithm.
In all experiments, we have used Python to implement the proposed optimization algorithm. For PSO, GA, DE, GWO, and WOA, we used modified versions from the open-source EvoloPy framework [56]. The experiments have been run on a machine with the following specifications: Intel i7 processor, 16 GB RAM, and Windows 10 operating system. The initial parameters of the algorithms are listed in Table 4. The obtained results have been statistically analyzed by comparing the average and the relative standard deviation. Table 5 and Table 6 illustrate the obtained results for the unimodal and multimodal benchmark functions, and Figure 4 illustrates the convergence curves for eight benchmark functions. It should be noted that differential evolution (DE) may provide better results, similar to those in [57] and [26], if we increase the number of iterations. However, we preferred to use the same configuration (population size, number of iterations, and number of runs) as the other algorithms in our experiment to provide a fair comparison.
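For illustration, the experimental protocol described above (repeated independent runs with random initial populations, then averaging the best fitness of each run) can be sketched in Python as follows. The function and optimizer names are illustrative placeholders, not the paper's actual implementation; a pure random search stands in for a real optimizer.

```python
import random
import statistics

def sphere(x):
    """F1 benchmark: unimodal sphere function, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def run_trials(optimizer, objective, dim, runs=30, iterations=500, pop_size=30, seed=0):
    """Run an optimizer several times with random initial populations and
    aggregate the best fitness of each run, as done in the experiments."""
    rng = random.Random(seed)
    best_per_run = [optimizer(objective, dim, iterations, pop_size, rng)
                    for _ in range(runs)]
    return statistics.mean(best_per_run), statistics.stdev(best_per_run)

def random_search(objective, dim, iterations, pop_size, rng):
    """Placeholder optimizer: pure random search inside [-100, 100]^dim."""
    best = float("inf")
    for _ in range(iterations):
        for _ in range(pop_size):
            x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
            best = min(best, objective(x))
    return best

# Smaller settings than the paper's (30 runs x 500 iterations) for brevity.
mean, std = run_trials(random_search, sphere, dim=5, runs=5, iterations=50, pop_size=10)
```

Reporting the mean together with the standard deviation over the runs, as in Tables 5 and 6, accounts for the stochasticity introduced by the random initialization.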

B. EXPLOITATION PERFORMANCE ANALYSIS
The unimodal functions (F1-F7) have only one global optimum, so they are used to benchmark the exploitation capability of optimization algorithms. It can be seen from Table 5 that DGCO was the most efficient optimizer for functions F1, F2, F3, F4, and F7 and obtained very competitive results for F5 and F6. The reason is that DGCO applies two different exploitation techniques in every iteration: the first is moving towards the best solution, whereas the other is searching around the best solution. These techniques help DGCO achieve exploitation by finding better solutions around the best solution found so far. The integrated exploitation techniques are not the only reason for the high exploitation capability of DGCO; the balance between exploration and exploitation is another important factor. Unlike most optimizers, DGCO starts the exploitation process in the early iterations, and the number of individuals in the exploitation group increases over the course of iterations. This explains why DGCO outperformed the other optimizers on most of the unimodal benchmark functions.
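As a rough illustration, the two exploitation moves can be sketched as follows. These are generic sketches of "move towards the leader" and "search around the leader", not the paper's exact update equations; the function names and the step model are assumptions.

```python
import random

def move_towards_leader(x, leader, rng):
    """Exploitation sketch: step each component of x towards the best
    solution found so far (the leader) by a random fraction in [0, 1)."""
    return [xi + rng.random() * (li - xi) for xi, li in zip(x, leader)]

def search_around_leader(leader, radius, rng):
    """Exploitation sketch: sample a candidate inside a small box of the
    given radius centered on the leader."""
    return [li + rng.uniform(-radius, radius) for li in leader]
```

Both moves generate candidates near the best-known solution, which is what drives the convergence on unimodal functions.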

C. EXPLORATION PERFORMANCE ANALYSIS
The multimodal benchmark functions (F8-F23) have many local minima, and their number increases exponentially with the problem size. Therefore, they are suitable for measuring exploration performance and the ability to avoid local optima. It can be seen from Table 6 that DGCO is very competitive against the other optimization algorithms. The results reported in the table demonstrate the high exploration capability of DGCO.
The reason is that DGCO applies two different exploration techniques. The first one is to search around each solution, whereas the second is the mutation that acts as a random jump to another position in the search space. Also, DGCO controls the number of individuals in the different teams in order to achieve a balance between exploration and exploitation. DGCO starts with a higher number of exploring individuals in the early iterations, and this number decreases over the course of iterations. In addition, DGCO guarantees that some individuals keep exploring the search space until the last iteration. All these integrated techniques provide DGCO with the ability to thoroughly explore the search space and find promising points that may lead to the global optimum.
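A minimal sketch of these two exploration operators is given below. The parameter names and the uniform perturbation model are assumptions for illustration; the actual DGCO update equations are defined in the algorithm's formulation.

```python
import random

def search_around(x, step, rng):
    """Exploration sketch: perturb every component of a solution by a
    small uniform step, searching the neighborhood of that solution."""
    return [xi + rng.uniform(-step, step) for xi in x]

def mutate(x, lower, upper, mutation_rate, rng):
    """Exploration sketch: with probability mutation_rate, replace a
    component with a fresh random value -- a jump elsewhere in the space."""
    return [rng.uniform(lower, upper) if rng.random() < mutation_rate else xi
            for xi in x]
```

The local search refines promising regions, while the mutation jump lets an individual escape a basin of attraction entirely.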

D. ABILITY TO AVOID LOCAL OPTIMA
The multimodal benchmark functions (F8-F23) have many local optima that make the optimization process difficult and challenging. The ability of DGCO to explore the search space and avoid local optima has been confirmed by the reported results of the multimodal benchmark functions in Table 6. As shown in the table, DGCO was the best optimizer in six benchmark functions and was very competitive in other benchmark functions. Besides the two different exploration techniques, DGCO offers dynamic control of the balance between exploration and exploitation. Moreover, the mutation operator, provided by DGCO, introduces local random perturbations to some components of the exploring individuals that act as a jump to another position in the search space. Therefore, it helps in avoiding local optima. The integrated techniques guarantee a high number of exploring individuals in the early iterations.
Unlike most of the optimizers, DGCO guarantees that some individuals keep exploring the search space over the entire course of iterations. Moreover, DGCO increases the number of exploring individuals if the fitness of the leader does not improve significantly for three consecutive iterations. These techniques guarantee the ability of DGCO to avoid local optima and stagnation problems.

E. STATISTICAL SIGNIFICANCE OF THE RESULTS
In order to verify that the results obtained by DGCO are statistically significant, a one-tailed t-test with a significance level of 5% is employed for each benchmark function. Tables 5-7 present the t-test results of comparing DGCO against the other optimizers in finding the minimum of each benchmark function, and Table 7 summarizes the statistical significance test results between DGCO and each optimizer. The obtained results show that DGCO outperformed most of the surveyed optimization algorithms: DGCO outperformed the other optimization algorithms with statistical significance in 86 out of 115 cases, whereas the other optimization methods outperformed DGCO with significance in only 4 cases. The other optimization methods achieved better results in 9 cases, but without significance.
On the other hand, DGCO achieved better results, but without statistical significance, in the remaining 16 cases. It is noted that in some of these 16 cases, DGCO and the other optimizers could find the global optimum of the functions (F1, F2, F3, F9, F10, and F11) with almost the same average but with different standard deviations, which may explain the absence of statistical significance.
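The per-function comparison can be reproduced with a standard two-sample (Welch) t statistic, computed here in plain Python; the sample values below are illustrative, not the paper's data. For samples of this size, a |t| far beyond the one-tailed 5% critical value (about 1.8 from t tables) indicates a significant difference in means.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return (ma - mb) / math.sqrt(va / len(sample_a) + vb / len(sample_b))

# Illustrative best-fitness samples from two optimizers over repeated runs.
dgco  = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.008, 0.011]
other = [0.050, 0.047, 0.055, 0.049, 0.052, 0.048, 0.051, 0.050]

t = welch_t(dgco, other)  # strongly negative: dgco's mean is much lower
```

In practice a library routine such as SciPy's `ttest_ind` would also provide the p-value directly.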

F. CONVERGENCE BEHAVIOR
In this section, we discuss the convergence behavior of DGCO. Most meta-heuristic algorithms tend to extensively explore the search space in the early iterations by introducing random changes to their search agents. Then, these random changes are reduced to increase exploitation in the final iterations. This behavior guarantees that an algorithm eventually converges towards a point that could be a global optimum in the search space. Similarly, DGCO follows the same behavior by dividing the population into two teams: the exploration team and the exploitation team. In addition, DGCO controls the number of individuals in each team dynamically by starting with a higher number of individuals in the exploration team in the early iterations. Then, the number of exploring individuals decreases over the course of iterations to emphasize exploitation.
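The shrinking exploration team described above might be scheduled as sketched below. The linear schedule, the initial fraction, and the minimum group size are illustrative assumptions, not the paper's exact values; the sketch only shows the qualitative behavior of large-then-small exploration with a guaranteed floor.

```python
def exploration_group_size(iteration, max_iterations, pop_size,
                           initial_fraction=0.7, min_explorers=2):
    """Dynamic group sizing sketch: start with a large exploration group,
    shrink it linearly over the iterations, and always keep a few
    explorers active until the last iteration."""
    fraction = initial_fraction * (1.0 - iteration / max_iterations)
    return max(min_explorers, round(fraction * pop_size))
```

The remainder of the population (`pop_size` minus this value) forms the exploitation team, so exploitation automatically intensifies as the run progresses.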
Unlike most of the optimizers, DGCO guarantees that some individuals are working in the exploitation team from the early iterations. This explains the fast convergence behavior reported in Figure 4, where the convergence behavior of DGCO is compared to that of PSO, GA, and GWO for some benchmark functions. Figure 4 shows that DGCO tends to converge very quickly compared to the other optimizers on most of the benchmark functions. This is due to the ability of DGCO to control the balance between exploration and exploitation and to start exploitation in the early iterations.
Moreover, the integrated elitism strategy improves the convergence performance of DGCO and ensures the movement of the best solution found in one iteration to the next iteration.

G. CONVERGENCE TIME
In this section, we study the execution time needed by the examined algorithms to find the global optimum of different benchmark functions. In our experiments, we used the same configuration as in Table 4 and added a stopping condition for all algorithms so that, in each run, an algorithm stops as soon as it finds the global optimum of a benchmark function (as described in Table 2 and Table 3). The average and standard deviation of the convergence time of each algorithm for each benchmark function over 30 runs are reported in Table 8, together with the average number of calls to the fitness function (average function evaluations, FEs). The number of function evaluations represents the computational effort required for each algorithm to find the global optimum of an optimization problem. As shown in Table 8, the merit of DGCO appears in its ability to converge very quickly towards the global optimum.
In Table 8, DGCO is the best algorithm in terms of convergence time for most of the benchmark functions compared to the other surveyed optimization algorithms. Consequently, DGCO also has the lowest number of function evaluations, as reported in the same table. Being able to find the global optimum very quickly with a lower number of function evaluations shows that DGCO requires less computational effort and that it is very competitive compared to the other optimization algorithms. As discussed in the convergence behavior analysis above, the reason for the fast convergence of DGCO is that the algorithm guarantees that some individuals are working in the exploitation group from the early iterations, and the number of these individuals is increased over the course of iterations. This behavior guarantees that the algorithm searches around the best solutions found in early iterations and finds better solutions very quickly.
Based on our experiments, we found that some algorithms, such as PSO and DE, are able to find the global optimum of some benchmark functions, but only after a higher number of iterations (around 3000-5000), which increases the convergence time and the number of function evaluations as well. Moreover, DGCO is equipped with the elitism technique, which guarantees that the best solution found in one iteration will not be lost in the next iteration. This helps improve the convergence process of the algorithm.
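Counting fitness-function calls and detecting the stopping condition, as used for Table 8, can be done with a small wrapper around the objective. This is a generic sketch, not the instrumentation used in the paper; the class name and tolerance are assumptions.

```python
class CountedObjective:
    """Wrap an objective to count function evaluations (FEs) and flag
    when the known global optimum has been reached within a tolerance."""

    def __init__(self, fn, optimum, tol=1e-6):
        self.fn, self.optimum, self.tol = fn, optimum, tol
        self.evaluations = 0
        self.converged = False

    def __call__(self, x):
        self.evaluations += 1
        value = self.fn(x)
        if abs(value - self.optimum) <= self.tol:
            self.converged = True
        return value

# Example: sphere function with known optimum 0.
sphere = CountedObjective(lambda x: sum(v * v for v in x), optimum=0.0)
sphere([1.0, 2.0])   # not optimal
sphere([0.0, 0.0])   # reaches the optimum -> converged flag set
```

An optimizer's main loop can then check `sphere.converged` each iteration and stop early, and `sphere.evaluations` gives the FE count to report.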
Based on the results reported in Tables 5-9 and Figure 4, DGCO proved to be a promising competitor to existing optimization algorithms in terms of convergence behavior, convergence time, and the computational effort required to reach the optimum.

V. DGCO FOR CONSTRAINED ENGINEERING DESIGN PROBLEMS
In order to examine its performance on real-world constrained engineering design optimization problems, we have applied DGCO to five well-known engineering design problems: pressure vessel design, tension/compression spring design, welded beam design, speed reducer design, and Himmelblau's nonlinear optimization problem. The general constrained optimization problem, with equality, inequality, and bound constraints, can be expressed as follows:

Minimize f(x)
Subject to:
  h_j(x) = 0, j = 1, ..., p
  g_j(x) ≥ 0, j = 1, ..., m
  l_i ≤ x_i ≤ u_i, i = 1, ..., n

where f(x) is the objective function, h_j(x) = 0 are the equality constraints, p is the total number of equality constraints, g_j(x) ≥ 0 are the inequality constraints, m is the total number of inequality constraints, l_i and u_i are the lower and upper bounds of x_i respectively, and n is the number of design variables (the dimension of the solution) [58]. Since these constrained optimization problems have several equality and inequality constraints, we have employed scalar penalty functions [59] to model and handle these constraints. By using the penalty functions, the constrained optimization problem is converted into an unconstrained one. The new objective function can be described as follows:

ϕ_k(x) = f(x) + p_k Σ_i g(c_i(x)), where g(c_i(x)) = (max(0, c_i(x)))²

where g(c_i(x)) is the penalty function, c_i(x) are the constraints of the problem, ϕ_k(x) is the new objective function, f(x) is the original objective function of the problem, and p_k is the penalty coefficient. By applying this penalty rule, we guarantee that any individual that violates the constraints is assigned a very high objective function value. For these constrained optimization problems, we used penalty functions similar to those adopted in [60] and set the penalty coefficient to p_k = 10^15.
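The penalty rule above can be written as a small wrapper that converts a constrained problem into an unconstrained one. In this sketch each constraint is assumed to be expressed in the c_i(x) ≤ 0 form, so max(0, c_i(x))² is positive exactly when the constraint is violated.

```python
def penalized(f, constraints, p_k=1e15):
    """Build the penalized objective phi_k(x) = f(x) + p_k * sum(max(0, c_i(x))^2),
    with each constraint written as c_i(x) <= 0. Infeasible points receive
    a very large objective value, as in the paper's penalty rule."""
    def phi(x):
        penalty = sum(max(0.0, c(x)) ** 2 for c in constraints)
        return f(x) + p_k * penalty
    return phi

# Toy example: minimize x^2 subject to x >= 1 (written as 1 - x <= 0).
phi = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
```

A minimizer applied to `phi` is then driven away from infeasible points, since any violation dominates the original objective by many orders of magnitude.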

A. PRESSURE VESSEL DESIGN PROBLEM
In the pressure vessel design problem, the main goal is to minimize the total cost of material, forming, and welding of the cylindrical pressure vessel shown in Figure 5. There are four continuous design variables and four inequality constraints. The objective can be formulated as follows:

Minimize f(T_s, T_h, R, L) = 0.6224 T_s R L + 1.7781 T_h R² + 3.1661 T_s² L + 19.84 T_h² L

subject to the four inequality constraints and the variable ranges of the standard formulation, where T_s, T_h, R, and L are the thickness of the shell, the thickness of the head, the inner radius, and the length of the cylindrical section of the vessel, respectively. This problem has been solved by many researchers using different techniques such as GA [60], PSO [61], GWO [22], and mathematical methods like the augmented Lagrangian multiplier (ALM) [62]. Table 9 shows the performance of DGCO on this problem. This table shows the optimal values of the design variables corresponding to the best solution obtained by DGCO, GA [60], PSO [61], GWO [22], WOA [26], ALM [62], and GSA [26]. As can be seen, DGCO outperforms the other optimization algorithms, and it is able to find the best feasible optimal design for the pressure vessel design problem. Table 10 compares the statistical results obtained by DGCO and some of the other algorithms when solving the pressure vessel design problem over 30 independent runs. We have used 20 individuals over 500 iterations to solve this problem. It may be observed in this table that DGCO has the best average compared to the other methods. In addition, DGCO outperforms the other methods in finding the best optimal design with the least number of fitness evaluations. The integrated exploration and exploitation techniques in DGCO assist in finding the best optimal design variables for this problem. Moreover, being able to find the optimal values with the least number of fitness evaluations is another proof of the fast convergence behavior of DGCO.
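For reference, the problem can be coded as below. The constraint forms are taken from the widely used standard formulation of this benchmark and are an assumption here, since the printed constraint equations are not reproduced in this text.

```python
import math

def pressure_vessel_cost(ts, th, r, l):
    """Pressure vessel objective: total cost of material, forming, and welding."""
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * th ** 2 * l)

def pressure_vessel_constraints(ts, th, r, l):
    """The four inequality constraints of the standard formulation,
    written in c(x) <= 0 form."""
    return [
        -ts + 0.0193 * r,                # shell thickness vs. radius
        -th + 0.00954 * r,               # head thickness vs. radius
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,  # volume
        l - 240.0,                       # length limit
    ]
```

Combined with the penalty wrapper of the previous section, these two functions give the unconstrained objective that the optimizer actually minimizes.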

B. TENSION/COMPRESSION SPRING DESIGN PROBLEM
The objective of this problem is to minimize the weight of the spring (shown in Figure 6) subject to constraints on deflection, stress, surge frequency, and geometry [48]. The problem involves three continuous variables and four nonlinear inequality constraints. The design variables are the wire diameter (w), the mean coil diameter (d), and the length or number of coils (L). The objective of the spring design problem can be expressed as follows:

Minimize f(w, d, L) = (L + 2) d w²

subject to the four nonlinear inequality constraints and the variable ranges of the standard formulation. Table 11 illustrates the performance of DGCO on this problem. This table shows the optimal values of the design variables and the optimal cost obtained by DGCO, GA [60], PSO [61], GSA [26], DE [48], GWO [22], and WOA [26]. It may be observed in this table that DGCO outperforms the other methods, and it is able to find the optimal design for the tension/compression spring design problem.
The statistical results of DGCO and some of the other algorithms when solving the tension/compression spring design problem are presented in Table 12. To solve this problem, we have used 20 individuals, a maximum of 500 iterations, and 20 independent runs. According to the results in this table, DGCO was the second-best optimizer in terms of the average. In addition, DGCO was able to find the best optimal design for the problem with the least number of function evaluations. These results demonstrate the ability of DGCO to thoroughly explore the search space and quickly converge towards the optimal goal.
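A sketch of this problem in code is given below, using the standard constraint set of the literature (an assumption here, since the constraint equations are not reproduced in this text).

```python
def spring_weight(w, d, l):
    """Tension/compression spring objective: weight (L + 2) * d * w^2."""
    return (l + 2.0) * d * w ** 2

def spring_constraints(w, d, l):
    """The four nonlinear inequality constraints of the standard
    formulation, written in c(x) <= 0 form."""
    return [
        1.0 - (d ** 3 * l) / (71785.0 * w ** 4),                      # deflection
        (4.0 * d ** 2 - w * d) / (12566.0 * (d * w ** 3 - w ** 4))
        + 1.0 / (5108.0 * w ** 2) - 1.0,                              # shear stress
        1.0 - 140.45 * w / (d ** 2 * l),                              # surge frequency
        (w + d) / 1.5 - 1.0,                                          # geometry
    ]
```

As with the pressure vessel, these functions plug directly into the penalty wrapper to form the unconstrained objective.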

C. WELDED BEAM DESIGN PROBLEM
This problem is considered one of the standard engineering optimization problems [63]. It involves four design variables: the width w and the length L of the welded area, and the depth h and the thickness d of the main beam. The objective of this problem is to minimize the fabrication cost of the welded beam (shown in Figure 7) subject to constraints on the shear stress τ, the bending stress σ, the buckling load P, and the maximum end deflection δ. The objective can be formulated as follows:

Minimize f(w, L, d, h) = 1.10471 w² L + 0.04811 d h (14.0 + L)

subject to the constraints and variable ranges of the standard formulation. Table 13 lists the optimal design variables and the corresponding optimal cost obtained by DGCO, GA [60], PSO [61], GSA [26], the Ray Optimization algorithm (RO) [64], and WOA [26]. As shown in the table, DGCO outperforms the other methods, and it is able to find the best feasible optimal design variables for the welded beam design problem.
The statistical results of DGCO and some of the other algorithms when solving the welded beam design problem are presented in Table 14. We used 20 individuals over 500 iterations and 20 runs. As may be seen in the table, DGCO has the third-best average. However, DGCO finds the best optimal design with the least number of function evaluations.

D. SPEED REDUCER DESIGN PROBLEM
The objective of this problem is to minimize the weight of the speed reducer (shown in Figure 8). The problem involves seven design variables and eleven constraints. The design variables are: the face width x1, the module of the teeth x2, the number of teeth on the pinion x3, the length of the first shaft between bearings x4, the length of the second shaft between bearings x5, the diameter of the first shaft x6, and the diameter of the second shaft x7. The objective can be formulated as follows:

Minimize f(x) = 0.7854 x1 x2² (3.3333 x3² + 14.9334 x3 - 43.0934) - 1.508 x1 (x6² + x7²) + 7.4777 (x6³ + x7³) + 0.7854 (x4 x6² + x5 x7²)

subject to the eleven constraints and the variable ranges of the standard formulation. Table 15 shows the performance of DGCO on the speed reducer design problem. This table shows the optimal values of the design variables corresponding to the best solution obtained by DGCO, GA, PSO, GWO, and WOA. As can be seen, DGCO outperforms the other optimization algorithms, and it is able to find the best feasible optimal design for the speed reducer design problem. Table 16 compares the statistical results obtained by DGCO and some of the other methods when solving the speed reducer design problem over 30 independent runs. We have used 20 individuals over 500 iterations to solve this problem. It may be observed in this table that DGCO has the best average compared to the other methods. Moreover, DGCO outperforms the other methods in finding the best optimal design with the least number of fitness evaluations. The complexity of this constrained engineering problem and the superiority of DGCO on it show that DGCO can handle constrained engineering problems, and they reflect the high exploration and exploitation capabilities of DGCO.

E. HIMMELBLAU'S NONLINEAR OPTIMIZATION PROBLEM
This problem is one of the well-known benchmark problems for optimization algorithms; it was proposed by Himmelblau. The problem involves five design variables and six nonlinear inequality constraints. Table 17 illustrates the performance of DGCO and some of the other methods on this problem. This table includes the optimal design variables and the corresponding optimal cost obtained by DGCO, GA, PSO, GWO, and WOA. As may be seen in this table, DGCO outperforms the other methods, and it is able to find the optimal design for Himmelblau's nonlinear optimization problem.
The statistical results obtained by DGCO and the other methods are illustrated in Table 18. We have used 20 individuals over 500 iterations and 20 independent runs. As may be seen in this table, DGCO has the best average for this problem. In addition, DGCO was able to find the optimal variables with the least number of function evaluations, which confirms the fast convergence behavior of DGCO. These results prove that DGCO can be used to solve constrained problems efficiently.

VI. SENSITIVITY ANALYSIS OF DGCO PARAMETERS
In this section, we analyze the sensitivity of DGCO with respect to its parameters. DGCO has five parameters: population size, iterations count, mutation rate, exploration percentage, and the vector K. Table 19 lists the parameters of DGCO along with their descriptions and default values. These parameters control the performance of the algorithm in solving optimization problems, and any change in a single parameter may affect the optimization process. Hence, a sensitivity analysis of these parameters is conducted in order to collect information that will help tune the algorithm in future runs. To perform the sensitivity analysis of DGCO's parameters, we selected the multimodal benchmark function F23, as described in Table 3.

A. ONE-AT-A-TIME SENSITIVITY ANALYSIS
To perform the sensitivity analysis, we have applied the One-at-a-Time (OAT) sensitivity measure [66], which is considered one of the simplest approaches to sensitivity analysis. OAT consists of measuring the performance of the algorithm while changing one parameter at a time and keeping the other parameters constant. Table 20 and Table 21 list the observed changes in the convergence time and fitness values of DGCO while changing the values of the different parameters. As shown in the two tables, we have selected 20 different values in the interval of each parameter by repeatedly adding 5% of the length of that interval to obtain a new value for evaluation. For each of those values, the algorithm was run 10 times, and the averages of both time and fitness are reported in the tables; each parameter therefore involved 200 different runs of DGCO. Figure 9 shows the convergence time and fitness curves for each parameter: each parameter has two curves, one for convergence time and another for fitness. The iterations count and the population size proved to be the parameters with the strongest effect on the convergence time of the algorithm. This is expected, since increasing the number of individuals or iterations increases the number of calls to the objective function, which in turn increases the computational cost and the convergence time. On the other hand, higher values of the vector K slightly decrease the convergence time.
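The OAT procedure can be sketched generically as follows; the parameter names and the measured quantity are placeholders, and the 20-step sampling mirrors the 5%-of-interval increments described above.

```python
def oat_values(lower, upper, steps=20):
    """Generate OAT sample values by repeatedly adding 5% (1/steps)
    of the interval length to the lower bound."""
    step = (upper - lower) / steps
    return [lower + i * step for i in range(1, steps + 1)]

def one_at_a_time(evaluate, defaults, ranges):
    """One-at-a-time sensitivity: vary each parameter over its range while
    holding the others at their defaults; record the measured output."""
    results = {}
    for name, (lo, hi) in ranges.items():
        rows = []
        for v in oat_values(lo, hi):
            params = dict(defaults, **{name: v})
            rows.append((v, evaluate(params)))
        results[name] = rows
    return results
```

In the paper's setting, `evaluate` would run DGCO 10 times with the given parameters and return the average time or fitness, producing one row of Table 20 or Table 21 per sampled value.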
Moreover, exploration percentages greater than 20 have shown a notable effect on the convergence time of the algorithm, and mutation rate values lower than 0.6 have shown a clear effect on it as well. Regarding fitness, increasing the iterations count or the vector K enhances the fitness achieved by the algorithm, and population sizes greater than 50 search agents enhance it as well.

B. REGRESSION ANALYSIS
For a further study of how the parameters explain the variations in the performance of the algorithm, a regression analysis has been performed. Regression analysis is adequate when we need to predict the value of a dependent variable (the performance of the algorithm) from the value of an independent variable (a parameter). Table 22 lists the results of the regression analysis relating the parameters of DGCO to convergence time and fitness. The R-square value indicates how much of the total variation in time or fitness can be explained by the values of the parameter. As listed in Table 22, the iterations count and the population size have the highest R-square values for convergence time, which means that they explain the variation in convergence time very well. The vector K explains 57.59% of the variation in the convergence time of the algorithm, while the exploration percentage and the mutation rate explain 37.28% and 24.81% of that variation, respectively. In Table 22, a significance F value lower than 0.05 indicates that the regression model is statistically significant for predicting the performance of the algorithm.
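The R-square figures reported here come from simple linear regressions of one response (time or fitness) on one parameter. Such a coefficient of determination can be computed directly, as sketched below with plain Python.

```python
import statistics

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear regression of ys on xs:
    fit y = intercept + slope * x by least squares, then compare the residual
    sum of squares to the total sum of squares."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

An R-square near 1 means the parameter alone explains almost all the observed variation, which is what Table 22 reports for the iterations count and the population size against convergence time.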

C. STATISTICAL SIGNIFICANCE OF THE RESULTS
In order to determine whether there is any statistically significant difference between the means of the different observations listed in Table 20 and Table 21, we used a one-way analysis of variance (ANOVA). We conducted two ANOVA tests, one for the convergence time and one for the fitness values, while changing the parameters of DGCO. Table 23 lists the results of the ANOVA tests for the convergence time and the minimum fitness of DGCO. In Table 23, we can see that the p-values are less than 0.05 and F is greater than F-critical. Therefore, there is a statistically significant difference between the means of the five groups of convergence time observations obtained by changing the values of each parameter, and likewise between the means of the five groups of minimum fitness observations.
The ANOVA test does not indicate which groups differ significantly; therefore, a post hoc test is conducted between every pair of groups. For this purpose, we have employed a one-tailed t-test at the 0.05 significance level. Table 24 lists the results of the t-tests conducted for each pair of parameters based on the values observed for the convergence time and the minimum fitness of DGCO. As listed in the table, p-values less than 0.05 indicate a statistically significant difference between groups. For the convergence time, the p-value is greater than 0.05 for the t-test between the exploration percentage and the mutation rate, which indicates that there is no statistically significant difference between their effects on the convergence time. In addition, the mutation rate and the iterations count show no statistically significant difference in minimum fitness, and the same holds for the mutation rate versus the vector K and for the iterations count versus the vector K.
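The one-way ANOVA F statistic used here can be computed directly as the ratio of between-group to within-group variance; the toy groups below are illustrative, not the paper's observations.

```python
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic: mean square between groups divided by
    mean square within groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total number of observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When F exceeds the critical value for the corresponding degrees of freedom (equivalently, when the p-value drops below 0.05), at least one group mean differs significantly, which is then localized by the pairwise post hoc t-tests.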

D. DISCUSSION AND RANKING OF PARAMETERS
According to Table 22, the parameters of DGCO can be ranked by their effect on the fitness values as follows: vector K, iterations count, mutation rate, exploration percentage, and population size. Likewise, they can be ranked by their effect on the convergence time as follows: iterations count, population size, vector K, exploration percentage, and mutation rate. The mutation rate thus has the least effect on the convergence time of DGCO. Note that DGCO uses mutation as one of its two techniques for exploring the search space, which means that only half of the exploration group uses mutation; moreover, the main role of mutation is to avoid stagnation in local optima, while the convergence speed depends mainly on the exploitation group. All of this explains why the mutation rate has the least effect on the convergence of the algorithm.
In summary, the convergence time of DGCO is highly sensitive to the population size and the iterations count, and it is very responsive to exploration percentage values greater than 25. The convergence time also responds noticeably to K values greater than one and to mutation rate values greater than 0.7. The fitness value is strongly affected by population sizes greater than 70, and the vector K has shown a good impact on fitness for values greater than one. The fitness of the algorithm is only slightly affected by the iterations count, the mutation rate, and the exploration percentage.

VII. CONCLUSION
In this paper, we proposed a novel meta-heuristic optimization algorithm called the Dynamic Group-based Cooperative Optimization Algorithm (DGCO). It is inspired by the way the individuals of a group cooperate to achieve a global goal. DGCO divides the individuals into two groups: an exploration group and an exploitation group. The exploration group applies two different techniques (searching around individuals and mutation) to thoroughly explore the search space for promising areas. On the other hand, the exploitation group applies two techniques (moving towards the leader and searching around the leader) to derive better solutions from good ones. DGCO dynamically controls the number of individuals in each group to guarantee the balance between exploration and exploitation.
A comparative study has been conducted on twenty-three standard benchmark mathematical functions to evaluate the performance of the proposed algorithm in finding the optimal points of each function and to study its convergence behavior. The obtained results showed that DGCO is very competitive with state-of-the-art meta-heuristic algorithms. DGCO has a fast convergence behavior due to its high exploration and exploitation capabilities and the ability to avoid local optima.
Moreover, DGCO has been tested on constrained engineering design optimization problems (pressure vessel design, tension/compression spring design, welded beam design, speed reducer design, and Himmelblau's nonlinear optimization problem). The results showed that DGCO is able to solve real-world constrained optimization problems. A sensitivity analysis of the parameters of DGCO has been conducted to study their impact on the performance of DGCO.
For future work, we are planning to develop parallel and multi-objective versions of DGCO. Moreover, DGCO will be applied to solve further optimization problems.
EL-SAYED M. EL-KENAWY (Member, IEEE) is currently an Assistant Professor with the Delta Higher Institute for Engineering and Technology (DHIET), Mansoura, Egypt. He has inspired and motivated students by providing a thorough understanding of a variety of computer concepts. He has pioneered and launched independent research programs. He is interested in computer science and machine learning fields. He is adept at explaining sometimes complex concepts in an easy-to-understand manner.