A New Metaphor-Free Metaheuristic Approach Based on Complex Networks and Bezier Curves

A metaheuristic method is an optimization technique that is generally inspired by natural or physical processes. The use of metaphors has created a tendency to reproduce existing algorithms with slight modifications or variations rather than encouraging the development of novel algorithmic techniques and principles. On the other hand, a complex network is a mathematical structure whose main characteristic is the ability to capture and analyze the intricate patterns and properties that emerge from the interactions between the elements that it connects. In this paper, a new metaphor-free metaheuristic algorithm based on complex networks and Bezier curves is presented. In this approach, candidate solutions are represented as nodes in a graph, whereas the connections between nodes or edges reflect the differences in their objective function values. Therefore, the graph provides a higher-level representation that captures the essential relationships and dependencies among the solutions. Once the graph is generated, the shortest path between each solution and the best solution is obtained. Then, the nodes obtained from this process are used as control points in the Bezier equation to generate the new agent position. Therefore, during the optimization process, the graph is continuously modified based on the evaluation of new candidate solutions and their objective function values, producing trajectories that allow the exploration and exploitation of the search space. The experimental results demonstrated the effectiveness of our approach by achieving competitive results compared to other well-known metaheuristic algorithms on various benchmark functions.


I. INTRODUCTION
Optimization [1] is the process of determining the best solution for a problem within a given set of constraints. This involves maximizing or minimizing an objective function while satisfying specific conditions or limitations. Optimization methods can be broadly classified into two categories: classical and metaheuristic. Classical optimization methods refer to a set of well-established mathematical techniques that aim to find the optimal solution through systematic mathematical analysis and calculations. Metaheuristic optimization methods [2] are general-purpose algorithms inspired by natural or physical processes or social behavior. Metaheuristic techniques offer several advantages [3] over classical optimization methods. They do not require explicit mathematical models and can handle complex, large-scale, or non-differentiable problems. (The associate editor coordinating the review of this manuscript and approving it for publication was Nazar Zaki.)
Metaheuristic methods can be classified into several categories [2] based on their underlying principles, sources of inspiration, or search strategies. Nevertheless, the most common classification divides these techniques into evolution-based, swarm intelligence, and physics-based algorithms [4], [6]. Evolution-based algorithms simulate biological evolution by employing selection, crossover, mutation, and reproduction operators to generate better candidate solutions. The most popular algorithms in this category are the Genetic Algorithm (GA) [7], Genetic Programming [8], Tabu Search (TS) [9], Differential Evolution (DE) [10], Evolutionary Programming [11], and Evolutionary Strategies [12]. Swarm intelligence algorithms are based on the collective social behavior of animals and insects. In this case, candidate solutions are obtained through the interactions between individuals and their environments. The most important metaheuristic algorithms based on swarm intelligence include Particle Swarm Optimization (PSO) [13], the Bat Algorithm (BA) [14], the Cuckoo Search Algorithm (CS) [15], the Crow Search Algorithm (CSA) [16], and Grey Wolf Optimization (GWO) [17]. Finally, physics-based algorithms use the universe's physical rules to define the search operators and find candidate solutions. Examples of such algorithms include Simulated Annealing (SA) [18], Harmony Search (HS) [19], the Sine Cosine Algorithm (SCA) [20], and the State-of-Matter Search (SMS) [21].
Metaheuristic algorithms have demonstrated their superiority in several real-world applications where classical techniques cannot be used [22], [23], [24], [25], [26], [27], [28]. However, no metaheuristic technique can solve all problems competitively, owing to the inherent complexity and diversity of optimization problems [4]. Each problem has unique characteristics and requires specific strategies to achieve optimal solutions. Therefore, a single metaheuristic algorithm cannot be universally effective for all problem types. Additionally, many metaheuristic algorithms have limitations such as a slow convergence rate, difficulty handling high dimensionality, sensitivity to parameters and noise, and convergence to local optima. Under such conditions, introducing new methods is necessary to expand the capabilities of metaheuristics and improve their applicability to various problem domains [5]. By developing novel techniques, researchers attempt to address a wider range of problems and enhance the effectiveness and efficiency of optimization algorithms. New approaches can provide novel ways to overcome the limitations of existing metaheuristic algorithms, improving the convergence rate, solution quality, and robustness.
Traditional metaheuristic algorithms incorporate metaphors or analogies from various natural or physical systems to guide the search process. Metaphors have played a significant role in developing metaheuristic algorithms and have provided valuable insights and inspiration. However, potential drawbacks and limitations are associated with the use of metaphors in this context [29]. The exclusive use of metaphor-based principles may hinder innovation in the design of algorithms. It can create a tendency to reproduce existing algorithms with slight modifications or variations rather than encouraging the exploration of novel algorithmic techniques and principles [30]. This limits the potential for algorithmic breakthroughs and advancements. In contrast, a metaphor-free metaheuristic algorithm [31] is an optimization algorithm that does not rely on any specific metaphor or analogy to guide the search process. Such algorithms are designed to avoid any direct reference to particular metaphors or analogies. Instead, they focus on developing algorithmic mechanisms based on the combination of mathematical and computational principles [32]. The most important advantage of metaphor-free metaheuristic algorithms over classical metaheuristics is their potential for novel and creative problem-solving approaches. Metaphor-free methods can explore unconventional search operators based on mature mathematical and computational principles that have already been successful in various scientific areas.
Complex networks [33] are mathematical and computational models used to represent and analyze relationships or interactions between entities or components in a system. They present a structure consisting of nodes (also called vertices) and edges. Nodes represent individual entities or elements, and edges represent the connections or relationships between them. The main characteristic of complex networks, compared to other modeling techniques, is their ability to capture and analyze the intricate patterns and properties that emerge from the relationships and interactions between the components of a system. One notable characteristic of complex networks is that the number of nodes and edges can dynamically change over time [34]. This dynamic nature enables the representation of systems that undergo adaptation or reconfiguration. As a modeling technique, complex networks have found notable applications in various fields, including social networks [35], [36], [37], economic analysis [38], [39], and telecommunications and energy networks [40], [41], [42], [43], [44].
Bezier curves are polynomial curves used for geometry approximation [45]. Bezier curves are defined by a set of control points that influence their shapes. The control points act as handles that guide the curve, allowing precise control over its curvature and direction. Each point of the trajectory is computed using Bernstein polynomials, which calculate the position of the point for any given number of control points. By varying the positions and number of control points, different Bezier curves can be created, ranging from straight lines to complex curves. Bezier curves have been combined with metaheuristic methods to create interesting applications [46], [47], [48], [49], [50], [51], [52]. In these approaches, metaheuristic techniques have been employed to manipulate the control points of Bezier curves, resulting in meaningful shapes within a specific domain. The curves can be tailored to satisfy certain criteria or constraints by optimizing the positions of the control points. However, the proposed method uses a different approach. Instead of using metaheuristic techniques to modify the control points of the Bezier curves for shape manipulation, we utilize the trajectories generated by the Bezier curves to create new candidate solutions that explore and exploit the search space. In this method, each control point in the Bezier curve corresponds to a candidate solution, and the curve defines a trajectory that guides the movement of each agent in the search space.
In this paper, a new metaphor-free metaheuristic approach based on complex networks and Bezier curves is presented. This method generates graphs that represent the fitness relationships between the possible solutions. The shortest path between each solution and the best solution is then obtained.
The nodes obtained from this process are used as control points in the Bezier equation to generate feasible agent trajectories. During the optimization process, the graph is continuously modified based on the evaluation of new candidate solutions and their objective function values, producing trajectories that allow the exploration and exploitation of the search space. This methodology reduces the complexity of high-dimensional functions by modeling only the relationships between search agents using graphs. A set of benchmark functions, including multimodal, unimodal, and hybrid functions, was used to compare the performance of the proposed approach numerically and statistically with that of several state-of-the-art metaheuristic algorithms. The experimental results indicated that the proposed approach produces competitive results in terms of accuracy and robustness.
The remainder of this paper is organized as follows: In Section II, the preliminary concepts of complex networks, the shortest path problem, and Bezier curves are presented. The proposed method is explained in Section III. The experimental results and comparative analysis are presented in Section IV. Finally, in Section V, conclusions are offered.

II. PRELIMINARY CONCEPTS
The main objective of this section is to provide an overview and discussion of the most important concepts that form the foundation of the proposed approach.By addressing these concepts, we aim to establish a common understanding of the fundamental elements and principles that underlie the proposed method.

A. COMPLEX NETWORKS
Complex networks refer to the study and analysis of systems composed of interconnected elements, represented as nodes or vertices, and the relationships between them, expressed as edges or links. In a complex network, the nodes can represent various elements, such as individuals, organizations, or candidate solutions, while the edges capture the connections, interactions, or dependencies between these elements. Complex networks, as a modeling technique, possess notable characteristics that enable the analysis of interconnected systems. They capture the intricate patterns of connectivity and topology, revealing features such as distributions and clusters. With unique structural properties, complex networks facilitate efficient information flow and exhibit power-law degree distributions. They offer various network analysis techniques, including metrics, centrality measures, and community detection algorithms. Moreover, the dynamic nature of complex networks [53] allows for modeling growth, adaptation, and reconfiguration, enabling the study of evolving systems and dynamical phenomena. Complex networks also represent important mechanisms for simplifying high-dimensional problems by representing them as configurations of nodes and edges. In high-dimensional problems, the number of variables or dimensions involved can be overwhelming, making analysis and understanding challenging. However, by employing complex networks, these problems can be transformed into a network representation where nodes represent the variables or components, and edges denote the relationships or interactions between them.
A simple network (or graph) consists of a set of nodes (or vertices) and a set of edges (or links) that connect nodes. The mathematical definition of a network is shown in equation (1):

G = (V, E, f),  (1)

where V is a finite set of nodes, E ⊆ V ⊗ V = {e_1, e_2, . . ., e_m} is a set of links, and f is a mapping that associates some elements of E with a pair of elements of V, such that if v_i ∈ V and v_j ∈ V, then f: e_p → (v_i, v_j) and f: e_q → (v_j, v_i) [33].
Graphs can maintain weights on their links; these are known as weighted graphs [34]. In a weighted graph, each edge or link between nodes is assigned a numerical value called a weight. These weights represent some measure of importance, distance, cost, or any other relevant quantity associated with the connection.
Graphs can be arranged in a matrix known as an adjacency matrix. It should be mentioned that a vertex is adjacent to another vertex if there is an edge between them. The adjacency matrix A is defined as the |V| × |V| matrix, where |V| is the number of nodes and each element A_ij contains the mapping relation between nodes i and j (usually represented by the edge weight). If the network has no self-loops, the diagonal elements of A are zero.
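As a minimal illustration of this representation, the following sketch (with arbitrary example nodes and weights) builds the adjacency matrix of a small weighted, undirected graph; the helper name `adjacency_matrix` is not part of any standard library.

```python
def adjacency_matrix(n_nodes, weighted_edges):
    """Build the |V| x |V| adjacency matrix of an undirected weighted
    graph; A[i][j] holds the weight of the edge between nodes i and j,
    and the diagonal stays zero (no self-loops)."""
    A = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, w in weighted_edges:
        A[i][j] = w
        A[j][i] = w  # undirected graph: the matrix is symmetric
    return A

# Example: three nodes connected by three weighted edges.
A = adjacency_matrix(3, [(0, 1, 2.5), (1, 2, 1.0), (0, 2, 4.0)])
```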

B. SHORTEST PATH PROBLEM
The shortest-path problem (SPP) is a well-studied topic in computer science. In graph theory, communication between two non-adjacent nodes depends on the path connecting them. A path is defined as a sequence of nodes in which each successive node is adjacent to its predecessor [53]. In other words, given a set of vertices V, a source vertex u, a destination vertex v, where u, v ∈ V, and a set of weighted edges E over the set V, the shortest path minimizes the length criterion S(u, v) from source to destination, as shown in equation (2):

S(u, v) = min Σ_(i,j)∈P d(i, j),  (2)

where P denotes a path from u to v and d(i, j) denotes the weight of the edge connecting nodes i and j [54].
Dijkstra's algorithm [55], [56], [57] is a popular method for finding the shortest path between two nodes in a weighted graph. It is specifically designed to find the path with the minimal cost in terms of the edge weights of the graph. The algorithm employs a greedy approach to iteratively determine the shortest paths from a source node to all other nodes in the graph. It considers two types of vertices: solved and unsolved. First, a source vertex is defined and marked as solved. Then, all edges connecting the solved set to unsolved nodes are checked. Once the algorithm identifies the shortest link, it adds the corresponding vertex to the solved list. The algorithm iterates until all vertices are solved or the destination node is reached; it therefore does not need to analyze all links.
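A compact sketch of this procedure is given below. It assumes the graph is supplied as an adjacency matrix in which a zero entry means "no edge", and it stops as soon as the destination node is solved; the function name is an illustrative choice.

```python
import heapq

def dijkstra_path(A, source, target):
    """Shortest path between two nodes of a weighted graph given as an
    adjacency matrix A (0 means no edge). Returns the list of nodes on
    the minimal-cost path, including source and target."""
    n = len(A)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[source] = 0.0
    heap = [(0.0, source)]          # (distance, node): greedy frontier
    solved = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in solved:
            continue
        solved.add(u)
        if u == target:             # early exit: destination reached
            break
        for v in range(n):
            if A[u][v] > 0 and v not in solved:
                nd = d + A[u][v]
                if nd < dist[v]:
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node is not None:
        path.append(node)
        node = prev[node]
    return list(reversed(path))

# The detour 0 -> 1 -> 2 (cost 2) beats the direct edge 0 -> 2 (cost 4).
path = dijkstra_path([[0, 1, 4], [1, 0, 1], [4, 1, 0]], 0, 2)  # → [0, 1, 2]
```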

C. BEZIER CURVES
Bezier curves (BC) are a well-known standard tool in computer graphics modeling [45]. They are polynomial approximations based on a set of points known as control points. Figure 1 shows some examples of BC using different control points. As illustrated in the image, the curve is constructed by mapping the polynomial straight lines, where P_0 and P_3 are the initial and final points on the curve, respectively. In addition, the curve does not pass through P_1 or P_2, because these points are used to provide directional information. Depending on the number of control points, various curve shapes can be obtained.
BC use a linear combination of Bernstein polynomials, as shown in equation (3):

B(x) = Σ_{i=0}^{n} c_i B_i^n(x),  (3)

where B_i^n(x) are elements of a binomial distribution and c_i are the approximate function values generated over the range [a, b]. These terms are mathematically described by equations (4) and (5):

B_i^n(x) = C(n, i) ((x − a)/(b − a))^i ((b − x)/(b − a))^{n−i},  (4)

c_i = f(a + i (b − a)/n),  (5)

where C(n, i) is the binomial coefficient. Thus, a BC of degree n can be defined as shown in equation (6):

P(t) = Σ_{i=0}^{n} C(n, i) (1 − t)^{n−i} t^i P_i,  t ∈ [0, 1],  (6)

where P_i represents the control points. BC present the following characteristics: 1) convex hull: the curve is bounded by the convex hull of the control points; 2) symmetry: the opposite order of the control points produces the same curve with reverse parameterization; 3) affine invariance: any affine operation (translate, scale, rotate, or skew) applied to the control points affects the entire curve; 4) subdivision: any subdivision of a BC retains its properties; and 5) easy programming: the computation can be subdivided into simple recursive steps to compute high-order BC [58].
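The evaluation of equation (6) can be sketched as follows; `bezier_point` is an illustrative helper name, not part of any standard library.

```python
from math import comb

def bezier_point(control_points, t):
    """Evaluate a Bezier curve of degree n at parameter t in [0, 1]
    using the Bernstein basis: P(t) = sum_i C(n,i) (1-t)^(n-i) t^i P_i."""
    n = len(control_points) - 1
    d = len(control_points[0])
    point = [0.0] * d
    for i, P in enumerate(control_points):
        b = comb(n, i) * (1 - t) ** (n - i) * t ** i  # Bernstein weight
        for k in range(d):
            point[k] += b * P[k]
    return point

# A quadratic curve starts at P_0 and ends at P_2 but need not pass
# through the middle control point.
curve_start = bezier_point([(0.0, 0.0), (1.0, 2.0), (3.0, 0.0)], 0.0)  # → [0.0, 0.0]
```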

III. COMPLEX NETWORKS AND BEZIER CURVES AS A METAHEURISTIC METHOD
In this paper, a new metaphor-free metaheuristic algorithm that combines complex networks and Bezier curves is presented. For this reason, this approach is referred to throughout the manuscript as CNBC. The objective of the algorithm is to find the global solution of a nonlinear problem based on the formulation of an optimization problem described as follows:

Maximize/minimize J(x), subject to x ∈ X,  (7)

where J: R^d → R is a d-dimensional nonlinear function and X represents the search space X = {x ∈ R^d | l_i ≤ x_i ≤ u_i, i = 1, . . ., d} defined by the lower (l_i) and upper (u_i) bounds. Regardless of whether a metaheuristic method is classical or metaphor-free, it typically maintains a similar structure consisting of three main elements: initialization, movement operators, and a selection mechanism. In this section, we explain each of these elements for the proposed approach.

A. INITIALIZATION
Initialization is the first operation of the algorithm, and its objective is to generate an initial population of N agents, Ag = {a_1, . . ., a_N}, where each agent a_i represents a combination of decision variables a_i = (a_{i,1}, . . ., a_{i,d}). The agents can be initialized based on the specifications of a defined problem; however, their initial positions are typically set randomly. To make this possible, each decision variable a_{i,j}, where (i ∈ 1, . . ., N; j ∈ 1, . . ., d), is set to a numerical value uniformly distributed between the lower (l_j) and upper (u_j) bounds, as shown in equation (8):

a_{i,j} = l_j + rand(0, 1) · (u_j − l_j).  (8)
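This initialization step can be sketched as follows; the function name and the optional seed parameter are illustrative choices.

```python
import random

def initialize_population(n_agents, lower, upper, seed=None):
    """Random uniform initialization: each decision variable a[i][j] is
    drawn uniformly between its lower and upper bounds, as in Eq. (8)."""
    rng = random.Random(seed)
    d = len(lower)
    return [[lower[j] + rng.random() * (upper[j] - lower[j]) for j in range(d)]
            for _ in range(n_agents)]

# Example: five 2-dimensional agents with per-variable bounds.
pop = initialize_population(5, lower=[-2.0, 0.0], upper=[2.0, 10.0], seed=1)
```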

B. MOVEMENT OPERATORS
Once the population is initialized, an iterative process is executed to produce a new agent position in each iteration. In metaheuristics, exploration and exploitation are two important behaviors. The first focuses on an examination of the search space, looking for new candidate solutions. The second refines the search process to obtain a more accurate solution. For CNBC, both effects are produced by considering two main processes: the creation of a complex network and the generation of trajectories.

1) CREATION OF A COMPLEX NETWORK
In the proposed approach, the complex network is structured in a way that each node in the network represents an agent or a candidate solution from the population.These nodes capture the individual solutions that are being explored and evaluated.
On the other hand, the edges in the network represent the relationships between pairs of agents in terms of their fitness values. Specifically, the node n_i corresponds to the candidate solution a_i (i ∈ 1, . . ., N). The edge or link fitDif_{i,j} connecting node n_i to node n_j (j ∈ 1, . . ., N) represents the absolute difference between the fitness or objective values of the solutions associated with agents a_i and a_j. Therefore, the value of fitDif_{i,j} is computed as shown in equation (9):

fitDif_{i,j} = |z_i − z_j|,  (9)

where z_i = J(a_i) and z_j = J(a_j) symbolize the values in terms of the objective function J(·) produced by agents a_i and a_j, respectively. In Figure 2, we can observe an example where the agents or particles in the search space are associated with the construction of a complex network. The figure displays four agents {a_1, a_2, a_3, a_4} distributed in different positions within the search space. The graph visualizes how each agent is represented as a node {n_1, n_2, n_3, n_4} in the complex network. In this representation, the nodes of the graph correspond to the individual agents, and the edges connecting the nodes capture the relationships between them.
The weights assigned to these edges reflect the differences observed among the particles regarding their fitness values.
In Algorithm 1, the creation of the complex network is presented in lines 6 and 7.
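The weighted adjacency matrix of this fitness-difference network (equation (9)) can be sketched as follows; the function name is an illustrative choice.

```python
def fitness_difference_matrix(fitness):
    """Weighted adjacency matrix of the complex network: the edge weight
    between nodes i and j is the absolute difference of their objective
    values, |z_i - z_j|, and the diagonal is naturally zero."""
    n = len(fitness)
    return [[abs(fitness[i] - fitness[j]) for j in range(n)] for i in range(n)]

# Example: three agents with objective values 3.0, 1.0, and 6.0.
W = fitness_difference_matrix([3.0, 1.0, 6.0])
```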
Once the complex network has been constructed, the next step in the proposed approach is to identify the node (agent) with the best value in terms of the objective function. This node n_B represents the solution a_B that currently has the highest fitness or optimality within the population. By identifying the best node n_B, the algorithm determines the reference point or benchmark for evaluating the other candidate solutions. It serves as a guide for exploring and exploiting the search space.
After identifying the best node n_B, the proposed algorithm calculates the new position for each particle or agent in the population. Therefore, for each node n_i or agent a_i (excluding the node n_B that represents the best solution), Dijkstra's algorithm is applied to find the set of nodes SN_{i,B} that form the shortest path connecting node n_i with the best node n_B in the network (see Algorithm 1, line 9). In Figure 3, we can observe an example that demonstrates the application of Dijkstra's algorithm in a specific configuration of a complex network. The complex network comprises eight nodes, and the values of the edges represent the differences in fitness values for pairs of nodes. Each edge indicates the absolute difference in fitness between the corresponding nodes. To determine the new position of agent a_2 (or node n_2), the algorithm utilizes Dijkstra's algorithm. This algorithm is applied to find the set of nodes SN_{2,B} that form the shortest path between n_2 and the best node n_B in the network. By identifying this shortest path, we can determine the trajectory that agent a_2 should follow to approach the optimal solution. Upon applying Dijkstra's algorithm, the resulting set of nodes that form the shortest path between n_2 and n_B consists of five nodes, SN_{2,B} = {n_2, n_1, n_6, n_7, n_B}. In Figure 3, these five nodes are highlighted in red. As observed in Figure 3, Dijkstra's algorithm effectively determines the path with the minimum cost or weight in terms of the fitness differences among the candidate solutions. By following the principle of selecting the path with the minimal cost, Dijkstra's algorithm ensures that the trajectory chosen for a specific agent optimally balances the exploration and exploitation of the search space. The algorithm prioritizes the edges with lower weights, indicating smaller differences in fitness values, as it seeks to find the most efficient route towards the best solution.

2) GENERATION OF TRAJECTORY
For the generation of trajectories, for each agent a_i from the population Ag, the set SN_{i,B} of nodes obtained by Dijkstra's algorithm is used in Eq. (6) as control points. In this step, a series of points lying along the curve is generated. The number of points N_pts generated along the Bezier curve determines the granularity of the trajectory. A higher density of points results in a smoother trajectory but may also increase the computational complexity. In this paper, the value of N_pts has been set to 100.
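Sampling N_pts points along the curve whose control points are the shortest-path agents can be sketched as follows; the helper name `bezier_trajectory` is an illustrative choice.

```python
from math import comb

def bezier_trajectory(control_points, n_pts=100):
    """Sample n_pts evenly spaced parameter values t in [0, 1] along the
    Bezier curve defined by the shortest-path agents (Eq. 6), so the
    trajectory starts at the first control point and ends at the last."""
    n = len(control_points) - 1
    d = len(control_points[0])
    traj = []
    for s in range(n_pts):
        t = s / (n_pts - 1)
        pt = [0.0] * d
        for i, P in enumerate(control_points):
            b = comb(n, i) * (1 - t) ** (n - i) * t ** i  # Bernstein weight
            for k in range(d):
                pt[k] += b * P[k]
        traj.append(pt)
    return traj
```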
In the proposed method, updating the position of agent a_i involves dividing the trajectory into four segments, namely A, B, C, and D. Each segment corresponds to a quarter of the complete trajectory, containing 25 points. The purpose of dividing the trajectory into segments is to introduce variability in the agent's movement and exploration. When updating the position of agent a_i, two different positions p_1 and p_2 from the trajectory are considered. These positions are selected based on the results of the search strategy. The specific segments from which the positions are chosen depend on the algorithm's performance. To keep track of the algorithm's performance, a counter called Cont is implemented. Cont is incremented each time the algorithm fails to find a better solution than the current best solution. This counter serves as a measure of stagnation or lack of progress in the search. By monitoring the value of Cont, the algorithm can adapt its behavior and explore different segments of the trajectory to potentially discover new and better solutions. In Algorithm 1, the generation of the trajectory is represented by the GetBezierCurve function (line 10).
In the proposed method, the selection of points p_1 and p_2 from the trajectory is determined based on the performance of the algorithm, as indicated by the value of the counter Cont. Four different cases are considered, each corresponding to a different scenario or behavior of the algorithm during the search. The cases are determined by the value of the counter Cont and dictate which of the trajectory segments, namely A, B, C, or D, are used to select the points p_1 and p_2. The selection of the segments is important because it influences the exploration-exploitation balance of the algorithm. If the algorithm frequently finds better solutions (case I), indicating a successful search strategy, the points p_1 and p_2 are randomly selected from segment A. This case represents a scenario where the algorithm is actively exploring and discovering new, improved solutions. If the counter Cont indicates that the algorithm takes a bit longer to find good solutions (case II), the points p_1 and p_2 are selected from both segments A and B. This case reflects a slightly slower search strategy, where the algorithm explores more within the initial segments of the trajectory. Similarly, if the algorithm takes even longer to find good solutions (case III), the points p_1 and p_2 are randomly selected from segments A, B, and C. This case represents a scenario where the algorithm explores more extensively throughout the trajectory, including the earlier segments. Finally, if the algorithm struggles significantly in finding good solutions (case IV), the points p_1 and p_2 are randomly selected from all four segments, A, B, C, and D. This case reflects a more exploratory behavior, where the algorithm extensively explores different parts of the trajectory to overcome stagnation and find better solutions.
The selection of the appropriate case and segments for choosing points p_1 and p_2 is determined by the value of the counter Cont. This adaptive approach allows the method to adjust its behavior dynamically based on the success or difficulty encountered in the search process. Equation (10) specifies the different cases and the corresponding segments used for selecting points p_1 and p_2:

p_1, p_2 ∈ A (case I); A ∪ B (case II); A ∪ B ∪ C (case III); A ∪ B ∪ C ∪ D (case IV).  (10)

Once the points p_1 and p_2 have been determined based on the corresponding case, the new position of the agent a_i(k+1) is computed using equation (11):

a_i(k+1) = a_i(k) + rand(0, 1) · ((p_1 + p_2)/2 − a_i(k)).  (11)

This equation defines the calculation of the new position considering the current agent a_i(k) and the middle point of the selected points p_1 and p_2, weighted by a scaling factor rand(0, 1). The computational implementation of this step is shown in Algorithm 1, line 11.
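The four cases and the position update can be sketched as follows. This is one plausible reading of the mechanism: the exact thresholds on Cont separating cases I-IV are not stated in the text, so the values used here are assumptions, as are the helper names.

```python
import random

def select_segments(cont):
    """Map the stagnation counter Cont to the trajectory segments
    (cases I-IV) from which p1 and p2 may be drawn. The thresholds
    are illustrative; the text only states that larger Cont values
    unlock more segments."""
    if cont == 0:
        return ["A"]                  # case I: frequent improvement
    elif cont == 1:
        return ["A", "B"]             # case II
    elif cont == 2:
        return ["A", "B", "C"]        # case III
    return ["A", "B", "C", "D"]       # case IV: strong stagnation

def update_position(agent, trajectory, cont, rng=random):
    """Pick p1 and p2 from the allowed segments and move the agent
    toward their midpoint, scaled by a uniform random factor (one
    reading of the position-update rule)."""
    quarter = len(trajectory) // 4
    segments = {"A": trajectory[:quarter],
                "B": trajectory[quarter:2 * quarter],
                "C": trajectory[2 * quarter:3 * quarter],
                "D": trajectory[3 * quarter:]}
    allowed = select_segments(cont)
    p1 = rng.choice(segments[rng.choice(allowed)])
    p2 = rng.choice(segments[rng.choice(allowed)])
    r = rng.random()
    mid = [(a + b) / 2 for a, b in zip(p1, p2)]
    return [x + r * (m - x) for x, m in zip(agent, mid)]
```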
Bound checking is also implemented inside the UpdatePosition function (see Algorithm 1, line 11) to ensure that the updated point ua_i(k + 1) lies within the dimensional search space. Equation (12) shows the bound-checking method.
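A common bound-checking strategy, assumed here for illustration, is to clip each component back to its nearest bound:

```python
def clip_to_bounds(position, lower, upper):
    """Bound checking: any component that leaves the search space is
    clipped back to the nearest bound. (The exact repair strategy of
    Equation 12 is an assumption; clipping is a common choice.)"""
    return [min(max(x, lo), hi) for x, lo, hi in zip(position, lower, upper)]
```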
Figure 4 illustrates an example trajectory generated using the proposed approach. The trajectory aims to connect agent a_2 with the best element a_B identified in the complex network. By applying Dijkstra's algorithm, the set of nodes SN_{2,B} comprising the agents a_2, a_1, a_6, a_7, and a_B is determined as the shortest path. Using the Bezier curve, the trajectory is calculated and depicted in the figure. The trajectory is divided into four distinct parts, A, B, C, and D, each representing a quarter of the complete trajectory. These segments are indicated in the figure. Additionally, the figure shows the different cases that arise during the selection of points p_1 and p_2 for determining the new position of agent a_2. The cases depend on the performance of the algorithm and are reflected in the selection of the corresponding segments of the curve. If the algorithm consistently finds better solutions, the points p_1 and p_2 are selected randomly from segment A. If the algorithm takes slightly longer to find improved solutions, the two points are selected from segments A and B. This pattern continues for the other cases involving different combinations of segments. The figure also illustrates the hypothetical positions of points p_1 and p_2 for each case, providing a visual representation and understanding of the selection process.

3) SELECTION MECHANISM
In the proposed approach, the position of each agent is updated based on two conditions.The first condition evaluates the fitness of the new agent after its position is changed.If the new fitness value is better than the previous value, indicating an improvement, then the position is updated.
However, the second condition allows the possibility of updating the agent's position even if it does not result in a better fitness value. This condition is determined by an updating probability Up. If the updating probability is met, the agent's position is changed regardless of the fitness improvement (see Algorithm 1, lines 14-20). This introduces an element of exploration in the search process, allowing the agent to potentially explore new areas of the search space that may lead to better solutions. In this work, the value of Up has been set to 0.3.
By incorporating these two conditions, the algorithm balances exploitation (improving current solutions) and exploration (searching for potentially better solutions) during the optimization process. Agents whose fitness improves will update their positions. In contrast, agents with lower fitness values may still have a chance to explore new regions of the search space based on the updating probability.
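Combining both conditions, the selection mechanism can be sketched as follows; minimization is assumed here, matching the worked example later in this section, and the function name is illustrative.

```python
import random

def select(old_agent, old_fitness, new_agent, new_fitness, up=0.3, rng=random):
    """Greedy selection with probabilistic acceptance: keep the new
    position if it improves the fitness (minimization assumed), or
    accept it anyway with probability Up to encourage exploration."""
    if new_fitness < old_fitness or rng.random() < up:
        return new_agent, new_fitness
    return old_agent, old_fitness
```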

4) COMPUTATIONAL PROCEDURE
Algorithm 1 summarizes the operations of the proposed method as pseudo-code. This approach considers the following input values: the number of agents N, the maximum number of function accesses af, and the updating probability Up.
First, the proposed method generates a set of N uniformly distributed agents (line 2). These represent the initial population {a_1, . . ., a_N}. Subsequently, the best agent a_B from {a_1, . . ., a_N} is selected, and the iterative process begins, bounded by the maximum number of function accesses af. At each iteration, the fitness differences between all elements of {a_1, . . ., a_N} are calculated (line 6), producing an adjacency matrix fitDif that satisfies the requirements of Dijkstra's algorithm (see Section II-B). Subsequently, a graph is generated (line 7). This network captures the relationships among all the agents through their fitness values. Dijkstra's algorithm is then applied (line 9) to find the shortest path between each agent and the best agent (a_B). The path nodes obtained from this process are used as control points in the Bezier curve equation to generate feasible agent trajectories (line 10). To increase the randomness and diversity of solutions and to generate a non-deterministic new agent position, two points p_1 and p_2 from each trajectory are operated on based on the value of Cont, which changes according to the need of the method to explore or exploit the search space (line 11). Finally, all new agent positions are evaluated and updated based on two cases: 1) direct updating for all solutions that improve the previous solution, and 2) probability-based updating for the remaining solutions (lines 14-20). An example is implemented to illustrate the CNBC approach. The objective is to detect the minimum value of the two-dimensional objective function defined in equation (13), as illustrated in Figure 5(a).
Figure 5(b) shows the first step of the CNBC initialization, Figures 5(c) and 5(d) illustrate the creation of the complex network, and Figures 5(e) and 5(f) present the generation of trajectories, where the square elements represent the points p1 and p2 used in the trajectories. Next, the new position of each agent is calculated using Eq. (11). Finally, the selected mechanism is applied to update the agent's position. These operations are applied sequentially until the number of access functions af is reached.
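The fitness-difference network built at each iteration can be sketched in a few lines. This is a Python illustration (the paper's implementation is in MATLAB), and the function name `fitness_difference_graph` is hypothetical; it only shows the weighting rule described in the text, where edge weights are absolute differences of objective values, which are non-negative as Dijkstra's algorithm requires.

```python
import numpy as np

def fitness_difference_graph(fitness):
    """Build the weighted adjacency matrix of the complex network.

    Each agent is a node; the edge weight between two agents is the
    absolute difference of their fitness values (the fitDif matrix).
    Absolute differences are non-negative, satisfying the requirement
    of Dijkstra's algorithm.
    """
    f = np.asarray(fitness, dtype=float)
    # Pairwise |f_i - f_j|: an N x N symmetric matrix with zero diagonal.
    return np.abs(f[:, None] - f[None, :])
```

The resulting dense matrix can be fed directly to any shortest-path routine that accepts an adjacency matrix.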

5) COMPUTATIONAL COST
This subsection considers the computational cost associated with our proposed approach. Metaheuristic algorithms, by nature, encompass intricate structures that entail stochastic components. Consequently, undertaking a conventional complexity analysis for such systems is not viable: the execution time of an algorithm is affected by many factors, rendering a traditional assessment of complexity impracticable. In light of these considerations, the standard methodology is to employ Big-O notation [59]. This notation provides an essential framework for evaluating the algorithmic cost, quantifying the number of operations needed for successful execution.
The evaluation of the computational cost of our method has been conducted utilizing the foundational principles of the Big-O notation introduced in [60]. Table 1 provides the partial computational costs attributed to our approach. Within the table, we adopt the assumption that N denotes the number of candidate solutions subject to algorithmic operations, while MAXITER represents the upper limit of iterations within the methodology. For the sake of clarity, the table also offers a breakdown of the Algorithm 1 operations, aligning them with their respective partial computational costs.
Since steps 2-7 of Table 1 are executed for MAXITER iterations, the total cost is obtained by multiplying the per-iteration cost by MAXITER. Under such conditions, the Big-O notation of the proposed method is polynomial (quadratic complexity N^2), which means that the algorithm's time complexity grows at a polynomial rate as the input size N increases.
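Under the assumption that the dominant per-iteration steps in Table 1 (building the N × N adjacency matrix and running Dijkstra's algorithm on the resulting dense graph) each cost O(N²), the total cost can be written as:

```latex
T_{\text{total}} = O\!\left(\mathrm{MAXITER} \cdot N^{2}\right)
```

This is consistent with the quadratic complexity stated above; the exact per-step terms depend on the entries of Table 1, which are not reproduced here.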

IV. EXPERIMENTAL RESULTS
To assess the performance of the proposed method, a comprehensive set of 23 benchmark functions has been chosen. These benchmark functions represent various types of optimization problems, including unimodal, multimodal, and hybrid functions. By applying the proposed method to these benchmark functions, we can quantitatively measure its effectiveness in finding high-quality solutions, handling different types of objective landscapes, and adapting to various problem complexities. The mathematical description of the test functions is shown in Table 17, available in Appendix A.
In this table, n corresponds to the dimensionality of the vector at which the test functions are evaluated, f(x*) represents the optimal value of a given function evaluated at position x*, and S corresponds to the lower and upper limits of the search space.
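Three classical members of this benchmark suite can be written compactly. The following Python definitions use the standard literature forms; the exact formulas and index mapping used in the paper's Table 17 should be taken as authoritative.

```python
import numpy as np

def sphere(x):
    """Unimodal Sphere function; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def ackley(x):
    """Multimodal Ackley function; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)

def griewank(x):
    """Multimodal Griewank function; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

All three are evaluated on n-dimensional vectors, so the same definitions serve the 30-, 100-, and 200-dimensional experiments.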
The results obtained from the proposed approach have been compared with a set of algorithms, including classical and state-of-the-art metaheuristic methods. These algorithms have been chosen to represent a diverse range of optimization techniques. The comparison provides a comprehensive evaluation of the proposed approach's performance against well-established and widely used methods. The set of algorithms used for comparison includes the Bat Algorithm (BA) [14], Cuckoo Search Algorithm (CS) [15], Crow Search Algorithm (CSA) [16], Differential Evolution (DE) [10], Grey Wolf Optimization (GWO) [17], Harmony Search (HS) [19], Particle Swarm Optimization (PSO) [13], Simulated Annealing (SA) [18], Sine Cosine Algorithm (SCA) [20], and State of Matter Search (SMS) [21]. Each of these algorithms presents interesting characteristics in its search strategy. To ensure a comprehensive evaluation, the comparison has been conducted on each of the benchmark functions selected for the study. The evaluation considers the performance of the algorithms across various dimensions, specifically 30, 100, and 200 dimensions. Additionally, a high-dimension comparison has been conducted between the proposed method and Dynamic Stochastic Search (DSS) [61] to evaluate the CNBC performance at 300 and 3000 dimensions. All experiments were carried out on a computer equipped with an AMD Ryzen 3 processor with Radeon Vega Graphics at 3.50 GHz and 8 GB of memory. Additionally, the proposed method has been programmed in MATLAB's language (M language) in the integrated development environment MATLAB R2022b.
This section is divided into four sub-sections, each addressing a specific aspect of the evaluation and analysis of the proposed approach. The first subsection (A) provides details about the setup and configuration of the test environment. In the second subsection (B), the performance of the proposed approach is evaluated and compared with other popular metaheuristic algorithms. The evaluation involves running the algorithms on the benchmark functions and collecting relevant performance metrics. Statistical techniques are employed to compare the results obtained by the proposed approach with those of the other algorithms, allowing for a quantitative assessment of its performance. The third subsection (C) validates the proposed approach's ability to converge towards an accurate solution. This analysis examines the algorithm's behavior over iterations or generations, tracking the convergence of the objective function values. By analyzing the convergence patterns, the effectiveness of the proposed approach in reaching optimal or near-optimal solutions is assessed. In the final subsection (D), a comparative analysis is carried out over three engineering design problems to evaluate the performance of the proposed method in real-world applications.
VOLUME 11, 2023

A. CONFIGURATION OF THE TEST ENVIRONMENT
In this study, a set of 23 benchmark functions has been employed to evaluate and compare the performance of the CNBC approach with other metaheuristic methods. The evaluation focuses on the fitness values achieved by the algorithms. The optimization process aims to minimize these fitness values. To control the optimization process, a maximum number of function evaluations, denoted as af, has been set as a stop criterion. In this case, the value of af is fixed at 5000. This ensures that the algorithms are given a limited number of function evaluations to find the optimal or near-optimal solutions. To account for the stochastic nature of the algorithms, each benchmark function is executed independently 30 times. By conducting multiple runs, the effects of randomness and variability in the algorithms' performance can be better understood and statistically analyzed. For the experimental test, a population size of 50 individuals has been set for each metaheuristic approach. The population size determines the number of candidate solutions that are generated and evaluated in each iteration of the algorithm. Furthermore, the evaluation is conducted on three different dimensional search spaces: 30, 100, and 200 dimensions. This allows for the assessment of the scalability of the algorithms, examining their performance across different levels of problem complexity. Table 2 summarizes the parameter values used for each metaheuristic algorithm in the comparative test.
It should be noted that these parameter values correspond to the settings recommended by the original authors of each approach. The authors have selected these parameter values to obtain the best possible performance from their respective algorithms. By utilizing this consistent experimental setup with predefined parameter values, population size, and dimensional search spaces, a fair and controlled comparison of the algorithms can be conducted. This facilitates a meaningful assessment of their performance, highlighting the strengths and weaknesses of each approach.
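The evaluation protocol above (30 independent runs per function, reporting best, mean, standard deviation, and worst) can be sketched as a small harness. `summarize_runs` and the stand-in optimizer are illustrative, not the paper's code; a real experiment would plug in one full CNBC (or competitor) execution with af = 5000 and a population of 50.

```python
import numpy as np

def summarize_runs(optimizer, runs=30, seed=0):
    """Run `optimizer` independently `runs` times and report the indexes
    used in the comparison tables: best, mean, standard deviation, worst.

    `optimizer` is any callable taking an RNG and returning a final
    fitness value; here it stands in for one complete algorithm run.
    """
    rng = np.random.default_rng(seed)
    results = np.array([optimizer(rng) for _ in range(runs)])
    return {
        "best": float(results.min()),       # lowest fitness achieved
        "mean": float(results.mean()),      # average over the 30 runs
        "std": float(results.std(ddof=1)),  # sample standard deviation
        "worst": float(results.max()),      # highest (poorest) fitness
    }
```

Lower best/mean values indicate better performance, and a smaller standard deviation indicates a more consistent algorithm, mirroring how the tables are read in the text.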

B. PERFORMANCE COMPARISON
The performance results of each method are presented in Tables 3-5, where the best performance entries are highlighted in boldface to identify the methods that achieved the lowest fitness values. The tables present the minimization results obtained for three different dimensional search spaces: 30, 100, and 200 dimensions. Each table provides a comprehensive overview of the performance of the algorithms across these dimensions, allowing for a comparison of their effectiveness in solving optimization problems of varying complexity.
The CNBC approach presented in Table 3 shows better performance than its competitors for most of the benchmark functions (f1-f5, f7, f10, f11, f15-f19, and f23). These results indicate that CNBC can find optimal solutions even in the presence of multiple optima. It also shows that the proposed approach produces results similar to GWO in functions f6, f9, f12, f13, f20, f21, and f22. In addition, CNBC obtains the same fitness value as PSO and SA for functions f8 and f6, respectively. SCA and CNBC also share similar results for f6, f13, f20, and f22. Based on these numerical results, it can be seen that the proposed method shares the same results with GWO and SCA in the unimodal function f6, the hybrid functions f20 and f22, and the multimodal function f13. This condition can arise because both methods consider the influence of several solutions in guiding the search process. In the proposed method, the trajectories produced by Bezier curves are influenced by a set of particles that represent the shortest path to the best solution. In GWO, the positions of the wolves are updated based on the positions of several kinds of wolves, which represent different types of solutions. The similarities in some results between the proposed method and SCA can also be explained by their coding mechanisms. The proposed method represents candidate solutions using a graph, where nodes represent the solutions and edges capture the differences in fitness values between solutions. Similarly, SCA utilizes a vector representation known as sine-cosine vectors to encode each solution. There are five other functions (f8, f9, f12, f14, f21) in which only one method (GWO, PSO, or SA) yields results similar to CNBC. Of these three algorithms, GWO appears most frequently. This reinforces the claim that GWO, owing to its search mechanism, performs similarly to the proposed method on some of the benchmark functions. However, in the remaining 14 functions (f1-f5, f7, f10, f11, f15-f19, and f23), the CNBC approach outperforms all other algorithms. This demonstrates the effectiveness of the proposed method in terms of accuracy and stability.
To evaluate the scalability of the proposed complex-network-based method (CNBC), additional tests were conducted in higher-dimensional search spaces, specifically in 100 and 200 dimensions. Table 4 presents the results obtained for the 100-dimensional test. The table clearly demonstrates that the CNBC approach outperforms its competitors on most of the benchmark functions. Unlike in the 30-dimensional case, there are no instances in this test where two or more algorithms exhibit better performance than CNBC. This highlights the scalability of the proposed method, as its effectiveness is maintained even as the dimensionality increases. The results indicate that CNBC is capable of handling high-dimensional problems and offers superior performance compared to its counterparts. Furthermore, GWO generates the same fitness value as CNBC for f6, f10, f12, f13, f20, and f22. When analyzing the statistical values, it can be seen that CNBC is more robust than its principal competitor (GWO), because CNBC's mean values for f6, f10, f12, and f20 are equal to the best fitness obtained, and its standard deviation values are smaller than those obtained by GWO. In addition, PSO exhibits performance similar to that of CNBC for function f8, and SA has the same fitness value as CNBC in function f14. However, the proposed approach is more consistent than SA because CNBC's standard deviation is smaller.
It should be mentioned that the results obtained by CNBC for the 30- and 100-dimension tests are interesting because they are very similar even when the dimensionality increases. This demonstrates the scalability of the proposed method. In addition, to include higher dimensionality in the optimization procedure and validate its effectiveness, the performance results based on a 200-dimensional search space are presented in Table 5.
This experiment illustrates the scalability and robustness of combining Bezier curves and complex networks to obtain optimal solutions, even in high-dimensional search spaces. For most benchmark functions (f1-f7, f9-f12, f15-f19, and f23), the proposed CNBC approach outperforms the other metaheuristic methodologies considered in the numerical experiment presented in Table 5. The performance of GWO is similar to that of CNBC in f13, f14, f20, f21, and f22. As observed since the 30-dimensional experiment (Table 3), GWO presents a performance similar to that of CNBC for some of the benchmark functions. However, the number of functions on which this occurs is reduced when the search-space dimensions are increased. This result supports the scalability of the proposed approach. In addition, even when GWO shows the same fitness and mean values, CNBC has a smaller standard deviation, supporting the method's effectiveness. Furthermore, CS and SA generate the same fitness value as CNBC in function f14; however, as mentioned previously, the standard deviation of CNBC is smaller. Finally, PSO and the proposed method show similar performance in f8.
Consequently, it is evident that the proposed approach achieves exceptional performance in higher-dimensional search spaces. The Bezier curves used for generating feasible motion trajectories allow the search space to be explored and exploited satisfactorily, particularly in higher-dimensional spaces. The numerical results obtained in higher-dimensional search spaces indicate that representing fitness relationships with complex networks in the optimization process produces a higher level of scalability, since the method outperforms the rest of the tested approaches for most benchmark functions.
The Wilcoxon signed-rank test [62] is commonly used in the evaluation of metaheuristic algorithms to assess statistical significance and compare their performance against each other or against a baseline method. Metaheuristic algorithms are stochastic optimization methods that rely on randomness to explore and search for optimal solutions in complex problem spaces. Due to their stochastic nature, the performance of these algorithms can vary across different runs or datasets. The Wilcoxon test helps determine whether the observed differences in performance between two algorithms are statistically significant or simply due to chance. The Wilcoxon test is a non-parametric test that does not make any assumptions about the underlying distribution of the data. It compares the ranks of paired observations, typically the performance results obtained by two algorithms on the same set of benchmark functions. By comparing the ranks, the Wilcoxon test provides a p-value that indicates the likelihood of observing the measured differences in performance by chance. If the p-value is below a predetermined significance level (e.g., 0.05), the observed differences are statistically significant, indicating that one algorithm performs significantly better or worse than the other.
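In practice, the pairwise test can be run directly with SciPy (assuming SciPy is available); the run data below are synthetic placeholders generated for illustration, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired final fitness values from 30 independent runs of two algorithms
# on the same benchmark function (synthetic data for illustration only).
rng = np.random.default_rng(42)
cnbc_runs = rng.normal(loc=0.10, scale=0.02, size=30)   # hypothetical CNBC results
other_runs = rng.normal(loc=0.15, scale=0.02, size=30)  # hypothetical competitor

stat, p_value = wilcoxon(cnbc_runs, other_runs)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3g}: the difference is statistically significant")
else:
    print(f"p = {p_value:.3g}: the test cannot discriminate the results")
```

With a significance level of 0.05, a p-value below the threshold corresponds to the ▲/▼ symbols used in Tables 6-8, while a p-value above it corresponds to ▶.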
The p-values obtained from the Wilcoxon test are listed in Table 6. These statistical results are based on the performance results presented in Table 3 (where n = 30). Table 7 shows the p-values obtained by the Wilcoxon test considering the results of Table 4 (where n = 100). Finally, Table 8 presents the p-values obtained by the Wilcoxon test; the performance results for this case are listed in Table 5 (where n = 200). These tables present a pairwise comparison between CNBC and the rest of the tested algorithms to statistically validate the implemented experiments. Tables 6-8 employ the symbols ▲, ▼, and ▶. When CNBC achieves remarkably better results than a given adversary algorithm, the symbol ▲ is used. The symbol ▼ is utilized when CNBC generates a worse result than its opponent. Finally, the symbol ▶ is employed when the Wilcoxon test cannot discriminate between the numerical results. This symbology helps to visualize the results clearly.
Based on the p-values in Table 6, the Wilcoxon test cannot identify a significant difference between CNBC and GWO for functions f6, f9, f13, and f22. In addition, it can be confirmed that the proposed method produces a result similar to that of SCA for functions f6 and f17. Additionally, under the pairwise comparison of CNBC and PSO, it can be deduced that both algorithms show similar performance for function f8. These results corroborate those presented in Table 3. In this analysis, the Wilcoxon test based on Table 3 demonstrates the effectiveness of the method in terms of accuracy and robustness. There are only seven comparative cases in which the test is unable to distinguish between the numerical results. In addition, it is noticed that the three algorithms (GWO, PSO, and SCA) generate results similar to CNBC on six of the twenty-three benchmark functions.
According to Table 7 (where the 100-dimensional results are reported), the CNBC approach produces similar results to GWO for functions f12, f13, f20, and f22. Additionally, in function f8, the p-values obtained in the comparison between CNBC and PSO suggest that the test cannot establish a significant difference between them. The Wilcoxon test conducted for the 100-dimensional results also supports the scalability of the proposed approach, because only two algorithms (GWO and PSO) generate results similar to CNBC, on five of the twenty-three benchmark functions. It is evident that CNBC outperforms the other compared techniques, especially in higher-dimensional search spaces.
Finally, based on Table 8, the Wilcoxon test cannot distinguish the results generated in f13 and f22 in the pairwise comparison between CNBC and GWO. In addition, the p-values indicate that the proposed method and PSO generate similar results in function f8. The results of Table 8 confirm the scalability of CNBC: only two algorithms (GWO and PSO) generate results similar to CNBC, on three of the twenty-three benchmark functions. This also verifies CNBC's effectiveness, robustness, and scalability, because even when the dimensions increase, the proposed method produces feasible solutions, whereas the other algorithms lose effectiveness.
To analyze more deeply the capabilities of the proposed method in dealing with high-dimensional problems, a comparison is made against a metaheuristic algorithm designed for high-dimensional optimization problems called Dynamic Stochastic Search (DSS) [61]. This algorithm has demonstrated its effectiveness in experiments over 300 and 3000 dimensions on unimodal and multimodal benchmark functions. Our comparison considers a representative sample of these functions, comprising unimodal and multimodal functions: Sphere, Step, Ackley, Generalized Penalized, and Griewank. These functions are described in Appendix A, Table 17 as f1, f6, f11, f12, and f17. The parameters used are those of the original article: the population size is set to 50, the maximum number of iterations is set to 500, and both algorithms are run for 30 experiments. The best results are highlighted in bold to emphasize the better performances.
Table 9 presents the comparative results between Dynamic Stochastic Search and the proposed method in 300 dimensions. The proposed method shows its effectiveness over most of the analyzed functions. In Table 10, the comparative analysis is made over 3000 dimensions. Examining both tables, it can be observed that even when the dimensionality increased tenfold (from 300 to 3000 dimensions), the method's effectiveness was maintained.
Based on this analysis and the statistical analysis presented in Tables 6-8, it can be corroborated that the metaheuristic approach based on complex networks and Bezier curves (CNBC) reduces the complexity of high-dimensional functions by modeling the fitness relations between search agents using complex networks. In addition, the effectiveness of the method in terms of accuracy and scalability is validated.
The remarkable capabilities of the proposed method can be attributed to the unique mechanisms employed during the optimization process. Two fundamental mechanisms contribute to its effectiveness: the generation of agents used for producing possible trajectories, and the selection of the points p1 and p2 based on different cases depending on the algorithm's performance.
Firstly, the method leverages the use of agents to generate trajectories. These agents represent different candidate solutions within the search space. By considering the relationships and distances between these agents, the method constructs complex networks. It applies graph-based algorithms such as Dijkstra's algorithm to identify the shortest paths. This enables the generation of feasible trajectories that guide the exploration and exploitation of the search space. Secondly, the method utilizes different cases for selecting the points p1 and p2. These cases are determined based on the algorithm's performance, which is reflected in a counter variable. By considering the success or delay in finding better solutions, the method adapts its selection strategy. This flexibility allows for the incorporation of both random exploration and targeted exploitation, enabling a more robust and effective search process. By combining these mechanisms, the proposed method enhances its capacity to navigate the search space, balance exploration and exploitation, and identify promising trajectories to improve the optimization process. These distinctive features contribute to the remarkable performance and capabilities of the method in tackling complex optimization problems.
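These two mechanisms can be made concrete with a short sketch: a Bezier curve evaluated over the path nodes (via the Bernstein basis), and a counter-driven choice of p1 and p2. The function names, the counter threshold, and the exact selection rule are hypothetical stand-ins, since the paper only describes its Cont-based cases informally here.

```python
import numpy as np
from math import comb

def bezier_trajectory(control_points, n_samples=50):
    """Evaluate a Bezier curve whose control points are the path nodes
    returned by Dijkstra's algorithm.

    control_points: array of shape (k+1, d) - agents along the shortest
    path to the best agent, each a d-dimensional position.
    """
    P = np.asarray(control_points, dtype=float)
    k = P.shape[0] - 1                         # curve degree
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Bernstein basis: B_{i,k}(t) = C(k, i) * t^i * (1 - t)^(k - i)
    return sum(comb(k, i) * (t ** i) * ((1.0 - t) ** (k - i)) * P[i]
               for i in range(k + 1))          # shape (n_samples, d)

def pick_points(trajectory, cont, rng):
    """Select p1 and p2 from the trajectory (hypothetical rule).

    A small counter value (recent improvement) exploits points near the
    best-agent end of the curve; a large value explores it at random.
    """
    n = len(trajectory)
    if cont < 3:                               # assumed exploitation threshold
        i1, i2 = n - 2, n - 1
    else:
        i1, i2 = sorted(rng.choice(n, size=2, replace=False))
    return trajectory[i1], trajectory[i2]
```

By construction the curve starts at the first control point and ends at the last, so the exploitation branch samples positions close to the best agent.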

C. CONVERGENCE ANALYSIS
Convergence analysis in metaheuristic methods refers to the study of an algorithm's behavior and its convergence towards an optimal or near-optimal solution over iterations. The main objective of convergence analysis is to assess the performance and effectiveness of a metaheuristic algorithm in terms of its ability to converge to a high-quality solution within a reasonable number of iterations. This subsection presents the convergence analysis comparing the proposed method (CNBC) with ten well-known metaheuristic algorithms (BA, CS, CSA, DE, GWO, HS, PSO, SA, SCA, and SMS).
Convergence data from the 100-dimensional test (Table 4) were used to generate the convergence graphs. This information was selected because it can represent the general performance of the proposed method, as it is the middle experiment of the three conducted in Section IV-B over 30-, 100-, and 200-dimensional spaces. This facilitates understanding the performance in both low and high dimensionality. Convergence graphs are presented in Figure 7. Based on these graphs, the CNBC approach has a faster convergence rate than its competitors. This suggests that complex networks and Bezier curve operators allow the manipulation of a large amount of information and its representation by simple connections between nodes, generating feasible agent trajectories to explore and exploit the search space. Some of the functions on which CNBC's advantage is easiest to observe are f1, f7, f11, f13, f19, and f23.
To produce the trajectories, Dijkstra's algorithm selects the path with the minimal cost in terms of the weights (fitness differences) of the edges in the graph. Prioritizing small differences can accelerate the algorithm's convergence towards a near-optimal solution. By placing more weight on subtle improvements, the algorithm can quickly converge towards regions with incremental gains in fitness.
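A generic dense-matrix Dijkstra sketch (not the paper's code) illustrates how such minimal-cost paths are extracted; `dijkstra_path` and the example weights are illustrative.

```python
import heapq

def dijkstra_path(weights, source, target):
    """Shortest path between two agents in the complex network.

    weights: dense N x N matrix of non-negative edge weights (e.g. the
    absolute fitness differences described in the text); smaller weights
    mean more similar fitness, so the returned path chains agents whose
    fitness changes gradually toward the target.
    """
    n = len(weights)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale heap entry, skip it
        if u == target:
            break
        for v in range(n):
            if v == u:
                continue
            nd = d + weights[u][v]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the node sequence; in CNBC these nodes would serve as
    # the Bezier control points.
    path, node = [], target
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]
```

On a dense graph this runs in O(N²) per source, which matches the quadratic per-iteration cost discussed in the computational-cost analysis.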

D. ENGINEERING DESIGN PROBLEMS
The main purpose of an optimization method is to generate an optimal solution to an optimization problem. Many disciplines, such as medicine, engineering, and economics, have formulated many of their fundamental problems as optimization problems. In this section, the proposed method is applied to several engineering design problems to evaluate the capabilities of CNBC in real-world applications.
The engineering design problems used in this section are the three-bar truss design problem, the pressure vessel design problem, and the welded-beam design problem [63], [64]. They are described in Appendix B. Below, the results obtained for each optimization problem are presented.

1) THREE-BAR TRUSS DESIGN PROBLEM
The three-bar truss design problem is an optimization design problem that considers a two-dimensional search space subject to three constraints (see Appendix B, Table 18). This optimization problem focuses on reducing the volume of a loaded three-bar truss. Figure 6 illustrates the structure of this problem. Additionally, Table 11 shows the decision variables, the constraints, and the fitness value obtained by applying CNBC to the three-bar truss design problem.
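As a sketch of how such a constrained problem is exposed to a metaheuristic, the following Python function implements the standard literature formulation of the three-bar truss objective (constants l = 100, P = 2, σ = 2) with a quadratic penalty; the penalty coefficient is an assumption, since the paper does not state its constraint-handling scheme, and the exact definition in Appendix B, Table 18 should be taken as authoritative.

```python
import math

def three_bar_truss(x, penalty=1e6):
    """Penalized volume objective of the three-bar truss problem.

    x = (x1, x2): cross-sectional areas, each in [0, 1].  Violated
    constraints g_i > 0 are punished quadratically.
    """
    x1, x2 = x
    l, P, s = 100.0, 2.0, 2.0
    r2 = math.sqrt(2.0)
    f = (2.0 * r2 * x1 + x2) * l                 # truss volume
    g = [
        (r2 * x1 + x2) / (r2 * x1**2 + 2.0 * x1 * x2) * P - s,  # stress, bar 1
        x2 / (r2 * x1**2 + 2.0 * x1 * x2) * P - s,              # stress, bar 2
        1.0 / (r2 * x2 + x1) * P - s,                            # stress, bar 3
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

Near the best-known design (x1 ≈ 0.7887, x2 ≈ 0.4082) all constraints are satisfied and the penalized value reduces to the plain volume, close to the optimum reported in the literature (about 263.9).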
Table 12 reports the statistical results obtained for the three-bar truss problem executed independently 30 times. For each experiment, the maximum number of generations is set to 1000 for each algorithm. In this table, the worst, mean, standard deviation, and best fitness values are presented. Based on both tables, the proposed method obtains results competitive with other well-known metaheuristic algorithms such as BA, CS, CSA, DE, GWO, HS, and PSO. The obtained results also validate the adaptability of the CNBC algorithm, which functions adequately for both high- and low-dimensional problems.

2) PRESSURE VESSEL DESIGN PROBLEM
The pressure vessel design problem is one of the most common engineering problems used for validating optimization algorithms. It consists of finding the optimal design of a compressed air storage tank considering the thickness of the shell (Ts = x1), the thickness of the head (Th = x2), the inner radius (R = x3), and the length of the shell (L = x4), such that the total cost of material, forming, and welding is minimized subject to four constraints. The complete definition of this problem can be found in Appendix B, Table 19. In addition, Figure 8 provides a graphical representation of the pressure vessel design problem.
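The standard literature formulation of this problem can be sketched in the same penalized style as the truss example; the penalty coefficient is again an assumption, and Appendix B, Table 19 should be taken as the authoritative definition.

```python
import math

def pressure_vessel(x, penalty=1e6):
    """Penalized cost objective of the pressure vessel problem.

    x = (x1, x2, x3, x4) = (shell thickness, head thickness,
    inner radius, shell length), standard literature form.
    """
    x1, x2, x3, x4 = x
    # Total cost of material, forming, and welding.
    f = (0.6224 * x1 * x3 * x4
         + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4
         + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                   # shell thickness limit
        -x2 + 0.00954 * x3,                                  # head thickness limit
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3
        + 1296000.0,                                         # minimum volume
        x4 - 240.0,                                          # length limit
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

At a clearly feasible design such as (1.0, 0.5, 50, 100), all four constraints are negative and the function returns the raw cost; infeasible designs are pushed away by the penalty term.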
Table 13 presents the statistical results, and Table 14 reports the numerical results obtained for the pressure vessel design problem.

3) WELDED-BEAM DESIGN PROBLEM
This design problem optimizes the fabrication cost of a welded beam based on the width h = x1, length l = x2, depth t = x3, and thickness b = x4. This challenge also considers seven constraints. The specifications of the welded-beam design problem are given in Appendix B, Table 20. In addition, Figure 9 illustrates the graphical representation of this optimization problem.
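The fabrication cost objective has a compact standard literature form; the seven constraints (shear stress, bending stress, buckling load, deflection, and geometric limits) are omitted here for brevity and would be handled with a penalty scheme as in the previous examples, with Appendix B, Table 20 as the authoritative definition.

```python
def welded_beam_cost(x):
    """Fabrication cost objective of the welded-beam problem
    (standard literature form, constraints not included).

    x = (h, l, t, b) = (weld width, weld length, beam depth,
    beam thickness).
    """
    h, l, t, b = x
    # Welding cost term plus material cost term.
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
```

Evaluated at a design close to the best-known solution reported in the literature, roughly (0.2057, 3.4705, 9.0366, 0.2057), the cost is approximately 1.72.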
Table 15 displays the statistical results, and Table 16 shows the numerical results of the welded-beam optimization problem. The proposed method generates a less favorable fitness value than CS, DE, GWO, HS, and PSO, but it produces better results than BA, CSA, SA, SCA, and SMS. Therefore, the proposed approach produces competitive results. Its performance is attributed to the higher-level representation applied using complex networks, which captures the essential relationships and dependencies among the solutions; these are employed to create potential search trajectories using Bezier curves, producing feasible solutions that can correctly explore and exploit the search space.

V. CONCLUSION
Metaheuristic techniques are powerful optimization methods capable of handling complex and large-scale problems without relying on explicit mathematical models. However, due to the inherent complexity and diversity of optimization problems, not all metaheuristics can solve all problems competitively. Each problem requires specific strategies for optimal solutions, necessitating the introduction of new methods to enhance the capabilities of metaheuristics across various domains. Traditional metaheuristic algorithms often draw inspiration from natural systems, using metaphors and analogies. While this approach has its merits, it can limit innovation by encouraging the replication of existing algorithms with slight modifications. In contrast, metaphor-free metaheuristic algorithms focus on developing novel algorithmic mechanisms based on a combination of mathematical and computational principles. Through these mechanisms, such methods can explore unconventional search operators and offer improved performance across diverse problem domains.
This paper introduces a novel metaphor-free metaheuristic algorithm that combines complex networks and Bezier curves. In this approach, solutions are represented as nodes in a graph, where the edges capture the differences in their objective function values. This graph-based representation enables a higher-level understanding of solution relationships and dependencies. The algorithm calculates the shortest path between each solution and the best solution, using the resulting nodes as control points in the Bezier equation to determine new agent positions. Throughout the optimization process, the graph is continuously updated based on the evaluation of new solutions and their objective function values, leading to trajectories that facilitate the exploration and exploitation of the search space.
A set of unimodal, multimodal, and hybrid benchmark functions was used to numerically and statistically compare the performance of the proposed approach with various state-of-the-art metaheuristic algorithms. The comparative results demonstrated the effectiveness of the method. In addition, the robustness and scalability of the proposed approach have been verified, because even when the dimensionality increases, the proposed method produces feasible solutions, whereas other algorithms lose effectiveness.

FIGURE 1 .
FIGURE 1. Examples of BC using four different control points.

FIGURE 2 .
FIGURE 2. Example where the agents or particles in the search space are associated with the construction of a complex network.

FIGURE 3 .
FIGURE 3. Application of Dijkstra's algorithm in a specific configuration of a complex network.

FIGURE 4 .
FIGURE 4. Trajectory example generated by using the proposed approach.

FIGURE 5 .
FIGURE 5. Figure 5(b) shows the first step of CNBC initialization. Figures 5(c) and 5(d) illustrate the creation of the complex network. The generation of trajectories is presented in Figures 5(e) and 5(f), where the square elements represent the points p1 and p2 used in the trajectories. Next, the new position of each agent is calculated using Eq. (11). Finally, the selected mechanism is applied to update the agent's position. These operations are applied sequentially until the number of access functions af is reached.

FIGURE 6. Graphical description of the three-bar truss design problem.

FIGURE 8. Graphical description of the Pressure Vessel Design Problem.

FIGURE 9. Graphical description of the Welded-Beam Design Problem.

TABLE 1. Partial computational cost, in terms of Big-O notation, attributed to our approach.

TABLE 2. Parameter configuration for each evolutionary method.
These tables contain numerical indexes used to evaluate the performance of the algorithms: the Best Fitness Value, the Average Fitness, the Standard Deviation, and the Worst Value. The Best Fitness Value (f_Best) represents the lowest fitness value achieved by each method; it indicates the quality of the solutions obtained, with lower values indicating better performance. The Average Fitness and the Standard Deviation summarize, respectively, the mean and the dispersion of the fitness values obtained across the independent runs. The Worst Value (f_Worst) represents the highest fitness value achieved by each method; it indicates the poorest-quality solutions obtained, with higher values indicating poorer performance. The best entries in the tables are highlighted in boldface, making it easier to identify the methods that achieved the best performance.
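These four indexes can be computed directly from the final fitness values of a set of independent runs; the run values below are hypothetical and serve only to show the computation.

```python
import statistics

def performance_indexes(fitness_runs):
    """Summary indexes over the final fitness of independent runs:
    best (lowest), average, standard deviation, and worst (highest)."""
    return {
        "f_Best": min(fitness_runs),
        "f_Avg": statistics.mean(fitness_runs),
        "f_Std": statistics.stdev(fitness_runs),
        "f_Worst": max(fitness_runs),
    }

# Hypothetical final fitness values from five independent runs.
runs = [0.12, 0.09, 0.31, 0.10, 0.18]
idx = performance_indexes(runs)
```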

VOLUME 11, 2023
FIGURE 7. (Continued.) Convergence graphs from Table 4.

TABLE 11. Numerical results of the three-bar truss design problem.

TABLE 12. Statistical results of the three-bar truss design problem.

TABLE 13. Statistical results of the pressure vessel design problem.

TABLE 14. Numerical results of the pressure vessel design problem.

TABLE 15. Statistical results of the welded-beam design problem.

TABLE 16. Numerical results of the welded-beam design problem.

TABLE 17. Mathematical description of the test functions.

TABLE 17. (Continued.) Mathematical description of the test functions.

TABLE 18. Definition of the three-bar truss design problem.

TABLE 19. Definition of the pressure vessel design problem.

employing the CNBC algorithm to solve the pressure vessel optimization problem. It is notable that the flexibility of the proposed approach allows the incorporation of both random

TABLE 20. Definition of the welded-beam design problem.