Computational Intelligence-Based Methodology for Antenna Development

Antenna design is a challenging task that can be time-consuming with conventional computational methods, which typically demand high computational capability due to the many parameter sweeps and re-runs involved. This work proposes an efficient and accurate computational intelligence-based methodology for antenna design and optimization. The technical solution consists of a surrogate model, composed of a Multilayer Perceptron (MLP) artificial neural network trained with backpropagation, for the regression process. Combined with the surrogate model, two multi-objective meta-heuristic optimization strategies, the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D), are used to overcome the aforementioned issues of the traditional antenna design method. A case study considering a dipole antenna for the 3.5 GHz 5G band is reported as proof of concept of the proposed methodology. Comparisons of the antenna impedance matching obtained by the proposed methodology, by numerical full-wave simulation in ANSYS HFSS, and by measurements of the antenna prototype demonstrate its applicability and effectiveness for antenna development.


I. INTRODUCTION
Antennas are essential components of wireless communication systems such as radar, cellular networks, satellite communications, RFID tags, and airborne navigation. The antenna design process is a challenging task involving many objective functions with a high degree of non-linearity and diverse parameters to be optimized [1].
The evolution of the computational field over the last decades has provided a revolutionary and efficient set of tools, called artificial intelligence (AI), including machine learning (ML). These AI-based models have been applied to improve the quality and accuracy of engineering applications across different knowledge areas. For instance, AI can be employed to improve the diagnosis and treatment of diseases in medicine [2], [3]. Also, robots coordinating operational functions can be optimized in automation [4]. Moreover, channel estimation and antenna development can be improved in the electromagnetic field [5]-[7]. (The associate editor coordinating the review of this manuscript and approving it for publication was Shah Nawaz Burokur.)
Among engineering applications in the telecommunications field, there are many problems related to wireless communications. Full-wave electromagnetic simulations have been used to evaluate and optimize electromagnetic antenna properties using Electromagnetic (EM) solvers. Some commercial EM solvers allow optimizing the antenna performance by using Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) [8]. However, this process demands a considerable number of EM simulations, making the optimization process time-consuming and computationally intensive. Consequently, most computational simulations are carried out as a hands-on procedure involving substantial interaction with the radio-frequency (RF) engineer [9]. These issues have been motivating RF engineers to look to computer science for suitable techniques to emulate the behavior of EM solvers based on a set of collected examples used to train computational models [10]. This procedure is named the Learning-by-Examples (LBE) technique: a computer-aided approach, based on machine learning, that is focused on solving complex real-world problems, which are mapped by a surrogate model (SM) [10].
An SM enables fast function evaluation instead of computationally intensive simulation [11], which makes it suitable for use alongside commercial EM software, e.g., HFSS, CST, and Feko [12]. The technique can be applied to two distinct problem types: classification and regression. With a discrimination function, the SM is used for a classification problem; with estimation functions, the model works on a regression problem. The Support Vector Machine (SVM), Gaussian Process Regressor (GPR), and Radial Basis Function (RBF), as well as the well-known Convolutional Neural Network (CNN) and Artificial Neural Network (ANN), are some of the algorithms that enable the SM to be empirically trained [13]-[16]. The SM predicts optimal regions of the design space using an acceptable regression method and can be useful for global optimization.
In particular, antenna development for the current fifth-generation (5G) and future sixth-generation (6G) mobile networks is quite challenging [17]-[19], such that optimization techniques applicable to multi-task environments [20]-[22], in conjunction with SMs, give rise to a potential computational environment for automatic antenna design. Although surrogate modeling for antenna design and optimization does not sound highly complex, an intelligent design process has not yet been well defined in the literature.
This manuscript presents an efficient and accurate methodology for antenna design based on a personalized surrogate model working as a preliminary and complementary method to commercial EM tools, aiming to reduce computational time and cost. Figure 1 describes our methodology, in which the optimization of the SM is assessed using two meta-heuristics for maximizing the antenna bandwidth by enhancing its impedance matching. The first algorithm is a consolidated approach, namely the Non-dominated Sorting Genetic Algorithm (NSGA-II). The other is a recent and efficient algorithm called the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D) [23], [24], which provides high performance in multi-objective tasks. Both approaches are applied to maximize the antenna bandwidth and obtain the desired resonance frequency with the minimum possible error. A printed-dipole antenna is utilized as a case study. Furthermore, the applicability of our computational tool is demonstrated by validating the surrogate-model optimization results against ANSYS HFSS, a widely used commercial EM software, and an antenna prototype.
The remainder of this article is divided as follows. Section 2 presents related works regarding machine learning techniques applied to antenna design. Section 3 exposes the antenna design process with a surrogate model, while Section 4 explains the evolutionary multi-objective optimization process. Section 5 describes the entire proposed methodology by grouping the subjects from Sections 3 and 4. Finally, Sections 6 and 7 report the printed-dipole antenna results and the article conclusions, respectively.

II. RELATED WORKS
Some research groups have applied ML techniques for designing antennas [12], [25]-[29]. Table 1 highlights the critical points of the related works compared to the current manuscript, as a function of the dataset generation, regression type, regression tool, optimization process, and number of objectives, i.e., multi-objective (MO) or single-objective (SO). Early SMs in the context of antenna design have been exploited in [16], [30], [31]. In [32], a Gaussian Process Regressor was proposed to build an SM and optimize it with a Bayesian single-objective optimization algorithm. On the other hand, in [26], the MOEA/D multi-objective algorithm was applied to the Kriging interpolation from ANSYS HFSS. J. Dong et al. proposed an SM based on a sparsely connected back-propagation neural network (SC-BPNN), implemented with a Matlab toolbox, for the regression process [12].
Diverse optimization algorithms can be used to improve antenna performance. Focusing on the antenna optimization process, D. Ding and G. Wang applied MOEA/D to optimize the reflection coefficient of a bow-tie antenna and compared it with NSGA-II [6]. In [8], a survey on evolutionary algorithms applied to antennas and propagation over the last thirty years was conducted, considering the three main SO algorithms: GA, PSO, and DE. The authors of [38] evaluated the use of PSO for improving the reflection coefficient of microstrip antennas. Finally, an optimization design of antennas by accelerated gradient search with sensitivity and design-change monitoring was proposed in [37]; the technique is based on a trust-region algorithm to improve the reflection coefficient of a patch antenna.
None of the previously mentioned works provides a full pipeline for antenna design. The primary contributions of the current work over the state of the art are as follows: a clear, well-defined, and efficient computational intelligence-based methodology for antenna design using the surrogate-model machine learning technique; optimization of the antenna dimensions using two population-based meta-heuristics; and numerical and experimental validation of the optimized model using a commercial EM tool and a printed-dipole antenna prototype, respectively.

III. ANTENNA DESIGN WITH SURROGATE MODELS
The SM generation process can be compared to a black box to which a set of input/output pairs is supplied. A model is built based on patterns and correlations among the variables. The model has a generalization capacity that enables different information from the same structure, e.g., antenna, communication channel, or images, to be interpreted by the surrogate model, offering reliable outputs based on the training process.
The dataset generation process consists of obtaining a suitable number of input/output pairs, based on pre-defined variables sampled in a pseudo-random fashion inside the antenna design space. The dataset entries are evaluated through a fitness function related to antenna performance, such as scattering parameters, resonance frequency, and bandwidth, among others. The dataset is then used to train a surrogate model. In this work, the SM was built with a backpropagation-trained Multilayer Perceptron (MLP) neural network (BPNN).
The MLP aims to map the input parameters to the output variables of the dataset generated by the EM solver. Typically, an MLP has two essential types of components: neurons, the elements that process information in the network, and links, which transmit information between neurons and carry a parameter called the synaptic weight. In addition, an MLP architecture comprises three kinds of layers: the input layer, one or more hidden layers, and the output layer. For training the MLP neural network, the back-propagation algorithm [39], [40] is used in two phases, namely forward and backward.
In the forward phase, the samples are presented to the network and propagated from the input to the output layer. Each neuron computes the weighted sum of its inputs, with synaptic weights initially chosen at random, and passes it through the activation function, producing the output signal. Then, the difference between the predicted and desired outputs is calculated to obtain the error value. Afterward, in the backward phase, the synaptic weights are adjusted by the training process to minimize the mean squared error (MSE), which is used to assess model performance. After training, with the optimal weights adjusted, the SM has a strong generalization capacity and is ready to be used with data never seen during the training process.
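The two training phases described above can be sketched as a minimal one-hidden-layer MLP regressor in Python. This is an illustrative toy example on synthetic data, not the antenna dataset or network architecture used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=16, lr=0.1, epochs=2000):
    """Minimal one-hidden-layer MLP regressor trained with backpropagation
    on the mean squared error (MSE)."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        # forward phase: propagate the samples from input to output layer
        h = np.tanh(X @ W1 + b1)            # hidden activations
        out = h @ W2 + b2                   # linear output neuron
        err = out - y
        losses.append(float(np.mean(err ** 2)))
        # backward phase: gradients of the MSE w.r.t. the synaptic weights
        d_out = 2.0 * err / len(X)
        dW2 = h.T @ d_out;  db2 = d_out.sum(0)
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ d_h;    db1 = d_h.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), losses

# toy surrogate task: learn a smooth function of two hypothetical "antenna dimensions"
X = rng.uniform(-1.0, 1.0, (200, 2))
y = X[:, :1] ** 2 + 0.5 * X[:, 1:]
_, losses = train_mlp(X, y)
```

In practice, as in this work, such a network would be trained on EM-solver samples and tuned with cross-validation rather than fixed hyper-parameters.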

IV. EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION
In a Single-Objective Optimization Problem, or Scalar-Objective Optimization Problem (SOOP), the idea of an optimal solution is clear, since the comparison between two solutions is also well defined. A classic SOOP can be stated as

minimize f(x), subject to x ∈ Ω,

where x can be a scalar or a d-dimensional vector, whereas the output of the function f is always a scalar. The output is an ordered set in which every pair of points can be compared. Given the function domain, it can be projected onto the output space to identify a minimum-value solution.
A Multi-Objective Optimization Problem (MOOP) differs from a traditional SOOP by having multiple output variables. Consequently, a SOOP has a unique optimal solution, whereas a MOOP presents a set of trade-off solutions. A MOOP can be stated as

minimize F(x) = (f_1(x), f_2(x), ..., f_k(x)), subject to x ∈ Ω,

where k is the number of objectives and the input vector x is a d-dimensional vector from a Euclidean space. MOEAs make use of some concepts in their framework, stated as follows. Under Pareto Dominance (for minimization), a solution x dominates a solution y if x is no worse than y in every objective and strictly better in at least one. Definition 3 (Pareto Optimal Set): from the definition of Pareto Dominance, the definition of the Pareto Optimal Set, denoted by P, is straightforward: P is the set of all decision vectors whose objective vectors are non-dominated in the objective space.
Definition 4 (Pareto Front): the Pareto Front, denoted PF, is the projection of the Pareto Optimal Set onto the objective space. With these definitions, the MOEA framework can be used to find the final set of best trade-off solutions through an iterative process.
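The dominance relation and the extraction of the non-dominated set can be sketched directly from these definitions. A minimal Python illustration (minimization, assuming objective vectors as tuples):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset, i.e. an approximation of PF."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3), and (5, 5) by every other point, so neither belongs to the front.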

A. EVOLUTIONARY MULTI-OBJECTIVE ALGORITHMS
MOEAs are population-based algorithms that can approximate the P and PF sets in a single run. One of the main challenges in optimization algorithms is to balance exploration and exploitation. The first concerns searching the entire decision space, whereas the second regards a refined search in promising areas of the decision space and is associated with local search.
In this paper, two general-purpose MOEAs from among the state-of-the-art evolutionary algorithms are compared. Several problem-specific implementations have been recently proposed [26], [41], and several surveys on MOEAs and quality indicators have also been published in recent years [42]-[46]. These problem-oriented implementations have been criticized for their lack of comparison with other MOEAs and of application to different real-world problems. Based on this study, NSGA-II, a popular baseline algorithm, and MOEA/D, a modern approach to tackling MOOPs with evolutionary algorithms, were chosen. A discussion of these two algorithms follows, as well as of the quality indicators used to assess them.

B. NSGA-II
The NSGA-II was introduced in [47] as a generic, non-explicit building block applied to MOOPs [48]. Its general-purpose design has allowed the development of several MOEAs that use it as an inner mechanism [42]. In NSGA-II, the individuals of the population compete against each other through an elitist mechanism: the algorithm ranks and sorts each individual according to its non-domination level and uses several genetic operators to generate diversity.
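NSGA-II's ranking step, fast non-dominated sorting, can be sketched as follows (an illustrative pure-Python version assuming minimization; the real algorithm additionally applies crowding-distance sorting and genetic operators):

```python
def fast_nondominated_sort(points):
    """Partition a population into fronts: front 0 holds the non-dominated
    solutions, front 1 those dominated only by front 0, and so on."""
    n = len(points)
    dominated = [[] for _ in range(n)]  # indices each solution dominates
    count = [0] * n                     # how many solutions dominate it
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            a, b = points[i], points[j]
            if all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b)):
                dominated[i].append(j)
            elif all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b)):
                count[i] += 1
        if count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:   # only solutions in earlier fronts remain
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

ranks = fast_nondominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

Here `ranks` groups the population indices by non-domination level, which is exactly the elitist ranking NSGA-II uses for selection.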

C. MOEA/D
The MOEA/D was proposed in [49]. Dominance-based and quality-indicator-guided approaches have well-known limitations; their main struggle is dealing with high-dimensional objective spaces, where they have difficulty maintaining diversity. MOEAs based on decomposition are a recent trend that decomposes the MOOP into several SOOPs, associating each individual with a subproblem and sharing information with its neighbors.

D. QUALITY INDICATORS
Several MOEAs have been developed, hence the need to compare and measure their performance [44]-[46], [50], [51]. Quality indicators are used to address the three main aspects of an MOEA: (1) Accuracy, which relates to how close to the true Pareto Front an algorithm has reached; (2) Cardinality, which addresses how many solutions the algorithm was able to find; and (3) Diversity, which reflects how well spread the solutions are along the Pareto Front approximation. The Hypervolume (HV) indicator, also known as hyper-area or the S metric [52], appears in first place in several surveys as the metric preferred by the research community. It is a unary metric that quantifies how much of the objective space is covered by the Pareto Front. It requires a reference point, selected far from the Pareto Front so that all front points dominate it. HV can be used to determine when the Pareto Front of one MOEA is better than that of another: the larger the HV value, the better the quality of the obtained solutions in approximating the whole PF.
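For two objectives, the hypervolume is simply the area dominated by the front and bounded by the reference point, which can be computed by sweeping the front in one objective. A minimal sketch (minimization, assuming a valid non-dominated front):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective non-dominated front (minimization):
    area of objective space dominated by the front and bounded by the
    reference point ref, which every solution must dominate."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):   # ascending f1 implies descending f2 on a front
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(4, 1), (1, 5), (2, 3)], (6, 6)))  # → 17.0
```

Higher-dimensional HV needs dedicated algorithms, but the 2-D case conveys why a larger value means a front that covers more of the objective space.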
Spacing was proposed in [53] to measure the spread of the Pareto Front approximation solutions and address the diversity aspect of an MOEA. This metric takes into account the distance between each solution and its closest neighbor. It is simple and straightforward to compute, as can be seen in Equation 3,

S = sqrt( (1/(n-1)) Σ_i (d̄ - d_i)² ),
where d̄ is the average of all d_i, and d_i is the Manhattan distance between solution i and its closest neighbor. Spreading was first introduced in [54]. It is an extension of the Spacing metric that incorporates the extreme points into its equation, which is equivalent to calculating Spacing as if the extreme points were part of the Pareto Front approximation found by an MOEA. It provides additional information about the solutions' spread, but it is very dependent on the extreme points [44], [46]. The Spreading formula can be seen in Equation 4,

Δ = ( Σ_k d_ek + Σ_i |d_i - d̄| ) / ( Σ_k d_ek + n d̄ ),
where d̄ and d_i are computed in the same way as for Spacing, and d_ek is the distance between the extreme solution of the k-th objective and its closest point in the Pareto Front approximation.
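The Spacing computation can be sketched directly from its definition (nearest-neighbor Manhattan distances, then their standard deviation):

```python
import math

def spacing(front):
    """Schott's Spacing metric: standard deviation of each solution's
    Manhattan distance to its nearest neighbour (0 = perfectly even spread)."""
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (len(d) - 1))

print(spacing([(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]))  # evenly spread → 0.0
```

Spreading extends this by also measuring the distances from the extreme solutions of each objective to the front, as described above.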

V. THE PROPOSED COMPUTATIONAL INTELLIGENCE-BASED METHODOLOGY
Some computational intelligence-based methodologies for antenna development were reported in Section 2. In most cases, essential feature details are omitted, such as the dataset generation, the surrogate model type, and the deployed optimization algorithm. Furthermore, the validation of the process is not outlined. Our work presents a well-defined, step-by-step methodology for antenna development using computational intelligence. Figure 2 outlines the proposed methodology work cycle, which starts with dataset generation; then surrogate model training and validation; application of the optimization algorithms; and testing of the optimized values in the electromagnetic simulation software. If the objective is achieved, the model is manufactured; otherwise, the work cycle is restarted to improve the dataset and the surrogate model.
Computational intelligence-based antenna development and optimization can thus be structured as follows: dataset generation; surrogate model generation; optimization and MOEA definition; model validation; and manufacturing.
The dataset is the first and foremost step, in which two different approaches might be used to obtain the input sample set: the Matlab rand function or Latin Hypercube Sampling (LHS). The first generates arrays of random numbers whose elements are uniformly distributed over a predefined interval inside the antenna design space. Compared to purely random sampling, LHS makes the sample points fill the entire parameter space at identical intervals: it ensures that only one sample from each interval is selected per analysis, reducing the chance of replicating input parameters. With the input set defined, the finite element method (FEM) within ANSYS HFSS is used to obtain a high-fidelity, accurate response set. The number of input/output pairs is not fixed and varies with the complexity of the antenna model. The ANN-based antenna surrogate model can then be built.
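The stratification property of LHS described above can be sketched in a few lines of numpy: split each variable's range into n equal intervals, draw one point per interval, and pair the intervals across dimensions with random permutations. The three variable ranges below are hypothetical, chosen only for illustration:

```python
import numpy as np

def latin_hypercube(n, bounds, rng=np.random.default_rng(1)):
    """Latin Hypercube Sampling: the range of each variable is split into
    n equal-width intervals and exactly one sample is drawn per interval,
    with intervals paired across dimensions by random permutations."""
    d = len(bounds)
    # one uniform point inside each of the n stratified intervals, per dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for k in range(d):                      # shuffle strata independently per dim
        u[:, k] = u[rng.permutation(n), k]
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

# hypothetical 3-variable design space, e.g. two lengths and a width in mm
X = latin_hypercube(10, [(15.0, 30.0), (1.0, 6.0), (1.0, 4.0)])
```

Each column of `X` contains exactly one sample per tenth of its range, which is precisely the replication-avoidance property stated above.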
The surrogate model works as a black box that mimics the EM solver based on the learning process. To validate our methodology, an MLP with backpropagation was deployed as the SM. In the BPNN, k-fold cross-validation was used to guarantee the efficiency of the learning process; in other words, the folds ensure that the neural network is not biased. The input/output pairs were divided into 75% for training the ANN and 25% for validation, to ensure the model generalizes when new input sets are presented. To measure this capability, the MSE was used: the lower its value, the higher the generalization capacity. After the BPNN surrogate model converged, the MOEA-based MOOP was started. In this step, the evolutionary algorithms use the SM as the objective function to guide the optimization process throughout the decision-variable space. In a MOOP, the goal is to find the best trade-off solutions; thus, the target is not a unique solution but a set of solutions. The MOEAs were designed according to [55]. Once the optimization step finishes, several quality indicators, described in Section 4, are used to evaluate the overall quality of the optimization process [46], [56].
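The k-fold scheme mentioned above is just index bookkeeping: each fold serves once as the validation set while the remaining samples train the network. A minimal sketch (indices only; the fold count and sample count below are illustrative, not those of the paper):

```python
def kfold_indices(n, k):
    """Split n sample indices into k folds for cross-validation; each fold
    is used once for validation while the rest is used for training."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(fold)), fold) for fold in folds]

splits = kfold_indices(12, 4)   # 4 folds of 3 validation samples each
```

Training the network once per split and averaging the validation MSE gives an unbiased estimate of the generalization capacity.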
With the optimization results in hand, the validation step starts. The best input/output pairs found by the MOEAs are applied in the EM simulation software to evaluate the method's accuracy and efficiency before manufacturing. There are three possible outcomes. In the first, the EM simulation entirely agrees with the MOEA output, meaning the MSE is negligible. In the second, the MOEA output differs from the EM simulation, but the results are acceptable: the MSE degraded the MOEA performance, yet the desired result can be achieved with a simple adjustment in the EM solver. In both situations, the manufacturing process can be started. Finally, in the worst situation, the input/output pairs differ entirely from the simulated values due to direct interference of the MSE. In this case, it becomes necessary to restart the process, generate a complementary dataset, and retrain the SM before applying the MOEAs again.

VI. CASE STUDY: PRINTED-DIPOLE ANTENNA
This section reports a case study of the proposed methodology based on the multi-objective optimization of a printed-dipole antenna. The NSGA-II and MOEA/D evolutionary strategies were used to tune the input parameters of the BPNN surrogate model to improve the antenna bandwidth centered at 3.5 GHz with the minimum possible error. Finally, the resulting optimized antenna dimensions are validated by comparison with ANSYS HFSS numerical simulations and with experimental results from the printed-dipole antenna prototype.

A. ANTENNA DESIGN AND SURROGATE MODEL GENERATION
The printed-dipole antenna structure is shown in Figure 3. It comprises a two-arm active radiating element, excited by a microstrip line connected to a conventional SMA connector. A fiber-glass (FR-4) dielectric substrate (ε_r = 4.4 and tan δ = 0.02) was considered, with an overall size of 100 x 25 mm and 1.6 mm thickness. The decision input/output variables are v = [L, t, S, f, BW], in which L and t are the arm length and thickness, respectively; S is the microstrip line thickness; and f and BW are the resonance frequency and bandwidth, respectively. It is important to highlight that all dimensions are in mm. Figure 4 represents the input and output variables inside the artificial neural network.
For the printed-dipole antenna, 243 samples were generated to feed the MLP, divided between training and validation sets. After the training process, K-fold cross-validation and the GridSearchCV method, which performs an exhaustive search over combinations of network hyper-parameters, were applied, aiming to minimize the bias-variance trade-off. The algorithm converged after 5000 iterations; the training process took 6 min and achieved an MSE of 0.0087. To attain this minimum error, the MLP was deployed with an input layer of three neurons, eight hidden layers of one hundred neurons each, and an output layer of two neurons. After training, testing, and achieving good generalization performance, the MLP is ready to be used as a surrogate model. Figure 5(a) and Figure 5(b) show the regression comparison between the ground truth and the predictions for the BW and f variables, respectively.

B. MULTI-OBJECTIVE OPTIMIZATION BASED ON SURROGATE MODEL
As explained in the previous subsection, the trained surrogate model was used to formulate the MOOP, and the NSGA-II and MOEA/D optimization algorithms were applied with the SM providing the objective functions. The Pareto Front was transformed back to the original output variables so that the Decision Maker (DM) can make use of it. A convergence study using the HV quality indicator, together with an investigation of the Spacing and Spreading metrics, was carried out for both algorithms on the antenna's original Pareto Front.
For each experiment, the convergence analysis was performed with the normalized HV indicator over 30 runs. Furthermore, the absolute HV, Spacing, and Spreading were individually investigated over 30 runs. Figure 6(a) and Figure 6(b) present the normalized convergence curve along with the transformed Pareto Front for NSGA-II and MOEA/D. Note that NSGA-II had a superior convergence speed: it presented stationary behavior before ten iterations, while MOEA/D did so only after 100 iterations. Despite that, both algorithms converged toward the Pareto Front without major difficulties in the optimization process. Table 2 presents the quality indicators applied to the printed-dipole MOOP. The two algorithms presented approximately the same statistical performance for the absolute hypervolume, since differences appeared only after four decimal places. On the other hand, for both the Spacing and Spreading metrics, in which smaller values are desired, NSGA-II proved superior. Figure 7(a) and Figure 7(b) report the Pareto Front approximation for the printed dipole with NSGA-II and MOEA/D, respectively. This particular optimization problem has a peculiar Pareto Front approximation for both algorithms. The solutions found by NSGA-II form a set of discrete points, minimally spaced and without concentration in a single region, which yields smaller Spacing and Spreading values, since those indicators are based on distance.
As mentioned in Section 4, the MOOP provides a PF set that enables the achievement of the desired goal, which in this case study is to minimize the error around the resonance frequency centered at 3.5 GHz and maximize the bandwidth. The optimal solution provided by the algorithm is L = 23.33 mm, t = 4.02 mm, and S = 2.19 mm, which results in the following estimated output variables: a resonance frequency (minimum point of the reflection coefficient) centered at 3.5 GHz with a bandwidth (frequency range with reflection coefficient lower than or equal to -10 dB) of 0.75 GHz. For the validation process, the estimated input values were verified in ANSYS HFSS, as shown in Figure 8. The numerical simulation resulted in a resonance frequency centered at 3.5 GHz with a 0.77-GHz bandwidth, an excellent agreement. The next step was fabricating the antenna to further validate the proposed methodology. Figure 8 displays a photograph of the antenna prototype and a comparison between the simulated and experimental reflection coefficients.
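The two figures of merit used throughout this validation, resonance frequency and the -10 dB bandwidth, can be extracted from any reflection-coefficient curve with a short routine. The S11 curve below is a synthetic Gaussian dip for illustration only, not simulated or measured data from this work:

```python
import numpy as np

def resonance_and_bandwidth(freq_ghz, s11_db, threshold=-10.0):
    """Resonance = frequency of the reflection-coefficient minimum; bandwidth =
    width of the contiguous band around it where S11 stays at or below threshold."""
    i = int(np.argmin(s11_db))
    lo = hi = i
    while lo > 0 and s11_db[lo - 1] <= threshold:
        lo -= 1
    while hi < len(s11_db) - 1 and s11_db[hi + 1] <= threshold:
        hi += 1
    return float(freq_ghz[i]), float(freq_ghz[hi] - freq_ghz[lo])

# synthetic S11 dip centred at 3.5 GHz (illustrative curve)
f = np.linspace(3.0, 4.0, 201)
s11 = -2.0 - 25.0 * np.exp(-((f - 3.5) / 0.25) ** 2)
fr, bw = resonance_and_bandwidth(f, s11)
```

Applied to exported HFSS or VNA sweeps, the same routine gives the f and BW values compared in this section.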
The antenna prototype presented a reflection coefficient centered around 3.5 GHz with a 0.76-GHz bandwidth. The resonance frequency and bandwidth agree with the estimated and simulated models, validating the proposed methodology. Some small differences occurred due to imprecisions in the manufacturing process. Additionally, the minimal difference in the bandwidth value is due to the MSE variance of the regressor, which can be reduced by adding more input/output pairs to the neural network and refining the model.
The printed dipole antenna was designed using three different strategies and numerically evaluated in ANSYS HFSS. Table 3 reports the dimensions of each model. The parameters L, W_d, and g_d are the dipole length, width, and gap, respectively, while L_m and W_m correspond to the feeding line length and width. In the first strategy, the conventional dipole antenna equations were directly and exclusively applied to create the initial HFSS numerical model [57]. The second design corresponds to an RF engineer who is an expert in antenna development. Finally, the last design comes from the antenna numerical results obtained by the NSGA-II optimization algorithm based on the surrogate model. Figure 9 shows the dipole antenna results for each of the mentioned strategies, as well as the measured values of the antenna prototype. The dipole was fed by a matched port at 200 Ω, which is the typical input impedance value for a wavelength-dipole antenna when fed at its maximum current point [57], and reported a resonance frequency at 3.4 GHz. This model took 50 minutes to be formulated, designed, and simulated. The RF engineer's design took 4 hours, of which 50 minutes concern the theoretical design and a further 3 hours and 10 minutes the final model; for this model, the engineer added a resonance-matching structure to reduce the reactive part and improve the antenna performance. Finally, the AI technique spent 5 seconds of optimization-algorithm processing time to provide the input/output pair. To validate the predicted model, a simulated model was generated, starting from the theoretical model, and took 5 minutes to be simulated in HFSS, totaling 55 minutes and 5 seconds. Table 4 summarizes the time consumed by each strategy.

VII. CONCLUSION
This work proposed a computational intelligence-based methodology for antenna development. An in-depth explanation of the methodology was given, including a step-by-step procedure to build the dataset, the surrogate model, and the optimization. As a case study, a printed-dipole antenna was estimated by a personalized ANN-based surrogate model combined with MOEA strategies. The antenna was then simulated in HFSS and measured. The prototype was designed to operate at 3.5 GHz to validate the efficiency of our methodology with two widely used MOEAs.
It was observed that the NSGA-II algorithm reaches stationary behavior after 10 iterations and outperforms MOEA/D, despite both presenting the same HV value. The solutions found by NSGA-II form a set of discrete points, minimally spaced and without concentration in a single region, which yields smaller Spacing (0.02) and Spreading (0.43) values, since those indicators are based on distance.
The algorithm took only 5.187 seconds to find the optimal solution, while the simulation process with the same dimensions consumed 5.13 minutes. Based on the NSGA-II reports, the printed-dipole antenna was obtained with L = 23.33 mm, t = 4.02 mm, and S = 2.19 mm as input parameters. Additionally, the estimated response following our methodology was a resonance frequency centered at 3.5 GHz and a 0.75-GHz bandwidth. In the validation step, the simulation of the printed-dipole antenna with the dimensions proposed by the MOEA presented a resonance frequency centered at 3.5 GHz and a 0.77-GHz bandwidth. The minimal difference in bandwidth is due to the MSE variance of the regression process, which can be reduced by adding more input/output pairs to the neural network and refining the model. Finally, the measured model reported a resonance frequency centered at 3.5 GHz and a 0.76-GHz bandwidth.
Based on the presented results, our method might be a potential tool for both the design and optimization of general antenna development. It can work as a preliminary and complementary step to commercial electromagnetic simulation software, saving time and reducing computational effort. It is important to note that, for comprehensive frequency-range solutions, it is necessary to insert more input/output pairs into the database and adjust the number of neurons and their associated weights in the neural network. In the short term, the solution may present itself as a time-consuming task, in which the RF engineer will need to upload the input/output pairs to the cloud and build a broad dataset; every single modification of the antenna structure needs to be uploaded to feed the dataset. However, in the long term, the cloud should hold a massive amount of content, i.e., input/output pairs, and the only RF-engineer effort will be to define the goals and apply simple tuning in the simulation software, since the methodology would already be defined.
For future work, a more sophisticated antenna will be designed with the LHS approach for dataset generation, to validate another input/output pair construction method. The challenge for this approach is to define the type of algorithm to be used in the regression process so as to acquire a fitness function suitable for obtaining the SM. Furthermore, the ideal quantity of input/output pairs is an open question, since the antenna model works at three different frequency bands, increasing the complexity of the SM and of the optimization process.