On Improved-Reliability Design Optimization of High-Frequency Structures Using Local Search Algorithms

The role of numerical optimization has been continuously growing in the design of high-frequency structures, including microwave and antenna components. At the same time, accurate evaluation of electrical characteristics necessitates full-wave electromagnetic (EM) analysis, which is CPU intensive, especially for complex systems. As rigorous optimization routines involve repetitive EM simulations, the associated cost may be significant. In design practice, the most widely used EM-driven procedures are by far local (e.g., gradient-based) ones. While typically incurring acceptable expenses, ranging from a few dozen to a few hundred objective function evaluations, they are prone to failure whenever a decent initial design is not available. Representative scenarios include simulation-based size reduction of compact devices, or re-design of structures for operating/material parameters distant from those at the available design. A standard mitigation approach is the involvement of global search methods, which entails significantly higher computational costs. This paper reviews recent methodologies introduced to improve the reliability of local parameter tuning algorithms without degrading their computational efficiency. We discuss frequency-based regularization, the adaptively adjusted design specifications approach, as well as accelerated feature-based optimization. All of these techniques incorporate mechanisms that improve the performance of the search process under challenging scenarios, primarily poor initial conditions. The outline of the mentioned methods is accompanied by illustrative examples including passive microwave circuits and microstrip antennas. Benchmarking against conventional local search is provided as well. Furthermore, the paper discusses the advantages and disadvantages of the reviewed frameworks and speculates about future research directions.


I. INTRODUCTION
Design of high-frequency devices and systems has been traditionally rooted in circuit-theory-based methods, including both analytical approaches [1], and equivalent network models [2]. Meanwhile, the significance of full-wave electromagnetic (EM) simulation techniques has been growing, not only due to rapid advancements in simulation hardware and software, but mainly because of practical necessity [3], [4]. EM simulation enables proper evaluation of electrical and field characteristics of microwave and antenna structures, in particular, quantification of the effects that cannot be accounted for using simpler means. Examples include mutual coupling, dielectric and radiation losses, substrate anisotropy, etc. These effects play a non-negligible role for a growing number of modern components, such as miniaturized circuits [5], [6], MIMO systems [7], wearable antennas [8], or metamaterial-based structures [9]. Further, due to the growing complexity of high-frequency devices, EM simulation is often used in the design process itself, primarily for final tuning of geometry parameters [10], but also for sensitivity/statistical analysis [11], tolerance optimization [12], [13], or multi-criterial design [14], [15], [16]. Although traditional EM-driven design methods (e.g., parametric studies guided by engineering insight) are still popular, utilization of rigorous numerical methods is highly recommended [17], [18], [19] due to their ability to handle multiple parameters and design goals, as well as to carry out constrained optimization [20].

The associate editor coordinating the review of this manuscript and approving it for publication was Zhenzhou Tang.
Despite its benefits, simulation-based design optimization of high-frequency structures is challenging. The fundamental bottleneck is the high computational cost incurred by repetitive EM analyses involved in the process. The expenses associated with local search algorithms (e.g., gradient-based [21] or stencil-based methods [22]) are normally acceptable, in the range of a few dozen to a few hundred system simulations. Yet, global [23], [24] or multi-objective optimization [25] entails considerably higher costs, mainly because of the necessity to explore the entire parameter space, as well as the use of population-based metaheuristic algorithms [26]; a popular approach to solving global optimization tasks is nowadays the use of nature-inspired algorithms, e.g., [67], [68], [69], [70], [71]. The situation is similar for uncertainty quantification [27], [28]. Here, the limiting factor is the estimation of statistical performance figures, e.g., the yield [29], which requires numerical integration of the underlying probability density functions describing parameter tolerances [30].
The issues highlighted in the previous paragraph have been addressed by extensive research conducted over the last two decades or so. Some of the methods developed to expedite EM-based design procedures include utilization of adjoint sensitivities [31], [32], parallelization [33], sparse Jacobian updates [34], [35], response feature technology [36], cognition-driven design [37], as well as surrogate-assisted approaches [38], [39]. The latter have been rapidly growing in popularity over recent years, and incorporate data-driven [40], and physics-based models [41]. The former are more generic, i.e., problem independent; yet, they suffer to a large extent from the curse of dimensionality. Popular modelling procedures include kriging [42], radial basis functions [43], neural networks [44], or Gaussian process regression [45]. Physics-based surrogates exhibit better generalization capability, yet, being heavily reliant on the underlying lower-fidelity model, they are not easily transferable between problem domains. Other methods include variable-resolution techniques (e.g., co-kriging [46], response correction methods [47]), as well as machine learning frameworks [48], which are often used for global search purposes [49].
Despite the plethora of optimization techniques available in the literature, the most popular and widespread in practical applications are local algorithms, primarily gradient-based ones [50]. The reasons include, as mentioned earlier, reasonable computational cost, as well as the availability of well-established methods, e.g., trust-region [51], conjugate gradient [52], sequential quadratic programming [53], or interior point methods [53] (in the context of constrained optimization). Unfortunately, local procedures are prone to failure if an adequate initial design is not available, which is often the case in practice. A representative example is optimization of compact structures, which often feature parameter redundancy due to employing various miniaturization techniques such as the slow-wave phenomenon [54], stubs [55], slots [56], or defected ground structures [57]. Other examples include multimodal problems (e.g., antenna array pattern synthesis [58], metamaterial-based structures [59]), as well as re-design of components for operating or material parameters considerably different from those at available designs. In situations like these, designers typically resort to global methods [23], [24], [26], [67], [68], [69], [70], [71], which are computationally inefficient. Depending on the setup, the computational cost of global search using a widely used particle swarm optimizer is at least ten times higher than that of local gradient-based search, as demonstrated in [72] and [73].
In high-frequency design, the main challenges faced by the researchers may be epitomized as follows: (i) costly EM simulations are indispensable for reliable evaluation of high-frequency components, (ii) local search procedures are associated with an acceptable cost, yet require a decent initial design, which may not be available, and (iii) global optimization algorithms allow circumventing this issue, yet their computational cost is exorbitant when carried out using full-wave simulations. A possible strategy is the enhancement of the existing local search routines; until recently, however, the literature offered hardly any techniques of this kind. Over the last few years, several approaches for improving the reliability of local optimization algorithms have been proposed, especially in terms of making the search process immune to the unavailability of quality starting points. One of these is frequency-based regularization [60], where the objective function for the design task is reformulated to include an additional term that fosters the alignment of the system operating frequencies with the assumed targets. Another method is adaptive adjustment of design specifications [61]. Therein, design goals are relocated towards the actual operating parameters of the structure at hand in order to make them attainable from the current design. In the course of the optimization process, the specs are gradually re-adjusted to eventually converge to the original targets. Attainability of the current goals through local search is maintained throughout the process. In [62], a feature-based algorithm coupled with a sparse sensitivity updating scheme has been proposed, which enables quasi-global search capability at local optimization costs by the employment of response features [63].
The latter results in flattening the objective function landscape and making the design goals reachable even from initial designs that are normally too poor for conventional algorithms to succeed. This paper reviews the methodological approaches outlined in the previous paragraph. We provide brief formulations of each of the techniques [60], [61], [62], and showcase them using real-world antenna and microwave design examples. Furthermore, the benefits of the particular methods are discussed along with benchmarking against conventional local search, which demonstrates the capabilities of the said algorithms, in particular, successful handling of poor-quality starting points. The paper also discusses the advantages and disadvantages of the reviewed frameworks and speculates about future research directions.

II. FREQUENCY REGULARIZATION FOR RELIABLE DESIGN OPTIMIZATION
In this section, we briefly formulate the frequency-based regularization, originally introduced in [60]. The method can be incorporated into most iterative local search procedures. It enables relocation of the operating frequencies of the structure under design towards their target values, even if they are distant from those at the initial design.

A. DESIGN PROBLEM FORMULATION
We use the following notation: x - a vector of designable (usually, geometry) parameters; R(x) - the response of the EM-simulated model of the considered high-frequency structure at the design x; U(R(x)) - a scalar objective function quantifying the design quality, monotonically decreasing with respect to improving design quality. We consider the design task formulated as

x* = arg min_x U(R(x))     (1)

in which x* is the optimum design. The objective function is problem dependent. Table 1 provides a few examples.
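To make the notation concrete, the following minimal Python sketch evaluates one common choice of U from Table 1, namely the worst-case in-band reflection level (a minimax objective). The sampled response and the band edges are hypothetical stand-ins for actual EM-simulation data.

```python
import numpy as np

def minimax_reflection(freqs, s11_db, bands):
    # U(R(x)): the worst (largest) in-band reflection level in dB over all
    # target bands; lower values correspond to better matching, so U is
    # monotonically decreasing with respect to improving design quality
    worst = -np.inf
    for f_lo, f_hi in bands:
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        worst = max(worst, float(s11_db[mask].max()))
    return worst

# toy sampled |S11| in dB: a single matching dip near 3.5 GHz (assumed shape)
f = np.linspace(2.0, 5.0, 301)
s11 = -2.0 - 18.0 * np.exp(-((f - 3.5) / 0.2) ** 2)
print(minimax_reflection(f, s11, [(3.3, 3.7)]))   # worst in-band reflection
```

In practice, `freqs` and `s11_db` would be extracted from the EM solver output rather than synthesized analytically.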

B. FREQUENCY REGULARIZATION
The regularization concept [60] is described below and illustrated in Fig. 1 for a quasi-Yagi antenna designed for maximum in-band gain. If the operating frequency of the structure is far from the target (cf. Fig. 1(a)), local search normally fails by getting stuck in a local minimum. For the example of Fig. 1, the two minima are separated by a local maximum. Frequency regularization modifies the design task (1) by adding a regularization term to the objective function, so that the problem is formulated as

x* = arg min_x [U(R(x)) + β_r max{0, f_r(x) − f_r.max}]     (2)

In (2), f_r(x) is a regularization function that quantifies the discrepancy between the operating frequency (or frequencies) of the system at the design x and the target ones, whereas f_r.max marks the maximum acceptable discrepancy. Regularization worsens (increases) the objective function in a manner proportional to the said discrepancy (with β_r being the scaling coefficient). At the same time, the added contribution is zero if the discrepancy is smaller than f_r.max, which ensures that the original and regularized objective functions coincide close to the optimum design. Fig. 1(b) shows the effects of regularization: the objective function profile is altered in a way that makes the design task unimodal. Observe the monotonicity of the regularized merit function, which makes the optimum reachable from the shown initial design with the use of a local search algorithm, whereas it is not attainable under the standard formulation.
The definition of the regularization function depends on the system at hand. Table 2 provides a few examples, corresponding to those in Table 1. The regularization function estimates the distance between the actual operating frequencies and the target ones, which requires extracting appropriate data from the EM-simulated system characteristics.
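The regularization mechanism can be illustrated with a short Python sketch. The max-type penalty form below is an assumption consistent with the properties described above (proportional to the discrepancy beyond f_r.max, zero below it; the exact term used in [60] may differ), and all numerical values are hypothetical.

```python
def regularized_objective(U, f_r, beta_r=100.0, f_r_max=0.15):
    # U_R = U + beta_r * max(0, f_r - f_r_max): the penalty grows with the
    # operating-frequency discrepancy f_r (in GHz) and vanishes once the
    # discrepancy drops below f_r_max, so that U_R coincides with the
    # original objective U in the vicinity of the optimum
    return U + beta_r * max(0.0, f_r - f_r_max)

print(regularized_objective(-5.0, 0.10))  # within tolerance: plain objective
print(regularized_objective(-5.0, 0.50))  # 0.35 GHz beyond tolerance: penalized
```

The scaling coefficient beta_r trades off how strongly frequency misalignment dominates the landscape far from the target against leaving the original objective untouched near it.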

C. DEMONSTRATION EXAMPLE
As mentioned earlier, the regularization procedure can work with any iterative optimization algorithm. Here, it is illustrated using a gradient-based trust-region routine with numerical derivatives [51] as the optimization engine.
Consider a quasi-Yagi antenna with a parabolic reflector shown in Fig. 2 [64]. The structure is implemented on FR4 substrate (ε_r = 4.4, h = 1.5 mm), and described by ten geometry parameters x = [W L L_m L_p S_d S_r W_2 W_a W_d g]^T (all dimensions in mm). The EM model is simulated in CST Microwave Studio, using the time-domain solver.
The objective is to design the antenna for the target operating frequency f_0 and to ensure 8-percent impedance bandwidth (symmetrical around f_0). The main goal is maximization of the realized gain at f_0. The conventional objective function is similar to that in the second row of Table 1, except that the average gain is replaced by the gain at a single frequency f_0. The frequency regularization involves the function f_r(x) listed in the second row of Table 2, and f_r.max = 0.15 GHz. We consider three design scenarios, with f_0 = 2.5 GHz, f_0 = 4.5 GHz, and f_0 = 5.0 GHz. The initial design is the same in all cases, and corresponds to an operating frequency of approximately 3.5 GHz. Figure 3 presents the initial and optimal designs obtained using the conventional and regularized approaches. It should be emphasized that optimization based on the standard formulation failed in all cases, whereas the regularization-enhanced search was successful for all target frequencies. Table 3 provides numerical data, demonstrating the superior reliability achieved using regularization, which also lowers the computational cost of the optimization process.

III. ADAPTIVELY-ADJUSTED DESIGN SPECIFICATIONS
The goal of the adaptively adjusted design specifications approach [61] is to improve the reliability of the parameter tuning process under challenging conditions, e.g., unavailability of a sufficiently good initial design, or system re-design for operating parameters largely different from the current ones.

A. SPECIFICATION ADJUSTMENT SCHEME
To explain the concept, we consider a specific design task, i.e., matching improvement within the target operating bands f_j.1 ≤ f ≤ f_j.2, j = 1, ..., N. Formally, the design problem is to solve

x* = arg min_x max_{j = 1, ..., N} max_{f_j.1 ≤ f ≤ f_j.2} |S11(x, f)|     (3)

where S11(x, f) denotes the reflection response of the structure at the design x. The auxiliary task (4) has the same form as (3) but involves the current (relocated) target bands instead of the original ones; it is formulated only for the specification management purposes. The main problem may be defined similarly as discussed in Section II.A (cf. Table 1).

Figure 4 shows a typical situation for a dual-band antenna. The target operating bands are centered at 3.5 GHz and 5.3 GHz. The goals may or may not be reachable, depending on the initial design (cf. Fig. 4). According to the adaptive design specifications method [61], the target operating bands are relocated towards the actual ones at the available design to ensure that they are reachable through local optimization. Throughout the optimization process, the specifications are moved stepwise back to their original locations, so that the final solution is optimized for the initially assumed targets. The management scheme is developed to ensure that the current specifications are reachable from the current design in each algorithm iteration. Figure 5 provides a graphical illustration of the concept.

Let G_S(x, f) denote the gradient of the reflection response at the frequency f. The optimization procedure is assumed to be iterative, generating approximations x^(i), i = 0, 1, ..., to x*, with

x^(i+1) = arg min_{x: ||x − x^(i)|| ≤ d^(i)} U(R(x))

(here, x^(0) is the starting point). Let

S11.lin^(i)(x, f) = S11(x^(i), f) + G_S(x^(i), f)^T (x − x^(i))     (5)

be the linear model of S11(x, f) at the design x^(i). We consider an auxiliary optimization sub-problem

x_lin^(i) = arg min_{x: ||x − x^(i)|| ≤ D} U(S11.lin^(i)(x))     (6)

with D being the optimization domain size (typically D = 1). Table 4 summarizes the decision factors utilized to implement the specification management scheme. The factor E_r determines the potential for design improvement according to (3) and (4); E_0 evaluates the design quality, whereas E_c is employed as a protection to keep the altered design goals close enough to F_c.
Figure 6 summarizes the conditions that have to be satisfied to enable design specification adjustment. Once any of these holds, the updating procedure is launched, as outlined below. Let F_cr(a) denote the adjusted specifications in the subsequent iteration, parameterized by a scalar 0 ≤ a ≤ 1 as

F_cr(a) = a F_o + (1 − a) F_c     (7)

where F_o denotes the original specifications and F_c the current ones. Here, a is the maximum value for which E_r ≥ E_r.min, E_0 ≤ E_0.max, and E_c ≤ E_c.max at the design obtained by solving the auxiliary linear sub-problem with the specifications F_cr(a). Note that identification of a requires solving an auxiliary sub-problem in which a is gradually diminished (from 1 to 0) until the aforementioned conditions are met. The above procedure relaxes the specifications until they become reachable from the current design. Close to the optimum, the conditions are met with a = 1 (i.e., the original goals). If the optimum is not attainable, the algorithm brings the design as close to the original goals as possible. The adjustment procedure is launched before each iteration of the optimization process; as a result, the design goals are continuously altered. Note that the adjustments do not entail additional costs because the response sensitivities (required in (5)) are already available, assuming that the search process is gradient-based.
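As an illustration of the specification management loop, the following Python sketch assumes a linear relocation of the band edges between the current and the original targets, parameterized by a, and a backtracking search that decreases a until a (here deliberately simplified) reachability test passes. The actual conditions in [61] involve the factors E_r, E_0, and E_c of Table 4; the predicate below is only a stand-in, and all band values are hypothetical.

```python
def adjusted_specs(a, F_original, F_current):
    # F_cr(a): band edges relocated linearly between the current specs (a = 0)
    # and the original ones (a = 1); an assumed parameterization of [61]'s idea
    return [(a * fo_lo + (1 - a) * fc_lo, a * fo_hi + (1 - a) * fc_hi)
            for (fo_lo, fo_hi), (fc_lo, fc_hi) in zip(F_original, F_current)]

def select_a(F_original, F_current, reachable, steps=10):
    # decrease a from 1 toward 0 until the relocated specs pass the
    # reachability test (stand-in for the E_r/E_0/E_c conditions of Table 4)
    for k in range(steps + 1):
        a = 1.0 - k / steps
        specs = adjusted_specs(a, F_original, F_current)
        if reachable(specs):
            return a, specs
    return 0.0, adjusted_specs(0.0, F_original, F_current)

# hypothetical single-band example: original target band vs. band at the design
F_orig = [(3.36, 3.64)]   # GHz
F_cur = [(2.40, 2.70)]    # GHz

def reachable(specs):
    # simplified test: relocated band center within 0.5 GHz of the current one
    (lo, hi), = specs
    return abs((lo + hi) / 2 - 2.55) <= 0.5

a, specs = select_a(F_orig, F_cur, reachable)
print(a, specs)   # partially relocated specifications
```

In a full implementation, `reachable` would solve the linear sub-problem and evaluate the decision factors, and the loop would be re-entered before each trust-region iteration.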

B. DEMONSTRATION EXAMPLE
For the sake of illustration, consider a dual-band branch-line coupler (BLC) [65] shown in Fig. 7. Figure 7(b) provides the relevant circuit data, including the target operating frequencies and design objectives. The center frequencies of the BLC at the initial design are at around 1.7 GHz and 3.5 GHz, respectively (grey plots in Fig. 8), i.e., they are severely misaligned with the target ones. Consequently, conventional local optimization fails. The search process enhanced by the adaptively adjusted specifications of Section III.A yields the design x* = [41.1 8.19 0.95 2.26 1.68 0.96 0.34 1.18 1.14]^T (marked black in Fig. 8), which is well aligned with the targets and satisfies the original specs. Figure 9 shows the evolution of the design goals. Note that the initial relocation of the targets is significant, and it takes nine iterations to bring them back to the original values.

IV. ACCELERATED FEATURE-BASED OPTIMIZATION
Utilization of response features [36] allows for flattening the objective function landscape owing to the fact that the relationships between the characteristic point coordinates and design variables of the system at hand are weakly nonlinear. In [62], the response feature approach was combined with sparse sensitivity updating schemes in order to develop an optimization algorithm that exhibits quasi-global search features and improved computational efficiency at the same time. This section outlines this approach and illustrates it using a triple-band dipole antenna.

A. FEATURE-BASED OPTIMIZATION
As explained before, local optimization using the conventional formulation (1) of the optimization task may or may not be successful depending on the initial design quality (cf. Fig. 10), and relocation of the operating frequencies may be necessary to obtain an optimal design meeting the targets. The response feature approach alleviates the difficulties pertinent to multimodality of the optimization task, as well as those related to a possibly poor starting point, by reformulating the task in terms of the coordinates of so-called characteristic (or feature) points of the system response, which are defined with the design objectives and the shape of the system outputs in mind [36]. For illustration purposes, consider a multi-band antenna with the feature points defined using the frequency and level coordinates of the antenna resonances, f_k and l_k, respectively, k = 1, ..., p, where p denotes the number of antenna operating bands. The feature points are gathered in the vector

R_F(x) = [f_1(x) ... f_p(x) l_1(x) ... l_p(x)]^T     (10)

As mentioned earlier, the relationship between the entries of the vector R_F and the design variables is considerably less nonlinear than that of the complete responses [36].
The design task may be restated in terms of response features as

x* = arg min_x U_F(R_F(x))     (11)

where, for the design task corresponding to simultaneous reflection minimization at all target frequencies (cf. Table 1), the feature-based merit function U_F(R_F(x)) may be formulated as

U_F(R_F(x)) = max{l_1(x), ..., l_p(x)} + β Σ_{k = 1, ..., p} [(f_k(x) − f_0.k)/f_0.k]^2     (12)

where f_0.k, k = 1, ..., p, denote the target operating frequencies and β is the penalty factor. Here, the system output of interest is the antenna reflection, and l_k(x) = S11(x, f_k), k = 1, ..., p, refer to the level coordinates (see Fig. 10).

B. TR SEARCH WITH JACOBIAN CHANGE TRACKING
In [62], the optimization routine is the trust-region (TR) gradient search algorithm [51] enhanced by response features. It yields approximations x^(i), i = 0, 1, ..., to the optimal vector x*, using the linear expansion model

R_F.lin^(i)(x) = R_F(x^(i)) + J_F(x^(i)) (x − x^(i))     (13)

where J_F(x^(i)) = [∇f_1(x^(i)) ... ∇f_p(x^(i)) ∇l_1(x^(i)) ... ∇l_p(x^(i))]^T is the Jacobian of the feature vector. The new design is a solution to

x^(i+1) = arg min_{x: ||x − x^(i)|| ≤ d^(i)} U_F(R_F.lin^(i)(x))     (14)

In (14), d^(i) denotes the search region size vector, set using the conventional TR rules [64] based on the gain ratio, i.e., the ratio of the actual to the linear-model-predicted improvement of the merit function. Typically, J_F is evaluated through finite differentiation (FD), at the cost of n extra EM analyses. Here, the cost is reduced by omitting most of the FD-based updates, specifically for the system parameters that exhibit the smallest variability of the response gradients. The sparse sensitivity concept is summarized in Fig. 11.

The verification structure is the triple-band dipole antenna mentioned earlier, simulated in CST Microwave Studio. The antenna has been optimized using a conventional trust-region algorithm with the Jacobian matrix evaluated through finite differentiation, and using the feature-based procedure with acceleration described in this section. The following values of the control parameters have been used (cf. Fig. 11): N_min = 1, N_max = 5, as recommended in [62]. To investigate the reliability of the optimization process, each algorithm has been executed twenty times, using the same set of random initial designs. Table 5 provides the numerical results, whereas Fig. 13 shows the antenna responses for representative algorithm runs. Because the conventional algorithm and the feature-based technique use different objective functions, their direct comparison is not possible. Therefore, the comparison is carried out based on the feature point coordinates extracted from the antenna responses at the final designs (for both algorithms).
The design quality is quantified using the standard deviation of the antenna center frequencies across the set of twenty algorithm runs. Table 5 also provides the optimization cost, which is given in the number of EM analyses of the antenna.
Observe that the feature-based approach clearly outperforms the conventional procedure. As a matter of fact, it has been capable of properly allocating the antenna resonances in all runs (note that the standard deviations are zero for all center frequencies), which is not the case for the conventional approach. Furthermore, the CPU cost of the optimization process is lower than for the conventional method. It should be emphasized that in the unsuccessful runs of the latter, the algorithm converged prematurely due to trust-region size reduction; the cost of its successful runs is actually considerably higher, typically at least a hundred EM simulations.
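The sparse sensitivity updating idea can be sketched in a few lines of Python. The variability measure and the skipping rule below are simplified stand-ins for the N_min/N_max-controlled scheme of [62], and the two-output "feature" model is purely hypothetical (a real implementation would call the EM solver).

```python
import numpy as np

def fd_jacobian(fun, x, cols, J_prev, h=1e-6):
    # recompute only the selected Jacobian columns by finite differences,
    # reusing the remaining columns from the previous iteration
    J = np.array(J_prev, dtype=float)
    f0 = np.asarray(fun(x), dtype=float)
    for j in cols:
        xp = np.array(x, dtype=float)
        xp[j] += h
        J[:, j] = (np.asarray(fun(xp), dtype=float) - f0) / h
    return J

def least_variable_cols(J_new, J_old, n_skip):
    # parameters whose gradient columns changed least between iterations are
    # the candidates to be excluded from FD updating in the next iteration
    change = np.linalg.norm(J_new - J_old, axis=0)
    return list(np.argsort(change)[:n_skip])

# hypothetical 2-feature model standing in for an EM-simulated response
def features(x):
    return [x[0] ** 2 + x[1], 3.0 * x[1]]

J0 = np.zeros((2, 2))
J1 = fd_jacobian(features, [1.0, 2.0], [0, 1], J0)   # full FD update
skip = least_variable_cols(J1, J0, n_skip=1)          # column frozen next time
J2 = fd_jacobian(features, [1.5, 2.0],
                 [j for j in range(2) if j not in skip], J1)
```

The trade-off is that frozen columns become stale as the design moves; the scheme of [62] bounds this by forcing an update of every column after at most N_max iterations.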

V. CONCLUSION
This paper reviewed the recent developments concerning reliability-enhanced local procedures for high-frequency design optimization. We outlined the frequency-based regularization approach, adaptive design specification adjustment scheme, as well as feature-based optimization with sparse sensitivity updates. The discussed algorithms can be incorporated into any iterative search procedure, and aim at improving the quality of the optimization process under challenging design scenarios, such as the lack of a sufficiently good initial design, or the need to re-design the operating parameters of the system at hand over broad frequency ranges.
The fundamental advantages of these methodologies are the following: (i) improving immunity of the local parameter tuning to poor starting points, (ii) enabling reliable relocation of the operating parameters (e.g., centre frequencies) to their target values that are significantly misaligned with those at the available designs, (iii) versatility and easy incorporation into the existing optimization algorithms, (iv) low computational cost, which is comparable to that of conventional local search methods. Note that the properties (i) and (ii) effectively enable quasi-global search capabilities.
A disadvantage of the discussed techniques is the necessity of extracting the operating parameters from EM-simulated responses, which normally requires separate post-processing codes (here, implemented in Matlab). Also, if the system responses are severely distorted at the initial designs, such an extraction may not be possible, which hinders utilization of the reliability-enhancement procedures. Clearly, defaulting to conventional search upon detecting a failure of operating parameter extraction would at least ensure that the mentioned methods are no worse than the conventional techniques. On the other hand, available designs normally exhibit all the necessary features (e.g., resonances, etc.), which is the only essential prerequisite for the reviewed techniques to work.
Needless to say, further automation of such methodologies is one of the future research directions. The other would be generalization, i.e., the development of procedures that apply appropriate feature extraction procedures upon detecting the system response shape in the context of design specifications. Notwithstanding, the results presented in this paper demonstrate that the reliability-improvement approaches are promising, and offer viable alternatives for global search procedures, at least for certain types of design optimization problems.
PIOTR PLOTKA received the M.Sc. and D.Eng. degrees in electronic engineering from the Gdańsk University of Technology, Poland, in 1976 and 1985, respectively, and the D.Sc. degree in electronic engineering from the Institute of Electron Technology, Warsaw, Poland. Since 1981, he has been with the Gdańsk University of Technology. In 1990, he joined the Nishizawa Terahertz Project of the Research Development Corporation of Japan, Sendai, where he was developing device applications of GaAs molecular layer epitaxy. From 1992 to 2008, he worked as a Senior Researcher at the Semiconductor Research Institute, Sendai. He led a group developing nanometer-scale GaAs static induction transistors for application in future communication circuits. Since 2008, he has been with the Gdańsk University of Technology. His current research interests include fabrication and physics of operation of poly- and nano-crystalline diamond devices and sensors for electrochemical applications.