Worst Case Uncertainty Construction via Multifrequency Gain Maximization With Application to Flutter Control

In the analysis of uncertain systems, we often search for a worst case perturbation that drives the $H_\infty $ gain of the system to its maximum over the set of allowable uncertainties. The classical technique yields an uncertainty sample that indeed maximizes the gain, but only at the single frequency where that maximum occurs. In contrast, this article considers a method to calculate a worst case perturbation that maximizes the gain of a system with mixed uncertainty at multiple frequencies simultaneously. This approach involves a nonlinear optimization that selects the worst case value of the uncertain parameters and the application of the boundary Nevanlinna–Pick interpolation to calculate the dynamic uncertainty sample. Such a perturbation can be used to augment Monte Carlo simulations of uncertain systems, especially if the system has multiple resonance frequencies. The worst case analysis of a flutter control system designed for a small flexible aircraft demonstrates the applicability of the proposed method.

used in analysis and occasionally in control synthesis, e.g., [2] and [3]. The worst case gain can be assessed by a skewed structured singular value (skewed-μ) calculation. This problem is known to be NP-hard [4]; therefore, efficient techniques for obtaining upper and lower bounds have been developed.
The upper bound calculation is turned into a convex optimization problem using D-G scales [5], [6]. The lower bound involves heuristics for constructing "bad" uncertainty samples, i.e., those that achieve large gains for the uncertain system. A common heuristic is the skewed-μ power iteration [7]. This yields the value of the uncertain parameters and a complex (structured) matrix, which causes the uncertain system to achieve the worst case gain lower bound at a given frequency. A linear time-invariant (LTI) uncertainty sample corresponding to the dynamic uncertainty is constructed by interpolating the complex matrix at the given frequency. This single-frequency interpolation is always possible using a stable LTI system with norm bounded by that of the complex matrix [8, Th. 9.1]. The upper and lower bound calculations are readily available in MATLAB [9], [10].
Expanding our previous work in [11], this article presents a new method to construct worst case mixed uncertainty samples. The proposed algorithm maximizes the gain of the uncertain system at multiple (given) frequency points. This is in contrast to the existing method that computes an uncertainty to maximize the worst case gain lower bound at a single frequency (often the peak frequency). The proposed method consists of performing a nonlinear optimization to find the worst case value of the uncertain parameters and the complex matrix uncertainty samples computed by the worst case gain lower bound power iteration at multiple frequencies. Next, the boundary Nevanlinna-Pick (BNP) method [12] is used to create a stable, norm-bounded LTI uncertainty that interpolates the collection of matrix samples.
This uncertainty sample is useful for further time and frequency domain analyses. For example, it can be incorporated in a higher fidelity nonlinear simulation [13], especially for systems with multiple resonance frequencies, such as hard disk drives [14] and flexible aircraft [15]. In addition, it can be useful as part of control design methods that require bad samples of dynamic uncertainty [2]. Selecting uncertainty samples that provide large gain at multiple frequencies could improve the convergence speed of such synthesis methods. The most closely related work is [16], which considers sample construction for randomly sampling an uncertainty set. The Nevanlinna–Pick theorem is employed to sample unit H∞-norm systems. The approach involves randomly chosen frequencies and data that are interpolated while constraining the resulting interpolant to be stable and norm bounded by one. The construction in [16] is similar to the one presented in this article. The main difference is that random uncertainty samples are created in [16], while our method aims to maximize gain at multiple frequencies.
The worst case analysis of a real-life flutter control example is provided to demonstrate the use of the algorithm. The controller and the aircraft model are taken from our previous work in [17]. Time-domain simulations with the worst case uncertainty sample are conducted to investigate the applicability of the flutter controller. The MATLAB implementation of the algorithm along with all the examples presented in this article can be downloaded from [18]. A short video presentation explaining our preliminary work is available at [11].
The rest of this article has the following outline. Section II details the problem and introduces the classical approach to solve it. In Section III, the worst case uncertainty construction is described for the purely parametric, purely dynamic, and the mixed uncertainty cases. The flutter control analysis problem is elaborated in Section IV. Conclusions are drawn in Section V.

A. Notation
Let X ∈ C^{r_X×c_X} and Y ∈ C^{r_Y×c_Y} be matrices with c_Y < r_X and r_Y < c_X, and partition X conformably with these dimensions. Denote the elementwise conjugate of Y by Y̅, its conjugate transpose by Y*, and its largest singular value by σ̄(Y). If X and Y are the same size, then X > Y (or <, ≥, ≤) denotes the elementwise inequality. For X = X* and Y = Y*, the expression X ≻ Y means that X − Y is positive definite. The symbols ≺, ⪰, and ⪯ are to be understood accordingly.
If G(s) is an LTI system, ‖G(s)‖∞ is its H∞-norm. That is, ‖G(s)‖∞ is the supremum of σ̄(G(jω)) over all frequencies if G(s) is stable and ∞ otherwise.
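As a quick illustration of this definition, the gridded characterization of the H∞-norm can be sketched in a few lines of Python (our own minimal example, not code from the article); the system G(s) = 1/(s + 1), whose H∞-norm is exactly 1, is a hypothetical stand-in:

```python
import numpy as np

def hinf_lower_bound(G_jw, omegas):
    """Grid-based lower bound on the H-infinity norm: the largest maximum
    singular value of G(j*omega) over the frequency grid."""
    return max(np.linalg.svd(G_jw(w), compute_uv=False)[0] for w in omegas)

# Hypothetical example: G(s) = 1/(s + 1), whose H-infinity norm is exactly 1
# (attained at omega = 0).
G = lambda w: np.array([[1.0 / (1j * w + 1.0)]])
grid = np.concatenate(([0.0], np.logspace(-3, 3, 2000)))
peak = hinf_lower_bound(G, grid)
print(peak)  # 1.0
```

A grid only gives a lower bound on the supremum; the production tools cited below use guaranteed bounds instead of naive gridding.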

B. Worst Case Gain
Consider the block diagram in Fig. 1. This is an interconnection of a stable, multi-input-multi-output (MIMO) LTI system M(s) and a structured uncertainty Δ(s). The uncertain system is

P_Δ(s) = F_U(M(s), Δ(s)).  (1)

Define the sets Δ_p and Δ_d of structured, unit norm bounded parametric and dynamic uncertainties, respectively, and let Δ denote the set of mixed uncertainties Δ(s) = diag(Δ_p, Δ_d(s)) with Δ_p ∈ Δ_p and Δ_d(s) ∈ Δ_d. Again, Δ(s) ∈ Δ means ‖Δ(s)‖∞ ≤ 1.
The worst case gain of the uncertain system is the maximum induced L_2 gain (i.e., H∞ norm) of P_Δ(s) over the set of allowable uncertainties

γ̂ = sup_{Δ(s)∈Δ} ‖F_U(M(s), Δ(s))‖∞.  (3)

The interconnection in Fig. 1 is assumed to be robustly stable, i.e., P_Δ(s) is stable for all Δ(s) ∈ Δ. This implies that γ̂ < ∞. The worst case gain defined in (3) is equivalent to calculating the peak gain of the system frequency by frequency. To make this statement precise, note that the response of Δ_d(s) ∈ Δ_d at any frequency is a block structured, complex matrix. It is therefore sufficient to define the set Q_d of block structured, unit norm bounded complex uncertainties. In addition, note that it is sufficient to restrict the complex matrices to be rank one as in the definition of Q_d [5], [8]. The set of mixed uncertainty samples at a fixed frequency is then Q = {diag(Δ_p, Q_d) : Δ_p ∈ Δ_p, Q_d ∈ Q_d}. The worst case gain calculation in (3) is equivalent to

γ̂ = sup_ω γ(ω)  (6)

where γ(ω) is the peak gain of F_U(M(s), Δ(s)) at ω, i.e., γ(ω) = max_{Q∈Q} σ̄(F_U(M(jω), Q)). Note that, in Q ∈ Q, the parametric uncertainty Δ_p ∈ Δ_p is allowed to vary from frequency to frequency. A standard approach is to approximate γ̂ by performing the calculation in (6) on a finite frequency grid. Specifically, a sufficiently dense grid {ω_k}_{k=1}^{N_ω} is chosen. At each frequency, γ(ω_k) is the result of a skewed-μ analysis, which, for many cases, is known to be NP-hard [4]. Instead, lower bounds L_k and upper bounds U_k with L_k ≤ γ(ω_k) ≤ U_k are computed. The upper bounds are computed via semidefinite programming involving scaling matrices to account for the structure of Δ [6]. The lower bounds are obtained employing a skewed-μ power iteration [7]. The power iteration at ω_k yields Q_k ∈ Q such that σ̄(F_U(M(jω_k), Q_k)) = L_k. Finally, the worst case gain is bounded by max_k L_k ≤ γ̂ ≤ max_k U_k. This computation is performed in the MATLAB function wcgain [9]. Similar calculations are done by the MATLAB-Simulink Systems Modeling, Analysis and Control (SMAC) toolbox [10].
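The frequency-by-frequency view can be made concrete with a small Python sketch (our own illustration; the matrix M and the scalar full-block uncertainty are hypothetical). It evaluates the upper linear fractional transformation F_U(M, Δ) at one frequency and maximizes its gain over boundary samples of a scalar |δ| ≤ 1, which suffices by the maximum modulus principle:

```python
import numpy as np

def upper_lft(M, Delta, n):
    """F_U(M, Delta) with Delta closing the first n inputs/outputs of M."""
    M11, M12 = M[:n, :n], M[:n, n:]
    M21, M22 = M[n:, :n], M[n:, n:]
    return M22 + M21 @ Delta @ np.linalg.solve(np.eye(n) - M11 @ Delta, M12)

# Scalar example (hypothetical data): M11 = 0.5, M12 = M21 = 1, M22 = 0.2.
M = np.array([[0.5, 1.0], [1.0, 0.2]])
# Maximize |F_U(M, delta)| over |delta| <= 1; by the maximum modulus
# principle, the maximum is attained on the boundary |delta| = 1.
thetas = np.linspace(0, 2 * np.pi, 4001)
gains = [abs(upper_lft(M, np.array([[np.exp(1j * t)]]), 1)[0, 0])
         for t in thetas]
print(max(gains))  # 2.2, attained at delta = 1
```

For this scalar example, the maximizer happens to be real (δ = 1), but in general the worst case sample at a frequency is a complex structured matrix, which is what the power iteration produces.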

C. Limitations of Maximizing the Gain at a Single Frequency
Suppose that the lower bound power iteration at ω_0 yields the lower bound L_0 and the worst case perturbation Q_0 = diag(Δ_{p,0}, Q_{d,0}). The parametric sample Δ_{p,0} can be substituted into P_Δ(s) directly, but Q_{d,0} is a complex matrix at this point. Therefore, the goal is to find an LTI uncertainty that interpolates Q_{d,0} at the single frequency ω_0. The uncertainty must be stable, unit norm bounded, and have the correct block structure, i.e., we want to find Δ_{d,s}(s) ∈ Δ_d with Δ_{d,s}(jω_0) = Q_{d,0} (the "s" subscript refers to single peak). The uncertainty Δ_s(s) = diag(Δ_{p,0}, Δ_{d,s}(s)) drives the gain of the uncertain system to the power iteration lower bound at ω_0; in other words, it satisfies σ̄(F_U(M(jω_0), Δ_s(jω_0))) = L_0. The construction of Δ_{d,s}(s) is given in the proof of Theorem 9.1 (small gain theorem) in [8] and is summarized in the following.
Each block Q_i ∈ C^{r_i×c_i} of Q_{d,0} is rank one with unit maximum singular value, i.e., each block can be expressed as Q_i = uv* for some vectors with ‖u‖_2 = ‖v‖_2 = 1. Let u_l denote the lth element of the vector u. It is possible to find a SISO transfer function û_l(s) such that û_l(jω_0) = u_l and ‖û_l(s)‖∞ ≤ |u_l|. This SISO interpolation can be performed with a constant or first-order transfer function û_l(s), as described in [8]. Each entry of u can be interpolated to obtain an r_i × 1 stable system û(s) with ‖û(s)‖∞ ≤ 1.
Similarly, each entry of v* can be interpolated to obtain a 1 × c_i stable system v̂(s) with ‖v̂(s)‖∞ ≤ 1. Finally, the block Δ_i(s) = û(s)v̂(s) is stable and satisfies Δ_i(jω_0) = Q_i and ‖Δ_i‖∞ ≤ 1. As a result, Δ_{d,s}(s) = diag(Δ_1(s), . . . , Δ_{N_d}(s)) ∈ Δ_d interpolates Q_{d,0} at frequency ω_0. The wcgain function computes the worst case uncertainty using this method. In the implementation, a bandpass filter is added to the uncertainty; this bandpass filter is omitted from the discussions in this article.
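The scalar interpolation step above can be sketched in Python. The all-pass realization below (a gain times (a − s)/(a + s) with a > 0 chosen to match the phase at ω_0) is our assumption of one standard construction, not necessarily the exact formula used in [8]:

```python
import numpy as np

def interpolate_scalar(c, w0):
    """Return a function u_hat(s) with u_hat(j*w0) = c and |u_hat(j*w)| = |c|
    at every frequency: a constant, or a gain times a stable first-order
    all-pass factor (a - s)/(a + s) with a > 0."""
    r, theta = abs(c), np.angle(c)                    # theta in (-pi, pi]
    if np.isclose(theta, 0.0):
        return lambda s: r + 0 * s                    # constant r
    if np.isclose(abs(theta), np.pi):
        return lambda s: -r + 0 * s                   # constant -r
    if theta < 0:                                     # phase in (-pi, 0)
        a = w0 / np.tan(-theta / 2.0)
        return lambda s: r * (a - s) / (a + s)
    a = w0 / np.tan((np.pi - theta) / 2.0)            # phase in (0, pi)
    return lambda s: -r * (a - s) / (a + s)

u_hat = interpolate_scalar(np.exp(-1j * np.pi / 4), 1.0)
print(u_hat(1j * 1.0))        # recovers e^{-j pi/4} at omega_0 = 1
print(abs(u_hat(1j * 10.0)))  # magnitude stays |c| at every other frequency
```

Repeating this entrywise for u and v* and forming û(s)v̂(s) gives the rank-one block described above.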
One entry of u can be normalized in each block Q_i = uv*. Specifically, ‖u‖_2 = 1 implies that u has at least one nonzero entry, say u_l ≠ 0. Then, Q_i = ũṽ*, where ũ := u/u_l and ṽ := u_l^* v. This normalizes the lth entry, i.e., ũ_l = 1, which can be interpolated by a constant. The number of states can be less if there are real numbers in the rank-one decomposition of the blocks of Q_{d,0}. The most notable situations when that happens are ω_0 = 0 and ω_0 = ∞. In those cases, M(jω_0) is a real matrix; therefore, in some cases, the power iteration results in a real Q_{d,0}. Typically, this method is used to interpolate the matrix Q_{d,0} that achieves the maximal lower bound at frequency ω_0. Therefore, Δ_s(s) achieves the largest gain found by the power iteration over all frequencies in the grid. However, the gain σ̄(F_U(M(jω), Δ_s(jω))) may not be large at other frequencies (ω ≠ ω_0). A simple example is presented to demonstrate this.
Example 1: Consider two bandpass filters F_1(s) and F_2(s) with peaks at 1 and 100 rad/s, respectively. Let δ_1 ∈ R with |δ_1| ≤ 1 and Δ_1(s) with ‖Δ_1(s)‖∞ ≤ 1. The uncertain system P_Δ(s) is the weighted sum of F_1(s) and F_2(s) with 10% dynamic uncertainty at low frequencies and 20% at high frequencies. Due to this construction, P_Δ(s) is sensitive to excitation at 1 and 100 rad/s. This is illustrated in Fig. 2, in which the gain of the nominal system P_0(s) and the worst case gain lower bound L(ω) are depicted. The maximum of L(ω) is at ω_0 = 1 rad/s. The skewed-μ power iteration yields the worst case perturbation at ω_0, and the classical interpolation constructs the corresponding dynamic uncertainty Δ_1(s); thus, Δ_s(s) = diag(I_2, Δ_1(s)). As depicted in Fig. 2, this uncertainty sample drives the magnitude of P_{Δs}(s) to its maximum at ω_0. However, the gain of P_{Δs}(s) at the second peak is less than the gain of the nominal system P_0(s).

D. Objective: Gain Maximization at Multiple Frequencies
The example from Section II-C motivates the need to move beyond the typical single-frequency interpolation in the worst case gain analysis. This section provides a concrete formulation for the multiple frequency interpolation problem addressed by this article.
Assume that the following are given: a robustly stable uncertain system (1) with Δ(s) ∈ Δ, a collection of frequencies {ω_k}_{k=1}^{N_ω}, and worst case gain lower bounds {L_k}_{k=1}^{N_ω} at those frequencies. Define the objective

J(Δ(s)) = Σ_{k=1}^{N_ω} σ̄(F_U(M(jω_k), Δ(jω_k)))  (9)

and find Δ_m(s) ∈ Δ for which J(Δ_m(s)) is maximal. The "m" subscript stands for multiple peak. For any Δ(s) ∈ Δ, J(Δ(s)) ≤ Σ_{k=1}^{N_ω} L_k =: J_U. If the system only has dynamic uncertainty (Δ = Δ_d), then it is possible to find a Δ_m(s) such that J(Δ_m(s)) = J_U. This follows from the BNP results in Section III-B. Specifically, a single Δ_m(s) ∈ Δ_d can be constructed to interpolate the complex matrices found by the power iteration at the given collection of frequencies. However, in the mixed case, it is generally not possible to find a Δ_m(s) such that J(Δ_m(s)) = J_U. The parametric uncertainty Δ_p couples the frequencies together, and different values of the same parameter may be required to achieve the lower bound at each frequency.
To address these issues, we propose a search over the parametric uncertainty and an interpolation of the dynamic uncertainty. Specifically, a nonlinear optimization is performed, which yields Δ_{p,m} ∈ Δ_p and the samples {Q_{d,k}}_{k=1}^{N_ω} of the complex dynamic uncertainty. The interpolant Δ_{d,m}(s) ∈ Δ_d is then obtained by the application of the BNP interpolation method.

III. MIXED UNCERTAINTY CONSTRUCTION TO MAXIMIZE THE GAIN AT MULTIPLE FREQUENCIES

This section describes the solution of the problem formulated in Section II-D. First, the case is treated when there is only dynamic uncertainty in the system (i.e., when Δ = Δ_d). This is a restatement of the results in [11]. The overview of the method is given in Algorithm 1. Then, the purely parametric uncertainty case (Δ = Δ_p) is handled. These two methods are combined in Algorithm 2 to construct the worst case mixed uncertainty. Finally, Section III-E presents how the algorithm is validated using an array of example systems. The main theoretical contribution of this article compared to [11] is in Sections III-C and III-D and also in Appendix A.

A. Boundary Nevanlinna-Pick Interpolation
The BNP interpolation method is detailed next; the algorithm is given in [19]. Theorem 1 characterizes the interpolants in terms of the boundary Pick matrix H(ρ): 1) every interpolant satisfies H(ρ) ⪰ 0, and 2) if H(ρ) ≻ 0, an interpolant can be constructed explicitly from any q × q rational function G(s) analytic on the closed right-half-plane with ‖G(s)‖∞ ≤ 1. In the rest of this section, several modifications are added to the basic BNP interpolation result in Theorem 1 that make it applicable to worst case dynamic uncertainty construction. First, the theorem assumes that the interpolation vectors are of the same dimension. Zero-padding is used if the dimensions are not equal. For example, assume that {u_k}_{k=1}^{N_ϑ} and {v_k}_{k=1}^{N_ϑ} have dimensions r and c, respectively, with r > c. In this case, the vectors {v_k}_{k=1}^{N_ϑ} are augmented with zeros; similarly, the vectors {u_k}_{k=1}^{N_ϑ} are augmented with zeros when r < c. Next, the interpolant provided by Theorem 1 is not necessarily a real system, i.e., its coefficients in transfer function or state-space form can be complex. A system with real coefficients is obtained by interpolating the given data and its complex conjugate. Specifically, assume that the given interpolation data are {u_k}, {v_k}, and a collection of nonnegative frequencies {ω_k}_{k=1}^{N_ω}. The interpolant in Theorem 1 is constructed with the conjugate-augmented data {u_k, u̅_k} and {v_k, v̅_k} at the frequencies {ω_k, −ω_k} (16). If zero is among the frequency points, then it is not duplicated. For simplicity, we always pick G(s) = I. In addition, the interpolant satisfies −u_k* Δ_d′(ϑ_k) v_k = ρ_k, and hence, smaller values of {ρ_k}_{k=1}^{N_ϑ} are related to smaller derivatives of the interpolant. Specifically, if Δ_d(s) is SISO, smaller ρ_k values mean that the phase of Δ_d(s) varies more gradually with frequency (see Example 2). Because of the construction of the data in (16), there are only N_ω independent variables in {ρ_k}_{k=1}^{N_ϑ} since, for negative frequencies, the ρ_k coincide with those of the corresponding positive frequencies. The following optimization is formulated to minimize the ρ_k. Let R := diag(ρ_1, . . . , ρ_{N_ω}) so that diag(R, R) is the main diagonal of the boundary Pick matrix H(ρ) defined in (11).
Define H_0 := H(ρ) − diag(R, R), and solve the optimization (17) to obtain {ρ_k}_{k=1}^{N_ω}. The objective function ρ̄ + trace(R) combines a bound ρ̄ on the peak value and the sum of {ρ_k}_{k=1}^{N_ω}. The two terms are weighted equally in this formulation, but alternative weightings could be used. The optimization also includes an upper bound constraint on the condition number κ of the boundary Pick matrix H(ρ). This constraint improves the conditioning of the matrix inversion H(ρ)^{-1} that appears in (15). The condition number bound is selected as κ = 10^4. The following example illustrates the effect of this optimization on the phase of the interpolant.

Fig. 3. Effect of the values of the derivatives when using the BNP interpolation.
Example 2 (Role of the Derivatives): Interpolate the complex numbers e^{−jπ/4} and e^{−j5π/2} at the frequencies of 1 and 100 rad/s, respectively. When using the optimization in (17), the derivatives become 0.72 and 0.02. Perform the interpolation two more times, but, instead of minimizing the derivatives, set them both to 1 and then to 10. Because of the BNP construction, the magnitude of the resulting transfer functions is one at all frequencies. The phase of these three transfer functions is depicted in Fig. 3. The figure clearly illustrates the effect of the derivatives on the interpolant: smaller derivative values yield a more gradual phase transition between the interpolation points.

B. Interpolation of the Dynamic Uncertainty
Using the method detailed in Section III-A, the worst case dynamic uncertainty construction is described next. The following are given: a robustly stable uncertain system (1) along with a block structure Δ = Δ_d, distinct frequencies {ω_k}_{k=1}^{N_ω}, and worst case gain lower bounds {L_k}_{k=1}^{N_ω} at those frequencies. We want to find a Δ_{d,m}(s) ∈ Δ_d such that

σ̄(F_U(M(jω_k), Δ_{d,m}(jω_k))) = L_k  (18)

for k = 1, . . . , N_ω. To achieve this, we first compute the complex uncertainty samples at {ω_k}_{k=1}^{N_ω} and use the BNP interpolation block by block to obtain the interpolant Δ_{d,m}(s). The method is summarized in Algorithm 1.
The worst case uncertainty samples {Q_{d,k}}_{k=1}^{N_ω} ⊂ Q_d are complex matrices computed on the frequency grid {ω_k}_{k=1}^{N_ω} using the existing worst case gain lower bound power iteration [7]. The uncertainty Δ_{d,m}(s) ∈ Δ_d is the result of interpolation between these matrices. The uncertainty samples have block-diagonal structure, i.e., Q_{d,k} = diag(Q_{1,k}, . . . , Q_{N_b,k}). Thus, the interpolation procedure in Theorem 1 and Section III-A is repeated for each block separately. For the ith block Q_{i,k} of the sample Q_{d,k}, compute a rank-one decomposition at each frequency so that Q_{i,k} = u_k v_k*, k = 1, . . . , N_ω. If Q_{i,k} is not square, add zeros at the end of u_k or v_k, whichever has fewer elements, so that they have the same size. Form the input of the BNP procedure as in (16). Apply the BNP interpolation as described in Section III-A. This procedure interpolates through the rank-one blocks of the uncertainty sample. Every block of the resulting Δ_d(s) will interpolate the {u_k}_{k=1}^{N_ϑ} and {v_k}_{k=1}^{N_ϑ} corresponding to that block, but the blocks can, in general, be full rank matrices at the interpolation frequencies.

Algorithm 1: Worst Case Dynamic Uncertainty Construction
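The rank-one factorization and zero-padding step can be sketched in Python as follows (our own illustration; the 3 × 2 sample block is hypothetical):

```python
import numpy as np

def rank_one_factors(Q):
    """Factor a (numerically) rank-one block Q = u v* with unit maximum
    singular value and zero-pad the shorter vector so u and v have equal
    length."""
    U, s, Vh = np.linalg.svd(Q)
    u, v = U[:, 0], Vh[0, :].conj()        # Q ~ s[0] * u v*, here s[0] = 1
    n = max(len(u), len(v))
    u = np.pad(u, (0, n - len(u)))
    v = np.pad(v, (0, n - len(v)))
    return u, v

# Hypothetical 3x2 sample with unit maximum singular value.
u0 = np.array([1.0, 1j, 0.5]) / np.linalg.norm([1.0, 1j, 0.5])
v0 = np.array([1.0, -1.0]) / np.sqrt(2)
Q = np.outer(u0, v0.conj())
u, v = rank_one_factors(Q)
# The outer product of the (unpadded) factors recovers Q.
print(np.allclose(np.outer(u[:3], v[:2].conj()), Q))  # True
```

The SVD only determines u and v up to a common phase, which cancels in the outer product, so the recovered factors are equally valid interpolation data.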
In Theorem 1, Δ_d(s) interpolates the vectors u_k and v_k in the sense that Δ_d(jω_k)v_k = u_k for k = 1, . . . , N_ω. At ω_k, the lower bound power iteration yields Q_{d,k} for which σ̄(F_U(M(jω_k), Q_{d,k})) = L_k. This means that there exist unit vectors u_{p,k} and v_{p,k} such that F_U(M(jω_k), Q_{d,k}) u_{p,k} = L_k v_{p,k}. (The "p" subscript refers to the word performance since, in skewed-μ analysis, e and d are usually called performance channels, and the fictitious uncertainty block associated with them is called the performance block.) The interpolation means that Δ_{d,m}(jω_k)v_k = u_k. This is sufficient since the uncertainty then satisfies the equations in (18). This applies when Δ(s) is a full block uncertainty; the generalization to the case when Δ(s) has a block structure is straightforward but is omitted for notational simplicity.
Note that, because of the definition of A_0 in (14) and our choice of G(s) = I, the state order of the interpolant in Theorem 1 is N_ϑ. Because the interpolation is repeated for every block, the number of states in Δ_{d,m}(s) is N_ϑ N_d = 2N_ω N_d (or (2N_ω − 1)N_d if zero is among the frequency points). Thus, if N_ω = 1, this method generally provides a lower dimensional uncertainty sample than the classical solution in Section II-C.

C. Optimization of the Parametric Uncertainty
Before we study the construction of the mixed uncertainty sample, let us consider the case when there is only parametric uncertainty in the system, i.e., Δ = Δ_p. In this case, the objective function in (9) reduces to J(Δ_p) = Σ_{k=1}^{N_ω} σ̄(F_U(M(jω_k), Δ_p)), and the goal is to find

Δ_{p,m} = arg max_{Δ_p ∈ Δ_p} J(Δ_p).  (20)
Since the parametric uncertainty couples the frequencies together, it is not possible to maximize J at the given frequency points independently. Therefore, there is no guarantee that a Δ_p ∈ Δ_p exists such that J(Δ_p) = J_U. Even if such a Δ_p does exist, J is nonlinear and nonconvex; therefore, there is no guarantee that it can be found.
To perform the optimization in (20), a multidimensional version of the interval search is performed, combined with a gradient ascent algorithm. We take advantage of the fact that the independent variables of Δ_p form an N_p-dimensional hypercube. First, J is evaluated at the center and at the corners of the hypercube. Then, the hypercube is split into 2^{N_p} smaller cubes that all have the center and one of the original corners among their respective corners. Finally, the small hypercube that contains the two points with the highest objective values is selected. This process is repeated until convergence or until the maximum number of objective function evaluations is reached. A gradient ascent search implemented in the MATLAB function fmincon with the "interior-point" solver is then run starting from the resulting point.
Another solution could be to simply run a nonlinear optimization algorithm starting from the center of the hypercube. We use 37 example systems to test our method and compare it to potential alternatives (see Section III-E for more information). Based on our test results with the 30 examples that have uncertain parameters, the cube splitting algorithm combined with a gradient ascent performs better than a simple nonlinear optimization. The advantage of our solution arises from the fact that Δ_{p,m} is often close to one of the corners (or to the border) of Δ_p. On the other hand, the number of corner points of each hypercube grows exponentially with N_p. Due to constraints on computation time, this limits the accuracy of the result for problems with a large number of uncertain parameters.
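A simplified Python sketch of the hypercube refinement is given below (our own condensation: it keeps the subcube spanned by the center and the best corner, tracks the best point seen, and omits the final fmincon stage):

```python
import numpy as np
from itertools import product

def hypercube_search(J, n, iters=20):
    """Refine [-1, 1]^n toward a maximizer of J: evaluate the center and the
    2^n corners, then keep the subcube spanned by the center and the best
    corner. (Simplified: the paper's version keeps the subcube containing
    the two highest points and finishes with a gradient-based search.)"""
    lo, hi = -np.ones(n), np.ones(n)
    best_x, best_val = None, -np.inf
    for _ in range(iters):
        center = (lo + hi) / 2
        pts = [center] + [np.array(c) for c in product(*zip(lo, hi))]
        vals = [J(p) for p in pts]
        if max(vals) > best_val:
            best_val, best_x = max(vals), pts[int(np.argmax(vals))]
        corner = pts[1 + int(np.argmax(vals[1:]))]   # best corner
        lo, hi = np.minimum(center, corner), np.maximum(center, corner)
    return best_x, best_val

# Concave test objective with its maximum at (0.9, 0.9), near a corner.
x_star, _ = hypercube_search(lambda x: -np.sum((x - 0.9) ** 2), n=2)
print(x_star)  # approaches [0.9, 0.9]
```

Each iteration halves the box, so the evaluated points concentrate near the maximizer; when the maximizer sits exactly on a corner, it is found in the first sweep.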
The following example illustrates this solution. Example 3: Consider the second-order system with resonance frequency ω_r and damping ξ. The parameters of this system are uncertain such that ω_r = 1 + 0.3δ_1 and ξ = (1/ …, with |δ_2| ≤ 1. Let us maximize the gain of P_Δ(s) at ω_1 = 0.75 rad/s and ω_2 = 1 rad/s simultaneously. Fig. 4 shows a contour plot of the objective function. The figure also shows the points where the objective function was evaluated (black ×) during the hypercube refinement. The optimization yields Δ_{p,m} = diag(0.25·I_2, −1), which is the maximizer on Δ_p with J(Δ_{p,m}) = 2.89. The upper bound of the objective function is J_U = 3.02, which is clearly unattainable for this system.

D. Construction of Worst Case Mixed Uncertainty
If the system has dynamic uncertainty only, the interpolation method in Section III-B yields an uncertainty sample, which ensures that the worst case lower bound is reached at the given frequencies. As demonstrated in Section III-C, this is not necessarily possible for the mixed uncertainty case.
To maximize the objective function J in (9), consider the function J̃ : Δ_p → R that maps a parametric sample to the sum of the lower bounds achievable with it. To evaluate J̃, substitute Δ_p into P_Δ(s) and perform the lower bound power iteration on Δ_d at the given frequencies {ω_k}_{k=1}^{N_ω}. This yields the complex uncertainty samples {Q_{d,k}}_{k=1}^{N_ω} that maximize the largest singular values at the given frequencies. The interpolation method in Section III-B ensures that these singular values are attained by the interpolant; therefore, maximizing J̃ maximizes the achievable value of J. To find the maximum of J̃, the optimization in Section III-C is employed. Aside from Δ_{p,m}, this optimization also yields the samples of the complex uncertainty {Q_{d,k}}_{k=1}^{N_ω}. These samples are interpolated using Algorithm 1 to obtain Δ_{d,m}(s). The worst case uncertainty is Δ_m(s) = diag(Δ_{p,m}, Δ_{d,m}(s)). This method is summarized in Algorithm 2. Finally, the demonstrative example in Section II-C is continued.

Algorithm 2: Worst Case Mixed Uncertainty Construction
Example 4 (Example 1 Continued): The worst case gain lower bound in Fig. 2 has two peaks, at ω_1 = 1 rad/s and ω_2 = 100 rad/s. The proposed algorithm is used to find the worst case uncertainty Δ_m(s) that maximizes the gain of P_Δ(s) at ω_1 and ω_2 simultaneously. The value of the objective function J̃(δ_1 I_2) as a function of the uncertain parameter δ_1 is depicted in Fig. 5. The maximum of J̃(δ_1 I_2) occurs at δ_1 = −1, which makes Δ_{p,m} = −I_2. The skewed-μ power iteration then yields the dynamic uncertainty samples, which are interpolated to obtain Δ_{d,m}(s). As shown in Fig. 6, the gain of P_{Δm}(s) is large at both ω_1 and ω_2. The worst case gain lower bounds are L_1 = 2.64 and L_2 = 2.28, which makes the upper bound of the objective function J_U = 4.92. As depicted in Fig. 5, the objective function does not reach this theoretical upper bound for any Δ_p ∈ Δ_p. The gain of P_{Δm}(s) attains L_2 at ω_2 but only at the cost of some drop at ω_1.

E. Basic Validation of the Method
We use 37 example systems to test our method and compare it to potential alternatives. Thirty of these examples are based on [2]. The rest are composed of randomly generated systems and physical systems that we encountered in our research. Almost all the examples are closed-loop systems with a controller designed using the method in [3]. The state dimension of M(s) varies between 4 and 57 (with an average of 14). The systems have between zero and 11 uncertain parameters and between zero and three dynamic uncertainty blocks. Various components of the algorithm were chosen based on the test results with these examples, e.g., the weighting in (17) and the nonlinear search method in Section III-C.
The computations are performed on a computer that runs Ubuntu 18.04 LTS and features a four-core 2.6 GHz Intel Core i5 processor with 8-GB memory. The algorithm is run on MATLAB R2016b. For our examples, the computation takes no more than 2 min, with approximately 20 s on average.

IV. WORST CASE ANALYSIS OF A FLUTTER CONTROL LOOP
In this section, we analyze a flutter suppression controller for a small flexible aircraft. Flutter is an aerodynamic instability caused by the resonance between the flexible structure of an aircraft and the surrounding airflow. The flutter speed, at which the resulting vibrations become unstable, can be pushed outside the operational regime using active control, as shown in [17]. The details of the modeling, flutter control design, and closed-loop performance evaluation are found in [17]. In this article, we conduct worst case time- and frequency-domain analyses based on the uncertain control-oriented model of the aircraft.

A. Uncertain Model of the Flutter Control Loop
The closed flutter control loop is illustrated in Fig. 7.
Here, K(s) is the MIMO flutter controller, and G_Δ(s) is the uncertain model of the aircraft. The uncertain parameters are the airspeed (V), the natural frequency of the structural dynamics varying within 1% of the nominal (ω_0), and the damping of the structural dynamics varying within 10% of the nominal (ξ). The state-space representation of the system dynamics is of the form ẋ = A(p)x + B(p)u, y = C(p)x + D(p)u, where p = [V ω_0 ξ]^T is the vector of parameters. The state of the system consists of the velocity components along the three coordinate axes in the body frame (u, v, w); roll, pitch, and yaw angular rates (p, q, r); pitch angle (θ); nine modal coordinates and their derivatives; two lag states; and four actuator states (two for each actuator on both wings). The model also contains the fourth-order Padé approximation of a 15-ms output delay. Altogether, this results in a state order of 35. For further details, see [17] and the references therein. Let p_0 denote the nominal (mean) value of the parameter vector p, and let S_p denote a diagonal matrix with the ranges of the parameters in p on the main diagonal. Then, p = p_0 + S_p δ, where δ = [δ_V δ_{ω_0} δ_ξ]^T is the vector of uncertain parameters. All entries of δ are real numbers allowed to vary between ±1. The state-space matrices of the uncertain system are assumed to have the affine form A(δ) = A_0 + Σ_i δ_i A_i (and B(δ), C(δ), and D(δ) similarly). Least-squares fitting is used to obtain the coefficient matrices, and the generalized Morton method is applied to reduce the repetition of the uncertain parameters [20]. The latter is achieved with the function gmorton in the Linear Fractional Representation Toolbox [21] in MATLAB. Input multiplicative uncertainty is added to the system to account for neglected dynamics and cross coupling between the input channels. The dynamic uncertainty Δ_d(s) is a stable LTI system with two inputs and two outputs and ‖Δ_d(s)‖∞ ≤ 1. The uncertain system is of the input multiplicative form with weight W_d(s) on Δ_d(s). The fourth-order weight W_d(s) is obtained by comparing the singular values of C(δ)(sI − A(δ))^{-1}B(δ) + D(δ) to a high fidelity model.
The system dynamics are assumed to be well represented by G_Δ(s) up to 100 rad/s, beyond which the magnitude of W_d(s) grows beyond 1, indicating more than 100% uncertainty. The gain of the resulting uncertain system is depicted in Fig. 9. The two resonance peaks observable in the vicinity of 50 rad/s correspond to the two flutter modes.
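The least-squares fitting step mentioned above can be sketched in Python (the affine dependence, the sample count, and the matrix sizes are hypothetical stand-ins, and the gmorton reduction step is not shown):

```python
import numpy as np

def fit_affine(deltas, A_samples):
    """Least-squares fit of A(delta) = A0 + sum_i delta_i * Ai from samples.
    deltas: (N, n_p) parameter samples; A_samples: (N, n, m) matrices."""
    N, n_p = deltas.shape
    Phi = np.hstack([np.ones((N, 1)), deltas])        # regressors [1, delta]
    Y = A_samples.reshape(N, -1)                      # vectorized matrices
    Theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    n, m = A_samples.shape[1], A_samples.shape[2]
    return Theta.reshape(n_p + 1, n, m)               # [A0, A1, ..., A_{n_p}]

# Recover a known affine dependence from noiseless samples (hypothetical data).
rng = np.random.default_rng(0)
A_true = rng.standard_normal((3, 4, 4))               # A0, A1, A2 for n_p = 2
deltas = rng.uniform(-1, 1, (20, 2))
A_samples = np.array([A_true[0] + d[0] * A_true[1] + d[1] * A_true[2]
                      for d in deltas])
coeffs = fit_affine(deltas, A_samples)
print(np.allclose(coeffs, A_true))  # True
```

With noiseless samples and more samples than coefficients, the fit recovers the coefficient matrices exactly; in the aircraft application, the samples come from evaluating the model over the parameter ranges.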

B. Worst Case Analysis of the Closed Loop
The controller K(s) is designed with fixed bandwidth to minimize the closed-loop sensitivity function S_Δ(s) = (I + G_Δ(s)K(s))^{-1}. This aims to achieve robust stabilization and avoid the excitation of uncertain high-frequency dynamics. The controller acts on the system at low frequencies and rolls off around 100 rad/s, where the uncertainty in the dynamics is high. The resulting S_Δ(s) is robustly stable. In the remainder of this section, we analyze how well the sensitivity minimization is achieved and how well the controller performs in the time domain.
The uncertainty sets of S_Δ(s) are inherited from G_Δ(s). The worst case gain lower bound of S_Δ(s) is depicted in Fig. 10. The gain is high around the flutter frequencies (50 rad/s), but the controller clearly provides damping. Two peaks are observable in the gain, L_1 = 4.228 at ω_1 = 26.59 rad/s and a slightly higher L_2 = 4.229 at ω_2 = 61.09 rad/s. The classical worst case uncertainty Δ_s(s) maximizes the gain of the system at ω_2 only. Based on Fig. 10, the gain of S_{Δs}(s), indeed, reaches L_2 at ω_2, but it is very close to the nominal gain at the rest of the frequencies. Algorithm 2, however, finds an uncertainty sample that drives the gain of S_{Δm}(s) to L_1 at ω_1 and also produces high gain at ω_2, as illustrated in Fig. 10. The Bode sensitivity integral provides insight for this example. For our MIMO control system with an unstable open loop, the integral is of the form

∫_0^∞ ln |det S_Δ(jω)| dω = π Σ_k Re(p_k)  (23)

where the sum runs over all the right half-plane poles p_k of the open-loop system G_Δ(s)K(s) [22, Sec. 6.2.3]. The right-hand side of (23) depends on the uncertainty since the uncertain parameters in S_Δ(s) move the poles of G_Δ(s). G_0(s) is stable; therefore, the integral for S_0(s) evaluates to zero. The classical uncertainty also produces a stable G_{Δs}(s), which means that the integral for S_{Δs}(s) is also zero. The multipeak uncertainty pushes the poles corresponding to the flutter modes into the right half of the complex plane, which makes the right-hand side of (23) positive. This is why such a sizable difference is observed between the gains of S_{Δs}(s) and S_{Δm}(s) in Fig. 10.
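A scalar analogue of (23) can be checked numerically in Python (the loop transfer function below is our own hypothetical example with a single right-half-plane pole at s = 1 and relative degree two, so no correction term appears):

```python
import numpy as np

# Scalar analogue of the Bode sensitivity integral (23): for the hypothetical
# loop L(s) = 30/((s - 1)(s + 10)), the closed loop S = 1/(1 + L) is stable,
# the open loop has one right-half-plane pole at s = 1, and the relative
# degree is two, so  integral_0^inf ln|S(jw)| dw = pi * Re(1) = pi.
def log_abs_S(w):
    s = 1j * w
    L = 30.0 / ((s - 1.0) * (s + 10.0))
    return np.log(np.abs(1.0 / (1.0 + L)))

w = np.linspace(0.0, 1.0e4, 2_000_001)
y = log_abs_S(w)
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w))  # trapezoidal rule
print(integral, np.pi)  # the truncated integral comes out close to pi
```

The small gap between the numerical value and π is the truncation error of the tail beyond the upper integration limit; the sensitivity reduction below 0 dB at some frequencies must be paid for by amplification elsewhere, which is exactly the mechanism discussed above.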
Since we are interested in the damping that the flutter controller provides, we study the closed-loop transfer function from the input disturbance d_u = [d_{u,L} d_{u,R}]^T to the relative acceleration of the wing tips â_z in Fig. 7. More specifically, let us denote the transfer function from d_{u,L} to a_{z,L} by T_Δ(s). The step response of T_Δ(s) is illustrated in Fig. 11 for the nominal value and the two worst case uncertainties. Notice that, in addition to the high-frequency harmonic observable in both worst cases, there is an additional low-frequency harmonic due to Δ_m(s). The relative acceleration also clearly assumes higher values for this uncertainty; therefore, the use of Algorithm 2 is deemed necessary in the evaluation of the closed-loop performance. The response of the closed loop from any input in d_u to any output y_m or â_z is considerably worse with the uncertainty sample Δ_m(s) than with Δ_s(s). The relative vertical acceleration was chosen for demonstration because it is related to the forces acting on the wing.

V. CONCLUSION
A worst case perturbation construction method is provided in this article for systems with mixed uncertainty. As opposed to the classical approach, our technique maximizes the gain of the system at multiple frequency points simultaneously. A nonlinear multidimensional interval search is combined with a gradient ascent algorithm to find the value of the uncertain parameters that maximizes the sum of the largest singular values of the system at the given frequencies. The corresponding complex matrix samples of the dynamic uncertainty are interpolated using the BNP theorem. The resulting uncertainty is stable, has the correct block structure, and is norm bounded by one. In the flexible aircraft control example, this construction method yields considerably worse time-domain behavior than the classical approach.

APPENDIX A

Define the transformation T. Then, C_0 = [F_1 F̅_1], which makes

C_real = C_0 T^{-1} = 2[Re(F_1) Im(F_1)].  (24)
To prove that B_real = T B_0 is a real-valued matrix, we first establish that, because of the definition of H(ρ) in (11), it can be written in a 2 × 2 block form with blocks S and H_0, where S = S* and H_0 = H_0*. (H_0 is the boundary Pick matrix corresponding to the interpolation data without the conjugation.) Using both expansions of the block matrix inverse lemma, we obtain that the inverse has the analogous block form with blocks Y and X, where Y = Y* and X* = X. Defining F_2 accordingly, the resulting product has the structure [F_3 F̅_3] with F_3 = F_2 X + F̅_2 Y. Because T* = (1/2)T^{-1}, we have B_real = (1/2)(B_0* T^{-1})*, and since B_0* T^{-1} has the same structure as C_real in (24), B_real is real valued. If zero is among the frequency points, delete the first row and column of A_real and D_real, the first row of B_real, and the first column of C_real. With that, the elements corresponding to the data duplicated by the conjugation are removed.