Extended Fixed-Point Methods for the Computation of Virtual Analog Models

A family of iterative root-finding methods for systems of nonlinear discrete-time equations is presented, with a formulation that places it between the Fixed-Point (FP) and Newton-Raphson (NR) methods. The family is applicable provided that the Jacobian matrix of the nonlinear system has a spectral radius less than one. By varying the order of a matrix geometric sum that approximates the inverse Jacobian matrix, root-finding at any iteration can be steered toward the FP method or, conversely, toward the NR method, becoming identical to either of them when the order is zero or infinitely large, respectively. Since the methods in this family do not require the solution of a linear system at each iteration, as NR does, their computational cost makes them attractive for the online digital implementation of nonlinear models. As an application example, a Virtual Analog model of the voltage-controlled filter onboard a popular music synthesizer is tested, showing that for some orders of the aforementioned geometric sum the proposed methods outperform both FP and NR in terms of computational cost while exhibiting the same accuracy.


I. INTRODUCTION
Virtual Analog (VA) modeling is the practice of designing linear or nonlinear digital filters that emulate the behavior of reference analog audio circuitry [1], [2], [3]. Models of audio circuits are usually characterized by systems of Ordinary Differential Equations (ODEs). Several strategies to turn systems of ODEs into discrete-time filters have been developed in the literature, ranging from Kirchhoff domain methods, like State-Space methods [4], [5], Port Hamiltonian methods [6], and Delay-Free Loop Filter Networks [7], [8], to methods in domains in which Kirchhoff variables (i.e., voltages and currents) are transformed into other kinds of variables, like wave variables in Wave Digital Filters [9], [10], [11], [12].
All the aforementioned VA modeling strategies rely on two important design choices that greatly impact the implementation of the resulting digital model: the discretization scheme used to approximate continuous-time derivatives (e.g., Forward Euler, Backward Euler, trapezoidal rule) and the technique employed at each sampling step to find the solution of the (possibly nonlinear) system of finite-difference equations in the discrete-time domain. On one hand, non-iterative techniques have recently been developed to compute VA and digital acoustic models [13], [14]; they are characterized by a fully predictable computation time, which is a valuable perk in online applications; however, their accuracy and stability degrade when the sampling frequency is reduced. On the other hand, iterative techniques, like the Fixed Point (FP) method [7], [11], [15], [16] and the Newton-Raphson (NR) method [8], [12], [17], [18], are characterized by a less predictable behavior in terms of computational cost. However, under the assumption of convergence, they are capable of returning arbitrarily accurate solutions of systems of implicit equations, which are also referred to as delay-free loops in the literature on VA modeling [7], [8].
The FP method is characterized by a simple and computationally inexpensive update formula, but the number of iterations it needs to converge strongly depends on the distance of the initial guess from the solution [7]. The convergence rate of the FP method is generally linear. The update formula of the NR method, instead, is computationally more costly, since at each iteration it requires forming an N × N Jacobian matrix and solving an N-dimensional linear system of equations; however, NR converges with quadratic rate (i.e., generally requiring fewer iterations than the FP method) provided the initial guess is properly chosen [8], [18].
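To make the two baseline methods concrete, a minimal NumPy sketch of one FP update and one NR update for an N-dimensional system v = c(v) follows (the vectors u and x are assumed folded into c; the function names are illustrative, and this is a Python sketch, not the MATLAB code used in the application example):

```python
import numpy as np

def fp_step(c, v):
    """One Fixed-Point iteration: v <- c(v)."""
    return c(v)

def nr_step(c, jac_c, v):
    """One Newton-Raphson iteration on f(v) = v - c(v).

    Forms the N x N Jacobian J_f = I - J_c(v) and solves one
    N-dimensional linear system, as described in the text.
    """
    J_f = np.eye(v.size) - jac_c(v)
    step = np.linalg.solve(J_f, v - c(v))
    return v - step
```

On a contracting map, repeated calls to fp_step converge linearly, while nr_step typically needs far fewer iterations at the price of one linear solve per call.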
In the literature on circuit simulation, see e.g. [17], [19], generalized iterative methods have been proposed which are based on parametric update formulas and reduce to the FP method or the NR method as particular cases. Inspired by such works, in this letter we derive a family of extended FP methods by expressing the inverse Jacobian matrix appearing in the update formula of the NR method as a geometric sum that converges to the inverse Jacobian matrix itself when its order L goes to infinity. The family of proposed extended FP methods is obtained by truncating the matrix geometric sum to different orders 0 ≤ L < +∞. The FP method is also obtained as a particular case of the presented generalized formulation, by setting L = 0. The parameter L can be varied (also at runtime) to select methods that, as will be proved, converge faster than FP while avoiding the linear system solution required at each NR iteration. The simultaneous presence of these two properties makes the computational cost of these methods attractive for the online digital implementation of nonlinear circuits.
This letter is structured as follows. Section II presents the general update formula of the proposed extended FP methods. Section III provides a theoretical analysis of their convergence condition, convergence rate, and computational cost. Section IV describes, as an application example, a VA model of the voltage-controlled filter onboard a popular music synthesizer, showing that for some values of L the proposed methods perform better than FP and NR in terms of computational cost, while exhibiting the same accuracy. Section V concludes this letter.

II. A FAMILY OF EXTENDED FP METHODS
After the discretization of time derivatives in the reference system of ODEs, a system of finite-difference equations is obtained, which needs to be solved at each sampling step. Let the system contain N unknown signals v_1, ..., v_N, collected in the vector v, along with the input vector u and the state vector x. The output at each step is found by solving

v = c(v, u, x),    (1)

where c is a (generally nonlinear) vector mapping. Both u and x are known and constant during each sampling step by definition.
The FP method iteratively searches for a solution of (1) by repeatedly applying the map c:

v^(γ+1) = c(v^(γ), u, x),    (2)

where the positive integer γ in round brackets is the iteration index. The NR method iteratively searches for the roots of the N-dimensional function f(v, u, x) = v − c(v, u, x). Since u and x are constant, this search is performed by iterating only on v as follows:

v^(γ+1) = v^(γ) − J_f(v^(γ))^{−1} f(v^(γ), u, x),    (3)

where J_f = [∂f_i/∂v_n] is the Jacobian matrix of the function f. Since J_f = I − J_c, we get:

v^(γ+1) = v^(γ) − (I − J_c(v^(γ)))^{−1} (v^(γ) − c(v^(γ))),

where, here and later on, the vectors u and x are not indicated in the arguments of c and of its Jacobian for ease of notation. Now, let A be a square matrix and let ρ(A) be its spectral radius. It is known that if ρ(A) < 1 then the series ∑_{l=0}^{∞} A^l is convergent [20]:

(I − A)^{−1} = ∑_{l=0}^{∞} A^l.    (4)

Equation (4) suggests looking at the NR method (3) as the limit case of a family of methods based on a partial sum ∑_{l=0}^{L} A^l, which converges to (I − A)^{−1} when the order L → ∞. Similarly to [19, eq. (9)], we therefore express the general update formula of the proposed family of methods in a parametric way as:

v^(γ+1) = v^(γ) − P(v^(γ)) (v^(γ) − c(v^(γ))),    (5)

and later we set the generic matrix P(v^(γ)) = ∑_{l=0}^{L} J_c(v^(γ))^l, obtaining:

v^(γ+1) = v^(γ) − [∑_{l=0}^{L} J_c(v^(γ))^l] (v^(γ) − c(v^(γ))).    (6)

Equation (6) therefore reduces to the update formula of NR in the limit L → ∞ and to the FP update (2) for L = 0, since in the latter case the sum equals the identity matrix and (6) becomes v^(γ+1) = c(v^(γ)). For this reason, we will refer to the proposed family of methods with 0 ≤ L < +∞ as extended FP methods.
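A compact NumPy sketch of update (6) follows. The Horner-style accumulation of the truncated geometric sum applied to the residual is an implementation choice (it needs only matrix-vector products); the function name and stopping rule are illustrative assumptions, not taken from the letter:

```python
import numpy as np

def extended_fp(c, jac_c, v0, L, tol=1e-10, max_iter=1000):
    """Extended FP iteration (6): v <- v - (sum_{l=0}^{L} J_c^l)(v - c(v)).

    L = 0 recovers the FP update v <- c(v); large L approaches NR.
    """
    v = np.asarray(v0, dtype=float)
    for n_iter in range(1, max_iter + 1):
        w = v - c(v)            # residual v - c(v)
        J = jac_c(v)
        s = w.copy()            # s accumulates sum_{l=0}^{L} J^l w
        for _ in range(L):      # Horner scheme: L matrix-vector products
            s = w + J @ s
        v_new = v - s
        if np.linalg.norm(v_new - v) <= tol:
            return v_new, n_iter
        v = v_new
    return v, max_iter
```

Note that the full sum of matrix powers is never formed explicitly: applying it to the residual w via the Horner scheme keeps the per-iteration cost at L matrix-vector products.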

III. ANALYSIS ON CONVERGENCE AND COMPUTATIONAL COST
Under fair hypotheses, NR has a local quadratic convergence rate [8], [18]. Conversely, FP has a local linear convergence rate to a solution v* provided that ρ(J_c(v*)) < 1; moreover, the smaller the spectral radius ρ(J_c(v*)), the faster the convergence [7], [15], [19], [21], [22]. Hence, if the condition ρ(J_c(v*)) < 1 holds, it is interesting to study the local convergence rate of the family of extended FP methods (6) when the parameter L is varied. In the proposed iterative methods, in fact, the solution of a linear system with matrix I − J_c required by NR, see (3), is replaced by the computation of a partial sum through the accumulation of L + 1 powers of J_c. In the asymptotic worst case, every iteration of (6) therefore costs O(LN²) flops, whereas one NR iteration (3) costs O(N³) flops. Within the limits of its applicability to situations in which N is small and/or J_c is sparse (as is indeed true for the majority of VA models), this case qualitatively suggests keeping L < N so that (6) compares favorably against NR in terms of computational cost. Moreover, if ρ(J_c(v^(γ))) is small compared to 1, then the convergence of the series is fast, and a partial sum counting few terms is expected to approximate (I − J_c)^{−1} sufficiently well.
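The last observation can be illustrated numerically with a randomly generated matrix (not a VA model) rescaled to a spectral radius of 0.5: the partial sums of the geometric series rapidly approach (I − A)^{−1}, roughly at rate ρ(A)^l.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))  # force rho(A) = 0.5

target = np.linalg.inv(np.eye(4) - A)
partial = np.zeros((4, 4))
power = np.eye(4)                                # holds A^l
errors = []
for l in range(25):
    partial += power                             # sum_{k=0}^{l} A^k
    power = power @ A
    errors.append(np.linalg.norm(partial - target))
# errors decays roughly like rho(A)^l: few terms already suffice
```

With ρ(A) closer to 1 the decay slows down accordingly, which mirrors the behavior of FP itself.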
The convergence rate of each extended FP method can be studied by putting (6) in the form

v^(γ+1) = φ_L(v^(γ)),    (7)

with

φ_L(v) = v − [∑_{l=0}^{L} J_c(v)^l] (v − c(v)),    (8)

and considering the Jacobian J_{φ_L}(v). In order to write J_{φ_L} in terms of J_c, we set the following structural identity, expressing the Jacobian of the product Aw, with A an N × N matrix and w a vector of length N whose entries depend on v:

J_{Aw}(v) = A J_w(v) + [A|_{v_1} w · · · A|_{v_N} w],    (9)

A|_{v_n} = ∂A/∂v_n,    (10)

where J_w is the Jacobian of w and A|_{v_n} is the matrix obtained by deriving the entries of A with respect to v_n. It follows that the product in (8), with A = ∑_{l=0}^{L} J_c(v)^l and w = v − c(v), so that J_w = I − J_c(v), has a Jacobian equal to

[∑_{l=0}^{L} J_c(v)^l] (I − J_c(v)) + [A|_{v_1} w · · · A|_{v_N} w].    (11)

Hence, from (8) and (11),

J_{φ_L}(v) = I − [∑_{l=0}^{L} J_c(v)^l] (I − J_c(v)) − [A|_{v_1} w · · · A|_{v_N} w].    (12)

Now, the terms of the telescoping product [∑_{l=0}^{L} J_c(v)^l](I − J_c(v)) = I − J_c(v)^{L+1} cancel out against the identity in (12) except for the last one, which is equal to J_c(v)^{L+1}. If (12) is computed at the solution point then, by (7), v* = c(v*); hence, all matrices A|_{v_n} = [∑_{l=0}^{L} J_c(v)^l]|_{v_n} in (12) are multiplied by w = 0. Thus, at the solution point, (12) reduces to

J_{φ_L}(v*) = J_c(v*)^{L+1}.    (13)

Recalling that the local linear convergence rate of iteration (7) is governed by the spectral radius of its Jacobian at the solution, and that ρ(J_c(v*)^{L+1}) = ρ(J_c(v*))^{L+1}, (13) proves that, for finite values of L, extended FP (6) locally converges L + 1 times as fast as FP. Table I summarizes the asymptotic costs per iteration and the convergence rates of the extended FP methods. Once a stop condition is set, (13) suggests that for small values of L the number of extended FP iterations I_EX is close to I_FP/(L + 1), in which I_FP is the number of FP iterations. Furthermore, the quadratic rate of NR suggests that I_EX is greater than, and tends to, the number I_NR of iterations made by NR to converge, for increasingly large L. If we choose the simplest convex approximation of I_EX matching I_FP/(L + 1) and I_NR respectively for small and large values of L, then the number of extended FP iterations is described by the hyperbolic function I_EX(L) = (I_FP + L I_NR)/(L + 1), L ≥ 0. Let every iteration of the FP method (7) require C_FP flops to compute c(v).
Let every iteration of the extended FP method (6) additionally require C_J flops to compute J_c(v) and, finally, LP flops to accumulate the partial sum ∑_{l=0}^{L} J_c(v)^l applied to v − c(v), with P the flops spent per added order. The range of values of L such that extended FP performs better than both FP and NR can be inferred by considering the condition

I_EX(L)(C_FP + C_J + LP) < min{I_FP C_FP, I_NR(C_FP + C_J + C_NR)},    (14)

in which C_NR are the flops needed to solve a linear system with matrix I − J_c. Outside that range, FP or NR should be chosen instead.
It is interesting to find the value of L that minimizes the cost. In this regard, the rational function H(L) approximating the cost of extended FP in (14) is

H(L) = I_EX(L)(C_FP + C_J + LP) = (I_FP + L I_NR)(C_FP + C_J + LP)/(L + 1).    (15)

We notice that H(L) goes to infinity with asymptotic coefficient I_NR P for increasing L. This happens because the computational cost per iteration increases linearly and indefinitely beyond some L. Once I_NR < I_FP is assumed, the derivative H′(L) has two real roots if P < C_FP + C_J. The last inequality is usually true, since P accounts for multiply-and-add operations: in most signal processing architectures these operations cost less than the nonlinear function evaluations quantified by C_FP and C_J. One of the two roots is positive if and only if

(I_FP/I_NR − 1)((C_FP + C_J)/P − 1) > 1.    (16)

In this case, H(L) attains a minimum at the positive root, which is equal to

L_min = √((I_FP/I_NR − 1)((C_FP + C_J)/P − 1)) − 1.    (17)

Otherwise, H′(L) has two negative roots, meaning that H(L) is always increasing for positive values of L. Assuming low variability of (C_FP + C_J)/P along L and across iterations, (17) suggests that if there exists a minimum L_min around which some extended FP methods perform better than FP and NR, then this minimum increases with the quantity I_FP/I_NR − 1. In spite of its limited applicability, unless I_FP and I_NR are computed in advance, in the next section we will see that the results from the application example support the validity of (17).
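The cost model can be checked numerically with a short sketch. The flop counts used in the usage note below (I_FP = 60, I_NR = 6, C_FP = 12, C_J = 8, P = 2) are illustrative values chosen for the example, not measurements from the application in Section IV:

```python
import math

def H(L, I_FP, I_NR, C_FP, C_J, P):
    """Rational cost model (15): I_EX(L) times the per-iteration flops."""
    return (I_FP + L * I_NR) * (C_FP + C_J + L * P) / (L + 1)

def L_min(I_FP, I_NR, C_FP, C_J, P):
    """Closed-form minimizer (17); returns None when condition (16) fails."""
    r = (I_FP / I_NR - 1.0) * ((C_FP + C_J) / P - 1.0)
    return math.sqrt(r) - 1.0 if r > 1.0 else None
```

For instance, with the illustrative values above, condition (16) reads 9 · 9 > 1 and L_min evaluates to 8, which indeed minimizes H over the integer orders around it.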

IV. EXAMPLE: VCS3 VOLTAGE-CONTROLLED FILTER
The voltage-controlled filter (VCF) aboard the EMS VCS3 and Synthi music synthesizers is a well-known case study in the VA literature, recently marketed also as a digital effect [23], [24], [25], [26]. A fast and accurate realization of the circuit was obtained using the filter network in Fig. 1 [24]. This network is characterized by the delay-free loop containing the unknown signals v_1, ..., v_8 defining the vector v. The same loop incorporates the integrator blocks receiving the signals v_2, v_4, v_6, v_8 and defining the state variables x_2, x_4, x_6, x_8, respectively. Each integrator was explicitly modeled in the digital domain using the trapezoidal rule. This way, a solution can be found by applying the FP method to this loop at each sampling step: once v has converged, the state x is updated and the network is ready to process a new sample u_1. In Fig. 1, r(·) is the hyperbolic tangent function of an element in v weighted by a constant χ; s(·) is the cascade of a gain s driven by the VCF resonance frequency parameter followed by a discrete-time integrator; finally, t is a gain driven by the resonance level parameter. The reader is referred to [24] for a detailed explanation of this digital model and its parameters.
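A generic sketch of a trapezoidal-rule discrete-time integrator, as used in such filter networks, is shown below; the class name and state layout are illustrative assumptions and do not reproduce the exact block structure of [24]:

```python
class TrapezoidalIntegrator:
    """Discrete-time integrator via the trapezoidal rule:
    y[n] = y[n-1] + (T/2) * (u[n] + u[n-1]), with sampling period T."""

    def __init__(self, T):
        self.T = T
        self.u_prev = 0.0   # previous input sample
        self.y_prev = 0.0   # previous output sample (the state)

    def step(self, u):
        y = self.y_prev + 0.5 * self.T * (u + self.u_prev)
        self.u_prev = u
        self.y_prev = y
        return y
```

Integrating a constant unit input with T = 1 yields 0.5, 1.5, 2.5, ..., i.e., the trapezoidal area under the piecewise-linear reconstruction of the input.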
It was observed that the number of FP iterations is weakly affected by the input amplitude as well as the resonance level value [24]. Conversely, this number exponentially increases with the resonance frequency value. Later this behavior was explained as part of a formal analysis [7], by showing that the spectral radius of the Jacobian increases with the ratio between resonance frequency parameter and sampling rate.
The 8 × 8 Jacobian J_c(v), shortly J in the following, has a small set of nonzero elements J_(i,j), whose closed-form expressions involve the signals in v and the state x_8 of the integrator fed by v_8.
The network in Fig. 1 has been implemented in MATLAB using FP, extended FP for 0 < L < 20, and NR, with update formulas (7), (6), and (3), respectively, on a 4.7 GHz Intel Core i7-based laptop PC running Ubuntu Linux. The results of the implementations are shown in Fig. 2. Three resonance frequency parameter values are considered: 500 Hz, 1500 Hz, and 3500 Hz. Simulations are repeated ten times for each resonance frequency value. In all such cases the resonance level parameter is set to 4, and the filter network is fed with a 0.5 V white noise (upper and mid plot) and a 0.5 V sinusoid at 1 kHz (lower plot). Both input signals last 10 ms at a 44.1 kHz sampling rate. In all cases, iterations are stopped when ‖v^(γ+1) − v^(γ)‖/‖v^(γ)‖ < 10^−4. When the convergence condition is met, we set v* = v^(γ+1); the same value is used at the next sampling step as the initial guess before running the search.
The resulting boxplots in Fig. 2, whose spread logically increases with the average number of iterations (well predicted by I_EX(L)), show that the hyperbolic dependence on L of the convergence speed improves simulation time, and yet the linear increase of the number of flops eventually nullifies this advantage. L_min depends on the respective resonance frequency parameter: 1, 2, and between 6 and 7 with white noise input; conversely 1, 1, and 3 with sinusoidal input. Such values reflect the dependence on I_FP/I_NR − 1: for larger resonance frequency values, I_FP grows much faster than I_NR, hence L_min increases accordingly.

V. CONCLUSION
In this letter, we have shown that extended FP methods can be complementary to FP and NR in all cases in which they reduce the number of iterations needed by FP while keeping the computational cost reasonably low, hence possibly achieving higher efficiency than both FP and NR. It is worth adding that, since their applicability is limited to cases where FP locally converges, several nonlinear discrete-time models of "stiffer" systems might not be solvable using the proposed extended FP methods, thus requiring alternative root-finding methods. On the other hand, explicit solutions bypassing iterative methods could become competitive alternatives when stiffness is especially low [27].