Accelerated Implicit Single-Step Splitting Iteration Method for a Class of Coupled Lyapunov Equations

For the coupled Lyapunov equations arising from continuous Markov jump systems, a new implicit single-step splitting (ISS) iterative algorithm is proposed based on the idea of single-step splitting iteration. After that, the accelerated implicit single-step splitting (AISS) iterative method, an accelerated version of the ISS iterative method, is proposed by using the idea of two-step alternate iteration. This paper gives convergence proofs for the ISS and AISS iteration methods, the admissible ranges of their parameters, and the method for selecting the optimal parameters. Finally, the feasibility and advantages of the new methods are demonstrated through numerical experiments.


I. INTRODUCTION
Researchers have long focused on coupled Lyapunov matrix equations (CLMEs). The main reason is that, by studying the coupled Lyapunov equations, we can more conveniently carry out stability analysis and controller design for Markov jump systems. A Markov jump system is a multi-modal system governed by a Markov chain. Its applications are very extensive, and it is important in models containing dynamic systems [1], [2], optimal control problems [3], [4], coupled Riccati equations [5], and nonlinear systems problems [6], [7].
Because finding the unique positive definite solution of the coupled Lyapunov equations is equivalent to the stochastic stability analysis of the Markov jump system [8], [9], many scholars have proposed effective methods for solving them. In [10], an explicit direct algorithm for solving continuous CLMEs is given by applying the Kronecker product and matrix vectorization. However, because that algorithm relies on the Kronecker product, it is not suitable for large problems. Therefore, recursive algorithms for continuous and discrete CLMEs are proposed in [11] and [12], respectively. But, similar to the Jacobi method for solving linear systems, they do not use the latest estimates.
(The associate editor coordinating the review of this manuscript and approving it for publication was Juan Wang.) Therefore, similar to the Gauss–Seidel method for solving linear systems, implicit iteration methods that do use the latest estimates are proposed in [13] and [14] to solve continuous and discrete CLMEs, respectively. For linear systems, the successive over-relaxation (SOR) method accelerates the convergence of the Gauss–Seidel method by introducing a parameter. Following the same idea, new implicit iteration methods for continuous CLMEs and for the discrete periodic Lyapunov matrix equations are proposed in [15] and [16], respectively.
The emphasis of the above methods is on splitting the additive part of the CLMEs according to new and old estimates. Differently from this, this paper uses matrix splitting to improve the properties of the coefficient matrices. Among the many methods for solving linear systems, [17] drew on the idea of alternating direction implicit (ADI) iteration [18], [19], split the coefficient matrix into its Hermitian and skew-Hermitian parts, and proposed the Hermitian and skew-Hermitian splitting (HSS) iteration method, which performs far better than earlier methods. The HSS iteration method was applied to the Sylvester equation in [20], and a generalized HSS iteration method for the Sylvester equation was given in [21]. Using the HSS idea, [22] solves a special case of the coupled Sylvester equations. Inspired by these methods, and exploiting the particular structure of the CLMEs coefficient matrices, this paper proposes an implicit single-step splitting (ISS) iteration method to solve the non-Hermitian negative definite CLMEs associated with continuous Markov jump systems, together with an accelerated ISS (AISS) iteration method that makes the ISS iterative method converge faster. In Section II, we give some previous results. In Section III, we convert the matrix equations into equivalent equations whose coefficient matrices have better properties, by negating both sides and splitting the coefficient matrices, and thus obtain the ISS iteration format; then, through matrix vectorization and related transformations, we prove the convergence of the ISS iterative method and give the selection of its optimal parameter with the corresponding proofs. In Section IV, we give the accelerated ISS iterative method under suitable conditions on the coefficient matrices, together with its convergence proof and optimal parameter.
In Section V, we compare the ISS and AISS methods with an existing method through numerical examples and demonstrate their stability and effectiveness.
In this paper, for A ∈ R^{n×n}, ρ(A) and A^T denote the spectral radius and the transpose of A, respectively, and vec(A) = [a_1^T a_2^T · · · a_n^T]^T ∈ R^{n²} denotes the vectorized form of A = [a_1 a_2 · · · a_n] ∈ R^{n×n}. The symbol ⊗ denotes the Kronecker product, and ‖·‖_2 and ‖·‖_F denote the 2-norm and the Frobenius norm of a matrix.
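As a concrete illustration of this notation, the following NumPy sketch (Python here, although the paper's experiments use MATLAB; the matrices are hypothetical) checks the Kronecker identity vec(AXB) = (B^T ⊗ A) vec(X), which is what turns matrix equations into ordinary linear systems:

```python
import numpy as np

# Hypothetical 3x3 example of the notation used in this paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def vec(M):
    """Stack the columns of M into a single vector."""
    return M.flatten(order="F")

# The Kronecker identity that turns matrix equations into linear systems:
#   vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # -> True
```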

II. PREVIOUS RESULTS AND PRELIMINARIES
The stability analysis of continuous Markov jump systems is equivalent to solving coupled Lyapunov equations; their relationship is shown in [8]. The CLMEs are as follows:

A_i^T P_i + P_i A_i + sum_{j=1}^{N} π_{ij} P_j + Q_i = 0, i ∈ [1, N], (1.1)

where Q_i is a given positive definite matrix, and P_i, i ∈ [1, N], is the solution of (1.1). Since we can rewrite (1.1) as

Ã_i^T P_i + P_i Ã_i + sum_{j≠i} π_{ij} P_j + Q_i = 0, with Ã_i = A_i + (π_{ii}/2) I_n, (1.2)

we can conclude that the eigenvalues of Ã_i, i ∈ [1, N], lie in the left half-plane, as shown in [23]. Based on (1.2), the iterative algorithms in [11] and [13] are available. On this basis, in order to accelerate convergence, [24] and [15] apply the SOR idea from linear systems and respectively propose two algorithms. Here we only give the algorithm formats of two of them.
Another algorithm format, from [15], is as follows: where |γ| < 1 is an adjustable parameter, and P(m) = (P_1(m), P_2(m), . . . , P_N(m)) generated by (1.4) converges to the solution. Algorithms (1.3) and (1.4) are implicit iterative algorithms. Due to the special structure of the CLMEs coefficient matrices noted in [23], in this article we discuss the following continuous coupled Lyapunov equations:

A_i^T P_i + P_i A_i + sum_{j=1}^{N} π_{ij} P_j = C_i, i ∈ [1, N], (1.5)

where A_i ∈ R^{n×n} is non-Hermitian negative definite, the transition rate matrix Π = [π_{ij}]_{N×N} is symmetric with π_{ij} ≥ 0 for j ≠ i and sum_{j=1}^{N} π_{ij} = 0 (hence Π is negative semidefinite), and C_i is a symmetric negative definite matrix.
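To make the problem concrete, the following sketch (assuming the standard continuous CLME form (1.1); the matrices A_i, Π, and Q_i are illustrative, not from the paper's examples) applies the Kronecker-product direct approach attributed above to [10]: vectorize the N coupled equations into one stacked linear system and solve it.

```python
import numpy as np

# Minimal sketch of the direct Kronecker approach for the assumed CLME form
#   A_i^T P_i + P_i A_i + sum_j pi_ij P_j + Q_i = 0,  i = 1..N.
# All matrices below are illustrative placeholders.
n, N = 3, 2
A1 = -2.0 * np.eye(n) + np.diag([0.3, 0.3], k=1) - np.diag([0.3, 0.3], k=-1)
A2 = 1.5 * A1
As = [A1, A2]
Pi = np.array([[-0.5, 0.5],
               [0.5, -0.5]])          # transition rate matrix
Q = [np.eye(n), np.eye(n)]

I = np.eye(n)
# Block row i of the stacked system:
#   (I kron A_i^T + A_i^T kron I) vec(P_i) + sum_j pi_ij vec(P_j) = -vec(Q_i).
L = np.kron(Pi, np.eye(n * n))
for i in range(N):
    L[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] += np.kron(I, As[i].T) + np.kron(As[i].T, I)
rhs = -np.concatenate([Q[i].flatten(order="F") for i in range(N)])
x = np.linalg.solve(L, rhs)
P = [x[i*n*n:(i+1)*n*n].reshape(n, n, order="F") for i in range(N)]

# Verify: each residual should vanish and each P_i is symmetric positive definite.
for i in range(N):
    R = As[i].T @ P[i] + P[i] @ As[i] + sum(Pi[i, j] * P[j] for j in range(N)) + Q[i]
    print(np.linalg.norm(R))
```

As noted in the text, this direct approach is only practical for small n, since the stacked system has dimension Nn².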

III. ISS METHOD
In this section we discuss the implicit single-step splitting iterative method, its convergence, and the selection of its optimal parameter. First, we let B_i = −A_i^T and substitute it into the coupled Lyapunov matrix equations (1.5), obtaining

B_i P_i + P_i B_i^T − sum_{j=1}^{N} π_{ij} P_j = −C_i, i ∈ [1, N]; (2.1)

because A_i is negative definite, B_i is positive definite. Obviously, equations (1.5) and (2.1) have the same solution.
Next, we give the Hermitian and skew-Hermitian parts of B_i as

H_i = (B_i + B_i^T)/2, S_i = (B_i − B_i^T)/2.

Since B_i is positive definite, H_i is a Hermitian positive definite matrix and S_i is skew-Hermitian.
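A tiny NumPy check (with an illustrative matrix, not one from the paper) of the claimed properties of this splitting:

```python
import numpy as np

# Illustrative non-symmetric positive definite B (x^T B x > 0 for all x != 0).
B = np.array([[4.0, 1.0, 0.0],
              [-1.0, 3.0, 2.0],
              [0.0, -2.0, 5.0]])

H = (B + B.T) / 2      # Hermitian part
S = (B - B.T) / 2      # skew-Hermitian part

# B = H + S, H is symmetric, S is skew-symmetric.
print(np.allclose(B, H + S), np.allclose(H, H.T), np.allclose(S, -S.T))
# H inherits positive definiteness from B: all its eigenvalues are positive.
print(np.linalg.eigvalsh(H))
```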
Then we can rewrite B_i and B_i^T as

B_i = (α I_n + H_i) − (α I_n − S_i), (2.2)
B_i^T = (β I_n + H_i) − (β I_n + S_i), (2.3)

where α and β are given positive constants. Bringing (2.2) and (2.3) into equation (2.1), we get the ISS splitting format

(α I_n + H_i) P_i + P_i (β I_n + H_i) − sum_{j=1}^{N} π_{ij} P_j = (α I_n − S_i) P_i + P_i (β I_n + S_i) − C_i.

From this, we obtain the iterative format of the ISS iteration method for solving (2.1):

(α I_n + H_i) P_i(m+1) + P_i(m+1) (β I_n + H_i) − sum_{j=1}^{N} π_{ij} P_j(m+1) = (α I_n − S_i) P_i(m) + P_i(m) (β I_n + S_i) − C_i, i ∈ [1, N]. (2.4)

Its inner iteration, that is, solving (2.4) for P(m+1), can use any existing iterative method for CLMEs, since the left-hand side of (2.4) is again a coupled Lyapunov-type equation.

A. THE CONVERGENCE ANALYSIS OF THE ISS ITERATION
In this subsection, some convergence results for the ISS iteration will be established. For the convenience of the proofs, we first make some equivalent transformations of equation (2.4). Let γ = α + β, and vectorize both sides of (2.4) using the Kronecker product to obtain

(γ I_{n²N} + H − Π ⊗ I_{n²}) x(m+1) = (γ I_{n²N} − S) x(m) + q, (2.5)

where

H = blockdiag(I_n ⊗ H_1 + H_1 ⊗ I_n, . . . , I_n ⊗ H_N + H_N ⊗ I_n),
S = blockdiag(I_n ⊗ S_1 + S_1 ⊗ I_n, . . . , I_n ⊗ S_N + S_N ⊗ I_n),
x(m) = [vec(P_1(m))^T, . . . , vec(P_N(m))^T]^T, q = −[vec(C_1)^T, . . . , vec(C_N)^T]^T.

Since each H_i is Hermitian positive definite and each S_i is skew-Hermitian, H is Hermitian positive definite and S is skew-Hermitian. Let U = H − Π ⊗ I_{n²}; since H is Hermitian positive definite and Π is symmetric negative semidefinite, U is Hermitian positive definite, and equation (2.5) becomes

(γ I + U) x(m+1) = (γ I − S) x(m) + q, (2.7)

which means that the iterative equation (2.4) is equivalent to the iterative equation (2.7). So, next, we discuss the convergence of the iterative equation (2.7).
First, we convert equation (2.7) to the following form:

x(m+1) = (γ I + U)^{-1} (γ I − S) x(m) + (γ I + U)^{-1} q. (2.8)

The iteration matrix of format (2.8) is

G_γ = (γ I + U)^{-1} (γ I − S). (2.9)

Because, for any initial vector, the iterative format (2.8) converges to the exact solution if and only if the spectral radius ρ(G_γ) < 1, the iterative format (2.4) converges to the unique solution for any initial vector if and only if ρ(G_γ) < 1.
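The following sketch illustrates iteration (2.8) in vectorized form, under the reconstruction G_γ = (γI + U)^{-1}(γI − S); U, S, and q are small illustrative matrices rather than actual CLME data:

```python
import numpy as np

# Sketch of the vectorized ISS sweep x(m+1) = (g*I+U)^{-1}((g*I-S)x(m) + q),
# with illustrative U (Hermitian positive definite) and S (skew-symmetric).
U = np.diag([2.0, 3.0, 4.0, 5.0])
S = np.array([[0.0, 1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, -0.5, 0.0]])
q = np.ones(4)

eta_max = np.linalg.svd(S, compute_uv=False).max()   # = 1.0 here
mu_min = np.linalg.eigvalsh(U).min()                 # = 2.0 here
gamma = eta_max**2 / mu_min                          # candidate optimal parameter

G = np.linalg.solve(gamma * np.eye(4) + U, gamma * np.eye(4) - S)
rho = max(abs(np.linalg.eigvals(G)))
delta = np.sqrt(gamma**2 + eta_max**2) / (gamma + mu_min)
print(rho <= delta + 1e-12, delta < 1)               # bound of Theorem 1 holds

x = np.zeros(4)
for _ in range(200):                                 # the ISS fixed-point sweep
    x = np.linalg.solve(gamma * np.eye(4) + U, (gamma * np.eye(4) - S) @ x + q)
print(np.allclose(x, np.linalg.solve(U + S, q)))     # converged to (U+S)x = q
```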
Therefore, for the iterative format (2.4), the convergence theorem is as follows.

Theorem 1: Let A_i ∈ C^{n×n}, i ∈ [1, N], be non-Hermitian negative definite, let Π be symmetric negative semidefinite, let α and β be positive constants with γ = α + β, let H_i and S_i be the Hermitian and skew-Hermitian parts of B_i = −A_i^T, and define

δ_γ = sqrt(γ² + η_max²) / (γ + µ_min), (2.10)

where η_max is the maximum singular value of S and µ_min is the minimum eigenvalue of U. Then δ_γ is an upper bound of ρ(G_γ). Moreover,
a) if η_max ≤ µ_min, then ρ(G_γ) ≤ δ_γ < 1 for any γ > 0; that is, the ISS iteration method is unconditionally convergent;
b) if η_max > µ_min, then ρ(G_γ) ≤ δ_γ < 1 if and only if

γ > (η_max² − µ_min²) / (2 µ_min). (2.11)

Proof: Since U is Hermitian positive definite, ‖(γI + U)^{-1}‖_2 = 1/(γ + µ_min); since S is skew-Hermitian, ‖γI − S‖_2 = sqrt(γ² + η_max²). Hence ρ(G_γ) ≤ ‖(γI + U)^{-1}‖_2 ‖γI − S‖_2 = δ_γ.
So we can easily get that δ_γ < 1 if and only if γ² + η_max² < (γ + µ_min)², that is, η_max² − µ_min² < 2γµ_min. (2.12) Since U is a Hermitian positive definite matrix, µ_min > 0; hence, when η_max ≤ µ_min, inequality (2.12) holds for any γ > 0, and when µ_min < η_max, (2.12) holds if and only if condition (2.11) holds. ∎
Theorem 1 shows that the convergence speed of the ISS iterative method is governed by the upper bound δ_γ of ρ(G_γ), and that δ_γ depends only on the maximum singular value of S and the minimum eigenvalue of U, not on the minimum singular value of S or the maximum eigenvalue of U. The parameter minimizing ρ(G_γ) is the optimal parameter, but it is difficult to obtain. As an alternative, we give the parameter γ* that minimizes δ_γ.
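The behaviour of the bound δ_γ can be examined numerically (pure arithmetic on the formula, with the illustrative values η_max = 3, µ_min = 2):

```python
import numpy as np

# Numerical check of Theorem 1's bound delta(gamma) for illustrative values.
eta, mu = 3.0, 2.0          # eta_max > mu_min, so convergence is conditional

def delta(gamma):
    return np.sqrt(gamma**2 + eta**2) / (gamma + mu)

# Case b): delta < 1 exactly when gamma > (eta^2 - mu^2)/(2*mu) = 1.25.
print(delta(1.25))                     # boundary value: exactly 1.0
print(delta(1.24) > 1, delta(1.26) < 1)

# The minimizing parameter gamma* = eta^2/mu and its value eta/sqrt(eta^2+mu^2).
gammas = np.linspace(0.01, 20.0, 20000)
g_star = eta**2 / mu                   # = 4.5
print(abs(gammas[np.argmin(delta(gammas))] - g_star) < 1e-2)
print(np.isclose(delta(g_star), eta / np.sqrt(eta**2 + mu**2)))
```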

B. THE OPTIMAL PARAMETER OF THE ISS ITERATION
In this subsection, we will give the way to choose the optimal parameter of the ISS iteration. Minimizing the upper bound δ_γ of Theorem 1 over γ > 0 gives

γ* = η_max² / µ_min, δ_{γ*} = η_max / sqrt(η_max² + µ_min²),

since dδ_γ/dγ = (γ µ_min − η_max²) / ((γ + µ_min)² sqrt(γ² + η_max²)) changes sign from negative to positive at γ = γ*. Note that δ_{γ*} < 1, and that γ* always satisfies the convergence condition (2.11) of Theorem 1, since 2η_max² > η_max² − µ_min².

IV. AISS METHOD
In this section, the AISS method is proposed to accelerate the ISS method. It improves the convergence speed of the ISS method by a two-step alternate iteration for coefficient matrices A_i that satisfy certain conditions. Furthermore, the related theorems on the convergence of the AISS method and the selection of its optimal parameter are given.
Since H_i is the Hermitian part of B_i and S_i is the skew-Hermitian part of B_i, according to (2.2) and (2.3) we have
The iteration format of AISS can be obtained from formulas (3.1) and (3.2) as follows: Similar to the ISS iterative method, due to the equivalence of (3.3) and (3.4), we only need to study the convergence-related properties of (3.4).

VOLUME 8, 2020
First, we substitute the second equation of the iterative system (3.4) into the first equation to obtain a single-step formula for P(m + 1), denoted (3.5), and let Ḡ_γ be the iteration matrix of (3.5). Similarly, the iteration format (3.4), that is, the AISS iterative method, is convergent if and only if ρ(Ḡ_γ) < 1.
Theorem 2: Let A_i ∈ C^{n×n}, i ∈ [1, N], be non-Hermitian negative definite, let Π be symmetric negative semidefinite, let α and β be positive constants, and let γ = α + β. Then an upper bound of ρ(Ḡ_γ) is

ϑ_γ = η_max sqrt(γ² + η_max²) / (µ_min (γ + µ_min)), (3.6)

where η_max is the maximum singular value of S, µ_min is the minimum eigenvalue of U, and U and S are defined as in Section III. Define k = η_max/µ_min. Then we have the following conclusions.
a) If η_max ≤ µ_min, then ρ(Ḡ_γ) ≤ ϑ_γ < 1 holds for any γ > 0; that is, the AISS iteration method is unconditionally convergent.
b) If µ_min < η_max < ((1 + √5)/2)^{1/2} µ_min, then ρ(Ḡ_γ) ≤ ϑ_γ < 1 holds if and only if

(1 − sqrt(1 − (k² − 1)(k⁴ − 1))) µ_min/(k² − 1) < γ < (1 + sqrt(1 − (k² − 1)(k⁴ − 1))) µ_min/(k² − 1); (3.7)

that is, the AISS iteration method converges to the unique solution under condition (3.7).
c) If η_max ≥ ((1 + √5)/2)^{1/2} µ_min, then ϑ_γ ≥ 1 for every γ > 0, so convergence of the AISS method cannot be guaranteed by this bound.

Proof: Bounding the spectral radius of Ḡ_γ by the 2-norms of its factors, as in the proof of Theorem 1, we obtain ρ(Ḡ_γ) ≤ ϑ_γ with ϑ_γ given by (3.6). (3.8) Therefore:
a) If η_max ≤ µ_min, then ϑ_γ = (η_max/µ_min) δ_γ ≤ δ_γ, and from Theorem 1 we know that δ_γ = sqrt(γ² + η_max²)/(γ + µ_min) < 1 holds for any γ > 0; hence, when η_max ≤ µ_min, ϑ_γ < 1 is always established.
b) Since k = η_max/µ_min, ϑ_γ < 1 is equivalent to η_max²(γ² + η_max²) < µ_min²(γ + µ_min)², that is,

(k² − 1)γ² − 2µ_min γ + (k⁴ − 1)µ_min² < 0. (3.9)

For k > 1, the left-hand side is a quadratic in γ opening upward, so a γ > 0 satisfying (3.9) exists if and only if the discriminant is positive:

4µ_min² − 4(k² − 1)(k⁴ − 1)µ_min² > 0, i.e., (k² − 1)(k⁴ − 1) < 1. (3.10)

Solving (3.10) for k > 1 gives k² < (1 + √5)/2, that is, η_max < ((1 + √5)/2)^{1/2} µ_min. The roots of (k² − 1)γ² − 2µ_min γ + (k⁴ − 1)µ_min² = 0 are

γ± = (1 ± sqrt(1 − (k² − 1)(k⁴ − 1))) µ_min/(k² − 1),

so, under the condition µ_min < η_max < ((1 + √5)/2)^{1/2} µ_min, inequality (3.9) holds if and only if γ− < γ < γ+, which is condition (3.7).
c) If η_max ≥ ((1 + √5)/2)^{1/2} µ_min, then (k² − 1)(k⁴ − 1) ≥ 1, which violates (3.10). Therefore there is no parameter γ that makes (3.9) true; that is, ϑ_γ ≥ 1 for all γ > 0. ∎

As with the ISS iterative method, the convergence speed of the AISS iterative method is limited by the spectral radius upper bound ϑ_γ, and according to Theorem 2 the value of ϑ_γ is determined only by the minimum eigenvalue of U and the maximum singular value of S.
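The algebra behind Theorem 2 can be verified numerically (pure arithmetic on the reconstructed bound ϑ_γ = η_max sqrt(γ² + η_max²)/(µ_min(γ + µ_min)), with illustrative values µ_min = 1, η_max = 1.2):

```python
import numpy as np

# The golden-ratio threshold: (k^2-1)(k^4-1) = 1 exactly at k^2 = (1+sqrt(5))/2.
k2 = (1 + np.sqrt(5)) / 2
print(abs((k2 - 1) * (k2**2 - 1) - 1) < 1e-12)

# Illustrative case b): mu_min = 1, eta_max = 1.2, so 1 < k < ((1+sqrt(5))/2)^(1/2).
mu, eta = 1.0, 1.2
k = eta / mu

def theta(gamma):            # reconstructed upper bound of Theorem 2
    return eta * np.sqrt(gamma**2 + eta**2) / (mu * (gamma + mu))

disc = 1 - (k**2 - 1) * (k**4 - 1)        # scaled discriminant of (3.9)
g_lo = (1 - np.sqrt(disc)) * mu / (k**2 - 1)
g_hi = (1 + np.sqrt(disc)) * mu / (k**2 - 1)
mid = (g_lo + g_hi) / 2
print(theta(mid) < 1)                      # inside interval (3.7): bound below 1
print(np.isclose(theta(g_lo), 1.0), np.isclose(theta(g_hi), 1.0))  # endpoints
```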
The parameter that minimizes ρ(Ḡ_γ) is the optimal parameter, but this parameter is generally difficult to find. Similarly, we instead obtain the parameter γ̄* that minimizes the upper bound ϑ_γ.
Corollary 1: If the convergence condition of Theorem 2 holds, then the parameter minimizing the upper bound ϑ_γ of ρ(Ḡ_γ) is

γ̄* = η_max² / µ_min,

and the corresponding upper bound is

ϑ_{γ̄*} = η_max² / (µ_min sqrt(η_max² + µ_min²)).

Proof: By a simple calculation, the derivative of ϑ_γ is

dϑ_γ/dγ = η_max (γ µ_min − η_max²) / (µ_min (γ + µ_min)² sqrt(γ² + η_max²)),

which is negative for γ < η_max²/µ_min and positive for γ > η_max²/µ_min. Hence, by the monotonicity of ϑ_γ, the minimum is reached at γ̄* = η_max²/µ_min. Since γ̄* > 0, we only need to check that γ̄* satisfies condition (3.7) whenever (3.10) is established; indeed, ϑ_{γ̄*} = k²/sqrt(k² + 1) < 1 holds exactly when k² < (1 + √5)/2, so γ̄* always lies in the interval (3.7). ∎
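A quick numeric check of Corollary 1 (using the reconstructed bound ϑ_γ = η_max sqrt(γ² + η_max²)/(µ_min(γ + µ_min)) and illustrative values only):

```python
import numpy as np

# Verify that the minimizer of theta(gamma) is gamma = eta^2/mu, with minimum
# value eta^2/(mu*sqrt(eta^2+mu^2)). Illustrative values mu = 1, eta = 1.2.
mu, eta = 1.0, 1.2

def theta(gamma):
    return eta * np.sqrt(gamma**2 + eta**2) / (mu * (gamma + mu))

g_star = eta**2 / mu                      # = 1.44
gammas = np.linspace(0.01, 10.0, 100000)
print(abs(gammas[np.argmin(theta(gammas))] - g_star) < 1e-3)
print(np.isclose(theta(g_star), eta**2 / (mu * np.sqrt(eta**2 + mu**2))))
print(theta(g_star) < 1)                  # gamma* lies inside region (3.7)
```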

V. NUMERICAL EXPERIMENT
Now, some examples will be used to demonstrate the advantage of the ISS and AISS iteration methods for solving the CLMEs. We compare the convergence performance of method (1.4) with the ISS and AISS iterative methods proposed in this paper; comparisons with the algorithms that predate (1.4) are not repeated here, because such comparisons have already been performed in [15]. Numerical experiments were run on an Intel(R) Core(TM) i5-8250U (1.60 GHz, 8 GB RAM) in MATLAB R2016a. All iterative methods start from a zero initial value, and the parameters are chosen by the optimal-parameter selection methods. For the inner iteration of the ISS and AISS methods, we select an existing coupled Lyapunov solver; in this paper, the algorithm in [15] is chosen as the inner iteration. The iteration error is defined as the norm of the residual of the CLMEs, and the stopping accuracy is 10^{-13}.

Example 1: We select the coefficient matrix in [20] as the n × n coefficient matrix A_1 of the CLMEs (1.5), where h = 100/(n + 1)² and q = 0.1; A_1 often appears in the differential equations related to control theory. We let A_2 = 1.5 A_1, and the transition rate matrix is

Π = [−0.5 0.5; 0.5 −0.5].
In this example, N = 2; we let n = 10 and let Q_i, i ∈ [1, N], be the identity matrix. In Table 1, we compare the three algorithms in terms of CPU time, iteration steps, and iteration error. Fig. 1 shows the iteration error of the different algorithms against the number of iteration steps. In Fig. 2 we plot the spectral radius of the iteration matrix of algorithm (1.4) against its parameter, and select the optimal parameter of algorithm (1.4) accordingly. The optimal parameters of the ISS and AISS iteration methods are selected according to Corollary 1 and Corollary 2.
From Table 1, we can see that, in solving Example 1, the CPU time of the ISS and AISS iterative methods is much less than that of algorithm (1.4), and the AISS iterative method takes less time than the ISS iterative method. In terms of iteration steps, the ISS and AISS iteration methods also need fewer steps than algorithm (1.4), and the AISS iteration method needs far fewer than both algorithm (1.4) and the ISS iteration method. The comparison in Fig. 1 shows more intuitively that, compared with algorithm (1.4), the ISS and AISS iteration methods reach the required accuracy faster, and that the AISS iterative method is faster than the ISS iterative method. Thus, in solving Example 1, the ISS and AISS iteration methods are more effective than algorithm (1.4), and the AISS method performs much better than the ISS method.

Example 2: In this example, we select different coefficient matrices for the CLMEs (1.5). Here N = 2, n = 10, the transition rate matrix is the same as in Example 1, and Q_i, i ∈ [1, N], is the identity matrix. We again compare the convergence of method (1.4), the ISS iterative method, and the AISS iterative method: Table 2 reports the CPU time, iteration steps, and iteration error, and Fig. 3 shows the iteration error against the number of iteration steps. The optimal parameter of algorithm (1.4) is selected by Fig. 4; the parameters of the ISS and AISS iteration methods are selected in the same way as in Example 1.
Similarly, in solving Example 2, Table 2 and Fig. 3 show that, compared with algorithm (1.4), the ISS and AISS iteration methods improve both the number of iteration steps and the CPU time, and that the AISS iteration method is better than the ISS iteration method. Therefore, in Example 2, the ISS and AISS iteration methods are again more effective, and the AISS method performs much better than the ISS method.

VI. CONCLUSION
In this paper, based on previous research on coupled Lyapunov equations, a new implicit iterative algorithm, the ISS iterative method, is proposed using the idea of matrix splitting, and its accelerated variant, the AISS iterative method, is proposed using a two-step alternate iteration. We then studied their convergence properties and the choice of their optimal parameters. Finally, numerical examples demonstrate the advantages of the ISS and AISS iterative methods over an existing algorithm.