Contraction Analysis of Discrete-time Stochastic Systems

In this paper, we develop a novel contraction framework for stability analysis of discrete-time nonlinear systems whose parameters follow stochastic processes. For general stochastic processes, we first provide a sufficient condition for uniform incremental exponential stability (UIES) in the first moment with respect to a Riemannian metric. Then, focusing on the Euclidean distance, we present a necessary and sufficient condition for UIES in the second moment. By virtue of studying general stochastic processes, we can readily derive UIES conditions for special classes of processes, e.g., i.i.d. processes and Markov processes, as demonstrated through selected applications of our results.

analyze discrete-time systems with random parameters such as Markov jump systems [28] and systems with white parameters [29]. This is in contrast to the large body of research on discrete-time Markov jump linear/nonlinear systems, e.g., [28], [30]-[32], and the recent rapid increase in machine-learning research constructing stochastic models from discrete-time empirical data, e.g., [33]-[36]. When studying stochastic systems, one typically restricts the class of stochastic processes to, for instance, i.i.d. or Markovian, which can be viewed as an ad hoc approach because different stability conditions are obtained depending on the process. Toward a unified theory that handles these processes simultaneously, the recent paper [37] gives second moment stability conditions for general stochastic processes in the discrete-time linear case, which contain the existing conditions for i.i.d. [38], [39] and Markovian [28], [40] processes as special cases.
Inspired by [37], in this paper, we deal with general stochastic processes. To initiate contraction analysis of discrete-time nonlinear stochastic systems, we introduce a new stability notion, uniform incremental exponential stability (UIES) in the pth moment with respect to a Riemannian metric, which reduces to the standard pth moment stability [41], [42] when the distance is Euclidean and one trajectory is fixed at an equilibrium point. As the first main result of this paper, we provide a sufficient condition for UIES in the first moment. Then, as the second main contribution, focusing on the Euclidean distance, we present a necessary and sufficient condition for UIES in the second moment; second moment stability is stronger than first moment stability. By virtue of developing a unified theory for general stochastic processes, we show that specifying the process readily yields UIES conditions for i.i.d. processes or Markov processes. Even the UIES conditions in each specialized case are new contributions of this paper in their own right, due to the lack of contraction theory for discrete-time stochastic systems.
The remainder of this paper is organized as follows. To convey the whole picture of this paper, Section II summarizes contraction analysis of discrete-time deterministic systems with respect to the Euclidean distance [25] and then extends it to a Riemannian metric. Section III introduces the discrete-time stochastic systems considered in this paper and provides the notion of UIES in the pth moment. Section IV presents the UIES conditions for general stochastic processes, and these conditions are applied to i.i.d. processes and Markov processes in Section V. Some of the proposed stability conditions are applied to stabilizing controller design for a mechanical system with a random parameter and observer design for a Markov jump system in Section VI. Concluding remarks are given in Section VII. All proofs are presented in the Appendix.
Notation: The sets of real numbers and integers are denoted by R and Z, respectively. Subsets of Z are defined by Z_{k_0+} := Z ∩ [k_0, ∞) and Z_{k_0−} := Z ∩ (−∞, k_0] for k_0 ∈ Z. Another subset of Z is defined by Z_{[k_0,k]} := Z ∩ [k_0, k] for k_0 ∈ Z and k ∈ Z_{k_0+}, where Z_{[k_0,k_0]} := {k_0}. The identity matrix is denoted by I irrespective of its size. The set of n × n symmetric matrices is denoted by S^{n×n}, and that of symmetric positive (resp. semi-) definite matrices is denoted by S^{n×n}_{≻0} (resp. S^{n×n}_{⪰0}). For P, Q ∈ R^{n×n}, P ≻ Q (resp. P ⪰ Q) means P − Q ∈ S^{n×n}_{≻0} (resp. P − Q ∈ S^{n×n}_{⪰0}). The Euclidean norm of a vector x ∈ R^n is denoted by |x|.

II. REVIEWS AND GENERALIZATIONS OF RESULTS FOR DETERMINISTIC SYSTEMS
To convey the whole picture of this paper, we first review results on contraction analysis of discrete-time nonlinear deterministic systems [1], [25]. In this literature, the Euclidean distance is used as a metric. In this paper, we show that the sufficiency result can be extended to a Riemannian metric, as for continuous-time systems [5].
Consider the nonlinear deterministic system (1), where R^n is positively invariant. For the sake of notational simplicity, let ψ_k(k_0, z_{k_0}) denote the solution to the system (1) at k ∈ Z_{k_0+} under the initial condition (k_0, z_{k_0}) ∈ Z × R^n. As will become clear later, we use the variational system of (1) along ψ_k(k_0, z_{k_0}) in contraction analysis. Using variational systems, incremental stability conditions have been developed; this stability notion is defined as follows.

Definition 2.1: Let d : R^n × R^n → R be a distance. The system (1) is said to be uniformly incrementally exponentially stable (UIES) (with respect to d) if there exist a > 0 and λ ∈ (0, 1) such that d(ψ_k(k_0, z′_{k_0}), ψ_k(k_0, z″_{k_0})) ≤ a λ^{k−k_0} d(z′_{k_0}, z″_{k_0}) for all k ∈ Z_{k_0+} and all initial conditions. ⊳

When the distance is Euclidean, the following condition for UIES has been derived [25, Theorem 15]; the condition below is slightly different from the original one but is equivalent to it.
Proposition 2.2: The system (1) is UIES with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and a metric function such that (4) holds for all (k_0, z_{k_0}) ∈ Z × R^n. ⊳

Inspired by results for continuous-time systems [5], we generalize the condition (4) to study UIES with respect to a Riemannian metric more general than the Euclidean one, as stated below. This result can be hypothesized from Proposition 2.2 but has not been proven before. More importantly, its proof gives insight into the analysis of stochastic systems, the main interest of this paper. Thus, the proof is also provided in Appendix A.
Theorem 2.3: The system (1) is UIES if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P : R^n → S^{n×n}_{≻0} of class C^1 such that the corresponding conditions and (5) hold for all (k_0, z_{k_0}) ∈ Z × R^n. ⊳

The objective of this paper is to extend Proposition 2.2 and Theorem 2.3 to the stochastic systems introduced in the next section.
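To make the metric condition concrete, here is a minimal numerical sketch for the constant Euclidean metric P = I: a hypothetical contracting map (our toy example, not from [25]) whose Jacobian A_k satisfies A_k^⊤ A_k ⪯ λI everywhere, so UIES predicts geometric decay of the distance between any two solutions.

```python
import numpy as np

# Hypothetical contracting map on R^2 (illustrative, not from the paper):
# g(z) = 0.5 * tanh(z) + b componentwise, so |dg/dz_i| <= 0.5 everywhere.
b = np.array([0.1, -0.2])

def g(z):
    return 0.5 * np.tanh(z) + b

def jacobian_g(z):
    # diagonal Jacobian: d/dz_i [0.5 * tanh(z_i)] = 0.5 / cosh(z_i)^2
    return np.diag(0.5 / np.cosh(z) ** 2)

# With P = I, the condition of Theorem 2.3 reads A_k^T A_k <= lam * I along
# every trajectory; here lam = 0.25 works globally.
lam = 0.25
for z in np.random.default_rng(0).normal(size=(100, 2)):
    A = jacobian_g(z)
    assert np.linalg.eigvalsh(A.T @ A).max() <= lam + 1e-12

# UIES then bounds |psi_k(k0, z') - psi_k(k0, z'')| by a geometric sequence.
z1, z2 = np.array([3.0, -1.0]), np.array([-2.0, 4.0])
for _ in range(30):
    z1, z2 = g(z1), g(z2)
print(np.linalg.norm(z1 - z2))
```

The point-wise Jacobian test plays the role of condition (5); the trajectory gap after 30 steps is numerically negligible, as the λ^{k−k_0} bound predicts.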

III. PROBLEM FORMULATIONS
Hereafter, we focus on the stochastic systems stated in this section. Let ξ := (ξ_k)_{k∈Z} : Ω → (R^m)^Z be a stochastic process. Differently from the usual analysis, we consider a general ξ, i.e., we do not restrict attention to a specific ξ such as an i.i.d. or Markov process. Throughout this paper, we assume that the vector-valued function f_k : R^n × R^m → R^n, k ∈ Z defining the system dynamics satisfies the following assumption.
Let {S_i}_{i∈M}, M := {1, . . ., M}, and s : R^n × R^m → M, respectively, denote a finite family of disjoint subsets and a switching function such that ∪_{i∈M} S_i = R^n × R^m, and the semi-differentiation ∂s/∂x is well defined as the zero function on each S_i, i ∈ M and thus on R^n × R^m. If f̃_k(x, η, s) is semi-differentiable with respect to x and s, and if f̃_k and its semi-differentiations are piecewise continuous, then f_k(x, η) := f̃_k(x, η, s(x, η)) satisfies Assumption 3.1. Therefore, f_k can also be used to describe switched systems.
Definition 3.4: Consider the system (7), and let d : R^n × R^n → R be a distance. Then, the system is said to be UIES in the pth moment (with respect to d) if there exist a > 0 and λ ∈ (0, 1) such that the corresponding exponential decay estimate (11) holds. ⊳

The above stability notion reduces to the standard moment stability if we choose d as the Euclidean distance and the origin is an equilibrium point, i.e., f_k(0, ξ_k) ≡ 0, k ∈ Z. In fact, for (x′_{k_0}, x″_{k_0}) = (x_{k_0}, 0) and the Euclidean distance, (11) becomes the standard moment stability estimate. Especially for p = 2, this property is also called mean square stability [41], [42]. However, for p = 1, this is different from mean stability [42].
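As a sanity check on the definition, the following Monte Carlo sketch estimates E|x′_k − x″_k|^p for a hypothetical scalar system x_{k+1} = ξ_k x_k with i.i.d. ξ_k ~ U[−0.8, 0.8] (our toy example, not from the paper); two solutions driven by the same sample path separate at a geometric rate in the second moment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two solutions of x_{k+1} = xi_k * x_k driven by the SAME noise realization;
# the incremental gap E|x'_k - x''_k|^p is estimated over many sample paths.
def moment_gaps(p, steps=20, samples=20000):
    x1 = np.full(samples, 2.0)
    x2 = np.full(samples, -1.0)
    gaps = []
    for _ in range(steps):
        xi = rng.uniform(-0.8, 0.8, size=samples)
        x1, x2 = xi * x1, xi * x2
        gaps.append(np.mean(np.abs(x1 - x2) ** p))
    return gaps

gaps = moment_gaps(p=2)
# Exactly, E|x'_k - x''_k|^2 = (E[xi^2])^k * 9 with E[xi^2] = 0.64/3 < 1,
# so the gap shrinks geometrically, matching UIES in the second moment.
assert gaps[-1] < gaps[0]
```

The closed-form factor E[ξ²] = 0.64/3 makes this a case where the definition can be verified by hand as well as by simulation.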

A. With respect to Riemannian Metrics
In this section, inspired by results for linear stochastic systems [37], we consider extending Proposition 2.2 and Theorem 2.3 to the stochastic systems (7). Since the sufficiency part of Proposition 2.2 is a special case of Theorem 2.3, we first focus on deriving the counterpart of Theorem 2.3. The main difference from the deterministic case is that we consider P depending on the stochastic process ξ. To make the arguments of P clear, let us introduce the time shift operator S_k : (R^m)^{Z_{k+}} → (R^m)^{Z_{0+}} for processes such that ζ_{0+} = S_k ξ_{k+} is defined by ζ_0 = ξ_k, ζ_1 = ξ_{k+1}, . . ., where ζ_{0+} = S_k ξ_{k+} is F_k-measurable. Now, we are ready to present the first main result of this paper.
Theorem 4.1: The system (7) is UIES in the first moment if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P : R^n → S^{n×n}_{≻0} of class C^1 such that the corresponding conditions hold for all (k_0, x_{k_0}, ξ_{(k_0−1)−}) ∈ Z × R^n × Ξ_{(k_0−1)−}. ⊳

Remark 4.2: From the proof of Theorem 4.1 in Appendix B, one notices that a (non-uniform) IES condition in the first moment can readily be obtained by replacing c_1, c_2, λ, and P with those depending on k_0. By IES in the pth moment at k_0 ∈ Z, we mean that there exist a(k_0) > 0 and λ(k_0) ∈ (0, 1) such that the corresponding decay estimate holds. ⊳

Theorem 4.1 reduces to Theorem 2.3 for deterministic systems. This can be confirmed by considering a ξ_k-independent vector field f_k(x_k, ξ_k) = g_k(x_k). In this case, we can take a ξ_k-independent matrix-valued function P.
In Theorem 4.1, we do not restrict the class of stochastic processes ξ to specific ones. Therefore, our framework can handle a variety of systems, such as the stochastic switching systems mentioned in Remark 3.3, by specifying properties of ξ or, further, the structure of f_k depending on the problem. The utility of studying a general ξ is illustrated in Section V by showing that restricting ξ to a specific process readily yields UIES conditions for that process.

B. With respect to Euclidean Distances
Theorem 4.1 provides a UIES condition with respect to a general distance. In this subsection, we focus on the Euclidean distance, which corresponds to specializing P in Theorem 4.1 to the identity matrix. In this case, it is possible to obtain a UIES condition for second moment stability, stronger than first moment stability, because we can avoid applying the Cauchy-Schwarz inequality in contrast to the general Riemannian metric case; for more details, see the proofs in Appendices B and C. Moreover, we also prove the converse. The result can be viewed as a generalization of Proposition 2.2 to the general stochastic system (7).
Theorem 4.3: The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P such that the corresponding bounds and (13) hold for all (k_0, x_{k_0}, ξ_{(k_0−1)−}) ∈ Z × R^n × Ξ_{(k_0−1)−}. ⊳

Remark 4.4: A similar remark to Remark 4.2 holds. That is, a necessary and sufficient condition for IES in the second moment with respect to the Euclidean distance at k_0 ∈ Z can readily be derived from Theorem 4.3 by replacing c_1, c_2 > 0 and λ ∈ (0, 1) with those depending on k_0. ⊳

For UIES, the condition (13) depends on the convergence rate λ ∈ (0, 1). As in the linear case [37, Lemma 3], we can derive an alternative condition not depending on λ. The proof is similar and thus omitted.
Corollary 4.5: Suppose that there exist c_1, c_2 > 0 and P satisfying the stated conditions. Then, for some λ ∈ (0, 1), the condition (13) holds for all (k_0, x_{k_0}, ξ_{(k_0−1)−}) ∈ Z × R^n × Ξ_{(k_0−1)−}. ⊳

In the linear case, Theorem 4.3 reduces to [37, Theorem 3]. Then, P can be chosen to be independent of x_k. Therefore, (13) and (14) reduce to conditions holding for all (k_0, ξ_{(k_0−1)−}) ∈ Z × Ξ_{(k_0−1)−}. This is nothing but a necessary and sufficient condition for uniform exponential stability in the second moment for the general linear stochastic system in [37, Theorem 3].
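For intuition about the linear specialization, the second-moment condition can be checked numerically. The sketch below uses hypothetical matrices (not those of [37]) with an i.i.d. uniform parameter, so the conditional expectation in (13) becomes a plain expectation E[A(ξ)^⊤ P A(ξ)] ⪯ λP; we estimate the smallest feasible λ for P = I by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear stochastic system x_{k+1} = A(xi_k) x_k, xi_k i.i.d. U[-1, 1].
A0 = np.array([[0.5, 0.2], [0.0, 0.4]])
A1 = np.array([[0.1, 0.0], [0.0, 0.1]])

xi = rng.uniform(-1.0, 1.0, size=50000)
As = A0 + xi[:, None, None] * A1                  # stack of sampled A(xi)
M = np.einsum('nij,nik->jk', As, As) / len(xi)    # Monte Carlo E[A^T A]

# Smallest lambda such that E[A^T P A] <= lambda * P holds for P = I:
lam = np.linalg.eigvalsh(M).max()
print(lam)
assert lam < 1.0  # the second moment condition holds with the identity metric
```

Since E[ξ] = 0 and E[ξ²] = 1/3 here, E[A^⊤A] = A_0^⊤A_0 + (1/3)A_1^⊤A_1 in closed form, which the sampled estimate should match closely.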

V. APPLICATIONS
In the previous section, we have presented incremental stability conditions for the general stochastic systems (7). In this section, we illustrate the utility of the obtained conditions by applying them to specific classes of processes. In particular, we study the cases where ξ follows a temporally-independent process or a Markov process. In most of the literature on stochastic control, e.g., [31], [32], [38], [40], stability conditions have been developed separately for each special class of processes. By virtue of studying the general stochastic process ξ, conditions for each special case are obtained simply by restricting the class of ξ, as in [37] for linear systems. Due to the lack of contraction analysis for stochastic systems, the obtained conditions in each special case are new contributions of this paper in their own right. In this section, we only consider applying Theorem 4.3, but similar results corresponding to Theorem 4.1 and Corollary 4.5, as well as conditions for IES at k_0 ∈ Z, can readily be obtained.

A. Temporally-Independent Processes
In this subsection, we consider ξ satisfying the following assumption. Such a ξ is called a temporally-independent process.

Assumption 5.1: For ξ = (ξ_k)_{k∈Z}, the random vectors ξ_k, k ∈ Z are independently distributed. ⊳

Under Assumption 5.1, the conditions (13) and (14) in Theorem 4.3 are independent of ξ_{(k_0−1)−} for each k_0 ∈ Z. Hence, the conditional expectation can be replaced with the (standard) expectation. Then, by defining P̄ as in (15), we have the following corollary of Theorem 4.3 without proof.
Corollary 5.2: Suppose that Assumption 5.1 holds. The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P̄ such that the corresponding conditions hold for all (k_0, x_{k_0}) ∈ Z × R^n. ⊳

We further consider the stationary case, i.e., ξ_k and f_k are independent of k.
Assumption 5.3: The stochastic process ξ is stationary (in the strict sense), i.e., none of the characteristics of ξ_k changes with time k. Moreover, f_k does not change with time k, i.e., f_0 = f_k for all k ∈ Z. ⊳

Note that a stochastic process satisfying Assumptions 5.1 and 5.3 is an i.i.d. process. Under Assumptions 5.1 and 5.3, P̄ in (15) can be chosen as a k_0-independent function. Namely, we have the following corollary without proof.

Corollary 5.4: Suppose that Assumptions 5.1 and 5.3 hold. The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P̄_0 : R^n → S^{n×n}_{≻0} such that the corresponding conditions hold for all x_0 ∈ R^n. ⊳

Remark 5.5: In Corollary 5.4, we have considered the stationary case. As a more general case, Corollary 5.2 can be specialized to the periodic case where there exists a positive integer N such that f_{κN+i} = f_i, i = 0, 1, . . ., N − 1, κ ∈ Z and none of the characteristics of ξ_{κN+i}, i = 0, 1, . . ., N − 1 changes with κ ∈ Z. The generalized condition is described by using periodic P̄_i, i = 0, 1, . . ., N − 1; for more details, see the similar discussion in the linear case [37, Corollary 3]. ⊳
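Corollary 5.4's condition can be checked in closed form for a simple hypothetical scalar system (our example, not the paper's): x_{k+1} = ξ_k sin(x_k) with ξ_k i.i.d. ~ U[0, 0.9] and a constant metric P̄_0 = 1, where the condition reduces to E[ξ²] cos²(x) ≤ λ for all x.

```python
import numpy as np

# Scalar system x_{k+1} = xi_k * sin(x_k), xi_k i.i.d. ~ U[0, 0.9] (hypothetical).
# The Jacobian is xi_k * cos(x_k); with constant P0 = 1 the second moment
# condition of Corollary 5.4 becomes E[xi^2] * cos(x)^2 <= lam for all x.
E_xi2 = 0.9 ** 2 / 3        # E[xi^2] for U[0, 0.9] is 0.81 / 3 = 0.27
lam = E_xi2                 # worst case over x is cos(x)^2 = 1

for x in np.linspace(-10.0, 10.0, 1001):
    assert E_xi2 * np.cos(x) ** 2 <= lam + 1e-12

print(lam)  # 0.27 < 1: UIES in the second moment w.r.t. the Euclidean distance
```

Because the expectation factors out of the state-dependent term, this is one of the rare cases where the infinite family of conditions collapses to a single scalar inequality.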

B. General Markov Processes
In this subsection, we consider the case where ξ is a general Markov process.

Assumption 5.6: For each Θ_j ⊂ R^m, every j ∈ Z_{(i+1)+} and i ∈ Z, the Markov property holds, where P(·|·) denotes the conditional probability. ⊳

Assumption 5.6 implies that the conditional expectation E_0 can be simplified for each (k_0, ξ̄_{k_0−1}) ∈ Z × Θ_{k_0−1}, where Θ_{k_0−1} is the support of ξ_{k_0−1}. Then, for P in Theorem 4.3, there exists a P̄ satisfying the corresponding conditions for each (k_0, x_{k_0}, ξ̄_{k_0−1}) ∈ Z × R^n × Θ_{k_0−1}. Therefore, we have the following corollary of Theorem 4.3 for Markov processes without proof.
Corollary 5.7: Suppose that Assumption 5.6 holds. The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P̄ such that the corresponding conditions hold for all (k_0, x_{k_0}, ξ̄_{k_0−1}) ∈ Z × R^n × Θ_{k_0−1}. ⊳

In the stationary case, P̄ can again be chosen as a k_0-independent function. Namely, we have the following corollary without proof.

C. Finite-mode Markov Chains
In this subsection, we further consider the case where ξ is a finite-mode Markov chain, which is non-stationary (i.e., non-homogeneous) unless the transition probability is time-invariant.
Assumption 5.10: The process ξ is given by a finite-mode Markov chain defined on the mode set M := {1, . . ., M}, i.e., ξ_k can take a value only in M at each k ∈ Z. ⊳

A process ξ satisfying this assumption is a special case of the general Markov process in Assumption 5.6, and the corresponding system (7) can be seen as a stochastic switched nonlinear system with the switching signal s = ξ in Remark 3.3 given by a finite-mode Markov chain. Such a system is nothing but a standard Markov jump nonlinear system, e.g., [31], [32]; also see, e.g., [30] for Markov jump linear systems. This exemplifies the generality of the system class dealt with in this paper.
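For illustration, a Markov jump nonlinear system of this kind is easy to simulate; the two modes and the transition matrix below are hypothetical, and both mode dynamics happen to be contracting, so any trajectory converges regardless of the mode sequence.

```python
import numpy as np

rng = np.random.default_rng(3)

# A two-mode Markov jump nonlinear system (illustrative modes and transition
# matrix, not from the paper): xi_k is a finite-mode Markov chain selecting
# which vector field is applied at step k.
P_trans = np.array([[0.9, 0.1],
                    [0.2, 0.8]])          # P_trans[i, j] = P(next mode j | mode i)
fields = [lambda x: 0.5 * np.tanh(x),     # mode 0 dynamics
          lambda x: 0.7 * x]              # mode 1 dynamics

mode, x = 0, 5.0
for _ in range(100):
    x = fields[mode](x)
    mode = rng.choice(2, p=P_trans[mode])

print(x)
# Each mode contracts (|0.5 tanh'| <= 0.5 and |0.7| < 1), so x -> 0 along any
# sampled mode sequence.
assert abs(x) < 1e-3
```

In general, of course, mode-wise contraction is not necessary: the coupled conditions of Corollary 5.11 weight the modes by the transition probabilities.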
Let us denote the transition probability from mode i to mode j by p_{ij} for all i ∈ M. Then, by using the mode-dependent functions P̄_i, i ∈ M, Corollaries 5.7 and 5.8 are further simplified as stated below, where the latter is for the stationary (i.e., homogeneous) Markov chain.

Corollary 5.11: Suppose that Assumption 5.10 holds. The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P̄_i such that the corresponding conditions hold for all (k_0, x_{k_0}, i) ∈ Z × R^n × M. ⊳

Corollary 5.12: Suppose that Assumptions 5.3 and 5.10 hold. The system (7) is UIES in the second moment with respect to the Euclidean distance if and only if there exist c_1, c_2 > 0, λ ∈ (0, 1), and P̄_{0,i} such that the corresponding conditions hold for all (x_0, i) ∈ R^n × M. ⊳

Remark 5.13: Again, Corollary 5.11 can be specialized to the periodic case by using periodic P̄_{k,i}, k = 0, 1, . . ., N − 1. ⊳
In the linear case, i.e., f_0(x, j) = A_j x, the inequality (17) reduces to an inequality that is equivalent to [28, Equation (3.15)]. By restricting the classes of processes and systems, we thus establish the connection between our results and the well-known condition for Markov jump linear systems.
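The resulting mode-wise condition can be verified numerically for a small hypothetical Markov jump linear system. With constant P̄_j = I for all modes, the coupled inequality takes the familiar form Σ_j p_{ij} A_j^⊤ P̄_j A_j ⪯ λ P̄_i; the matrices and transition probabilities below are illustrative, not taken from [28].

```python
import numpy as np

# Two-mode Markov jump linear system x_{k+1} = A_{xi_k} x_k (hypothetical data).
A = {1: np.array([[0.3, 0.5], [0.0, 0.2]]),
     2: np.array([[0.6, 0.0], [0.1, 0.4]])}
p = {1: {1: 0.7, 2: 0.3},
     2: {1: 0.4, 2: 0.6}}   # p[i][j] = transition probability from mode i to j

# With P_j = I for all modes, check sum_j p_ij A_j^T A_j <= lam * I mode by mode.
lam = 0.0
for i in (1, 2):
    M = sum(p[i][j] * A[j].T @ A[j] for j in (1, 2))
    lam = max(lam, np.linalg.eigvalsh(M).max())

print(lam)
assert lam < 1.0  # second moment stability of the jump system with P_j = I
```

With mode-dependent P̄_j the same loop applies after replacing A_j^⊤ A_j by A_j^⊤ P̄_j A_j and comparing against λ P̄_i mode by mode.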

A. Stabilizing Controller Design for Mechanical Systems
In this subsection, the proposed stability condition for an i.i.d. process is applied to stabilizing controller design. Consider a pendulum controlled by a DC motor, where x_p and x_i denote the position of the mass and the current of the circuit, respectively. The control input u is the voltage. The parameter ξ_0 is unknown, and we suppose that it follows the i.i.d. uniform distribution U[1, 2]. The other parameters are d = 5, a = 2, k_v/L = 2, R/L = 5, and 1/L = 10. We take the state as x := [x_p ẋ_p x_i]^⊤. Then, we consider the Euler forward discretization of its state-space representation with the sampling period ∆T = 1/20. This system satisfies Assumptions 5.1 and 5.3, and thus Corollary 5.4 is applicable.
Let us fix P̄_0(x) to a constant matrix. For stabilizing controller design, (16) becomes a condition on the closed loop, where K ∈ R^{1×3} denotes a feedback gain. Utilizing the Schur complement technique with P̄_0 ≻ 0 and introducing the new variables K̄ := K P̄_0^{−1} and P̃_0 := P̄_0^{−1} yields an equivalent LMI, where * represents an appropriate matrix. This is an infinite family of linear matrix inequalities (LMIs), parametrized by the state.
It is well known that an infinite family of LMIs can be reduced to a finite one by convex relaxation. Let us introduce the two vertices of the state-dependent nonlinearity. Then, for each x_p ∈ R, there exist θ^{(ℓ)}(x_p) ∈ [0, 1], ℓ = 1, 2, summing to one, such that the nonlinearity is the corresponding convex combination of the vertices.
Therefore, for stabilizing controller design, it suffices to solve the corresponding finite set of LMIs. Fig. 1 shows a sample trajectory of the closed-loop system starting from x(0) = [2 0 0]^⊤. It is confirmed that the closed-loop system is stabilized at the origin. In this example, we have assumed that the parameters other than ξ_0 are deterministic. When they follow some probability distributions, controller design can be carried out similarly by generalizing the systematic methodology for linear systems [44].
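The vertex-relaxation step above rests on a simple fact: an LMI affine in θ ∈ [0, 1] holds for all θ if it holds at the two vertices. A minimal numpy check with illustrative stand-in matrices (not the pendulum LMIs):

```python
import numpy as np

# Illustrative stand-ins for the LMI evaluated at the two vertices theta = 0, 1.
M0 = np.array([[-1.0, 0.3], [0.3, -0.8]])
M1 = np.array([[-0.6, 0.1], [0.1, -1.2]])

def is_neg_def(M):
    return np.linalg.eigvalsh(M).max() < 0

# Finite family: negative definiteness at the two vertices ...
assert is_neg_def(M0) and is_neg_def(M1)

# ... implies negative definiteness of every convex combination, which is what
# reduces the infinite (theta-parametrized) LMI family to a finite one.
for theta in np.linspace(0.0, 1.0, 101):
    assert is_neg_def((1 - theta) * M0 + theta * M1)
```

This is why solving the LMIs only at the vertices certifies the state-dependent condition for every x_p.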

B. Observer Design for Markov Jump Systems
For the continuous-time deterministic system ẋ = f(x), y = Cx, it is known that if there exists a matrix H making ẋ = f(x) + HCx IES, then the system x̂˙ = f(x̂) + H(Cx̂ − y) is an observer; see, e.g., [19]. This result can be extended to discrete-time stochastic systems, which we use for observer design for a Markov jump system.
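The discrete-time analogue of this observer construction can be sketched as follows. The plant, output map, and gain below are hypothetical (much simpler than the paper's Markov jump example): the observer copies the dynamics and injects the output error, and H is chosen so that the error-dynamics Jacobian plus HC is contracting.

```python
import numpy as np

# Hypothetical discrete-time plant x_{k+1} = f(x_k), y_k = C x_k (not the
# paper's example) and a Luenberger-type observer with injection gain H.
def f(x):
    return np.array([0.5 * x[0] + 0.2 * np.sin(x[1]),
                     0.4 * x[1]])

C = np.array([[1.0, 0.0]])
H = np.array([[-0.3], [0.0]])   # chosen so the error dynamics stay contracting

x = np.array([2.0, 2.0])        # plant state
xhat = np.array([0.0, 0.0])     # observer state
for _ in range(50):
    y = C @ x
    # observer: copy of the dynamics plus output-error injection
    x, xhat = f(x), f(xhat) + H @ (C @ xhat - y)

print(np.linalg.norm(x - xhat))  # estimation error after 50 steps
```

The error obeys e_{k+1} = f(x_k) − f(x̂_k) + HC e_k, whose Jacobian has spectral norm below one everywhere for this choice of H, so the error decays geometrically.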
Consider the following Markov jump system, where i, j = 1, 2, 3 (i.e., three modes). This system satisfies Assumptions 5.3 and 5.10, and thus we can utilize Corollary 5.12 for observer design. Let us fix P̄_{0,j}(x_0), j = 1, 2, 3 to constant matrices. For observer design, (17) is rewritten accordingly, where H_j ∈ R^3 denotes an observer gain at each mode.
Fig. 2 shows a sample trajectory of the system starting from x(0) = [2 2]^⊤, together with the trajectories of its mode-dependent observer and its mode-independent observer, both starting from x̂(0) = [0 0]^⊤. It is confirmed that the trajectories of both observers converge to that of the system, and faster convergence is achieved by the mode-dependent observer.

VII. CONCLUSION
In this paper, we have studied moment UIES for discrete-time nonlinear stochastic systems in the contraction framework. In particular, we have presented a sufficient condition for UIES in the first moment with respect to a Riemannian metric and a necessary and sufficient condition for UIES in the second moment with respect to the Euclidean distance. Then, the second moment UIES condition has been specialized to i.i.d. processes and Markov processes as selected applications. Future work includes developing general control/observer design methods, partly illustrated in this paper, within the proposed contraction framework.
(Step 5) We consider integrating both sides of (37) with respect to s over [0, 1]. In (20) and (21), Riemann integrals are used. For the sake of rigor, they need to be replaced with Lebesgue integrals. To this end, we introduce a measurable space corresponding to s. Let (R, B(R), µ) be the measurable space, where µ is the Lebesgue measure. Note that both (R, B(R), µ) and (Ω, F, P) are complete and σ-finite. Then, the product measurable space naturally induced by the Cartesian product R × Ω, denoted by (R × Ω, L, λ), is complete and σ-finite [43, Theorem 5.1.2 and Remark 5.1.2].
According to Remark B.2, φ_k, k ∈ Z_{k_0+} and Φ_k, k ∈ Z_{k_0+} are both piecewise continuous functions at each k_0 ∈ Z. Note that a piecewise continuous function and a stochastic process are both measurable, and the composition of measurable functions is again measurable [43, Proposition 2.1.1]. Therefore, the left-hand side of (39) is measurable. Taking the µ-integrations of both sides of (39) yields (21) for each k.
and the geodesic is the line segment γ*(s) = (1 − s)x′_{k_0} + s x″_{k_0}. That is, when P = I, we can directly use (32) for the sufficiency proof, but this is not true for general P. Utilizing (42), we prove Theorem 4.3 below.
Proof: (Sufficiency) If P is the identity, (32) reduces to a simpler inequality. Substituting into it and taking the µ-integration as in the proof of Theorem 4.1 yields the desired bound, where γ is defined in (38). From (42), the system is UIES in the second moment with respect to the Euclidean distance.
On the other hand, Theorem 4.1 can be generalized to incremental stability analysis on an open subset D ⊂ R^n when f_k : D × R^m → D, k ∈ Z, because D is a (robustly) positively invariant set for such f_k.