Optimal Output-Feedback Control Over Markov Wireless Communication Channels

The communication links connecting components of wireless control systems may be affected by packet losses due to time-varying fading and interference. We consider a wireless control network with double-sided packet losses: on the sensor–controller link (sensing link) and the controller–actuator link (actuation link). We model the sensing and actuation links as finite-state Markov channels (FSMCs). A one time-step delay affects the actuation link mode observation, while the sensing link mode observation is not affected by any delay. In this article, we solve, as our main contribution, the optimal output-feedback control problem in this FSMC setting (under a TCP-like communication scheme) using two different state estimation techniques, i.e., the Luenberger observer and the current estimator, comparing the two methodologies and deriving a separation principle for both cases. We also derive detectability conditions guaranteeing the existence of an optimal observer, either Luenberger or current.

Anastasia Impicciatore, Yuriy Zacchia Lun, Member, IEEE, Pierdomenico Pepe, Senior Member, IEEE, and Alessandro D'Innocenzo, Member, IEEE

I. INTRODUCTION
WIRELESS control networks (WCNs) consist of computational units, actuators, and sensors connected via wireless communication links that may be affected by packet losses. In the wireless control systems literature, packet dropouts have been modeled either as deterministic phenomena (in terms of time averages or worst-case bounds on the number of consecutive packet losses; see, e.g., [1], [2], and [3]) or as stochastic ones. In the stochastic framework, many works assume memoryless packet drops and, thus, model dropouts as realizations of a Bernoulli process (see [4], [5], [6], and [7]). Other works consider more general correlated (bursty) packet losses and use a transition probability matrix (TPM) of a finite-state stationary Markov chain (see, e.g., [8] and references therein) to describe the stochastic process governing packet dropouts (see [4], [9], and [10]). In these works, WCNs with missing packets are modeled via time-homogeneous Markov jump linear systems (MJLSs) [11]. Double-sided packet losses have already been investigated, for instance, in [3] and [12], with an arbitrary packet loss process in [3] and a Markovian one in [12]. These works summarize the packet losses on both links. The significant difficulty of this setting arises from the combined effect of packet losses on the two links, possibly resulting in long periods in which the controller and the actuator cannot simultaneously receive new data (see also Remark 1). However, a simple Markov chain model for packet losses on wireless channels, as used in the WCN literature, is not exhaustive, since the occurrence of packet losses also depends on the operational mode of the communication channel [8]. The finite-state stationary Markov channel (FSMC) model approximates the channel mode transitions through a Markov chain and incorporates a specific packet error distribution into each mode. The FSMC is an essential model because wireless communication system designers traditionally use this mathematical abstraction of the wireless channel for modeling error bursts in fading channels, to analyze and improve performance measures in the physical or medium access control layers. Moreover, several receiver-side channel state estimation and decoding algorithms rely upon FSMC models [8].

Bursts of packet losses cannot be modeled by Bernoulli processes, which is the main limitation of the output-feedback control (OFC) strategy based on the Bernoulli channel. Indeed, the Bernoulli model is less accurate than the FSMC model, and thus, bursts of packet losses may cause unstable behavior without the possibility of recovery, as illustrated in Section VIII. Thus, the existing stabilizability and detectability notions [4] are not suitable for the general FSMC scenario (see Remark 14). This work overcomes this limitation by solving the OFC problem over FSMCs and providing novel stabilizability and detectability conditions. The investigated infrastructure relies on a TCP-like architecture [4], implying that the communication between the controller and the actuators is characterized by acknowledgment (ack) messages. This article generalizes the results in [4] to the FSMC setting, also proving that the fundamental separation principle remains valid when ack messages deliver the state of the channel and the outcome of the related transmission. Ack messages are crucial here because, without them, the separation between estimation and control is impossible even in the Bernoulli setting. Concerning the transmission on the actuation link (AL), the controller is the transmitter: specifically, the transmitter cannot know the outcome of the transmission before sending the message. This is the reason why the controller receives the ack message, as well as the current mode of the channel, only after a one time-step delay [13], while this delay does not affect the sensing link (SL). In modern communication systems, the channel state estimation is always performed at the receiver. Therefore, on the SL, the controller (i.e., the receiver) is able to know the outcome of the transmission and the Markov mode of the channel.

The OFC for MJLSs has been investigated in [11] and [14], with the same Markov chain driving both the dynamics of the plant and that of the observer, without any delay in the channel mode observation. Optimal linear-quadratic regulation [15] with a one time-step delay on the AL mode has been investigated in [10]. In [16], the Kalman filter (KF) is adopted for a single simplified Gilbert channel modeled by a Markov chain with two states. This result cannot be applied to general Markov channel scenarios that require 2N modes with N > 2: the N channel states result, e.g., from the signal-to-interference-plus-noise ratio (SINR) partitioning, and each state is associated with a binary symmetric channel (see [8]). Thus, 2N modes derive from the general Markov channel mathematical model. Other estimation techniques are H_2 and H_∞ estimation: in [9], suboptimal filters are obtained for the case of cluster availability of the operational modes. It is well known that, when the information on the output of the system and on the Markov chain is available at each time step, the best linear estimator of the state is the KF (see [11, Remark 5.2] and [16]). An offline computation of the KF is inadvisable [17], as discussed in more detail in Section IV (see Remark 9). On the other hand, an online computation of the KF requires a significant computational burden. For this reason, we consider a different class of estimators, for which we can precompute the filtering gains offline. We present two infinite horizon (IH) minimum mean square Markov jump filters [11, Ch. 5.3]: the first one with a Luenberger observer (LO) and the second one using the current estimator (CE) [18, Ch. 8.2.4]. These estimators use different communication and computation timing sequences and offer different performance levels (see Section IV).

A. Preliminary Versions
Preliminary parts of this work have been presented at the 58th IEEE Conference on Decision and Control [13] and at the 2021 American Control Conference [19]. Specifically, Zacchia Lun and D'Innocenzo [13] introduced the controllability notion under one-step-delayed AL mode observation, while Impicciatore et al. [19] addressed the OFC with double-sided packet losses and detectability notions for the LO. The improvement with respect to [13] is the double-sided packet losses, while the novelty with respect to [19] is the introduction of the CE together with a comparison between the two methodologies. The CE provides better performance, but it requires more restrictive constraints to be satisfied. Different computation timing sequences are used by the two estimators: the one concerning the CE presents more restrictive physical constraints (see Remark 13). The theoretical existence of these two estimators is addressed through different detectability notions that have been introduced for the FSMC scenario and that are presented in this work with the aim of finding suitable conditions guaranteeing the existence of an observer (either Luenberger or current). Particularly, the conditions guaranteeing the weakest detectability are necessary and sufficient, while the requirements ensuring the strongest detectability are only sufficient. Moreover, we present detailed proofs of the separation principle for the LO and the CE. Finally, we report a more general case study with respect to the one in [19], considering several propagation environments and showing in which cases it is possible to conclude the existence of one of the two observers.

B. Article Contribution
The contributions of this article are listed as follows.
1) The FSMC is introduced into the wireless network control framework. The FSMC is widely used for the analysis and design of telecommunication systems and allows for accurate modeling of errors and bursts of packet losses.
2) The communication timing and the computation and transmission delays are explicitly considered. This leads to two different estimation strategies, the LO and the CE, each one with its feasibility conditions.
3) The validity of the separation principle is proved for both the considered estimators in the general FSMC setting.
4) Four different detectability notions (presented from the weakest to the strongest one; see Remark 22) are introduced with the aim of providing a suitable theoretical basis for the formal description of the filtering problems. These detectability notions are instrumental for the guarantees of the separation principle in the general FSMC scenario (see Remark 14).
5) The presented results are illustrated in a case study concerning an inverted pendulum on a cart, described in Section VIII.

C. Article Organization
The rest of this article is organized as follows. Section II presents the WCN scenario and the information flow on the AL and the SL. Section III describes the optimal OFC problem in our setting. The estimation techniques are described and compared in Section IV, and the corresponding observer stability analysis is provided in Section V [with the solutions of the filtering coupled algebraic Riccati equations (CAREs)]. Section VI states the separation principle derived for both the LO and the CE. Section VII presents the mode-independent output-feedback controller with suitable detectability conditions, from the weakest to the strongest ones. A numerical case study is shown in Section VIII. Finally, Section IX concludes this article. The proofs of lemmas and theorems are reported in the Appendix.

D. Notation and Preliminaries
In the following, N denotes the set of natural numbers corresponding to the nonnegative integers, R denotes the set of reals, while F indicates the set of either real or complex numbers. The absolute value of a number is denoted by |·|. We recall that every finite-dimensional normed space over F is a Banach space [20] and denote the Banach space of all bounded linear operators of a Banach space X into a Banach space Y by B(X, Y). We set B(X, X) ≜ B(X). 0_n denotes the vector of zeros of length n. I_n indicates the identity matrix of size n, while O_n represents the matrix of zeros of size n × n. Transposition is denoted by an apostrophe, complex conjugation by an overbar, and conjugate transposition by the superscript ∗. F^{n×n}_∗ and F^{n×n}_+ represent the sets of Hermitian and positive semidefinite matrices, respectively. For any positive integers C, n, and m, H^{Cn,m} denotes the set of all sequences of C matrices K = [K_ℓ]_{ℓ=1}^C with K_ℓ ∈ F^{n×m}; H^{Cn,∗} is the set of all K in H^{Cn,n} with each K_ℓ Hermitian, and H^{Cn,+} is the set of all K in H^{Cn,∗} with K_ℓ ∈ F^{n×n}_+. We set H^{Cn} ≜ H^{Cn,n}. We denote by ρ(·) the spectral radius of a square matrix (or of a bounded linear operator), i.e., the largest absolute value of its eigenvalues, and by ‖·‖ either any vector norm or any matrix norm. We denote by ⊗ the Kronecker product, defined in the usual way (see, e.g., [21]), and by ⊕ the direct sum. Notably, the direct sum of a sequence of square matrices (Φ_i)_{i=1}^C produces a block diagonal matrix having its elements, Φ_i, on the main diagonal blocks. Then, tr(·) indicates the trace of a square matrix. For two Hermitian matrices of the same dimensions, Φ_1 ⪰ Φ_2 (respectively, Φ_1 ≻ Φ_2) means that Φ_1 − Φ_2 is positive semidefinite (respectively, positive definite). Finally, E(·) stands for the mathematical expectation of the underlying scalar-valued random variable (RV), and R(·) indicates the real part of the elements of a complex matrix. Throughout this article, we extensively use the following acronyms: MS stands for mean square; MSS stands for MS stable or MS stability, whose formal definition is provided in Section II; MSD stands for MS detectability or MS detectable, whose formal definition is also provided in Section II.

II. PROBLEM FORMULATION
Consider the remote architecture depicted in Fig. 1. The discrete-time equivalent system is

G: x_{k+1} = A x_k + B u^c_k + G w_k,  y^s_k = L x_k + H w_k,  (1)

where the system state x_k ∈ F^{n_x} and the system output y^s_k ∈ F^{n_y}, k ∈ N, are obtained through an analog-to-digital converter (A/D block in Fig. 1) with sampling period T. For k ∈ N, w_k ∈ R^{n_w} is a sequence of independent identically distributed Gaussian RVs with zero mean. A, B, G, L, and H are system matrices of appropriate sizes. As in [4], we consider an unstable system state matrix A since, otherwise, a stabilizing OFC would not be required. G is controlled remotely by a digital output-feedback controller, which receives the measurements y^s_k over the wireless SL and sends the control inputs over the wireless AL. The received digital control law u^c_k ∈ F^{n_u} is converted to an analog signal by a digital-to-analog converter (D/A block in Fig. 1) based, for instance, on a zero-order hold, so that the analog control input can be applied to the continuous-time process.
Remark 1: Fig. 1 reports the scheme of a WCN infrastructure with possible packet losses on both the SL and the AL. The main challenge of this scenario arises from the combined effect of packet losses on the two links, possibly resulting in long periods in which the controller and the actuator cannot simultaneously receive new data. The scheme is a TCP-like communication [4] based on ack messages. Specifically, the controller receives the ack of the transmission on the actuator–controller connection (see Fig. 1). Packet losses over this connection are negligible since the probability of a packet loss for ack messages is very small in practical applications.

A. Wireless Link
This section describes single-hop wireless communication links modeled by FSMCs. The sequence {ν_k}_{k∈N} models the packet arrival process on the AL. The value of the RV ν_k is zero whenever the control packet is lost, and ν_k = 1 if the control packet is correctly delivered, i.e., ν_k ∈ S_ν ≜ {0, 1}, for any k ∈ N. Analogously, the sequence {γ_k}_{k∈N} describes the packet arrival process on the wireless SL. Particularly, γ_k = 0 if the sensing packet is lost and γ_k = 1 if it is successfully delivered, i.e., for all k ∈ N, γ_k ∈ S_γ ≜ {0, 1}. The processes {ν_k} and {γ_k} are collections of binary RVs, and the probability of having a packet loss or a correct packet transmission over each link depends on its SINR. The SINR is determined by the propagation environment and the related physical phenomena [22]. The SINR is a stochastic process and can be abstracted by a Markov chain, where each Markov mode is associated with a certain packet error probability (PEP). We consider the stochastic basis (Ω, F, {F_k}_{k∈N}, P), where Ω is the sample space, F is the σ-algebra of (Borel) measurable events, {F_k}_{k∈N} is the related filtration, and P is the probability measure. The SL and AL modes are the outputs of the Markov chains η: N × Ω → S_η ⊆ N and θ: N × Ω → S_θ ⊆ N, respectively. Indeed, the Markov modes of {η_k}_{k∈N} and {θ_k}_{k∈N} belong to the finite sets S_η = {1, 2, …, I} and S_θ = {1, 2, …, N}, respectively.
Remark 2: Previous works, such as [4] and [5], do not consider the communication channel mode, even though the receiver has access to this information by performing a channel state estimation [23]. The novelty of this article lies in the OFC in the FSMC setting.
Moreover, the described Markov chains are characterized by the time-invariant TPMs P = [p_ij]_{i,j=1}^N (for {θ_k}) and Q = [q_mn]_{m,n=1}^I (for {η_k}), respectively. Each TPM may be obtained by integrating the joint probability density function of the SINR over two consecutive packet transmissions and over the desired regions [8], [22]. The TPM values may also be validated through empirical data from a measurement campaign calibrating the theoretical model parameters. The uncertainties in the TPM values, neglected in this work, can be addressed via a polytopic model (see, e.g., [24] and the references therein).
Remark 3: The network-induced communication delays due to multiple-path routing and time-varying processing delays in the relay nodes of multihop networks are not an issue for the single-hop SL and AL with scheduled medium access considered in this article, which are extensively used in delay-sensitive control applications relying, e.g., on the low-latency deterministic network mode of IEEE 802.15.4e.
The entries of the TPMs P and Q are defined as

p_ij ≜ P(θ_{k+1} = j | θ_k = i),  q_mn ≜ P(η_{k+1} = n | η_k = m),

for i, j ∈ S_θ and m, n ∈ S_η. Since the probability of a packet loss depends on the mode of the Markov chain, the values of ν_k and γ_k are either 0 or 1 with certain probabilities depending on the current Markov mode.
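As a concrete illustration of the FSMC abstraction, the following Python sketch samples a two-mode Markov channel and the packet-arrival process it induces. The TPM, the per-mode packet error probabilities, and the random seed are illustrative placeholders, not values from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode FSMC: mode 0 = "good" (low packet-error
# probability), mode 1 = "bad". All numbers are assumed for illustration.
P = np.array([[0.95, 0.05],   # p_ij = P(theta_{k+1} = j | theta_k = i)
              [0.30, 0.70]])
pep = np.array([0.02, 0.60])  # packet-error probability in each mode

def simulate_fsmc(P, pep, steps, mode0=0):
    """Sample a mode trajectory and the induced packet-arrival process nu_k."""
    modes, arrivals = [], []
    mode = mode0
    for _ in range(steps):
        modes.append(mode)
        arrivals.append(0 if rng.random() < pep[mode] else 1)  # nu_k in {0, 1}
        mode = rng.choice(len(pep), p=P[mode])                 # Markov transition
    return np.array(modes), np.array(arrivals)

modes, nu = simulate_fsmc(P, pep, 10_000)
```

Because the bad mode persists with probability 0.70, losses arrive in bursts rather than independently, which is precisely the behavior a Bernoulli model cannot reproduce.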
Remark 4: In this network scenario, the uplink and downlink models are split up. This separation already exists in the literature [3], [4]. However, unlike the previous literature, we explicitly consider the channel mode (see Remark 2) by providing two independent FSMCs for the SL and the AL, respectively.

1) Sensing FSMC: Let y_k denote the measurement received by the output-feedback controller at time k ∈ N. The general model for the SL is y_k = γ_k y^s_k: the value of the RV γ_k when the current Markov mode is η_k ∈ S_η is a function of η_k and, for notational convenience, we denote it as γ_k = γ(η_k). The probability of a successful packet delivery on the SL depends on the current Markov mode η_k = m, i.e., P(γ_k = 1 | η_k = m) = γ̄_m and P(γ_k = 0 | η_k = m) = 1 − γ̄_m are the probability that the packet is successfully delivered at time k ∈ N and the likelihood of a packet loss conditioned on η_k = m, respectively. Fig. 2 provides a graphical representation of the FSMC model on the SL. A visual representation of the AL is similar and, thus, omitted for brevity.
Let π_m(k) denote the probability P(η_k = m), for m ∈ S_η, k ∈ N. The variable π_m(k) can also be written through the indicator function as π_m(k) = E(1_{η_k = m}) (see [11]). We set π(k) ≜ [π_1(k), …, π_I(k)]. For what concerns the process {γ_k}, applying the Bayes law, the Markov property, and the independence between {γ_k} and {η_k}, we obtain, for m, n ∈ S_η [19],

P(γ_{k+1} = 1, η_{k+1} = n | η_k = m) = γ̄_n q_mn.

2) Actuation FSMC: On the SL, the controller is the receiver and has direct access to the channel information (see Remark 2). For the AL, the controller is the transmitter and can access the actuation channel information by an ack message, as the reader may notice in Fig. 1. Obviously, the ack message is received after the transmission, so there is a one time-step delay. Let u_k ∈ F^{n_u} denote the control law computed by the controller, and let u^c_k denote the digital control input received by the D/A block at time k ∈ N. The general model for the AL is u^c_k = ν_k u_k: the value of the RV ν_k when the current Markov mode is θ_k ∈ S_θ is a function of θ_k and, for notational convenience, we denote it as ν_k = ν(θ_k). The probability of a correct packet delivery on the AL depends on the current mode of the AL, that is, P(ν_k = 1 | θ_k = i) = ν̄_i and P(ν_k = 0 | θ_k = i) = 1 − ν̄_i are the probability that the packet is correctly delivered at time k ∈ N and the likelihood that the control packet is lost conditioned on θ_k = i, respectively. For i ∈ S_θ, k ∈ N, the probability P(θ_k = i) is denoted by π̃_i(k), which may be written using the indicator function as π̃_i(k) = E(1_{θ_k = i}) and evolves according to π̃_j(k + 1) = Σ_{i∈S_θ} p_ij π̃_i(k), for j ∈ S_θ, k ∈ N [13]. Recalling that the availability of the AL mode is affected by a one time-step delay, i.e., the controller knows θ_{k−1} (see Fig. 1), the aggregated Markov state (θ_k, θ_{k−1}) is considered [13]. The memory introduced by the presented aggregation is fictitious: the aggregated Markov chain inherits the Markov property of the memoryless chain {θ_k}. Moreover, we can compute the probabilities of the joint process (ν_k, θ_k, θ_{k−1}) as in [13].
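The mode-occupation probabilities described above evolve linearly through the TPM. A short sketch, with an assumed two-mode sensing TPM, propagates π(k + 1) = π(k) Q and shows convergence to the stationary distribution:

```python
import numpy as np

# Illustrative TPM for the sensing-link chain {eta_k} (values assumed).
Q = np.array([[0.90, 0.10],
              [0.40, 0.60]])

def propagate(pi0, Q, k):
    """Mode-occupation probabilities pi(k) = pi(0) Q^k (row-vector convention)."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(k):
        pi = pi @ Q    # pi_n(k+1) = sum_m pi_m(k) q_mn
    return pi

# Starting from mode 1 with certainty, pi(k) converges to the stationary
# distribution, which solves pi = pi Q with entries summing to one.
pi_inf = propagate([1.0, 0.0], Q, 200)
```

For this TPM the stationary distribution is (0.8, 0.2), since the second eigenvalue of Q is 0.5 and its influence decays geometrically.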
3) Information Set: The scenario depicted in Fig. 3 shows the information flow of actuation and sensing data between the plant and the controller under TCP-like protocols, i.e., in the presence of ack messages [4]. Transmissions and computations do not happen instantly: as the reader may see in Fig. 3, the actuation and sensing transmission times (δ_3 and δ_1, respectively) are greater than zero, as well as the control law computation time (denoted by δ_2) and the ack transmission time δ_4. Two different scenarios may arise: either the time interval δ_2 needed by the controller for the computation of the estimate and of the control law is comparable to the sampling period T (this may happen when slow computers are used to control high-order systems), or the time needed for the estimation is very small compared to the sampling period [18]. The first case is depicted in Fig. 3(a), where the computation time δ_2 is comparable to the sampling period T. The suitable estimation technique in this case is provided by the LO, which requires the measurements up through the previous time instant [18, Ch. 8]. By considering the delay δ_1 introduced by the sensing transmission, the controller owns the whole information necessary for the estimation needed in the computation of u_{k+1} exactly at kT + δ_1. Formally, the information set available to the output-feedback controller for the computation of u_{k+1} based on the LO, denoted by F^l_k, collects the information received up through kT + δ_1, i.e., the measurements, SL arrivals, and SL modes up to time k and the AL acknowledgments and AL modes up to time k − 1. This implies that, in the Luenberger-based output feedback, u_{k+1} does not depend on the most recent observation [18, Ch. 8]. Thus, the estimate vector might not be as accurate as the one obtained with the most recent measurement. For high-order systems controlled by slow computers, or whenever the sampling period is comparable to the computation time, the time interval between the observation instant and the validity time of the control output allows the computer to complete the calculations [18]. In many systems, however, the computation time required to evaluate the estimate is quite short compared to the sampling period [see δ_5 in Fig. 3(b)], and the delay of almost a cycle between the measurement and the proper time to apply the resulting control calculation represents an unnecessary waste. Therefore, the controller may exploit the current output measurement to obtain a more accurate state estimate. Fig. 3(b) shows the time diagram of a two-step estimation algorithm: the first step predicts the state estimate based on the measurement from the previous time step, while the following step corrects the predicted estimate by integrating the most recent measurement. The time needed to perform the last step (concerning the estimate correction and the control law computation), denoted by δ_5, is contained in δ_2, and its brevity enables the control law transmission within the proper time window, coherently with the scenario of a controller with higher performance described above [18, Ch. 8]. Notably, the current measurement is used within a different estimation technique (hereafter, the CE) that provides a more accurate estimated state vector based on the most recent output information. The information set used for computing u_k during δ_5, denoted by F^c_k, collects the information received up through kT, including the most recent measurement. We emphasize that the current control input based on F^c_k has access to the most recent observation. Exploiting this additional information considerably increases the performance, resulting in a lower estimation error cost, as explained in more detail in Sections IV and VIII.
Remark 5: The system model (1) does not explicitly account for the sensing and actuation delays (δ_1 and either δ_2 + δ_3 or δ_5 + δ_3 for the LO and the CE, respectively) below one sampling period T. Completely neglecting these delays may degrade the closed-loop stability. However, for a state-space-based design, an actuation delay of a fraction of a sampling period corresponds to augmenting the system, while the sensing delay does not influence the sampled value [18, Chs. 4.3.4 and 8.6]. Thus, without loss of generality, we consider the system matrices in (1) as augmented to account for the subsampling-period delays.
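The augmentation mentioned in Remark 5 can be made concrete for a scalar plant: with a zero-order hold input delayed by λ < T, the discretized model splits the input effect between u_{k−1} and u_k, and one extra state absorbs the delayed input. The sketch below follows the standard construction referenced from [18, Ch. 4.3.4]; all numerical values are illustrative.

```python
import numpy as np

# Scalar plant xdot = a x + b u, sampled with period T; the input applied
# during [kT, kT + lam) is the previous sample u_{k-1}. Values assumed.
a, b, T, lam = 0.5, 1.0, 0.1, 0.03

Phi = np.exp(a * T)
# Gamma1: effect of the *previous* input, held during [kT, kT + lam)
Gamma1 = b * (np.exp(a * T) - np.exp(a * (T - lam))) / a
# Gamma0: effect of the *current* input, held during [kT + lam, (k+1)T)
Gamma0 = b * (np.exp(a * (T - lam)) - 1.0) / a
# Augmented model: [x_{k+1}; u_k] = F_aug [x_k; u_{k-1}] + G_aug u_k
F_aug = np.array([[Phi, Gamma1], [0.0, 0.0]])
G_aug = np.array([[Gamma0], [1.0]])
```

A quick sanity check is that the two partial input matrices recombine into the ordinary delay-free ZOH input matrix, i.e., Gamma0 + Gamma1 = b (e^{aT} − 1)/a.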
Remark 6: A natural alternative to the considered estimators is the mode-independent estimator based on the KF described by Schenato et al. [4], which does not require a channel state estimation and, thus, results in a less complex design. However, the estimator in [4] may fail to support a stable OFC over FSMCs, as discussed in Section VIII. The necessary condition for stable mode-independent estimation and control over fading Markov channels is that the system be Strong-MSD and Strong-MS stabilizable, as detailed in Section VII.

B. WCN Model
Given the system described by (1) and the actuation and sensing FSMCs, the stochastic system describing the architecture in Fig. 1 can be written as follows:

x_{k+1} = A x_k + ν_k B u_k + G w_k,
y_k = γ_k (L x_k + H w_k),   (5)
z_k = C x_k + D u_k,

with z_k ∈ F^{n_z} (needed to define the performance index of the optimal controller), where C and D are matrices of appropriate sizes.

Remark 7: Both ν_k and γ_k depend on the corresponding channel mode according to the FSMC model, i.e., γ_k = γ(η_k) and ν_k = ν(θ_k), respectively (see Section II-A). Therefore, we refer to the system described by (5) as an MJLS.
We assume that the noise sequence {w_k} is independent of the initial state x_0 and of the sequences {ν_k} and {γ_k} (see [11]). We assume, without loss of generality, that the system matrices are constant matrices of appropriate sizes [11, Sec. 5.2]. Similarly to [11, Sec. 5.3], we make standard technical assumptions on the noise and weighting matrices (with k ∈ N). This article aims to solve the OFC problem over FSMCs with two different estimation techniques guaranteeing the IH convergence of the state in the MS sense. This property is known as MSS [11, Def. 3.8, pp. 36–37] and is recalled as follows.
Definition 1: The MJLS described by (5) is MSS if there exist equilibrium points μ and Q (independent of the initial conditions) such that, for any initial condition (x_0, ν_0, γ_0),

lim_{k→∞} ‖E(x_k) − μ‖ = 0,  lim_{k→∞} ‖E(x_k x_k^∗) − Q‖ = 0.
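For a finite-state MJLS with real mode-dependent matrices, MSS of the autonomous dynamics can be tested numerically through the spectral radius of the second-moment operator, the classical test from the MJLS literature (see, e.g., [11]). The sketch below builds that operator and checks its spectral radius; the scalar mode matrices used in the usage example are illustrative.

```python
import numpy as np

def mss_spectral_radius(A_modes, P):
    """Spectral radius of the second-moment operator of the autonomous MJLS
    x_{k+1} = A(theta_k) x_k with TPM P; MSS holds iff the result is < 1.
    Real mode matrices are assumed, so A_i (x) A_i needs no conjugation."""
    N = len(A_modes)
    nn = A_modes[0].shape[0] ** 2
    D = np.zeros((N * nn, N * nn))
    for i in range(N):
        Ki = np.kron(A_modes[i], A_modes[i])   # A_i (x) A_i
        for j in range(N):
            # block (j, i) is p_ij * (A_i (x) A_i): it maps vec(X_i) into
            # the j-th component sum_i p_ij A_i X_i A_i'
            D[j * nn:(j + 1) * nn, i * nn:(i + 1) * nn] = P[i, j] * Ki
    return float(max(abs(np.linalg.eigvals(D))))
```

For example, two scalar modes 0.5 and 1.2 under a slowly switching TPM give a spectral radius above one (not MSS), even though the chain spends most of its time in the contracting mode; this is exactly the burst-induced instability mechanism discussed in the Introduction.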

III. OUTPUT-FEEDBACK CONTROLLER
This section shows two alternative OFC systems for the problem formalized in Section II.

A. Control Synthesis Based on the LO
Consider the scenario in Fig. 3(a) and the related information set F^l_k, k ∈ N. The optimal LO-based Markov jump OFC system G_l relies on F^l_k for the synthesis of u_k, with x̂_k being the estimated state obtained by the LO. The controller G_l (with optimal matrices, including F(θ_{k−1}), to be found) should guarantee the MSS of the closed-loop system (see Definition 1). The sequences of matrices F = [F(ℓ)]_{ℓ=1}^N and B = [B(n)]_{n=1}^I are the solutions of the optimal control and of the optimal filtering problems, respectively.

B. Output-Feedback Controller With CE
Consider the scenario in Fig. 3(b) and the related information set F^c_k. The optimal CE-based Markov jump OFC system G_c relies on F^c_k for the synthesis of u_k, where x̄_k and x̂_k are the prediction and correction states at time k ∈ N, respectively, obtained using the CE. The controller G_c (with optimal matrices A(γ_k, η_k), B(η_k), C(ν_k, θ_{k−1}), D(η_{k+1}), and F(θ_{k−1}) to be found) should guarantee the MSS of the closed-loop system (see Definition 1). The sequences of matrices F = [F(ℓ)]_{ℓ=1}^N and D = [D(n)]_{n=1}^I are the solutions of the optimal control and filtering problems, respectively.
Remark 8: Both G_l and G_c should achieve the MSS of the closed-loop system. The CE provides a valid alternative to the LO, and the proper control strategy should be chosen according to the computing capacity of the controller. When the computation time δ_5 [see Fig. 3(b)] required for the correction of the predicted estimate is under a certain threshold, the suitable controller is G_c; otherwise, G_l should be preferred (see also Remark 9).

C. Linear-Quadratic Regulator
The necessary condition for an optimal IH solution of the wireless control problem is the MS stabilizability with delay.
Definition 2 (MS stabilizability with delay): The system (5) is MS stabilizable with one time-step delayed AL mode observation if, for any initial condition (x_0, θ_0) and for each mode ℓ ∈ S_θ, there exists a mode-dependent gain F_ℓ such that u_k = F_{θ_{k−1}} x_k is an MS stabilizing state feedback for (5).
Let F_ℓ ∈ F^{n_u×n_x}, ℓ ∈ S_θ, denote the optimal mode-dependent control gain with one time-step delayed observation of the AL operational mode (see [13] for the solution of the IH optimal control problem and [10] for a more general result). For l ∈ S_θ, the set of equations X_l = X_l(X) is the set of control CAREs. The necessary condition for the existence of the MS stabilizing solution X ∈ H^{Nn_x,+} of the control CAREs is the MS stabilizability with delay of system (5) (see Definition 2). If X ∈ H^{Nn_x,+} is the MS stabilizing solution of the control CAREs, then the state-feedback control input F_{θ_{k−1}} x_k stabilizes the system in the MS sense, with one time-step delay in the observation of the AL mode (see [13]). The optimal control problem is solved by using the linear matrix inequality (LMI) approach [10]. The optimized performance index is J_h = lim sup_{t→∞} (1/t) Σ_{k=0}^{t−1} E(‖z_k‖²) for system (5), with h = l for the LO and h = c for the CE. The performance index achieved by the optimal control law can be expressed in terms of the MS stabilizing solution X.
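Coupled algebraic Riccati equations of this kind can be approached numerically by a fixed-point (value) iteration. The sketch below treats the simplified standard MJLS case in which the current mode θ_k is observed without delay (the delayed-mode CAREs of [13] carry a different coupling), with illustrative scalar data in the usage example.

```python
import numpy as np

def solve_control_cares(A, B, P, Qc, R, iters=3000):
    """Value iteration for the coupled algebraic Riccati equations of a
    standard MJLS LQ problem (mode observed WITHOUT delay -- a
    simplification of the article's one-step-delayed setting).
    A, B: lists of mode-dependent matrices; P: TPM; Qc, R: weights."""
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        # E_i(X) = sum_j p_ij X_j couples the modes through the TPM
        E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
        X = [Qc + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i]
               @ np.linalg.inv(R + B[i].T @ E[i] @ B[i])
               @ B[i].T @ E[i] @ A[i]
             for i in range(N)]
    E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
    # mode-dependent LQ gains for u_k = -F_i x_k
    F = [np.linalg.inv(R + B[i].T @ E[i] @ B[i]) @ B[i].T @ E[i] @ A[i]
         for i in range(N)]
    return X, F
```

With scalar modes A = {1.2, 0.8}, B = 1, unit weights, and a mildly switching TPM, the iteration converges monotonically from zero to positive solutions X_i whose CARE residuals vanish.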

IV. ESTIMATION TECHNIQUES
The output-feedback controllers introduced in Section III rely either on the LO (G_l) or on the CE (G_c). The aim of the control law is to ensure the MSS of the closed-loop system, while the aim of each estimator is to ensure the MSS of the estimation error dynamics associated with the estimation technique.

Definition 3: The MJLS (5) is MSD if there exists an estimator such that the corresponding estimation error system is MSS.
Remark 9: When the information on the output of the system and on the Markov chain is available at each time step, the best linear estimator of x_k is the KF (see [11, Remark 5.2]). In an offline computation of the KF, the solutions of the difference Riccati equations and the time-varying Kalman gains are sample-path dependent, and the number of sample paths grows exponentially in time. Thus, an offline KF implementation is inadvisable here [17]. On the other hand, an online implementation of the KF requires online matrix inversions, which may carry a heavy computational burden. Therefore, this work considers a different class of estimators with filtering gains precomputed offline. This avoids online matrix inversions and reduces the computational burden.

A. Markovian LO
This subsection briefly recalls the Markovian LO presented in [19], given by Ǧ_l in (10), with M̌_m, m ∈ S_η, the mode-dependent filtering gain obtained as the solution of the Luenberger filtering problem, which relies on the information set F_k^l. Note that, when the controller computes x̂_{k+1}, it knows whether the packets containing the control input u_k and the measurement y_k have been received. Indeed, this information is contained in F_{k+1}^l, which is exploited to compute the proper control input to apply at time k + 1, that is, u_{k+1} = F_{θ_k} x̂_{k+1}. Let us define the LO estimation error at time step k ∈ N as ě_k ≜ x_k − x̂_k. The error dynamics are derived accordingly.
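A hedged sketch of one LO update consistent with the error matrix Γ_{n1} = A + M̌_n L of Definition 4 (the sign convention of the innovation term, the packet-loss handling, and all numbers below are illustrative assumptions, not the article's equation (10)):

```python
import numpy as np

# Hedged sketch of one Markovian Luenberger-observer update. The sign
# convention follows the error matrix Gamma_{n,1} = A + M_n L, so the
# innovation enters with gain -M_n. A, B, L, nu_k (AL packet arrival),
# gamma_k (SL packet arrival), and eta_k (SL mode) mirror the article's
# notation; the concrete numbers are placeholders, not the case-study model.

def luenberger_step(A, B, L, M_modes, x_hat, u, y, nu_k, eta_k, gamma_k):
    """x_hat_{k+1} = A x_hat + nu_k B u - gamma_k M_{eta_k} (y - L x_hat)."""
    innovation = y - L @ x_hat if gamma_k else np.zeros(L.shape[0])
    return A @ x_hat + nu_k * (B @ u) - gamma_k * (M_modes[eta_k] @ innovation)

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
L = np.array([[1.0, 0.0]])
M_modes = [np.array([[-0.5], [-0.1]]), np.array([[-0.8], [-0.2]])]
x_hat = luenberger_step(A, B, L, M_modes,
                        np.zeros(2), np.array([0.0]),
                        np.array([1.0]), nu_k=1, eta_k=0, gamma_k=1)
```

With ě_k = x_k − x̂_k, this update makes the received-packet error transition matrix exactly A + M̌_{η_k} L, matching the specialization used in Definition 4.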

B. Markovian CE
The CE [18, Ch. 8] over the FSMC results in the MJLS (12)–(14), with M̂_m, m ∈ S_η, the mode-dependent filtering gain obtained by solving the CE problem that relies on the information set F_k^c [18]. The variables x̄_k and x̂_k are the predicted and the estimated state vectors at time step k ∈ N, respectively. The CE is a two-step estimation algorithm: the first step computes the prediction x̄_{k+1} = A x̂_k + ν_k B u_k based on the measurements up to the previous time step, while the second step corrects the predicted estimate by integrating the most recent measurement. The estimated state vector resulting from this correction with y_{k+1} is x̂_{k+1}.
Define the prediction error at time step k ∈ N as e_k ≜ x_k − x̄_k; the resulting estimated-state Markov jump system is given by (12)–(14). Remark 10: At time step k + 1, the predicted state x̄_{k+1} is corrected by exploiting the prediction error e_{k+1}, through the most recent output measurement.
By substituting x̂_k, obtained from (13), into the prediction, the expression of x̄_{k+1} depends on the prediction error, as follows:

Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply.
Therefore, the prediction error MJLS is given by (15). Define the CE estimation error as ê_k ≜ x_k − x̂_k; the corresponding dynamics are given by (16). Remark 11: In the LO, the estimation error coincides with the prediction error. In the CE, when the prediction error e_k converges to zero, by (16), the estimation error goes to zero as well. Thus, (15) and (16) are equivalent at the steady state.
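The predict–correct cycle described above can be sketched as follows (hedged: the sign of the correction term and all numbers are illustrative assumptions, not the article's (12)–(14)):

```python
import numpy as np

# Hedged sketch of one current-estimator (CE) cycle: predict with the plant
# model, then correct the prediction with the newest measurement y_{k+1}.
# A, B, L, nu_k, gamma, eta, and the per-mode gains M_modes mirror the
# article's notation; the numbers are placeholders, not the case-study model.

def ce_predict(A, B, x_hat, u, nu_k):
    """Prediction x_bar_{k+1} = A x_hat_k + nu_k B u_k."""
    return A @ x_hat + nu_k * (B @ u)

def ce_correct(L, M_modes, x_bar, y_next, eta_next, gamma_next):
    """Correction x_hat_{k+1} = x_bar + gamma M_{eta} (y_{k+1} - L x_bar)."""
    if not gamma_next:
        return x_bar
    return x_bar + M_modes[eta_next] @ (y_next - L @ x_bar)

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
L = np.array([[1.0, 0.0]])
M_modes = [np.array([[0.6], [0.1]])]
x_bar = ce_predict(A, B, np.zeros(2), np.array([1.0]), nu_k=1)
x_hat_next = ce_correct(L, M_modes, x_bar, np.array([0.2]), 0, 1)
```

The correction step is what distinguishes the CE from the LO: the estimate at time k + 1 already incorporates y_{k+1}, which is why the CE can achieve a smaller performance index under tighter timing constraints.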
Remark 12: Neither the control input nor the Markov chain {θ_k} is involved in the MJLSs (11) and (15). This implies that the optimal mode-dependent LO gain M̌_m and CE gain M̂_m, m ∈ S_η, can be designed independently of the optimal mode-dependent control gain F_ℓ, ℓ ∈ S_θ.

C. Computation Time
It is well known that the total number of floating-point operations (flops) needed to carry out the presented estimation algorithms may provide a rough estimate of the computation time [25]. Given the state estimate vector, the number of flops needed for the evaluation of the control law is O(n_u n_x). Moreover, the computational complexity of the Luenberger and of the current state estimation numerical algorithms is the same: O(n_x² + n_x n_u + n_y + n_x n_y). The physical constraint for estimator implementation is obtained by comparing δ_2 (the time needed for all the computations leading to the control law) with the sampling time T. If the condition δ_2 < T is satisfied, then the LO represents a viable technique. Under this constraint, if δ_5 (which is shorter than δ_2, as already seen in Section II-A3) is such that the control transmission remains inside the proper time window, the current estimation is feasible and provides a more accurate result.
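The feasibility check δ_2 < T can be sketched as a back-of-the-envelope computation (the flop constants and the processor throughput below are placeholder assumptions, not measured values):

```python
# Rough flop-count sketch for one estimator update plus one control-law
# evaluation, following the complexity orders quoted in the text. The
# dimensions and the assumed throughput are placeholders.

def estimator_flops(nx: int, ny: int, nu: int) -> int:
    """O(nx**2 + nx*nu + ny + nx*ny): matrix-vector products with
    precomputed gains, no online matrix inversion."""
    return nx * nx + nx * nu + ny + nx * ny

def control_flops(nx: int, nu: int) -> int:
    """O(nu*nx): one gain-times-estimate product."""
    return nu * nx

flops_per_second = 1e8   # assumed processor throughput (placeholder)
delta = (estimator_flops(4, 2, 1) + control_flops(4, 1)) / flops_per_second
print(delta < 0.01)  # True: comfortably within a sampling time T = 0.01 s
```

For the small state dimensions typical of the case study in Section VIII, the computation-time constraint is easily met; the binding constraint is rather the transmission timing discussed in Section II-A3.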
Remark 13: The physical constraints (concerning the computation time) discussed above provide necessary conditions for implementation. However, when taking into account combined packet losses in both communication channels, as well as the actuation delay, the IH OFC is not easy to model and formally solve. Trivially, when all communication is lost, an unstable plant cannot be stabilized remotely. The conditions concerning the theoretical existence of the IH estimators and controllers operating over FSMCs can be based on the MS detectability and stabilizability notions (discussed in the following sections), which guarantee an MS stable behavior of the estimators and of the controller with precomputed gains.

V. OBSERVER STABILITY ANALYSIS
This section provides the MSD specializations for the LO and for the CE.

A. Operators
We introduce some mathematical preliminaries instrumental to the MSS analysis (see [11]). For all S = [S_m]_{m=1}^I and T = [T_m]_{m=1}^I, both in H^{In_x}, we specify the inner product ⟨S; T⟩ and the operators T_n(S) and V_n(S), where the matrices Γ_{n1}, Γ_{n0}, Γ̄_{n1}, and Γ̄_{n0} are arbitrary matrices in F^{n_x×n_x} that will be specialized later in the article, while q_{mn} and γ̄_n are those defined by (2) and (3), respectively. We also define the operator O(·, ·) through O_n(M, α) in (22) and (23).

Remark 14: The matrices N and C are designed with the aim of providing a suitable methodology for testing the detectability conditions in Definitions 4 and 5, as will be discussed later. However, even though the aim is the same as in [11], differently from [11], they account for the general FSMC scenario, i.e., they involve the probabilities γ̄_m, m ∈ S_η.
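The inner product above is garbled by extraction; in the standard MJLS Hilbert-space setting of [11], on which this section builds, it reads (a hedged reconstruction):

```latex
\langle S;\, T \rangle \;=\; \sum_{m=1}^{I} \operatorname{tr}\!\left( S_m^{*}\, T_m \right),
\qquad S = [S_m]_{m=1}^{I},\; T = [T_m]_{m=1}^{I} \in \mathbb{H}^{I n_x}.
```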

B. LO Stability Analysis
The Luenberger stability analysis is based on the IH solution of the filtering CAREs, which are derived as the asymptotic solution of difference Riccati equations, obtained by defining the first and second moments of the error ě_k, k ∈ N, as follows:
for n ∈ S_η. Define also M̌ ≜ [M̌_m]_{m=1}^I, i.e., the sequence of mode-dependent filtering gains in (10) providing the solution of the LO filtering problem. Hence, we can state the following.
Proof: See the Appendix.
The following definition provides a specialization of Definition 3 for the LO scenario.

Definition 4. (MSD):
The system described by (5) is MSD if, for each mode n ∈ S_η, there exists a mode-dependent filtering gain M̌_n ∈ F^{n_x×n_y} such that ρ(V) < 1, with V ∈ B(H^{In_x}) defined in (20), for Γ_{n1} = A + M̌_n L and Γ_{n0} = A.
From now on, we refer to Definition 4 when using MSD.

Remark 16: By applying the results from [11, Sec. 3.4.2], the property provided by Definition 4 is equivalent to the MSS of the error system (11).
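The test ρ(V) < 1 of Definition 4 can be checked numerically by representing the second-moment operator as an ordinary matrix via vectorization (as in Appendix A). The sketch below assumes the standard MJLS form of such an operator, mixing Γ_{n1} = A + M̌_n L (measurement received, probability γ̄_n) and Γ_{n0} = A (measurement lost); the exact index placement of p_{mn} and γ̄_m in the article's V is not reproduced here, so this is an illustrative approximation on a placeholder scalar plant, not the article's operator:

```python
import numpy as np

# Hedged sketch of the spectral-radius test rho(V) < 1. Block (n, m) of the
# vectorized operator weights kron products of the received/lost transition
# matrices by the delivery probability gamma_bar[m] and the TPM entry P[m, n].

def mss_operator_matrix(P, gamma_bar, Gamma1, Gamma0):
    I, nx = len(Gamma1), Gamma1[0].shape[0]
    Lam = np.zeros((I * nx * nx, I * nx * nx))
    for n in range(I):
        for m in range(I):
            blk = gamma_bar[m] * np.kron(Gamma1[m].conj(), Gamma1[m]) \
                + (1 - gamma_bar[m]) * np.kron(Gamma0[m].conj(), Gamma0[m])
            Lam[n*nx*nx:(n+1)*nx*nx, m*nx*nx:(m+1)*nx*nx] = P[m, n] * blk
    return Lam

A = np.array([[1.05]])           # scalar unstable plant (placeholder)
L = np.array([[1.0]])
M = [np.array([[-0.5]]), np.array([[-0.9]])]   # per-mode filtering gains
P = np.array([[0.9, 0.1], [0.2, 0.8]])         # two-mode SL TPM (placeholder)
gamma_bar = [0.9, 0.6]
Gamma1 = [A + M[n] @ L for n in range(2)]
Gamma0 = [A, A]
rho = max(abs(np.linalg.eigvals(mss_operator_matrix(P, gamma_bar, Gamma1, Gamma0))))
print(rho < 1)
```

Despite the open-loop instability (|A| > 1), the mode-dependent gains drive the spectral radius of the second-moment operator below one, which is exactly the MSD certificate the definition asks for.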

C. CE Stability Analysis
Analogous steps to those of the LO stability analysis are reported in the following. Define, for n ∈ S_η and k ∈ N, the first and second moments of the prediction error; consequently, E[e_k] and E[e_k e_k*] follow. Let M̂ = [M̂_m]_{m=1}^I be a sequence of mode-dependent filtering gains in (12) providing the solution of the CE filtering problem. The following proposition formalizes the dynamics of the first and second moments of the observation error.
Proposition 3: Consider the error system described by (15). Then, for all k ∈ N, the equalities in (31) hold, with T, O, and B defined in (18), (22), and (30), respectively, for Γ_{n1} = A + A M̂_n L and Γ_{n0} = A. Proof: See the Appendix.
The following definition adapts Definition 3 to the CE scenario.

Definition 5. (Strict-MSD):
The system described by (5) is Strict-MSD if, for each mode n ∈ S_η, there exists a mode-dependent filtering gain M̂_n ∈ F^{n_x×n_y} such that ρ(T) < 1, with T ∈ B(H^{In_x}) defined in (18), for Γ_{n1} = A + A M̂_n L and Γ_{n0} = A.
Proof: See the Appendix. Remark 17: By the results from [11, Sec. 3.4.2] applied to the operator T (with T as in Definition 5), ρ(T) < 1 is equivalent to the MSS of the error system described by (15).

D. LO Filtering CAREs
The optimal mode-dependent filtering gain of the LO results from the optimization of the performance index J_L. Obtaining the optimal performance index in the Luenberger scenario requires dealing with the Luenberger filtering CAREs, introduced as follows.

Define, for any n ∈ S_η, the operator B_n(Y). Consider the set M, defined as follows. The LO filtering CAREs are the set of equations given by (34). The optimal IH mode-dependent filtering gain is obtained from the solution of the optimization problem (35). Then, the MS stabilizing filtering gain is given by (36).
The maximal solution of (34) and the solution of (35) coincide, as stated in the following theorem.
Theorem 1: Assume that (5) is MSD. Then, the following statements are equivalent.
i) There exists Y⁺ ∈ M satisfying (34), such that Y⁺ ⪰ Y for all Y ∈ M. ii) There exists a solution Y of the convex programming problem described in (35).
The maximal solution and the MS stabilizing solution of (34) are connected, as stated in the following theorem.
Theorem 2: There exists at most one MS stabilizing solution of (34), which coincides with the maximal solution in M, that is, the solution of the convex programming problem described in (35).
Proof: See the Appendix.
The MS stabilizing filtering gain (36) is computed by exploiting the maximal solution of (34), i.e., the solution of (35), as stated in Theorem 2. Consequently, the optimal performance index achieved by the LO follows, with Y the maximal solution of (34). The necessary condition for the existence of the MS stabilizing solution of the filtering CAREs is the MSD of system (5).

E. CE Filtering CAREs
The optimal mode-dependent filtering gain of the CE results from the optimization of the performance index J_C. Remark 18: J*_C (computed by exploiting the prediction error) can be compared with the Luenberger performance index J*_L (computed by exploiting the estimation error) because the estimation error for the LO and the prediction error for the CE are equivalent at the steady state (see Remark 11). The following lemma states the equivalence of the filtering CAREs solutions and of the filtering gains for the LO and the CE.
Lemma 1: The following statements are equivalent.
Moreover, the mode-dependent filtering gain that stabilizes the error system (15) in the MS sense is M̂_n = M̂_n(Z), and the optimal performance index achieved by the CE follows from Z and from Y, the maximal solution of (34).
Proof: See the Appendix.

Remark 19:
The LO and the CE are equivalent from the steady-state point of view, as stated in Lemma 1. However, their difference in performance (indicated by the indexes J_L and J_C) and in physical constraints (see Remark 13) allows choosing the most suitable estimator for a specific scenario, as shown in Sections II-A3 and IV-C.
Remark 20: If the matrix A is nonsingular, then, from Lemma 1, we may compute the LO filtering gain as M̌_n = A⁻¹ M̂_n.

VI. SEPARATION PRINCIPLE
In the following, we state the separation principle for the LO and CE scenarios, respectively.

A. LO Separation Principle
Consider the optimal matrices in (8), which we can express as follows. Then, the optimal output-feedback controller (8) coincides with (10), and the closed-loop system dynamics are given in (37). By recalling the error dynamics described in (11), we write the closed-loop system as Ǧ_cl. Theorem 3: Given an MJLS described by (5) and the LO (10), the following statements are equivalent. i) The dynamics (37) can be made MSS; ii) the MJLS described by (5) is both: ii-a) MSD; ii-b) MS stabilizable with one time-step delayed AL mode observation.
Proof: See the Appendix.

B. CE Separation Principle
Consider the optimal matrices in (9), which we can express as follows. Then, (12)–(14) coincide with (9), and the dynamics of the closed-loop system follow. By recalling the error dynamics described in (15), we write the closed-loop system in a compact form. Remark 21: The matrices Ψ and Γ are block upper triangular matrices as in [11], i.e., the error dynamics (driven by {η_k}) do not depend on the state dynamics (driven by {θ_k}). Differently from [11], the closed-loop dynamical matrices Γ and Ψ contain the Markov jumps not only of the Markov chain {η_k} (SL) but also of the Markov chain {θ_k} (AL) (see the FSMC model in Sections II-A1 and II-A2). Moreover, we consider the mode observation delay affecting the Markov chain {θ_k}, k ∈ N.
Theorem 4: Given an MJLS described by (5) and the CE (12), the following statements are equivalent.
i) The dynamics (39) can be made MSS; ii) the MJLS described by (5) is both: ii-a) Strict-MSD; ii-b) MS stabilizable with one time-step delayed AL mode observation.
Proof: See the Appendix.

VII. MODE-INDEPENDENT OUTPUT-FEEDBACK
Under the conditions presented in this section, the designer can use mode-independent control and filtering gains. The advantage of mode independence is the reduced computational burden, especially when the number of modes increases. The strong MS stabilizability (defined in the following) guarantees the existence of an MS stabilizing mode-independent control gain. On the other hand, the following definitions of Strong-MSD and Strong-Strict-MSD provide the basis for deriving sufficient conditions guaranteeing the existence of a mode-independent filtering gain that makes the estimation error system MSS.

Definition 7. (Strong-MS stabilizability): The system (5) is Strong-MS stabilizable with one time-step delayed AL mode observation if, for any initial condition (x_0, θ_0), there exists a mode-independent control gain F_b ∈ F^{n_u×n_x} such that u_k = F_b x_k is the MS stabilizing state feedback for (5).
The following Strong-MSD and Strong-Strict-MSD notions instead concern the SL.

Definition 8. (Strong-MSD):
The system (5) is Strong-MSD if there exists a mode-independent filtering gain M̌_b ∈ F^{n_x×n_y} such that ρ(V) < 1, with V ∈ B(H^{In_x}) defined in (20), for Γ_{n1} = A + M̌_b L, Γ_{n0} = A, and n ∈ S_η.

Definition 9. (Strong-Strict-MSD): The system (5) is Strong-Strict-MSD if there exists a mode-independent filtering gain M̂_b ∈ F^{n_x×n_y} such that ρ(T) < 1, with T ∈ B(H^{In_x}) defined in (18), for Γ_{n1} = A + A M̂_b L, Γ_{n0} = A, and n ∈ S_η.
Proof: See the Appendix. Remark 22: Strong-Strict-MSD implies all the detectability notions concerning the FSMC model. Thus, it is the strongest notion, while MSD is the weakest one.
We introduce the mode-independent output feedback by recalling the filtering and control modified algebraic Riccati equations (MAREs) reported in the following [4], [26]. To this end, consider the sets in (41). Under the strong MS stabilizability condition, the mode-independent MS stabilizing control gain exists, and it is given in [13]. Moreover, the critical arrival probability on the AL is defined as ν_c. Remark 23: The Strong-MSD condition guarantees the existence of the mode-independent filtering gain. Moreover, if Strong-Strict-MSD is satisfied, the existence of the CE mode-independent filtering gain is also guaranteed; in this case, the filtering gain can be computed accordingly.
The next theorem links the optimal mode-dependent filtering CAREs solution and the mode-independent solution of the filtering MARE. Specifically, the solutions of the filtering problem are equivalent under particular conditions. The same holds for the control problem [13, Th. 3].
i) The solution of the filtering MARE provides the mode-independent solution of the filtering CAREs. ii) The solution of the control MARE provides the mode-independent solution of the control CAREs.
Proof: See the Appendix. LMIs guaranteeing the MS detectability conditions are presented as follows. Proposition 6: Consider the MJLS described by (5) and the following statements.
Proof: See the Appendix. Consider the following set of LMIs, with W_1 (in a suitable space), W_2 in F^{n_x×n_y}, and W_3 in F^{n_y×n_y}_+. Proposition 7: Consider the MJLS described by (5) and the following statements.

VIII. NUMERICAL CASE STUDY
This section presents the wireless OFC of an inverted pendulum on a cart [27], controlled remotely over TCP-like lossy SL and AL. The considered cart and pendulum masses are 0.5 and 0.2 kg, the inertia about the pendulum mass center is 0.006 kg·m², the distance from the pivot to the pendulum mass center is 0.3 m, and the coefficient of friction for the cart is 0.1. The system state is defined by x = [δx, δẋ, δφ, δφ̇]ᵀ, with δx(t) = x(t) − x̄ and δφ(t) = φ(t) − φ̄, where x is the cart position, φ is the pendulum angle from vertical, and x̄ and φ̄ are the equilibrium-point position and angle, respectively. The designed control law aims to stabilize the pendulum in the upright position corresponding to the unstable equilibrium point x̄ = 0 m, φ̄ = 0 rad. The optimal Markov jump output-feedback controllers (8) and (9) have been applied to the discrete-time linear model derived from the continuous-time nonlinear model by linearization. The state-space model of the system is linearized around the unstable equilibrium point and discretized with sampling period T_s = 0.01 s. The obtained system matrices can be found in [19]. The process noise is characterized by the covariance matrix Σ_w. The state matrix A is unstable since it has an eigenvalue at 1.057, but it is easy to verify that D*D ≻ 0, the pair (A, B) is controllable, and (A, L) is observable, so the closed-loop system is asymptotically stable if ν_k = 1 and γ_k = 1 for all k. Moreover, the necessary conditions for the existence of the MS stabilizing solutions of the control and filtering CAREs are satisfied. FSMC models with TPMs in R^{4×4} describe the double-sided packet loss. These channels are obtained by following the systematic procedure in [22], which accounts for path loss, shadow fading, transmission power control, and interference. The partitioning of the SINR range is based on the values of the PEP, so that each SINR threshold corresponds to a specific PEP value.

A. Detectability Analysis
The proposed methodology is applied to the study of the MSD conditions. Simulation results highlight the existence of a limit case for the detectability conditions. When considering the distance d_0 = 17.348 m between the transmitter–receiver couple of interest and the distance d_{i,1} = 9.548 m between the interfering transmitter and the receiver of interest, the resulting SL TPM is given by

[0.8855395  0.0184352  0.0603969  0.0356284]
[0.8825920  0.0187857  0.0617956  0.0368267]
[0.8820434  0.0188504  0.0620549  0.0370513]
[0.8806549  0.0190134  0.0627101  0.0376216]

The probabilities of receiving the packet in each mode of the SL are γ̄ = [0.005, 0.5000509, 0.9237605, 1]. Conditions (42) are satisfied; from Proposition 6, the system is MSD and Strict-MSD. The strong conditions (43), instead, are not satisfied. From the spectral radius analysis, ρ(V) = ρ(T) = 0.999999983 with the Markovian filtering, and ρ(V) = ρ(T) = 1.000000074 with the Bernoullian filtering. In this case, the condition γ̄ > γ_max from [4, Th. 5.6] is satisfied. However, the system is unstable with the Bernoullian filtering because it is neither Strong-MSD nor Strong-Strict-MSD. This limit case reveals that the Bernoullian OFC may fail to make the closed-loop system MSS when the strong detectability conditions are not satisfied, while the Markovian OFC achieves this aim over the FSMCs.
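As a side diagnostic (not part of the detectability test itself), the stationary distribution of this SL TPM and the resulting long-run packet delivery rate can be computed directly from the values above:

```python
import numpy as np

# Stationary analysis of the SL TPM reported in the text (d0 = 17.348 m,
# d_{i,1} = 9.548 m) with the per-mode delivery probabilities gamma_bar.
# The long-run delivery rate is the stationary average of gamma_bar.

P = np.array([
    [0.8855395, 0.0184352, 0.0603969, 0.0356284],
    [0.8825920, 0.0187857, 0.0617956, 0.0368267],
    [0.8820434, 0.0188504, 0.0620549, 0.0370513],
    [0.8806549, 0.0190134, 0.0627101, 0.0376216]])
gamma_bar = np.array([0.005, 0.5000509, 0.9237605, 1.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

long_run_delivery = float(pi @ gamma_bar)
print(pi, long_run_delivery)
```

The chain spends most of its time in the deep-fade mode (γ̄ = 0.005), which is why this configuration sits right at the boundary of the detectability conditions.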

B. Stabilizability Analysis
The MS stabilizability analysis is presented through a limit case: consider d_0 = 17.348 m and d_{i,2} = 10 m. The resulting AL TPM yields the probabilities of receiving the packet in each mode of the AL, ν̄ = [0.006, 0.5003405, 0.9248986, 1]. Thus, ρ(L) = 1.000388084 [with L defined in (44)] using the Bernoullian control gain, while ρ(L) = 0.996248733 with the Markovian mode-dependent control gain. This case highlights that, even though the condition ν̄ > ν_c from [4, Th. 5.6] is satisfied, the system is unstable with the Bernoullian controller because it is not Strong-MS stabilizable (recall Definition 7; see also Remark 24). The Bernoullian control law is not able to make the closed-loop system MSS, while the Markovian control achieves this aim. Table I provides insights on the detectability and stabilizability for each of these cases: the check mark indicates that the notion holds, while the cross mark reveals that its required conditions are not satisfied.
Remark 24: The results presented in this article are more general than those by Schenato et al. [4]. As also pointed out in the detectability and stabilizability analysis, even though in this example the conditions by Schenato et al. are satisfied, the system is not MSS with the Bernoullian mode-independent controller. This is because Strong-MS stabilizability and Strong-MSD are not satisfied.

C. Performance Analysis and Comparison
Consider the distances d_0 = 17.348 m and d_{i,3} = 14 m (corresponding to case C.D in Table I), and the covariance matrix Σ_w described before. The performance indexes obtained by the Markovian LO and CE are J*_L = 0.0001109 and J*_C = 0.0000746, respectively. The performance index obtained by the Bernoullian observer is J*_B = 209.8934328. These values highlight that the presented mode-independent estimation techniques are easier to implement, but their average cost is larger than the one obtained by the Markovian filtering. The spectral radii of T and V are the same for both mode-dependent Markovian filters because these estimators are equivalent at the steady state (see Remark 11). However, the advantage of the CE over the LO is that it involves the most recent measurement in the estimation, yielding a smaller performance index. Fig. 4 shows the closed-loop mean square state trajectories obtained with 1000 independent runs. As the reader may notice, the CE leads to closed-loop mean square state trajectories that remain far below the other closed-loop ones. A further case is reported in Fig. 5 to emphasize the performance differences between the LO and the CE. The first difference can be identified in the resulting performance indexes: J*_L = 65 for the LO and J*_C = 43 for the CE, showing that the cost achieved by the LO is higher than the one achieved by the CE (see also Remark 18). Moreover, Fig. 5 highlights the behavior of the error trajectories for each observer. After the transient, the error trajectories obtained by the CE settle faster than those obtained by the LO, which takes 20 samples to settle.
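The Monte Carlo procedure behind Fig. 4, averaging squared state norms over independent closed-loop runs, can be sketched as follows (the closed-loop step is abstracted as a user-supplied function; the stable scalar-noise system below is a placeholder, not the article's MJLS):

```python
import numpy as np

# Hedged sketch of the Monte Carlo estimate of the mean-square state
# trajectory E[|x_k|^2] from independent closed-loop runs, as in Fig. 4.

rng = np.random.default_rng(0)

def mean_square_trajectory(step, x0, horizon, runs):
    """Average |x_k|^2 over `runs` independent simulations of `step`."""
    ms = np.zeros(horizon + 1)
    for _ in range(runs):
        x = np.array(x0, dtype=float)
        ms[0] += x @ x
        for k in range(horizon):
            x = step(x)
            ms[k + 1] += x @ x
    return ms / runs

def toy_step(x):
    # Placeholder stable closed loop with additive process noise.
    return 0.9 * x + 0.01 * rng.standard_normal(x.shape)

ms = mean_square_trajectory(toy_step, [1.0, 0.0], horizon=50, runs=1000)
```

For an MSS closed loop, the averaged trajectory decays from the initial condition toward a noise floor set by the process noise covariance, which is the qualitative behavior shown for the Markovian estimators in Fig. 4.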

IX. CONCLUSION
This article presents estimation techniques and detectability conditions for WCNs modeled via MJLSs (under a TCP-like communication scheme). The resulting OFC over the wireless medium finds applications in industrial automation, telesurgery, smart grids, and intelligent transportation, where communication nonidealities must be considered to guarantee acceptable closed-loop performance. We generalize the results from [4] by using the Markov modeling of the wireless channel and by introducing stabilizability and detectability conditions accounting for the communication link mode (see also Remark 24). As future developments, we plan to investigate the same WCN scenario under a UDP-like communication scheme.

APPENDIX A TECHNICAL PRELIMINARIES
Since, for finite-dimensional linear spaces, all norms are equivalent [28, Th. 4.27] from a topological viewpoint, as vector norms we use variants of the vector p-norms.
As for matrix norms, we use the 1- and 2-norms [29, p. 341], which treat n_r × n_c matrices as vectors of size n_r n_c and use one of the related p-norms. The definition of the 1- and 2-norms is based on the vectorization of a matrix, vec(·), which is further used in the definition of the operator φ(·), applied to any block matrix, e.g., Φ = [Φ_m]_{m=1}^C: φ(Φ) ≜ [vec(Φ_1)ᵀ, …, vec(Φ_C)ᵀ]ᵀ. The linear operator φ(·) is a uniform homeomorphism, its inverse φ⁻¹(·) is uniformly continuous [30], and any bounded linear operator in B(F^{C n_r × n_c}) can be represented in B(F^{C n_r n_c}) through φ(·).
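The φ(·) representation rests on the classical vectorization identity vec(A X B) = (Bᵀ ⊗ A) vec(X) for the column-stacking vec; a quick numerical check:

```python
import numpy as np

# Vectorization identity behind the phi(.) representation: a linear map
# X -> A X B on matrices becomes an ordinary matrix acting on vec(X).
# NumPy flattens row-major by default, so order='F' gives the
# column-stacking vec used in the identity.

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

vec = lambda M: M.flatten(order='F')
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```

This is exactly how the block operators T and V of Section V-A are turned into ordinary matrices whose spectral radii can be evaluated numerically.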
Proof of Theorem 1: The implication (i) ⇒ (ii) follows from the Schur complement [11, Lemma 2.23]. On the other hand, assume that (ii) holds. By the optimality of the solution of (35) and the MSD of (5), (i) follows. Moreover, the solutions of (35) and (34) coincide; see [19] for more details.

Proof of Theorem 2: Assume that Ỹ is an MS stabilizing solution of the filtering CAREs (34), so that system (5) is MSD, with M̌_n = M̌_n(Ỹ), n ∈ S_η. By some technical results from [19], the existence of a maximal solution follows; see [19] for more details.
The following result proves the equivalence of the two estimation techniques from the MSS standpoint.
Proof of Lemma 1: Assume that Y ∈ M is the MS stabilizing solution of the filtering CAREs Y = Y(Y). Then, M̌_n(Y) defined by (32) is such that the spectral radius ρ(V) < 1, with V ∈ B(H^{In_x}) defined in (20) for Γ_{n1} = A + M̌_n L and Γ_{n0} = A, and, consequently, ρ(T) < 1. Moreover, the optimal performance index achieved by the CE is J*_C = Σ_{n=1}^{I} tr(Z_n). In the following, all mathematical preliminaries and motivations leading to the separation principle are illustrated for the output-feedback controller designed with the Markovian LO. A reduced version of the proof is reported in [19]. Define, for k ∈ N and i, j ∈ S_θ, the required moment matrices. Proof: From (11) and (37), recalling the definition of L in (44), by assumptions (a.2) and (a.3), applying (6) and the independence of the sequences θ_k and ě_k, (45) follows. The proof is complete.
The detailed proof of the separation principle concerning the OFC based on the CE is presented in the following, where L_ij and H_ij are defined in (44) and (48), respectively. Proof: From (15) and (39), recalling the operator L in (44), by assumptions (a.2) and (a.3), the recursions hold for i, j ∈ S_θ. Thus, there exists W = [W_ij]_{i,j=1}^N, with W_ij ∈ F^{n_x×n_x}_+, satisfying, for i, j ∈ S_θ, W_ij = lim_{k→∞} W_ij(k + 1). Moreover, by [13, Prop. 2], we get lim_{k→∞} w_i(k) = w_i ∈ F^{n_x}, i ∈ S_θ. Therefore, the closed-loop system is MSS, i.e., (i) holds.

Fig. 2 .
Fig. 2. FSMC model for the SL: the Markov chain η_k represents the evolution of the channel, while the successful packet delivery probability and the PER come from γ̄_m, m ∈ S_η.

Fig. 3 .
Fig. 3. Information flow timing between the plant and the controller used for (a) the LO and (b) the CE.
The critical control arrival probability on the AL is denoted by ν_c [4, Lemma 5.4(a)], and the critical observation arrival probability on the SL is denoted by γ_c [4, Th. 5.5]. By [4, Lemma 5.4, Th. 5.5], ν_c and γ_c satisfy p_min ≤ ν_c ≤ p_max and p_min ≤ γ_c ≤ γ_max ≤ p_max. Varying the distance d_i between the interfering transmitter and the receiver of interest, positioned at d_0 = 17.348 m from its transmitter, we distinguish four cases: C.A (d_i ≤ 9.547 m), C.B (d_i = 9.548 m), C.C (d_i from 9.549 to 12.100 m), and C.D (d_i ≥ 12.101 m).

Fig. 4 .
Fig. 4. Closed-loop mean square state trajectories: solid blue, obtained with the Markovian LO; solid red, with the Markovian CE; dashed blue, with the Bernoullian LO; dashed red, with the Bernoullian CE.

Fig. 5 .
Fig. 5. Estimation errors on the cart position obtained by Monte Carlo simulations are reported in yellow, the mean error trajectory in red, the maximum error trajectory in blue, and the minimum error trajectory in green. The top right of each panel reports a zoomed-in view of the plot.
of m(k + 1), for k ∈ N. By assumption (a.3), applying the property E[w_k] = 0_{n_w} and the definitions of the transition probability and of γ̄_m, m ∈ S_η, the expression of m(k + 1) in (31) follows. Consider the definition (29) of Z(k), for k ∈ N. By applying assumption (a.3), the properties E[w_k] = 0_{n_w} and E[w_k w_k*] = I_{n_w} in (6), G H* = 0 in (7), and the definitions of the transition probability and of γ̄_m, the recursive expression of Z_n(k + 1), for m, n ∈ S_η, follows. Consider T and O defined in (18) and (22), respectively. By setting Γ_{m1} = A + A M̂_m L, Γ_{m0} = A, and M̂ = [M̂_m]_{m=1}^I, for m ∈ S_η, and π(k) = [π_m(k)]_{m=1}^I, (31) follows, completing the proof.

TABLE I: DETECTABILITY AND STABILIZABILITY ANALYSIS SUMMARY