Robust Stabilization of a Class of Nonlinear Systems via Aperiodic Sensing and Actuation

This article proposes a framework to design a robust controller for a class of nonlinear networked control systems using aperiodic feedback information. Here, the nonlinearity and the parameter variations of the system model are considered as sources of uncertainty. To tackle the uncertainty in the system dynamics, a linear robust control law is derived by applying optimal control theory. Two different closed-loop architectures are considered. In the first one, the system and the controller are not collocated; instead, they are interconnected by means of a shared communication network. In the second architecture, the system, controller and actuator are all collocated, with their respective outputs available at all times; here, the sensors and the controller are connected through a shared communication channel. In both architectures, the feedback loop is closed through the network. Owing to its shared nature, the network may suffer from bandwidth limitations. To save network bandwidth, state and input information are transmitted aperiodically within the feedback loop. With this aim, the paper adopts an event-triggered control technique to reduce the transmission overhead. Applying input-to-state stability theory, we derive two different event-triggered robust control laws that stabilize the uncertain nonlinear system. Finally, we show that the designed event-triggered controllers satisfy the trade-off between control performance and savings in network bandwidth in the presence of uncertainty. The developed control algorithm is implemented and validated through numerical simulations.


I. INTRODUCTION
Generally, in Cyber-Physical Systems (CPSs) or Networked Control Systems (NCSs), each physical component shares its own local information with other subsystems through a communication network. As a result of the shared nature of the communication channel, controlling such systems with continuous or periodic control laws requires large bandwidth resources [4], [15], [21]. In the recent past, an event-triggered control technique has been introduced in [11]-[13], [37], [40], [41] to reduce the information required to achieve a stable control strategy. Specifically, in the event-based control framework, the violation of a prespecified event condition determines the sensing and actuation instants at both the sensor and actuator ends. This event-triggering law mainly depends on the system's present state or outputs. In the event-triggered control framework for continuous systems, the key issue is the stringent requirement of continuously monitoring the occurrence of the event condition. For instance, in [11], [12], the monitoring of the event-triggering condition is conducted periodically. To overcome the need for such continuous/periodic monitoring, a self-triggered control approach has been developed and reported in [3], [45]. In this self-triggered control approach, the subsequent time instant of event occurrence is determined using the system's state or output information at the previous sampling instant. For both classical event-triggering and self-triggering controls, a reduction in the overall network use can be achieved by increasing the time interval between triggering events.

The associate editor coordinating the review of this manuscript and approving it for publication was Azwirman Gusrialdi. This work is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
In the specific context of CPSs and NCSs, the primary role played by aperiodic sensing and actuation in continuous and periodic event-triggered control has been reported in [4], [15], [21].
The key deficiency of classical event-triggered control is the need for access to an accurate model of the studied system in order to devise the event-triggering rule. In practice, system modeling inevitably simplifies the actual system operation and thereby introduces a certain level of inaccuracy, which has practical implications. It is worth highlighting that there is a vast breadth of problems related to addressing event-triggered control in the presence of uncertainty. Such uncertainty has several possible origins: nonlinearity, variation in the system's parameters, components unaccounted for in the dynamical model, and pervasive perturbations. These issues necessitate the development of a specific controller. Recently, attempts have been made to develop both state and output feedback resilient controllers under communication constraints and model uncertainty. Ghodrat and Marquez [9] have proposed an event-triggered control law for Lipschitz nonlinear systems. In their work, the design of the triggering rule and of the control law has been carried out concomitantly, and both state and output feedback event-triggered control laws have been developed. To develop the output feedback law, they consider an observer dynamics with intermittent measurements. They have shown that the separation principle is satisfied under a small sampling threshold on the sensor-observer transmission channel. In [25], Liu and Huang have proposed an event-triggered output feedback robust control technique for a class of nonlinear systems. They have solved the global robust output regulation problem for nonlinear systems in the presence of uncertain parameters that belong to some arbitrarily large prescribed compact set. Liu and Jiang [24] discussed the concept of event-triggered robust stabilization of nonlinear systems using the small-gain approach.
To avoid infinitely fast sampling, they have proposed an Input-to-State Stability (ISS) gain condition and, correspondingly, an event- and self-triggering mechanism subject to external disturbances. Recently, in [42], [43], an event-triggered robust control algorithm has been developed based on aperiodic feedback to deal with the presence of uncertainty, albeit limited to linear systems. Tripathy et al. have adopted an optimal control strategy to design such a robust control law [42], [43]. Originally, this control law was developed by Lin [22] and Lin and Brandt [23] within the optimal control framework, where the nominal (or virtual) dynamics is used to design the control law. To realize the robust control laws in [42], [43], a prior assumption is made that the system model is linear. But in practice, most systems are nonlinear; considering nonlinear systems is therefore a far more realistic and pertinent control problem. Moreover, extending the robust control results of [42], [43] to a class of nonlinear systems in the presence of bandwidth constraints in the communication channel is not straightforward. Indeed, the design of the robust control input depends on results borrowed from optimal control theory. In general, to design an optimal control law for a nonlinear system, it is essential to solve the Hamilton-Jacobi-Bellman (HJB) equation. Solving the HJB equation is known to be computationally intensive and expensive, since it essentially is a partial differential equation (PDE). Researchers have used different techniques to achieve this goal, e.g., neural networks and dynamic programming [1], [2], [6], [44], [48]. Recently, Yang and He [49] adopted an actor-critic neural-network technique to address the robust stabilization problem of event-triggered nonlinear systems with input constraints. To design such a robust controller, they have solved an infinite-horizon nonlinear optimal control problem.
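For context, the HJB equation referred to above can be stated, in generic notation (not the paper's symbols), for an input-affine system ẋ = f(x) + g(x)u with a quadratic running cost:

```latex
0 = \min_{u}\Big[\, x^{\top} Q x + u^{\top} R u
      + \nabla V(x)^{\top}\big(f(x) + g(x)\,u\big) \Big],
\qquad
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V(x).
```

Substituting u*(x) back into the first relation yields a nonlinear PDE in the value function V, which is what makes the nonlinear optimal control problem computationally expensive.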
However, these computational techniques remain demanding. To overcome these challenges, a linear control law is proposed for a class of nonlinear systems, which can withstand uncertainties and the limited availability of feedback information. This article considers input-to-state stability theory [10], [31], [36], [50] for the analysis. Various researchers have used ISS theory for analyzing the robustness of event-triggered linear and nonlinear systems. ISS results for linear systems with external disturbances under observer-based output feedback control have been discussed in [50]. Ghodrat and Marquez [10] have applied ISS theory to derive the event-triggering rule for a class of input-affine nonlinear systems under network constraints. They also showed that the proposed controller ensures stability in the presence of actuator errors and external disturbances.
In this article, an event-triggered robust control algorithm is proposed to stabilize a class of nonlinear systems with aperiodic feedback information. Here, nonlinear systems with parametric uncertainty are considered, and the system dynamics are rewritten as a linear model plus uncertainty. With this formulation, the system nonlinearity and the parametric variation of the system's model are considered as sources of uncertainty. An event-based linear robust control algorithm is developed to stabilize this class of nonlinear systems with aperiodic feedback information. To regulate the behavior of this system when faced with multiple sources of uncertainty, two different event-based control algorithms are introduced. The first event-triggering rule depends on the error between the current and the last transmitted state information, whereas the second one uses a nominal model for event generation. Furthermore, for an optimal usage of communication resources in the presence of model uncertainty, a modified optimal control problem is formulated in which both the cost due to information transmission and the cost due to system uncertainty are considered. To ensure the closed-loop stability of such systems, a robust control law is computed using the nominal (or virtual) dynamics and the prior knowledge of the uncertainty bound. Next, the derived controller gain matrix is used to analyze the closed-loop performance, and ISS theory is applied to derive the event-triggering rule. The key contributions of this work are listed below:
• A class of nonlinear dynamical systems is considered. The nonlinear component and the parameter variations of the system model are treated as sources of matched and mismatched uncertainty. Using the optimal control framework for robust controller design, a linear control law is derived by solving a Linear Quadratic Regulator (LQR) problem. The linear robust control law ensures the closed-loop stability of the original nonlinear system.
• Based on the classical input-to-state stability theory, a novel event-triggering rule is developed to reduce the information required to stabilize this class of systems. The triggering law considers the upper bound of uncertainty such that it can withstand a range of variations for the uncertain parameters.
• We propose an event-triggered robust controller for uncertain systems with optimal event-triggering.
To solve for the robust controller and the optimal event-triggering law, a joint optimization problem is formulated by minimizing a cost function that embodies both control and communication costs for an optimal usage of resources. It is shown that the design of the robust optimal event-triggered controller using the optimal control framework splits into two sub-problems: the design of the robust controller using the linear quadratic regulator (LQR) framework, and the design of the optimal event-triggering sequence using dynamic programming.

ORGANIZATION
The paper is organized as follows. In Section II, we present the problem statement and preliminaries, which are used subsequently to state the results. The proposed concept considers an infinite-horizon cost and a zero-order hold (ZOH) at the actuator end to realize the control law. Sections III and IV present the key contributions of this work, mainly the event-triggering criterion and the stability results. The event-triggering and stability results for mismatched and matched uncertain systems are presented in Sections III and IV, respectively. A new ZOH-free robust control law with an optimal event-triggering law is also presented in Section IV. This robust control law is derived by minimizing a finite-horizon cost consisting of the communication cost and the cost associated with system uncertainty. In Section V, the effectiveness of the developed control algorithm is assessed numerically on two examples of nonlinear systems. Section VI concludes the paper. Some of the proofs and the steps to realize the proposed control laws are included in the Appendix.

II. PRELIMINARIES AND PROBLEM FORMULATION
This section presents the problem formulation and briefly describes some preliminaries that are used in the subsequent sections.

A. NOTATIONS AND DEFINITIONS
The Euclidean norm of a vector x ∈ R^n is denoted by ‖x‖, while R^n refers to the vector space of real vectors of dimension n and, by extension, R^(n×m) is the vector space of real-valued n-by-m matrices. The notation R≥0 refers to the set of non-negative real numbers. The symbols A ≤ 0, A^T and A^(-1) are classically used to specify the negative semi-definite character of a matrix A, its transpose, and its inverse, respectively. The symbol I denotes the identity matrix of appropriate dimension. The norm of a matrix A ∈ R^(n×m) is denoted by ‖A‖ and computed as ‖A‖ := sup{‖Ax‖ : ‖x‖ = 1}. The maximum (resp. minimum) eigenvalue of a symmetric matrix P ∈ R^(n×n) is λ_max(P) (resp. λ_min(P)).
A continuous function f : R≥0 → R≥0 is said to be of class K∞ if it is strictly increasing, f(0) = 0, and f(s) → ∞ as s → ∞. A function f : R≥0 → R≥0 is of class K if it is continuous, strictly increasing, and f(0) = 0. A continuous function β : R≥0 × R≥0 → R≥0 is a class KL function if, for each fixed s, β(·, s) is a class K function of r and, for each fixed r, β(r, ·) is strictly decreasing in s with β(r, s) → 0 as s → ∞ [18]. We remark that the definitions used throughout this article are identical to those found in the literature [18], [31], [36].

Definition 1 (Input-to-State Stability): A continuous-time system

ẋ(t) = f(x(t), u(t))  (1)

is input-to-state stable (ISS) if, for all admissible inputs u(t) and all initial values x(0), the solution x(t) satisfies

‖x(t)‖ ≤ β(‖x(0)‖, t) + γ( sup_{0≤s≤t} ‖u(s)‖ ),  ∀ t ≥ 0,  (2)

with β and γ being a KL and a K∞ function, respectively.

Definition 2 (ISS Lyapunov Function): A continuously differentiable function V(x) : R^n → R is an input-to-state stable (ISS) Lyapunov function for (1) if there exist class K∞ functions α1, α2, α3 and a class K function γ such that, for all x ∈ R^n and u ∈ R^m, the following conditions hold:

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖),   V̇(x) ≤ −α3(‖x‖) + γ(‖u‖).  (3)
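As a quick numeric sanity check of Definition 1, the following sketch (not from the paper; the system, input, and tolerance are illustrative choices) simulates the scalar system ẋ = −x + u, which is ISS with β(r, s) = r·e^(−s) and γ(r) = r:

```python
import numpy as np

# Illustrative example: x' = -x + u is ISS with beta(r, s) = r * exp(-s)
# and gamma(r) = r, i.e. |x(t)| <= |x(0)| * exp(-t) + sup_s |u(s)|.
def simulate(x0, u, t_end=10.0, dt=1e-3):
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        x += dt * (-x + u(t))   # forward Euler step
        t += dt
        traj.append((t, x))
    return traj

x0 = 3.0
u = lambda t: 0.5 * np.sin(2.0 * t)       # bounded input, sup |u| = 0.5
traj = simulate(x0, u)
bound_ok = all(abs(x) <= abs(x0) * np.exp(-t) + 0.5 + 1e-6 for t, x in traj)
print(bound_ok)   # True: the ISS bound holds along the trajectory
```

The assertion mirrors the ISS inequality (2) with the supremum of the input taken over the whole horizon.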

B. PROBLEM DESCRIPTION
This article considers a feedback control strategy for networked control systems in the presence of bandwidth constraints in the feedback path and parametric uncertainty in the system dynamics. To tackle the channel constraint in the feedback loop in the face of model uncertainty, we formulate a novel event-triggered robust control algorithm for a class of nonlinear systems. Figure 1 shows the block diagram of the proposed robust control technique. In this diagram, the following elements appear: (i) the system, (ii) the controller, and (iii) a communication network interconnecting the previous two components. The states of the system are measured continuously by the sensors at the system end.
The information from the sensors is shared with the controller through a communication network. Between the sensor and the controller, an event-monitoring unit continuously monitors the occurrence of an event condition. Specifically, when a predefined triggering event occurs, the monitoring unit ensures the transmission of the state variable to the controller. This robust control problem is addressed through an equivalent optimal control strategy based on the linear nominal model (or virtual dynamics) of the original nonlinear system. The controller gain K and the aperiodic state information x(t_k) obtained from the nonlinear system serve to compute the event-triggered control law u(t_k) = Kx(t_k), which stabilizes the closed-loop system in the presence of uncertainty. Here, the input is actuated aperiodically at instants t_0, t_1, t_2, ..., t_k, where t_k represents the latest such event. A zero-order hold (ZOH) at the actuator end holds the most recent actuated input until a subsequent triggering event leads to the transmission of new input data. The actuator is assumed to be embedded within the system, with an instantaneous update of the control input at the time of transmission. The primary concern of this article is to propose an event-triggered robust control law that can withstand the system nonlinearity and model uncertainty for a class of nonlinear systems. In general, uncertainty in system dynamics is either matched (i.e., the uncertainty is in the range space of the input matrix [5], [17], [19], [34]) or mismatched (i.e., the uncertainty is not in the range space of the input matrix). In this section, we first consider the mismatched system; results for matched systems are then reported in Section IV as a special case of the mismatched one.

This article considers two different closed-loop architectures [39]. In the first architecture, shown in Fig. 1, we assume that the sensors and actuators are collocated but the controller is not; it is interconnected through a communication network. In the second architecture, shown in Fig. 2, we consider the actuators and the controller to be collocated, but the sensors are spatially distributed and interconnected with the controller via a communication network. This type of NCS architecture has been considered in [27], [28]. A detailed discussion of the second architecture is given in Section IV.

1) SYSTEM DESCRIPTION
Consider a class of nonlinear systems with uncertainty, characterized by the following dynamical law

ẋ(t) = A x(t) + B u_mis(t) + Δ1(x) + Δ2(x),  (4)

where x ∈ R^n and u_mis ∈ R^m are the state and input vectors, respectively. The matrices A, B and D are constant matrices of appropriate dimensions, and the pair (A, B) is controllable. Two unknown nonlinear functions,

Δ1(x) = D Φ(x),   Δ2(x) = B h(x),  (5)

are treated as uncertainty sources. Specifically, h(x) corresponds to the uncertainty at the input level, while Φ(x) embodies the uncertainty at the system level. In general, uncertainty in system dynamics is either matched or mismatched [5], [19]. The system (4) suffers from matched uncertainty if both uncertainties are in the range space of the nominal input matrix B. However, in (4), the nonlinear function Δ1(x) does not satisfy the matching condition since D ≠ B, thereby yielding a mismatched case. The uncertainty Δ1(x) in (4) can be decomposed into matched and mismatched components:

Δ1(x) = B B⁺ D Φ(x) + (I − B B⁺) D Φ(x),  (6)

where the notation B⁺ represents the pseudoinverse [14] of the input matrix B. The unknown functions Φ(x) and h(x) satisfy the following assumptions.

Assumption 1: The function Φ(x) is bounded for all x and the following inequality holds:

Φ(x)^T Φ(x) ≤ x^T F_mis x,  (7)

where the positive semi-definite matrix F_mis is a priori known.

Assumption 2: The function h(x) is positive semi-definite, h(x) ≥ 0, and there exists a known non-negative function h_max(x) such that, for all x,

h(x) ≤ h_max(x).  (8)
The matrix F_mis and the function h_max(x) in (7) and (8) are related to the upper bounds on the uncertainties Φ(x) and h(x). In the subsequent sections, these assumptions are used to derive the controller gain matrices and the stability results.
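The matched/mismatched decomposition of Δ1(x) via the pseudoinverse B⁺ can be sketched numerically as follows; the matrices are illustrative placeholders, not the paper's:

```python
import numpy as np

# Split D (and hence D*Phi(x)) into a matched part lying in the range
# space of B and a mismatched remainder, using the pseudoinverse B+.
B = np.array([[0.0], [1.0]])
D = np.array([[1.0], [0.5]])

B_pinv = np.linalg.pinv(B)                 # B+ (Moore-Penrose pseudoinverse)
matched = B @ B_pinv @ D                   # projection of D onto range(B)
mismatched = (np.eye(2) - B @ B_pinv) @ D  # remainder outside range(B)

# The two parts reconstruct D exactly, and the mismatched part is
# orthogonal to the range of B.
print(np.allclose(matched + mismatched, D))   # True
print(np.allclose(B.T @ mismatched, 0.0))     # True
```

The same split applied to D Φ(x) yields exactly the two terms of (6).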
From [37], the closed-loop system (4) with the event-triggered control input u_mis(t_k) can be written as

ẋ(t) = A x(t) + B K_mis x(t_k) + Δ1(x) + Δ2(x),  t ∈ [t_k, t_{k+1}),  (9)

where K_mis is the controller gain and x(t_k) is the state of (9) at the k-th event-triggering instant, so that

u_mis(t) = K_mis x(t_k),  t ∈ [t_k, t_{k+1}).  (10)

To tackle the aperiodic information x(t_k), an error variable e(t) is defined as

e(t) = x(t_k) − x(t),  t ∈ [t_k, t_{k+1}).  (11)

To stabilize (9) in the presence of uncertainty and aperiodic feedback information, the following problem is formulated.
2) PROBLEM STATEMENT
P1 − Problem Statement: Design the robust state feedback control law (10) to regulate the closed-loop behavior of the event-triggered system (9) such that it is input-to-state stable (ISS) with respect to its measurement error e(t), in the presence of the uncertainties (5).

3) PROPOSED SOLUTION
To solve the proposed problem, two steps are adopted. First, results from optimal control theory are used to develop a robust control strategy. Next, an event-triggering criterion is established to ensure the input-to-state stability of (9). This criterion is obtained by assuming the existence of an input-to-state stable Lyapunov function for the closed-loop system. The specific details about the derivation of this criterion are presented in the following sections, where the robust controller gains (to tackle uncertainty) and the event-triggering rule (to deal with aperiodic feedback) are derived.

III. EVENT-TRIGGERED ROBUST CONTROL
This section describes the steps involved in designing the robust controller and event-triggering law. The controller design steps are discussed first, followed by the theorem associated with the event-triggering condition.

A. CONTROLLER DESIGN
To determine the state feedback gain, this article adopts the emulation approach: the gain matrices are first derived assuming that feedback information is available continuously, and techniques accounting for the network effects are developed next. In the following, the controller design process is discussed.
Aim: Design the state feedback gain K_mis such that system (4) remains stable in the presence of the bounded uncertainties (5).
To solve the above-mentioned robust control problem, an optimal control approach is adopted. The central idea is to design the optimal control input for the linear virtual (or nominal) system that minimizes a modified cost function. The term ''modified'' is used here to characterize the cost function, given its dependence on the maximum variation (i.e., upper bound) of the uncertainty. It is then shown that the derived optimal input is also a robust solution for the original system in the presence of uncertainty. We now derive the corresponding virtual system and cost function for the uncertain system (4).
• The virtual dynamical law for system (4) reads

ẋ(t) = A x(t) + B u_mis(t) + D v(t),  (12)

and the cost function for the mismatched uncertain system (4) is given by

J_mis = ∫₀^∞ ( x^T (Q + F_mis) x + u_mis^T u_mis + v^T v ) dt,  (13)

where the matrix F_mis is selected such that inequality (7) holds.
The state feedback control input u_mis = K_mis x and the virtual input v = Lx serve to stabilize (12). The virtual control input v is introduced to account for the mismatched part of the uncertainty. To obtain a robust controller in this optimal control approach, we use the following lemma, stated in [1], [22], [23].

Lemma 1: The optimal control solutions for the virtual system (12) with the modified cost function (13) are robust for the original system (4) in the presence of all bounded variations of the uncertainties (5).
A proof for Lemma 1 can be found in [1], [22], [23]. Based on this lemma, the robust controller gain matrices can be obtained by solving a linear-quadratic regulator (LQR) problem. According to optimal control theory [29], the optimal control signals for (12) minimizing the cost function (13) are given by

u_mis(t) = K_mis x(t) = −B^T P₁ x(t),   v(t) = L x(t) = −D^T P₁ x(t),  (14)

where P₁ satisfies the following Riccati equation:

A^T P₁ + P₁ A − P₁ (B B^T + D D^T) P₁ + Q + F_mis = 0.  (15)

The aperiodic state information x(t_k) and the controller gain matrices are used to derive the event-triggered control law, which is discussed next.
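A minimal sketch of this design step, assuming the standard LQR machinery with the state weight inflated by the uncertainty bound (all matrices are illustrative placeholders, and `solve_continuous_are` stands in for solving the Riccati equation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative nominal pair (A, B), weights, and assumed uncertainty bound F.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
F = 0.5 * np.eye(2)              # assumed upper bound on the uncertainty

# Solve the algebraic Riccati equation with the inflated state weight Q + F.
P = solve_continuous_are(A, B, Q + F, R)
K = -np.linalg.inv(R) @ B.T @ P  # u = K x (sign folded into K)

# Sanity check: the nominal closed loop A + B K must be Hurwitz.
eigs = np.linalg.eigvals(A + B @ K)
print(np.all(eigs.real < 0))     # True
```

Since (A, B) is controllable and Q + F is positive definite, LQR theory guarantees that the resulting closed loop is Hurwitz; Lemma 1 is what extends this nominal guarantee to the uncertain system.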

B. DESIGN OF EVENT-TRIGGERING LAW
This subsection presents the event-triggering condition and stability results for (9), in the presence of uncertainties (5).
Algorithm 1 reported in Appendix VI-B presents a procedure to realize the proposed control law.
The minimum inter-event time τ, i.e., the minimum time between two consecutive events, has to be strictly greater than zero; otherwise, the so-called Zeno effect [16] can occur within the system dynamics. In order to prove that τ is always greater than zero, one has to derive its expression. In the following lemma, we consider the mismatched system (9) and prove that τ is always greater than zero for the event-triggered rule derived in Theorem 1.
Lemma 2: Consider the uncertain system (9). The minimum inter-event time τ for the event-triggered law (17) is bounded below by a positive constant depending on κ₁ = ‖A + BK_mis‖ + ‖B h_max K_mis‖ + ‖D F_mis^(1/2)‖.

It is well known that a system with mismatched uncertainty is difficult to control. In particular, it is hard to ensure the existence of a stabilizing controller satisfying all the conditions stated in Theorem 1. In the next section, we consider the matched uncertain system, where the uncertainty is in the range space of the input matrix B. These systems form a special case of the mismatched one. The main distinguishing feature is that a stabilizing controller always exists for a matched system, while this is not the case for mismatched systems.
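The Zeno-freeness property can also be checked empirically. The sketch below uses a simplified relative trigger ‖e(t)‖ ≥ σ‖x(t)‖ as a stand-in for the paper's rule (17); the system matrices, gain, and σ are illustrative assumptions:

```python
import numpy as np

# Simulate a linear event-triggered loop and measure the smallest
# inter-event gap, which must remain strictly positive (no Zeno behavior).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.0]])
sigma, dt, t_end = 0.3, 1e-4, 5.0

x = np.array([1.0, -1.0])
x_held = x.copy()                    # last transmitted state (ZOH)
event_times, t = [0.0], 0.0
while t < t_end:
    e = x_held - x                   # measurement error e(t) = x(t_k) - x(t)
    if np.linalg.norm(e) >= sigma * np.linalg.norm(x):
        x_held = x.copy()            # event: transmit the current state
        event_times.append(t)
    x = x + dt * (A @ x + B @ (K @ x_held))   # forward Euler step
    t += dt

gaps = np.diff(event_times)
print(len(event_times) > 1 and gaps.min() > 0.0)   # True
```

In this discretized simulation the gap is trivially lower-bounded by the step size; Lemma 2 is what guarantees a step-size-independent positive bound in continuous time.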

IV. NONLINEAR SYSTEM WITH MATCHED UNCERTAINTY
In (4), we considered the uncertainty description (6), which consists of both matched and mismatched components. Now, for the selection D = B, (4) reduces to a matched system with the following state-space representation:

ẋ(t) = A x(t) + B u_mat1(t) + B Φ(x) + B h(x),  (25)

where x and u_mat1 represent the state vector and the control input of (25), respectively. Here, the nonlinear function Φ(x) satisfies the following assumption.

Assumption 3: The uncertainty Φ(x) satisfies

Φ(x)^T Φ(x) ≤ x^T F_mat x,  (26)

where F_mat is a positive semi-definite matrix.
From (25), it appears that this problem is afflicted by matched uncertainty, since both Φ(x) and h(x) are associated with the nominal input matrix B. Using [37], the closed-loop system (25) with the event-triggered control input u_mat1(t_k) can be written as

ẋ(t) = A x(t) + B K_mat1 x(t_k) + B Φ(x) + B h(x),  t ∈ [t_k, t_{k+1}),  (27)

with the control law

u_mat1(t) = K_mat1 x(t_k),  t ∈ [t_k, t_{k+1}),  (28)

where K_mat1 is the controller gain and the error variable e(t) is as defined in (11).

Example 1: Euler-Lagrange (EL) systems [7], [33] can be represented as (25), given that their dynamics are governed by

M(q) q̈ + N(q, q̇) = τ,  (29)

where N(q, q̇) = V(q, q̇) + F(q̇) + G(q). The vectors q ∈ R^n and τ ∈ R^n denote the state variables and the generalized forces, respectively. The inertia matrix, Coriolis vector, gravity vector and friction vector are denoted by M(q) ∈ R^(n×n), V(q, q̇), G(q) and F(q̇) ∈ R^n, respectively. As a result of uncertain load variations and unmodeled dissipative effects, the terms M(q) and N(q, q̇) in (29) are subject to uncertainty.

To regulate the closed-loop behavior of (27), the following problem is formulated.

P2 − Problem Statement: Design a robust state feedback control law (28) to regulate the closed-loop behavior of the event-triggered system (27) such that it is input-to-state stable with respect to its measurement error e(t) in the presence of matched uncertainty.

The problem is solved using a method similar to the one adopted for Problem P1. To this end, we state the following nominal dynamics for system (25) in the presence of uncertainty,

ẋ(t) = A x(t) + B u_mat1(t),  (32)

and the modified cost function for the matched uncertain system (25),

J_mat = ∫₀^∞ ( x^T (Q + F_mat) x + u_mat1^T u_mat1 ) dt,  (33)

with Q ≥ 0. The matrix F_mat ≥ 0 is the upper bound of the uncertainty defined in (26). Similarly, based on Lemma 1, the robust controller gain matrices can be obtained by solving an LQR problem. According to optimal control theory [29], the optimal control signal for (32) minimizing the cost function (33) is

u_mat1(t) = K_mat1 x(t) = −B^T P₂ x(t),  (34)

where P₂ satisfies the following Riccati equation:

A^T P₂ + P₂ A − P₂ B B^T P₂ + Q + F_mat = 0.  (35)

To establish the triggering law for (27), we propose the following corollary.
Corollary 1: Let σ ∈ (0, 1) and let the optimal controller gain K_mat1 be derived for the nominal system (32) with the cost function (33). The event-triggered control law (28) ensures the asymptotic stability of the uncertain system (27) if the control input actuation instants satisfy the triggering sequence (36), where the variable μ₂ is defined in (37).

Proof: The proof of this corollary is included in Appendix A.
The procedure to realize the control law designed for Problem 2 is presented in Algorithm 1 (see Appendix VI-B).
In the following lemma, we prove that the event-triggering law (36) ensures that the minimum inter-event time τ is always greater than zero, so that no Zeno effect can occur in the closed-loop system.

Lemma 3: Consider the uncertain system (27). The minimum inter-event time τ for the event-triggered law (36) is given by (38), valid for all κ₁ > κ₂, where κ₁ = ‖A‖ + ‖BK‖ + ‖B h_max K‖ + ‖B F_mat^(1/2)‖ and κ₂ = ‖BK‖ + ‖B h_max K‖.
Proof: The proof follows very similar steps as the proof of Lemma 2 and hence is omitted.

A. FINITE-HORIZON ROBUST CONTROL WITH OPTIMAL EVENT-TRIGGERING
So far, the controller design and the communication constraint problems have been addressed separately using an emulation-based approach. We first formulated an infinite-horizon optimal control problem and designed the state feedback controller gain. Then, to deal with communication constraints within the feedback loop, an event-triggering law was derived using ISS theory. Recently, Molin [26] and Wu et al. [46] addressed the co-design problem for discrete-time linear event-triggered systems, deriving the controller and an event-triggering law simultaneously. Inspired by the results proposed in [26], [46], in this section we consider both the communication cost and the system uncertainty, and propose an optimal control framework jointly optimizing both costs: the communication cost and the cost associated with system uncertainty. To derive the results, a finite-horizon optimal control problem for linear systems is proposed; such a finite-horizon formulation constitutes a more realistic scenario in practical problems.

In addition, the approach presented in Section II considered a zero-order hold (ZOH) at the actuator end, such that the last transmitted state and control input were held constant until new information was transmitted (see Figure 1). This forces the system to operate in an open-loop manner between two consecutive events. To avoid this issue, this subsection proposes a ZOH-free robust control technique with optimal event-triggered feedback. The block diagram of the proposed control technique is shown in Figure 2. The state of the uncertain system is measured by the sensors, and each sensor has a copy of the nominal model. Originally, the concept of such sensors was proposed by Garcia and Antsaklis [8] and Montestruque and Antsaklis [27].
The presence of the nominal model at the sensor end helps to compute the error between the actual state x(t) and the nominal state x_n(t):

ê(t) = x_n(t) − x(t).  (39)

The variable ê(t) measures the deviation of the actual closed-loop performance from the nominal behavior of the system. The event-triggering unit computes ê(t) and solves an optimization problem considering the communication cost to obtain the optimal transmission sequence. Based on the obtained optimal transmission sequence, the actual state is transferred through the communication channel. A dynamic-programming-based technique is used to solve the associated optimization problem. In the event-triggered control approach stated in Section II, the triggering condition depends on the growth of the error e(t). Here, the time instants t_k represent the event-triggering instants, as mentioned in Section II. The measurement transmitted to the controller end remains fixed until new information is received. Yet, here, the nominal model is available at the controller end and is used to estimate the nominal behavior of the system. At the event-triggering instant t_k, the state of the nominal model within the controller is replaced by the new measurement x(t_k) available from the original uncertain system. The nominal system state is used to compute the control law u_mat2(t) = K_mat2 x_n(t), where K_mat2 is the controller gain. Hence, between two consecutive event-triggering instants, the control input is generated using the nominal model

ẋ_n(t) = A x_n(t) + B u_mat2(t),  t ∈ [t_k, t_{k+1}).  (40)

Now, applying the control input u_mat2 in (25), the closed loop reduces to

ẋ(t) = A x(t) + B K_mat2 ( x(t) + ê(t) ) + B Φ(x) + B h(x),  (41)

where ê(t) is defined in (39). In (41), at every event-triggering instant t_k, the nominal state x_n(t) is replaced by the original state x(t), which resets the error ê(t) to zero.
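The ZOH-free, model-based loop described above can be sketched as follows; the matrices, the unmodeled term, and the fixed trigger threshold are illustrative stand-ins for the paper's design:

```python
import numpy as np

# Model-based event-triggered loop: a copy of the nominal model runs at the
# controller and generates u(t) = K x_n(t) between events; at an event the
# nominal state x_n is reset to the measured state x, zeroing e_hat = x_n - x.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.0]])
def f_unc(x):                        # assumed small unmodeled nonlinearity
    return 0.1 * np.sin(x)

dt, t_end, thr = 1e-3, 5.0, 0.05
x = np.array([1.0, -1.0])
x_n = x.copy()                       # nominal model state at the controller
events, t = 0, 0.0
while t < t_end:
    e_hat = x_n - x                  # deviation from nominal behavior
    if np.linalg.norm(e_hat) >= thr: # stand-in for the optimal trigger
        x_n = x.copy()               # transmit: reset the nominal state
        events += 1
    u = K @ x_n                      # ZOH-free: u follows the model state
    x = x + dt * (A @ x + B @ u + f_unc(x))
    x_n = x_n + dt * (A @ x_n + B @ u)
    t += dt

print(events, round(np.linalg.norm(x), 3))
```

Unlike the ZOH scheme, the input keeps evolving between events because it is driven by the running nominal model rather than a held sample.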

Remark 2: Here, we have used two error variables, e(t) and ê(t). The variable e(t) is the difference between the last transmitted state x(t_k) and the current state x(t), that is, e(t) = x(t_k) − x(t) for t ∈ [t_k, t_{k+1}). On the other hand, ê(t) measures the difference between the nominal state x_n(t) and the state of the uncertain system x(t), that is, ê(t) = x_n(t) − x(t).
In order to describe the network constraints, we consider a binary variable δ_t ∈ {0, 1} that decides whether the state information is transmitted (δ_t = 1) or not (δ_t = 0). The switch of the decision variable δ_t from 0 to 1 depends on the selection of a particular event-triggering law, whose evolution depends on the error variable ê(t). The design objective is to define the robust controller gain K_mat2 and the event-triggering law that minimize a certain cost functional. With this aim, this article considers the cost functional (44), where λ > 0 is a penalty due to any exchange of information between sensor, controller and actuator over the transmission network, and T denotes the final time of execution.
To regulate the state of (41) by event-triggered feedback with the transmission cost ∫_0^T λ δ_t dt, the following problem is introduced.

2) PROPOSED SOLUTION
The solution to this problem is derived in two steps. First, a robust controller gain is designed for (41); subsequently, an optimal event-triggering law is introduced to reduce the number of data transmissions over the network.

3) ROBUST CONTROL LAW
To design the robust controller gain for (41), we adopt the optimal control framework: the gain K_mat2 is obtained by solving a finite-horizon LQR problem for (40) with the cost functional (44). Using optimal control theory [29], the control input is computed as in (45), where P(t) is the solution of the differential Riccati equation (DRE) (46). For simplicity of notation, in what follows, we omit the argument t from P(t). The steps to obtain the numerical solution of (46) are discussed in [30], [32].
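Since the paper's DRE (46) is not reproduced in this excerpt, the sketch below assumes the standard finite-horizon LQR form −Ṗ = AᵀP + PA − PBR⁻¹BᵀP + Q with terminal condition P(T) = F and gain K(t) = R⁻¹BᵀP(t); it backward-integrates the DRE with plain Euler steps.

```python
import numpy as np

def solve_dre(A, B, Q, R, F, T, n_steps=2000):
    """Backward-integrate the finite-horizon differential Riccati equation
        -dP/dt = A^T P + P A - P B R^{-1} B^T P + Q,   P(T) = F,
    with explicit Euler steps; returns [P(0), ..., P(T)] on a uniform grid."""
    dt = T / n_steps
    Rinv = np.linalg.inv(R)
    P = F.astype(float).copy()
    Ps = [P.copy()]
    for _ in range(n_steps):
        dP = A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q
        P = P + dt * dP          # stepping backward in time
        Ps.append(P.copy())
    Ps.reverse()                 # Ps[0] is P(0), Ps[-1] is P(T)
    return Ps

def gain(P, B, R):
    """Time-varying state-feedback gain K(t) = R^{-1} B^T P(t)."""
    return np.linalg.inv(R) @ B.T @ P
```

For a scalar check: with A = 0, B = Q = R = 1, F = 0 and a long horizon, P(0) approaches the algebraic Riccati solution P = 1.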

4) OPTIMAL EVENT-TRIGGERING LAW
From the event-triggering law, it can be seen that the variable ê(t) influences the number of transmissions over the network. In order to design the optimal event-triggering law, it is necessary to define the dynamics of ê(t). Using (40) and (41), ê(t) evolves according to the dynamics (47); neglecting the uncertain terms f(x) and h(x) yields the nominal error dynamics. At the event-triggering instant t_k, ê(t) is zero, as the nominal state x_n(t) is replaced by the actual state x(t). To obtain the optimal event-triggering law, the optimization problem (48) is solved subject to (47). The state-dependent variable ξ > 0 is computed from the stability results. The optimization problem (48) can be solved using dynamic programming with discrete approximations [29], which converges to the optimal solution [20], [47]. Using (39), the control-effort term in (44) can be rewritten as (K_mat2 x + K_mat2 ê(t))^T (K_mat2 x + K_mat2 ê(t)), which helps to rewrite the cost functional (44). To compute the optimal controller u_mat2(t) for the nominal system, the terms δ_t and ê(t) can be dropped from the minimization, since δ_t is constant and the controller gain design is independent of the error ê(t). The design of the triggering condition, however, depends on δ_t and ê(t), which motivates the cost functional (48) used to design the optimal triggering law.
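The dynamic-programming solution of (48) can be sketched as follows. Since the paper's error dynamics (47) are not reproduced here, the sketch assumes a hypothetical scalar error that grows as dê/dt = a·ê + b after each reset; the DP state is then simply the number of grid steps since the last transmission, and at each step the scheduler either waits (paying a running error cost) or transmits (paying λ and resetting the error).

```python
import numpy as np

def optimal_schedule(a, b, lam, q, T, n):
    """Dynamic-programming sketch of an optimal 0/1 transmission schedule.

    Assumed (not from the paper): after a reset the scalar error grows as
    de/dt = a*e + b, so m steps after a transmission
        e(m) = (b/a) * (exp(a*m*dt) - 1).
    Waiting costs q*e^2*dt per step; transmitting costs lam and resets e.
    """
    dt = T / n
    e = (b / a) * (np.exp(a * dt * np.arange(n + 1)) - 1.0)
    V = np.zeros(n + 1)                      # terminal cost is zero
    policy = np.zeros((n, n + 1), dtype=int)
    for k in range(n - 1, -1, -1):           # backward value iteration
        Vn = np.empty(n + 1)
        for m in range(n + 1):
            wait = q * e[m] ** 2 * dt + V[min(m + 1, n)]
            send = lam + V[1]                # reset, then one step elapses
            if send < wait:
                Vn[m] = send
                policy[k, m] = 1
            else:
                Vn[m] = wait
                policy[k, m] = 0
        V = Vn
    delta, m = [], 0                         # forward rollout of the policy
    for k in range(n):
        d = int(policy[k, m])
        delta.append(d)
        m = 1 if d else min(m + 1, n)
    return delta
```

As expected, a very large λ suppresses all transmissions, while a small λ makes transmitting cheaper than letting the error cost accumulate.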
To obtain the robust controller and optimal event-triggering law, the following Theorem is proposed.
Theorem 2: The optimal state-feedback gain K_mat2 derived in (45) remains robust for the original uncertain system (41) if the control inputs are actuated based on the optimal event-triggering sequence δ*_t obtained from (48). Proof: Consider the Lyapunov function V(x) = x^T P(t) x, and compute V̇ along the trajectories of (41). Using (45) and (46), the resulting equality yields the inequality (50). Using (8) and (26), the inequality (50) reduces to (51). This ensures that the closed-loop system (41) is ISS under the optimal event-triggering law. The threshold ξ in (49) can be computed from (51) as in (52), where µ_3 = σ λ²_min(Q) / (8 (1 + h_max(x)²) ‖K_mat2^T K_mat2‖²) and σ ∈ (0, 1). The steps to realize the robust control law for (41) with the optimal event-triggering law are detailed in Algorithm 3 presented in Appendix VI-B.
Remark 4: The computation of δ*_t is done by solving the optimization problem (48). The symbol T in (48) represents the final time, which is selected to be larger than the minimum time between two consecutive events. Furthermore, the variable ξ is not a constant and evolves based on (52).
Remark 5: A method similar to the one in Appendix VI-A (proof of Lemma 2) can be applied to derive the lower bound of the inter-event time for the controller stated in Theorem 2. For matched systems, the expression of the lower bound of the inter-event time τ will be similar to the one stated in Lemma 3, but the coefficients κ_1, κ_2 and the scalar µ_2 will be different.

V. SIMULATIONS
This section tests the theoretical results derived in the previous sections on two classical nonlinear systems.

A. EXAMPLE 1
The first example involves the uncertain term (x) = 2w_2 x_1 sin²(x_1) cos(x_2), with w_1 and w_2 being uncertain scalar parameters whose values can vary in the interval [0, 1]. The upper bound of h_max(x) is taken as h_max = 2. The controller gain is computed using (34), which minimizes (33). We consider the matrices F_mat1 = 4I and Q = 10I. To compute K_mat1, the Riccati equation (35) is solved, and its positive definite solution P_2 is used to compute the optimal input u_mat1 = −(10/10.4) x.
To realize the event-triggering sequence (36), the design parameter σ is selected to be 0.6. The numerical simulation runs for 4 time units with the initial condition [0.1, −0.1]^T. For all simulations, we extracted 100 random samples of w_1 and w_2 within the interval [0, 1] and tested the performance of the designed controller. Figure 3a shows the convergence of the state trajectories for different values of w_1 and w_2: all states converge to zero for the various samples extracted from the uncertainty set, which confirms the robustness of the designed controller. Figure 3c shows the inter-event times of the execution instants, and reveals that the number of computed control inputs is drastically reduced, confirming the reduction in the ensuing communication cost. Figure 4 shows that Assumption 1 always holds during the entire run time. A comparative study with the conventional continuous control approach is given in Table 1. It confirms that the total number of actuations u_total for the event-triggered case is far less than that of the continuous control technique. The symbols τ_max and τ_min denote the maximum and minimum inter-event times. We have calculated the lower bound of the inter-event time τ_min for Example 1 using (38); the calculated value of τ_min is 0.016 s, which is very close to the numerical one.

To realize the optimal event-triggered control approach proposed in Section IV-A, we consider the same example discussed above. The control law (42) is computed numerically over a finite horizon T = 4 s using the solution of the DRE (46). To obtain the optimal event-triggering law, the dynamic-programming-based optimization problem is formulated, which generates the optimal triggering instants δ*_t. Sensors at the system end transmit the state x based on δ*_t. The convergence of the states under the optimal triggering law is shown in Fig. 3b.
The scalar λ is selected to be 0.4. Figure 3d shows the evolution of the switching variable δ*_t for a given run time. Table 2 compares the total number of transmissions between the event-triggered control technique with optimal triggering and the conventional continuous approach. Again, we observe that the total number of transmissions is significantly reduced, confirming the efficacy of the proposed approach.
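A Monte Carlo robustness check of the kind described above can be sketched as follows. The plant matrices, the gain, and the nonlinearity below are placeholders (Example 1's matrices are not reproduced in this excerpt), and the trigger is assumed to be a relative threshold ‖x(t_k) − x(t)‖ > σ‖x(t)‖ in the spirit of (36).

```python
import numpy as np

def simulate(w1, w2, sigma=0.1, T=4.0, dt=1e-3):
    """Event-triggered simulation sketch for a 2-state uncertain system.

    Hypothetical double-integrator plant x' = A x + B u + f(x, w) with a
    stabilizing gain K (all assumed, not the paper's Example 1). The state
    is transmitted whenever ||x(t_k) - x(t)|| exceeds sigma * ||x(t)||.
    Returns the final state and the number of triggered transmissions.
    """
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[-2.0, -2.0]])             # stabilizing gain (assumed)
    x = np.array([0.1, -0.1])
    x_sent = x.copy()
    n_events = 0
    for _ in range(int(T / dt)):
        e = x_sent - x
        if np.linalg.norm(e) > sigma * np.linalg.norm(x):
            x_sent = x.copy()                # event: transmit current state
            n_events += 1
        u = K @ x_sent
        # illustrative matched uncertainty, vanishing at the origin
        f = np.array([0.0, w1 * np.sin(x[0]) + w2 * x[1] * np.cos(x[0])])
        x = x + dt * (A @ x + B @ u + f)     # explicit Euler step
    return x, n_events
```

Drawing random samples of w_1, w_2 from [0, 1] and checking that the final state norm is small for every sample mirrors the robustness test reported in Fig. 3a, while n_events being far below the number of integration steps mirrors Table 1.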

B. EXAMPLE 2
Consider the state-space form of a one-link robot manipulator with revolute joints [35] as an example of the class of nonlinear systems (9). It is expressed in the form of (4), and the condition (18) is met. To realize the event-triggering law (17), the scalar µ = 0.018 is computed based on (19). Figure 5a shows the convergence of the state trajectories with event-triggered actuation. The aperiodic variation of the control inputs is shown in Fig. 5b, together with a zoomed-in view that visualizes the aperiodic variation more clearly. The condition (7) is also verified in Figure 6, which proves that Assumption 1 holds for Example 2. A comparative study between the continuous and event-triggered control techniques is given in Table 3. It shows the efficacy of the proposed event-triggering technique over the continuous one in terms of the total number of actuations for a given run time.
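For reference, a standard one-link revolute manipulator model can be written down as below. The parameter values are illustrative, not those of [35]; the gravity term is the nonlinearity that the paper's framework treats as uncertainty.

```python
import numpy as np

def manipulator(x, u, J=1.0, M=1.0, g=9.8, L=1.0, D=0.5):
    """One-link revolute manipulator in state-space form.

    x[0] = joint angle, x[1] = joint velocity; J inertia, M mass,
    L link length, D viscous friction (all illustrative values).
    Returns dx/dt = [x2, (-M*g*L*sin(x1) - D*x2 + u) / J].
    """
    x1, x2 = x
    return np.array([x2, (-M * g * L * np.sin(x1) - D * x2 + u) / J])
```

The origin is an equilibrium for u = 0, and for small positive angles the gravity torque pulls the link back toward it, consistent with the linearized model used for the controller design.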

VI. CONCLUSION
In this article, we consider a class of nonlinear systems afflicted with matched and mismatched uncertainty. To design adequate and effective event-triggered control laws, we consider both the nonlinearity and the parameter variations as sources of uncertainty. The controller, whose design is based on the linear part of the system, remains robust in the presence of these sources of uncertainty. We propose a linear robust control law derived within the optimal control framework with an infinite-horizon cost. Furthermore, the corresponding event-triggering law is also derived while regulating aperiodic feedback information with the goal of saving network bandwidth. Specifically, for matched uncertain systems, we solve a finite-horizon robust control problem with optimal event-triggering, which constitutes a more realistic scenario in practical problems. To this end, we assume that each sensor has a copy of the nominal dynamics and can form an error signal corresponding to the difference between the actual and nominal states. To compute the optimal event-triggering law, an optimization problem is solved using dynamic programming. The effectiveness of the designed control laws is illustrated through numerical simulations of two distinct problems.
There are numerous challenges for future research based on the work reported in this article. In particular, considering network-induced uncertainties such as time delays, data packet dropouts, and noise in the transmission channel would be an interesting extension of the current contribution. Furthermore, an output-feedback control law, instead of state feedback, would result in a controller more suitable for practical applications.

A. PROOFS
1) PROOF OF COROLLARY 1
To prove the ISS stability of the uncertain system (27) with the control input (28), it is necessary to reformulate V̇(x) so that it satisfies (3). Consider the Lyapunov function for (27) in the form of a positive smooth function V(x) = x^T P_2 x. To ensure the stability of (27), V̇(x) is recast as (53). The function V(x) is a Lyapunov function for (32) that satisfies the Hamilton-Jacobi-Bellman (HJB) equation (54), where V_x denotes ∂V/∂x. For the choice V(x) = x^T P_2 x, the HJB equation (54) reduces to the Riccati equation (35). The optimal input u_mat1 must satisfy (54); using (55) and (56), Eq. (53) is simplified as (57). Now, applying (26) in (57) and simplifying further, the inequality (58) is obtained, which bounds V̇ in terms of λ_min(Q), 1 + h_max(x)², and ‖e‖². The inequality (58) ensures the ISS of (27) with respect to the measurement error e. From (3) and (58), it is observed that the actuation of the control input is required only upon violation of the event-triggering criterion (36).
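The reduction from the HJB equation to the Riccati equation can be spelled out as follows. This sketch assumes a unit input weighting and a generic state weight Q̄, as the paper's exact weights in (33) are not reproduced here.

```latex
% HJB equation for V(x) = x^T P_2 x (sketch, assumed LQR-type weights):
\min_u \left\{ x^\top \bar{Q} x + u^\top u
       + V_x^\top (Ax + Bu) \right\} = 0,
\qquad V_x = 2 P_2 x .
% Minimizing over u gives the optimal input
u^* = -\tfrac{1}{2} B^\top V_x = - B^\top P_2 x ,
% and substituting u^* back into the HJB equation yields
x^\top \left( A^\top P_2 + P_2 A - P_2 B B^\top P_2 + \bar{Q} \right) x = 0 ,
% i.e., the algebraic Riccati equation
A^\top P_2 + P_2 A - P_2 B B^\top P_2 + \bar{Q} = 0 .
```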
From [37] and [38], the computation of the inter-event time depends on the evolution of the ratio ‖e‖/‖x‖. Following [37] and using the relation (11), a bound on the growth of this ratio is obtained in (63). Using the comparison lemma from [18], the inequality (63) reduces to the equality (64). From the definition of the inter-event time, it is always bounded below by a positive amount of time; that is, between two consecutive events (say t_k and t_{k+1}), the ratio ‖e‖/‖x‖ evolves from 0 to µ_1 ∈ R^+, and this evolution takes a finite amount of time. Now, to show that τ > 0, (64) is solved with the initial condition z(0, z_0) = z_0, and the solution z(t, z_0) must satisfy the inequality ‖e‖/‖x‖ ≤ z(t, z_0). To derive τ, (64) is rewritten accordingly.