The Adaptive Distributed Event-Triggered Observer Approach to Cooperative Output Regulation of Discrete-Time Linear Multi-Agent Systems Under Directed Switching Networks

In this paper, we solve the cooperative output regulation problem of discrete-time linear multi-agent systems under directed switching networks. First, a new adaptive distributed event-triggered observer is proposed, which does not require each follower to know the leader matrix information. In addition, we propose a relative threshold event-triggered strategy and a novel switching threshold event-triggered strategy, which can be applied in different situations according to their respective characteristics. Then, we solve the cooperative output regulation problem for discrete-time linear multi-agent systems under directed switching networks by both distributed state feedback and distributed dynamic output feedback control laws utilizing the adaptive distributed event-triggered observer. Finally, a corresponding numerical simulation is conducted, and the cooperative output regulation performance is verified.


I. INTRODUCTION
Over the past decade, cooperative output regulation has attracted great interest from many research communities because of its wide applications in fields such as multi-robot cooperative work and multi-UAV cooperation and control. Some interesting works have been reported in [1]-[5]. The cooperative output regulation problem under the assumption of continuous-time communication networks has been solved. In practice, however, continuous-time communication among the agents places a heavy burden on communication networks. Therefore, reducing the burden on communication networks has become a hot issue in this field when solving the cooperative output regulation problem of multi-agent systems.
Some works applied a periodic sampling method to multi-agent systems [6], which reduces the burden on communication networks. Note that the sampling period must then be chosen at unnecessarily conservative values to handle the worst case. Recently, event-triggered strategies have attracted much attention because they do not require a conservative sampling period to handle the worst case. Various event-triggered control problems for single systems have been studied [7]-[9]. For a class of linear systems, event-triggered control has been introduced systematically. In [10], an event-triggered strategy is used for a class of disturbed linear systems, which reduces the burden of computation and communication. [11] further aimed at the stabilization of linear time-invariant systems and proposed an event-triggered strategy based on output signals. [12] systematically studied the event-triggered output regulation control problem, based on output feedback, for a class of uncertain minimum-phase linear systems. Event-triggered control has also been applied to multi-agent systems [13]-[15]. For example, the consensus problem of a second-order multi-agent system with a leader was studied, and a distributed event-triggered control protocol was proposed under a fixed directed topology [16]. Subsequently, based on an event-triggered control strategy, the leader-following consensus problem of second-order multi-agent systems with time delay was studied [17]. The distributed relative threshold event-triggered strategy was adopted in multi-agent systems [18]. A distributed observer based on a relative threshold event-triggered strategy was adopted in [19], under the assumption that each follower knows the leader matrix information.

(The associate editor coordinating the review of this manuscript and approving it for publication was Adao Silva.)
The event-triggered strategies mentioned above all concern multi-agent systems with continuous-time dynamics and fixed communication topologies. In many practical scenarios, information is obtained by periodic sampling, and links among the agents in the communication topology may fail or change [20]-[24].
The relative threshold event-triggered strategy has the following two characteristics. When the input signal takes a large value, the event-triggered threshold is large, so the control signal remains unchanged for a long time. When the states of the system tend to be stable (the control signal is small), precise control is required to ensure the steady-state performance of the system, so the event-triggered threshold should be small. However, the disadvantage of the relative threshold event-triggered strategy is that the control error caused by event triggering is also large when the input signal is large. Large control errors produce large step signals that degrade the tracking performance of the system. To overcome this shortcoming, we propose a novel switching threshold event-triggered strategy, so that the system not only applies precisely controlled signals when the input signal is small, but also keeps the control error caused by event triggering bounded regardless of the input signal.
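The contrast between the two threshold rules can be sketched on a scalar toy model. The trigger functions, the gain `delta`, and the constants `K` and `K_h` below are illustrative assumptions, not the paper's actual trigger equations.

```python
# Hypothetical scalar illustration of the two threshold rules discussed above.
# "u" is the current input-signal magnitude driver; "e" is the event error.

def relative_threshold_fires(e, u, delta=0.5):
    # Relative threshold: the allowed error scales with |u|, so large inputs
    # tolerate large errors (long silent periods) but also large control jumps.
    return abs(e) > delta * abs(u)

def switching_threshold_fires(e, u, delta=0.5, K=1.5, K_h=0.1):
    # Switching threshold: behave like the relative rule near the origin,
    # but once |u| >= K switch to a fixed bound K_h, so the event error
    # stays bounded no matter how large the input grows.
    if abs(u) < K:
        return abs(e) > delta * abs(u)
    return abs(e) > K_h

# With a large input, the relative rule tolerates a large error (no event),
# while the switching rule still fires:
print(relative_threshold_fires(e=2.0, u=10.0))   # False: 2.0 <= 0.5 * 10.0
print(switching_threshold_fires(e=2.0, u=10.0))  # True: 2.0 > K_h
```

Near the origin (small `u`) both rules coincide, which preserves the precise steady-state control that motivates the relative threshold in the first place.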
In this paper, we face several technical challenges in solving the cooperative output regulation problem for discrete-time linear multi-agent systems under directed switching networks. First, each follower is not required to know the leader matrix information. Second, due to the errors caused by periodic sampling and event triggering in the discrete-time linear multi-agent system, stability analysis becomes more difficult under directed switching networks. Third, the communication between each agent and its neighbors depends on the event-triggered times, the periodic sampling times, and the dwell time of the switching topology, which makes it more difficult to exclude the Zeno phenomenon.
To overcome these challenges, the main contributions of this paper are as follows: 1) We remove the assumption that each follower must know the leader matrix information, so that the discrete-time linear multi-agent system can be fully distributed.
2) The adaptive distributed event-triggered observer adopts a relative threshold event-triggered strategy and a novel switching threshold event-triggered strategy, respectively. Both strategies guarantee that the multi-agent system eventually achieves cooperative output regulation.
3) In the proposed event-triggered strategies, the inter-event time of each agent is no less than the minimum of the sampling period and the dwell time. Therefore, the Zeno phenomenon is strictly excluded for each agent.
The rest of this paper is organized as follows. In Section II, we give the preliminaries and the problem formulation. In Section III, we propose an adaptive distributed observer with two event-triggered strategies. In Section IV, we analyze the cooperative output regulation problem. In Section V, we give illustrative examples. In Section VI, we give the conclusion.

II. PRELIMINARIES AND PROBLEM FORMULATION
A. NOTATION AND ALGEBRAIC GRAPH BASICS
In this subsection, we introduce the following notation. For vectors $a_1, \ldots, a_N$, $\mathrm{col}(a_1, \ldots, a_N) = [a_1^T, \ldots, a_N^T]^T$. For a matrix $S$, $S > 0$ (resp. $S \geq 0$) means that $S$ is positive definite (resp. positive semi-definite). $\lambda_i(S)$ denotes the $i$th eigenvalue of $S$; $\lambda_m(S)$ and $\lambda_M(S)$ denote the minimum and maximum eigenvalues of $S$. $\sigma(S)$ denotes the spectrum of $S$ and $\rho(S)$ its spectral radius.

B. PROBLEM FORMULATION
Consider a class of discrete-time linear multi-agent systems of the form (1), where $x_i(t)$, $u_i(t)$, $e_i(t)$, and $y_{oi}(t) \in \mathbb{R}^{p_i}$ are the state, control input, error output, and measurement output, respectively. The signal $v \in \mathbb{R}^q$ is generated by the discrete-time exosystem (2), where $v_r$ and $v_d$ represent the reference input and the disturbance. $S = [S_1^T, \ldots, S_q^T]^T \in \mathbb{R}^{q \times q}$, where $S_1, \ldots, S_q$ are the row vectors of $S$.
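The exosystem $v(t+1) = S v(t)$ in (2) can be sketched numerically. The rotation-type matrix $S$ below (eigenvalues on the unit circle, hence a bounded, persistent exogenous signal) is an illustrative assumption, not the matrix used in the paper's examples.

```python
# Minimal sketch of the exosystem v(t+1) = S v(t) from (2), with an assumed
# 2x2 rotation matrix S so the exogenous signal neither decays nor diverges.
import math

theta = 0.1
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def step(S, v):
    # One discrete-time step: v <- S v (plain matrix-vector product).
    return [sum(S[r][c] * v[c] for c in range(len(v))) for r in range(len(S))]

v = [1.0, 0.0]
for _ in range(100):
    v = step(S, v)

# A rotation preserves the Euclidean norm, so ||v(t)|| stays at 1.
print(round(math.hypot(v[0], v[1]), 6))  # 1.0
```

Such a marginally stable $S$ is the typical setting in which the leader signal is bounded but does not vanish, which is why the followers must track it rather than wait for it to decay.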
The discrete-time plant (1) and exosystem (2) can be viewed as $(N+1)$ agents, which form a discrete-time linear multi-agent system, with (2) as the leader and the $N$ subsystems of (1) as $N$ followers.
To solve Problem 1, we need the following assumptions.

III. ADAPTIVE DISTRIBUTED EVENT-TRIGGERED OBSERVER
The distributed observer and control law proposed in [26] has the following form, where $\eta_i \in \mathbb{R}^q$ is the state. Cooperative output regulation of the multi-agent system is then obtained by proving the following equation. In (5), each follower is assumed to know the matrix $S$, which may not be realistic in some applications. Thus, we propose a novel adaptive distributed event-triggered observer candidate, whose terms are defined afterwards. If there exist some $\omega_1, \omega_2 > 0$ such that, for any initial condition, the solution to (7) satisfies the required convergence property, then (7) is called an adaptive distributed event-triggered observer. Since each follower does not need to know the leader matrix $S$, and the event-triggered exchange of information among the followers effectively reduces the number of communications, the adaptive distributed event-triggered observer can effectively solve the problem under limited communication bandwidth.
We establish the following lemma to prove that (7) is an adaptive distributed event-triggered observer for the leader system.
Under Assumption 1, for any symmetric positive semi-definite matrix $\Lambda \in \mathbb{R}^{q \times q}$, there exists a symmetric positive semi-definite matrix $M \in \mathbb{R}^{q \times q}$ satisfying $S^T M S - M = -\Lambda$.
Consider the function candidate $F(t) = v(t)^T M v(t)$. Then, according to (2) and (8), we obtain $F(t+1) - F(t) = v(t)^T (S^T M S - M) v(t) \leq 0$. Thus, the solution to (2) is bounded for any initial condition at any time. Lemma 1: Consider the discrete-time linear multi-agent system (10) under directed switching networks, with the quantities defined therein. Proof: Consider a Lyapunov function candidate. Suppose first that the topology network is fixed ($H_{\varsigma(t)} = H_{\rho}$). By (10) and (12), we obtain the corresponding estimate. Then we prove that $\lim_{t \to \infty} \xi(t) = 0$ still holds under switching topologies.
Inspired by [19], it follows from Cauchy's convergence criterion that the required condition holds. Using (16) yields the corresponding bound. Noting that there are finitely many switches in $[T_m, T_{m+1})$, the constant $s_m$ is finite for each $m = 0, 1, \ldots$. Thus, one has the claimed limit.
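The boundedness argument above ($F(t+1) - F(t) \leq 0$ along the exosystem) can be checked numerically. The sketch below assumes an illustrative rotation-type $S$, for which $M = I$ solves $S^T M S - M = -\Lambda$ with $\Lambda = 0$; both $S$ and $M$ are assumptions for illustration.

```python
# Numerical check of the Lyapunov-type relation S^T M S - M <= 0 used above.
# For a rotation matrix S (marginally stable, as in Assumption 1's spirit),
# M = I gives S^T M S - M = 0 exactly, so F(t) = v^T M v is non-increasing.
import math

theta = 0.1
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

M = [[1.0, 0.0], [0.0, 1.0]]
StMS = matmul(matmul(transpose(S), M), S)  # S^T M S
residual = max(abs(StMS[i][j] - M[i][j]) for i in range(2) for j in range(2))
print(residual < 1e-12)  # True: S^T M S - M = 0, hence F(t+1) - F(t) <= 0
```

For a strictly stable $S$ ($\rho(S) < 1$), $M$ would instead be obtained from the discrete Lyapunov equation with $\Lambda > 0$, but the marginally stable case is the relevant one for a persistent leader signal.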

A. RELATIVE THRESHOLD EVENT-TRIGGERED STRATEGY
Inspired by [19], the following trigger mechanism is used to determine the triggered time instants $t^i_{\varphi}$ of the discrete-time linear multi-agent system (10).
Here $\tau_i$ is the sampling period of agent $i$, which is mainly selected based on empirical data. The next trigger instant $t^i_{\varphi+1}$ is determined by the relative threshold event-triggered mechanism below. According to trigger mechanism (19), $t^i_{\varphi+1} - t^i_{\varphi} > 0$ by noting that $\tau_d$ is the minimum dwell time. So, the Zeno phenomenon is strictly excluded for agent $i$.
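The Zeno-exclusion argument can be illustrated as follows: since the trigger condition is only evaluated at the sampling instants $k\tau_i$, consecutive event times are separated by at least $\tau_i$. The sampling period and the trigger pattern below are assumed for illustration only.

```python
# Sketch of why Zeno behavior is excluded: events can only occur at the
# periodic sampling instants k * tau_i, so inter-event times are bounded
# below by tau_i (and by min(tau_i, tau_d) once dwell times are included).

tau_i = 0.5                                   # sampling period (assumed)
samples = [k * tau_i for k in range(20)]      # the only candidate instants

# Hypothetical trigger decisions at the sampling instants (every third
# sample fires here); the decision rule itself does not affect the bound.
events = [t for k, t in enumerate(samples) if k % 3 == 0]

gaps = [b - a for a, b in zip(events, events[1:])]
print(min(gaps) >= tau_i)  # True: no accumulation of events is possible
```

Whatever the trigger rule decides, the time axis available to it is a discrete grid, so an infinite number of events in finite time is impossible by construction.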
The adaptive distributed event-triggered observer (7) uses the trigger mechanism (19), which achieves the desired convergence. According to (7) and (20), the error caused by event triggering is characterized below.
Lemma 2: Consider the multi-agent system (2) and the adaptive distributed event-triggered observer (7). For any $S_i(0)$, the following equation is satisfied.
We can also obtain $\tilde{S}^2_i(t), \ldots, \tilde{S}^q_i(t)$ as $t \to \infty$ by the same method as above, so we get (23).
According to (7) and (20), the following equation is defined under the relative threshold event-triggered strategy.
The error caused by event triggering is characterized below.
Lemma 3: Consider the multi-agent system (2) and the adaptive distributed event-triggered observer (7), and let $\tilde{\eta}_i$ be defined as above. For any $\eta_i(0)$, the following equation is satisfied.
Then, the following condition is always satisfied.

B. SWITCHING THRESHOLD EVENT-TRIGGERED STRATEGY
In this section, we design a novel switching threshold event-triggered strategy.
Under the relative threshold strategy, more precise control is obtained when the control signal is close to the origin. However, when the control signal is excessively large, the system produces extremely large measurement errors, which can lead to instability. To overcome this shortcoming, we use the following novel trigger mechanism to determine the triggered time instants $t^{iS}_{\varphi}$ of the discrete-time linear system (10).
where $\tau_i$ and $T_{\varphi}$ are defined as in (19). The event-triggered time $\check{t}^{iS}_{\varphi}$ is determined by the following switching threshold event-triggered mechanism.
where $K$ is a user-designed parameter, $\|S_i\|_{\infty} \leq \omega$, $K_h < \|S_i\| \|K\|$, and $\check{e}_i(t)$ is the event-triggered error. When $\theta_i(t) < K$ is satisfied, the relative threshold event-triggered strategy is adopted, so that precise control is obtained near the origin; otherwise, when $\theta_i(t) \geq K$ is satisfied, the fixed threshold strategy is adopted. The advantage of this switching event-triggered strategy is that, no matter how large the control signal is, the measurement error remains within a given bound at all times.
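The bounded-error claim can be sketched on a scalar toy signal. $K = 1.5$ and $K_h = 0.1$ match the values used later in Section V-B, while the ramp input, the gain `delta`, and all variable names are illustrative assumptions.

```python
# Sketch of the bounded-error property: under the switching rule, once the
# input is in the large-signal regime (>= K), the gap between the true
# signal and the last transmitted value stays near the fixed bound K_h,
# even though the signal itself grows without bound.

K, K_h, delta = 1.5, 0.1, 0.5

held = 0.0              # last transmitted value
worst_err = 0.0         # worst error observed in the large-signal regime
fired_large = False     # True after the first event with x >= K

for k in range(1, 200):
    x = 0.1 * k                      # unbounded ramp input (illustrative)
    e = abs(x - held)                # current event-triggered error
    if x >= K and fired_large:
        worst_err = max(worst_err, e)
    # Switching rule: relative threshold below K, fixed threshold K_h above.
    threshold = delta * abs(x) if abs(x) < K else K_h
    if e > threshold:
        held = x                     # event: transmit the current value
        if x >= K:
            fired_large = True

# The held error never exceeds K_h plus one inter-sample increment (0.1),
# no matter how large x becomes; a purely relative rule has no such cap.
print(worst_err <= K_h + 0.1 + 1e-6)
```

Under the relative rule alone, the same ramp would allow the held error to grow proportionally to the signal, which is exactly the large-step behavior the switching rule is designed to suppress.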
According to the relationship satisfied by the event-triggered error in inequality (37), Lemma 2 still holds.
The error in the state $\eta_i(t)$ caused by event triggering is characterized below. Under the switching threshold event-triggered mechanism, the event-triggered error $\check{e}_{\eta i}(t)$ satisfies the following inequality. According to the relationship satisfied by the event-triggered error in inequality (39), Lemma 3 still holds.

IV. COOPERATIVE OUTPUT REGULATION PROBLEM ANALYSIS
In this paper, we consider a discrete-time linear multi-agent system and assume that not every follower knows the leader matrix $S$, so we cannot directly use $S$ to design a control law that solves the cooperative output regulation problem. Based on the estimated value $S_i$ of $S$, a discrete-time adaptive algorithm is proposed under the relative threshold and switching threshold event-triggered strategies to compute the solution of the regulator equations.
For this purpose, define the quantities below, where $S_i(t)$ is generated by (7). Lemma 4: Under Assumption 4, the following equation holds for any initial condition.
has a bounded solution. Proof: The regulator equations (3) and (4) can be put in the following form. By (45), the regulator equations (3) can be transformed as follows. According to (43), we get $T_i \beta_i$, and according to (47), the following equation is obtained.
Since $\lim_{t \to \infty} \tilde{E}_i^T = 0$, the corresponding terms and $P$ decay to zero exponentially; for any initial condition, the solution to (43) satisfies the stated limit. We design the following distributed state feedback control law and distributed dynamic output feedback control law. We have the following result. Theorem 1: Under Assumptions 1-5, let $e_x = \tilde{x}_i - x_i$ and $0 < \omega_2 < 1$. Then Problem 1 is solvable by the state feedback control law composed of (7), (43), and (51a). Proof: According to (3) and Assumption 4, we obtain the relation below. According to the distributed state feedback control law (51a), we get (54), where $\eta_i = \eta_i(t)$ and $\tilde{\eta}_i = \tilde{\eta}_i(t)$. Substituting (54) into (52) gives the expression below. According to Lemma 3 and Lemma 4, $\lim_{t \to \infty} \tilde{\eta}_i = 0$ and $\lim_{t \to \infty} K\tilde{\eta}_i(t) = 0$, which yields $\lim_{t \to \infty} \imath_1(t) = 0$. Since $A_i + B_i K_{xi}$ is Schur, we get $\lim_{t \to \infty} \tilde{x}_i(t) = 0$ and $\lim_{t \to \infty} \tilde{u}_i = 0$, and hence $\lim_{t \to \infty} e_i(t) = 0$ by (53).

V. TWO EXAMPLES
In this section, we consider the cooperative output regulation problem of linear discrete-time multi-agent systems. The four followers are given by (1) with the matrices listed below. According to [27], we obtain the following exosystem, where $\tau_i = 1\,$s. Assume that the communication topology among all followers and the leader is described by $\bar{\mathcal{G}}_{\varsigma(t)}$. The switching signal $\varsigma(t)$ is given as follows, where $s = 0, 1, \ldots$. The four graphs $\mathcal{G}_i$, $i = 1, 2, 3, 4$, are illustrated in Fig. 1. We present two examples to illustrate our design under the relative threshold and switching threshold event-triggered strategies.

A. RELATIVE THRESHOLD EVENT-TRIGGERED STRATEGY
Applying the distributed state feedback control law (51a) under the switching period $T = 8\,$s, we obtain the simulation results in Figs. 2-4. Fig. 2 shows the tracking errors, Fig. 3 shows the estimation errors of the exosystem signal, and Fig. 4 shows the estimation of the exosystem matrix.
It can be seen from Figs. 2-4 that, when the distributed state feedback control law utilizes the adaptive relative threshold event-triggered observer, the multi-agent system achieves cooperative output regulation at approximately 90 s. Table 1 and Table 2 show that $\eta_{11}$ and $S_{11}$ under the relative threshold event-triggered strategy require 142 and 38 communications, respectively, which is much lower than the 300 communications without the event-triggered strategy. Applying the distributed dynamic output feedback control law (51b) under the switching period $T = 16\,$s, we obtain the simulation results in Figs. 5-7. Fig. 5 shows the tracking errors, Fig. 6 shows the estimation errors of the exosystem signal, and Fig. 7 shows the estimation of the exosystem matrix.
It can be seen from Figs. 5-7 that, when the distributed dynamic output feedback control law utilizes the adaptive relative threshold event-triggered observer, the multi-agent system achieves cooperative output regulation at approximately 60 s. Table 3 and Table 4 show that $\eta_{11}$ and $S_{11}$ under the relative threshold event-triggered strategy require 146 and 20 communications, respectively, which is much lower than the 300 communications without the event-triggered strategy.

B. SWITCHING THRESHOLD EVENT-TRIGGERED STRATEGY
According to (35), we set $K = 1.5$ and $K_h = 0.1$. Figs. 8 and 9 compare the triggering error of the relative threshold event-triggered strategy with that of the switching threshold event-triggered strategy in system (60). These simulation results confirm that $|\eta_{11k} - \eta_{11}| \leq 1.5$ under the relative threshold strategy and $|\eta_{11k} - \eta_{11}| \leq 0.02$ under the switching threshold strategy, while $|S_{11k} - S_{11}| \leq 0.09$ under the relative threshold strategy and $|S_{11k} - S_{11}| \leq 0.005$ under the switching threshold strategy, where $\eta_{11k}$ and $S_{11k}$ denote the values under the event-triggered strategy, and $\eta_{11}$ and $S_{11}$ denote the values without it. Hence, under distributed state feedback, the $\eta_{11}$ and $S_{11}$ errors of the switching threshold event-triggered strategy are smaller at all times than those of the relative threshold event-triggered strategy.
Applying the distributed state feedback control law (51a) under the switching period $T = 8\,$s, we obtain the simulation results in Figs. 10-12. Fig. 10 shows the tracking errors, Fig. 11 shows the estimation errors of the exosystem signal, and Fig. 12 shows the estimation of the exosystem matrix. It can be seen from Figs. 10-12 that, when the distributed state feedback control law utilizes the adaptive switching threshold event-triggered observer, the multi-agent system achieves cooperative output regulation at approximately 100 s. Tables 1, 2, 5, and 6 show that the triggering times of the relative threshold event-triggered strategy are fewer than those of the switching threshold event-triggered strategy in the adaptive distributed event-triggered observer (7) via distributed state feedback.
Figs. 13 and 14 compare the triggering error of the relative threshold event-triggered strategy with that of the switching threshold event-triggered strategy in system (60). These simulation results confirm that $|\eta_{11k} - \eta_{11}| \leq 3$ under the relative threshold strategy and $|\eta_{11k} - \eta_{11}| \leq 0.1$ under the switching threshold strategy, while $|S_{11k} - S_{11}| \leq 0.18$ under the relative threshold strategy and $|S_{11k} - S_{11}| \leq 0.01$ under the switching threshold strategy, where $\eta_{11k}$ and $S_{11k}$ denote the values under the event-triggered strategy, and $\eta_{11}$ and $S_{11}$ denote the values without it. Hence, under distributed dynamic output feedback, the $\eta_{11}$ and $S_{11}$ errors of the switching threshold event-triggered strategy are smaller at all times than those of the relative threshold event-triggered strategy.
Applying the distributed dynamic output feedback control law (51b) under the switching period $T = 16\,$s, we obtain the simulation results in Figs. 15-17. Fig. 15 shows the tracking errors, Fig. 16 shows the estimation errors of the exosystem signal, and Fig. 17 shows the estimation of the exosystem matrix. It can be seen from Figs. 15-17 that, when the distributed dynamic output feedback control law utilizes the adaptive switching threshold event-triggered observer, the multi-agent system achieves cooperative output regulation at approximately 85 s. Tables 3, 4, 7, and 8 show that the triggering times of the relative threshold event-triggered strategy are fewer than those of the switching threshold event-triggered strategy in the adaptive distributed event-triggered observer (7) via distributed dynamic output feedback.

VI. CONCLUSION
In this paper, an adaptive distributed event-triggered observer is proposed. On this basis, we use the relative threshold and switching threshold event-triggered strategies. Quantitative simulation results show that, when the switching threshold strategy is adopted, the triggering error is smaller than under the relative threshold strategy, but the number of triggers increases accordingly; conversely, when the relative threshold strategy is adopted, the number of triggers is smaller, but the triggering error increases accordingly. We use this adaptive distributed event-triggered observer to solve cooperative output regulation for discrete-time linear multi-agent systems under directed switching networks. In practical applications, different event-triggered strategies can be selected according to their respective characteristics.