Can Competition Outperform Collaboration? The Role of Misbehaving Agents

We investigate a novel approach to resilient distributed optimization with quadratic costs in a multiagent system prone to unexpected events that make some agents misbehave. In contrast to commonly adopted filtering strategies, we draw inspiration from phenomena modeled through the Friedkin–Johnsen dynamics and argue that adding competition to the mix can improve resilience in the presence of misbehaving agents. Our intuition is corroborated by analytical and numerical results showing that 1) there exists a nontrivial tradeoff between full collaboration and full competition and 2) our competition-based approach can outperform state-of-the-art algorithms based on the weighted mean subsequence reduced (W-MSR) strategy. We also study the impact of communication topology and connectivity on resilience, offering insights into robust network design.

As massively connected devices in Networked Control Systems are breached, a risk amplified by powerful communication protocols such as 5G, this problem will only gain in importance.

A. Related Literature
The problems above have been extensively studied in the literature. A body of work investigates control techniques to overcome the fragility of specific applications. Examples are power outages in smart grids [8, 9], cascading failures in cyberphysical systems [10]–[13], denial of service [14, 15], robot gathering [16], and distributed estimation [17], to name a few. From a methodological perspective, the control and optimization literature mostly focuses on the robustness of distributed algorithms and control protocols to a fraction of misbehaving agents. This approach can address either intentionally malicious agents, such as cyber-attackers, or accidental faults due to, e.g., hardware damage. A fundamental subclass of such approaches is resilient consensus, which aims at enforcing consensus of normally behaving (or regular) agents in the face of unknown adversaries. The consensus problem has been deeply studied in the past decades [18] and underlies a plethora of application domains. In particular, average consensus is a cornerstone in distributed estimation [17] and optimization [19]–[21], management of power grids [22], and distributed Federated Learning [23, 24], among others. Unfortunately, the standard consensus protocol is fragile, and misbehaving agents can arbitrarily deviate the system trajectory. To tame this issue, the most common approaches rely on the filtering strategy referred to as "Mean Subsequence Reduced" (MSR), whereby agents discard suspicious messages (largest and smallest values) from their updates [25]. The pioneering paper [26] introduced a weighted version (W-MSR) and defined r-robustness of graphs, a suitable index that enables theoretical guarantees for resilient consensus based on W-MSR. Among the many variants and adaptations of W-MSR, [27] studies resilient control for double integrators, [28] tackles mobile adversaries, [29] focuses on the leader-follower framework, [30] targets nonlinear systems with state constraints, [31] extends the notion of r-robustness to time-varying graphs, and [32]–[34] consider generic cost functions to achieve resilience in general distributed optimization.
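For concreteness, the core of the W-MSR update can be sketched as follows. This is a minimal rendering, not the exact pseudocode of [26]; the parameter F is the assumed bound on misbehaving neighbors, and uniform weights are used for simplicity.

```python
import numpy as np

def wmsr_update(x_i, neighbor_values, F):
    """One W-MSR step for a regular agent holding value x_i: discard up to
    F neighbor values strictly larger than x_i and up to F strictly smaller,
    then average the survivors together with x_i (uniform weights)."""
    above = sorted(v for v in neighbor_values if v > x_i)
    below = sorted(v for v in neighbor_values if v < x_i)
    equal = [v for v in neighbor_values if v == x_i]
    kept = below[F:] + equal + (above[:-F] if F > 0 else above)
    return np.mean(kept + [x_i])

# A stubborn outlier broadcasting 100 is trimmed away with F = 1:
print(wmsr_update(0.0, [-10.0, 1.0, 2.0, 100.0], F=1))  # 1.0
```

Trimming makes the update robust to up to F misbehaving neighbors per agent, at the price of discarding legitimate extreme values.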
Fig. 1: Competition vs. collaboration in distributed quadratic optimization. The global cost e_tot is the sum of two contributions that reflect two contrasting attitudes of regular agents: e_deception is caused by (erroneously) trusting misbehaving agents, which makes regular agents drift away from the nominal average, while e_consensus is due to the competition among regular agents, which mitigates misbehaviors but also prevents regular agents from reaching a consensus. The tunable parameter λ ∈ [0, 1] allows regular agents to smoothly transition from full collaboration (λ = 0), where they fully trust all agents in the network, to full competition (λ = 1), where they trust no other agent, producing a rich range of behaviors at local and global scale.

Other approaches in the literature do not filter information from neighbors, but exploit enhanced capabilities of regular agents. For example, [35] uses a buffer to store all values received from other agents and replaces the thresholding mechanism with a voting strategy followed by dynamical updates, [36] studies algorithmic robustness enabled by trusted agents, [37] proposes a dynamically switching update rule for continuous-time double integrators, and [38]–[40] use stochastic or heuristic trust scores to filter out potentially malicious transmissions, providing probabilistic bounds on detection, convergence, or deviation from average consensus. While such approaches may overcome limitations of MSR-based strategies, they usually require either stronger assumptions on the network (e.g., trusted agents) or additional local computation and storage resources.

B. Novel Contribution
Despite the success of MSR-based strategies, a critical point is the dependence of theoretical guarantees on the r-robustness of the underlying graph, which allows regular agents to reach resilient consensus only if such an index is large enough. In fact, it is difficult to characterize the steady-state behavior of agents if some minimal robustness is not met. Even though algorithms might work in practice, comprehensive theoretical guarantees are still lacking; moreover, some applications require more conservative but safer approaches. In particular, while in some cases agents may just agree on a common value, other tasks require average consensus to succeed. Thus, we depart from classical filtering strategies and seek a framework for resilience that can offer theoretical guarantees in a broader sense.
Towards this goal, we set the stage with two key moves. Firstly, rather than finding conditions that enforce a consensus among regular agents, which only indicates whether a system is resilient, we aim to measure the level of resilience, which we evaluate through the cost of a distributed optimization problem with quadratic costs. Secondly, we aim to modify the original problem to make it robust to misbehaving agents, rather than adapting a consensus protocol. Stepping forward, we propose an update rule based on the celebrated Friedkin-Johnsen (FJ) dynamics [41] to enhance resilience of the addressed distributed optimization problem. The key feature of the FJ dynamics is a tunable parameter λ ∈ [0, 1] that allows a smooth transition from the regime of full collaboration (λ = 0), where each regular agent equally trusts all agents, to the regime of full competition (λ = 1), where each regular agent regards all others as adversaries. We refer to the regime with λ ∈ (0, 1] as competition-based because regular agents are forced to (partially) mistrust the others. This approach allows us to study the resilience variations that arise as agents choose whether or not to trust their neighbors, a choice that turns out to be crucial if adversaries are present. In fact, we observe a fundamental performance trade-off that we name the competition-collaboration trade-off: in general, the optimal resilient strategy is hybrid, namely each regular agent should partially compete with its neighbors, as depicted in Fig. 1. The global cost (solid blue) is the sum of two conflicting contributions that represent deception due to collaboration with misbehaving agents (dashed red) and inefficiency caused by competition against regular agents (dash-dotted yellow). To gain analytical intuition about such a competition-collaboration trade-off, we leverage the social power, a tool drawn from opinion dynamics that sheds light on the twofold effect of the parameter λ used to instantiate the FJ dynamics.
After analytically characterizing the proposed competition-based protocol, we fix the update rule and shift attention to the network in order to assess how it impacts the resilience of regular agents. In particular, we numerically show how network connectivity can mitigate misbehavior and how the performance varies as the network gets sparser or less balanced. In fact, we heuristically observe that not only high connectivity, but also degree balance across agents helps tame unknown adversaries, which can intuitively exploit highly connected areas to quickly spread damage at a global level.
Besides new results, this article extends the preliminary conference version [42] in two ways. Firstly, we consider a more general prior distribution of the observations of agents. Secondly, we compare our proposed strategy with both the standard W-MSR [26] and the recently proposed SABA [35].

C. Organization of the Article
We motivate average consensus for distributed optimization in Section II, model a class of adversaries in Section II-A, and introduce the performance metric used to quantify resilience in Section II-B. In Section III, we propose our competition-based protocol: we introduce the FJ dynamics in Section III-A, compute the cost function in Section III-B, and formally characterize the cost function and its minimizer in Sections III-C and III-D. In Section IV, we report numerical tests that support our analytical intuition. Then, in Section IV-A, we offer analytical insight on the competition-collaboration trade-off using the notion of social power. In Section V, we numerically explore the impact of the communication network on resilience. To evaluate our approach, we perform simulations in Section VI and show that it can outperform MSR-based methods. We conclude by addressing potential avenues for future research in Section VII.

II. SETUP AND PROBLEM FORMULATION
We consider a multi-agent system composed of N agents labeled by the set V = {1, . . . , N}. Each agent i ∈ V carries local information encoded by an observation θ_i ∈ R and a state x_i ∈ R. For notational convenience, we stack all states and observations in the vectors x ∈ R^N and θ ∈ R^N, respectively.
Within the network, some agents behave according to the control task at hand, while others cannot be controlled and may deviate from the task. We call the former agents regular and the latter agents misbehaving. Because the misbehaving agents cannot be involved in cooperative tasks owing to their uncontrolled nature, we consider a distributed optimization problem involving only the regular agents. We assume that each regular agent wishes to adjust its state so as to minimize a quadratic mismatch among all observations,

f_i(x_i) .= ∑_{j∈R} (x_i − θ_j)²,   (II.1)

where R ⊆ V gathers all regular agents. By straightforward calculations, (II.1) can be rewritten as

f_i(x_i) = R (x_i − θ̄_R)² + ∑_{j∈R} (θ_j − θ̄_R)²,   (II.2)

where R .= |R| and θ̄_R is the average of the observations {θ_i}_{i∈R}. The distributed optimization task is then given by

arg min_x ∑_{i∈R} (x_i − θ̄_R)²,   (II.3)

which is solved if and only if all regular agents reach average consensus among them, i.e., x_i = θ̄_R for all i ∈ R.
In the nominal scenario where all agents are regular (V = R), the cost (II.3) can be minimized via the consensus dynamics (or consensus protocol) x(k + 1) = W^o x(k), where x(0) = θ and W^o is a doubly stochastic irreducible matrix that leads the agents to average consensus. Interpreting W^o as a (weighted) communication matrix, the consensus dynamics allows agent j to communicate its state to agent i if and only if W^o_ij > 0.
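For illustration, the nominal protocol can be simulated in a few lines. This is a minimal sketch: the ring topology and uniform weights are our own choices, and any doubly stochastic, irreducible, aperiodic W^o would work equally well.

```python
import numpy as np

# Ring of N agents (N odd so the chain is aperiodic), uniform weights on
# the two neighbors: W is doubly stochastic, irreducible, with no self-loops.
N = 5
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

theta = np.random.default_rng(0).normal(size=N)   # local observations
x = theta.copy()                                   # x(0) = theta
for _ in range(400):                               # x(k+1) = W x(k)
    x = W @ x

# Double stochasticity preserves the average, so all states converge to it.
print(np.allclose(x, theta.mean()))  # True
```

Note that the same loop with one agent overriding its own update is exactly the fragile scenario discussed next.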
However, the standard consensus protocol easily fails in the presence of misbehaving agents [26].We next introduce a misbehavior model that disrupts the nominal protocol.

A. Misbehaving Agents
Misbehaving agents follow state trajectories with no relation to the optimization task (II.3) and broadcast potentially misleading information to their neighbors. We denote the subset of misbehaving agents by M with M .= |M|, V = M ∪ R, and M ∩ R = ∅. Also, without loss of generality, we label the agents so that R = {1, . . . , R} and M = {R + 1, . . . , N}. The vectors x_R ∈ R^R and x_M ∈ R^M stack the states of regular and misbehaving agents, respectively, with x = [x_R^⊤, x_M^⊤]^⊤. To address a general scenario and remove dependence on the specific values of the observations, we assume that these are drawn from a prior distribution.
Assumption 1 (Distribution of observations). Observations {θ_i}_{i∈V} are distributed as random variables with mean E[θ] = 0 and covariance matrix Σ .= E[θθ^⊤] ≻ 0. We denote by Σ_R, Σ_M, and Σ_RM the blocks of Σ associated with regular agents, misbehaving agents, and their cross-correlations, respectively.

While the standard consensus and Assumption 1 are suited to an ideal scenario, misbehaving agents may disrupt the task (II.3). In the following, we assume that misbehaving agents constantly transmit noisy versions of their observations:

x_m(k) = θ_m + v_m + n_m(k),   m ∈ M.   (II.4)

We refer to the constant input v_m as the (deception) bias and to the time-varying input n_m(k) as the (deception) noise. In words, the deception bias v_m makes the observation θ_m of the misbehaving agent m an outlier w.r.t. the expected range of values of the observations as per Assumption 1. Conversely, the deception noise n_m(k) hides the true state of the misbehaving agent from its neighbors, akin to purposely injected measurement noise.
Assumption 2 (Misbehavior model). We stack the biases in the vector v ∈ R^M and the noises in the vector n(k) ∈ R^M. Further, we set their statistics as E[v] = 0, E[vv^⊤] .= V ⪰ 0, E[n(k)] = 0, and E[n(k)n(k)^⊤] .= Q ⪰ 0, with n(k) white and independent of v and θ.

Remark 1 (Misbehavior vs. intelligent attacks). Assumption 2 is consistent with a portion of the literature on resilient consensus, where algorithms are tested against constant or drifting misbehaving agents that steer their neighbors far off the nominal consensus [28, 35, 38, 40]. In our case, misbehaving agents are stubborn on average but behave in a less trivial (noisy) way. On the other hand, smart (malicious) adversaries may need to be countered by sophisticated strategies [43, 44]. This case is outside the scope of this article, where we explore competition as a tool to enhance resilience, and we defer a comprehensive study with intelligent attacks to future work.
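A quick numerical illustration of the model (II.4) follows. The values of θ_m, v_m, and the noise level are hypothetical and chosen only for demonstration; nothing here is prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def misbehaving_broadcast(theta_m, v_m, noise_std, steps):
    """Trajectory broadcast by a misbehaving agent under (II.4):
    the biased observation theta_m + v_m plus i.i.d. zero-mean
    deception noise n_m(k)."""
    n = rng.normal(scale=noise_std, size=steps)   # deception noise n_m(k)
    return theta_m + v_m + n                      # x_m(k), k = 0, ..., steps-1

traj = misbehaving_broadcast(theta_m=0.5, v_m=10.0, noise_std=1.0, steps=5000)
# Stubborn on average: the empirical mean approaches theta_m + v_m = 10.5,
# while the noise hides the underlying constant from the neighbors.
print(round(traj.mean(), 1))
```

The bias turns the broadcast into an outlier; the noise masks the fact that the underlying signal is constant.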

B. Performance Metric
In light of problem (II.3) and assuming that the states of regular agents are updated by a control protocol over time, we use the following performance metric to measure the resilience of the system, which we refer to as the (average) consensus error:

e_R .= E[ lim sup_{k→∞} ∑_{i∈R} (x_i(k) − θ̄_R)² ].   (CE)

The error (CE) coincides with the objective cost of the optimization problem (II.3) (up to additive constants that depend only on the observations) averaged over the stochastic elements within the system dynamics, such as the observations θ of all agents and the deception biases v and noises n(k) of the misbehaving agents.
While the standard consensus protocol achieves e_R = 0 in the nominal scenario, where all agents reach average consensus, the presence of unknown misbehaving agents makes e_R grow, degrading the collaborative task (II.3). In the following section, we propose an update protocol that makes regular agents more resilient by decreasing the consensus error e_R and hence improving the performance associated with the task (II.3).

III. RESILIENT AVERAGE CONSENSUS

A. The Friedkin-Johnsen Dynamics
Because the classical consensus is fragile to misbehaving agents, we look for alternative strategies to minimize (CE).
To this aim, we step back to the optimization problem (II.3) and search for a way to make it more robust to unexpected behaviors. In particular, we modify the local problem associated with each regular agent i ∈ R by integrating the nominal weight matrix W^o and adding a regularization term that penalizes deviations from the local observation:

f_i^λ(x_i) .= (1 − λ) ∑_{j∈V} W^o_ij (x_i − x_j)² + λ (x_i − θ_i)².   (III.1)

Assumption 3 (Nominal weights). The matrix W^o is irreducible, row stochastic, and W^o_ii = 0, i ∈ V (no self-loops).

The parameter λ ∈ [0, 1] in (III.1) makes the ith agent anchor to its observation θ_i, so that large deviations of its state x_i from θ_i are discouraged. We then let each agent greedily minimize the modified cost (III.1) at step k + 1, which yields the celebrated Friedkin-Johnsen (FJ) dynamics [41]:

x_i(k + 1) = (1 − λ) ∑_{j∈V} W^o_ij x_j(k) + λ θ_i,   (FJ)

with x_i(0) = θ_i. We interpret the rule above as a modified consensus protocol where the agents do not fully align with their neighbors but also compete by tracking their own observations.
In particular, we call the parameter λ the competition, referring to the case λ = 0 (equivalent to the consensus protocol) as full collaboration and to the case λ = 1 as full competition.
While the dynamics (FJ) is suboptimal if all agents are collaborative, because it prevents them from reaching a consensus if λ > 0, we use it to make regular agents resilient to unknown misbehaving agents. Intuitively, anchoring a regular agent i ∈ R to its observation θ_i prevents the agent from being arbitrarily dragged away by misleading values coming from misbehaving agents. In particular, the latter agents obey (II.4) with no relation to the protocol (FJ) or the nominal weights W^o.
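A minimal sketch of (FJ) in action with one misbehaving agent follows. The ring topology, parameter values, and bias intensity are illustrative assumptions, not taken from the article's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 6, 5                          # agents 0-4 regular, agent 5 misbehaving
regular = np.arange(5)

# Nominal row-stochastic ring weights with no self-loops (Assumption 3).
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

theta = rng.normal(size=N)           # observations
v_m, noise_std = 20.0, 0.5           # deception bias and noise of agent m

def run_fj(lam, K=300):
    x = theta.copy()
    for _ in range(K):
        x = (1 - lam) * (W @ x) + lam * theta                # FJ update
        x[m] = theta[m] + v_m + rng.normal(scale=noise_std)  # (II.4)
    return x

avg = theta[regular].mean()
cost = lambda x: np.sum((x[regular] - avg) ** 2)
# Anchoring (lam > 0) keeps regular agents near their nominal average,
# while full collaboration (lam = 0) lets the bias drag everyone away.
print(cost(run_fj(0.15)) < cost(run_fj(0.0)))  # True
```

Even a small λ caps the influence of the stubborn outlier, at the price of some residual disagreement among the regular agents.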
In the following, we study how the protocol (FJ) improves system resilience. In fact, tuning λ within the interval [0, 1] gives rise to a nontrivial competition-collaboration trade-off: what is the optimal competition λ that makes regular agents most resilient with respect to the task (II.3)? Exploring this trade-off under misbehaving agents is the main matter of investigation of this article. To this aim, we regard the consensus error as a function of the competition: this allows us to perform analysis and gain insight about the minimization of e_R(λ).
Remark 2 (Connections with game theory and opinion dynamics). The FJ dynamics can be given the following game-theoretic interpretation. The cost (III.1) is interpreted in games as cognitive dissonance, whereby a rational decision-maker has an incentive both to align with its neighbors and to follow a local rule. Also, the function (III.1) with λ = 0 reduces to the utility used in [45], where the authors analyze the consensus protocol from a game-theoretic perspective. In opinion dynamics, the FJ dynamics is typically used to model prejudice, whereby the opinion of an agent is biased towards a personal belief despite interactions with others.
Remark 3 (Competition for resilience). While most works in the literature regard λ as a model parameter, we purposely design λ in (FJ). The intuition behind this choice, seemingly counterintuitive for a collaborative task, is that introducing some competition among agents can mitigate behaviors that are unpredictable at the design stage: rather than addressing the binary property "consensus is (not) achieved" like typical works on resilient consensus, we take a broader viewpoint and interpret resilience as a real quantity measured through the cost (CE).
Remark 4 (Heterogeneous competition). While we focus on a single parameter λ for the sake of analysis, the general FJ model with a different parameter λ_i for each agent i ∈ V makes the analysis challenging but does not affect the fundamental system behavior. Designing a parameter λ_i for each regular agent i ∈ R to improve performance even further is an important topic, whose investigation is left to future work.

B. Computation of the Consensus Error
We now compute the error (CE) at the steady state induced by the dynamics (FJ). To this aim, it is convenient to write the network dynamics associated with all regular agents.
First, we highlight the interactions of regular and misbehaving agents by partitioning the nominal weight matrix as

W^o = [ W^o_R  W^o_RM ; W^o_MR  W^o_M ].   (III.2)

Then, the dynamics of the regular agents can be written as in (III.3), where the matrix W encodes the actual interactions (weights) followed within the network. In particular, the structure of W means that the average state of a misbehaving agent is affected by no other agent, according to (II.4). The matrix L is row-stochastic with algebraic multiplicity of the eigenvalue 1 equal to M + 1 and does not induce a consensus.

Let 1 ∈ R^R denote the vector of all ones in R^R. From (CE) and (III.3)-(III.6), we obtain (III.8); then, standard calculations allow us to rewrite the consensus error as

e_R(λ) = e_R,v(λ) + e_R,n(λ) + κ,   (III.9)

where κ does not depend on λ and

e_R,v(λ) .= Tr(Σ E^⊤ E),   e_R,n(λ) .= Tr(P),   (III.10)

with E and P suitable λ-dependent matrices. The expression (III.9) highlights that the two features of the misbehavior modeled in Assumption 2 generate two different contributions to the consensus error. The error term e_R,v is caused by the biased observations θ + v of the misbehaving agents, which are constantly injected into the dynamics (III.3). Instead, the error term e_R,n is produced by the deception noises n(k), which make the steady state drift away.
In words, Lemma 1 implies that setting λ > 0 makes regular agents more resilient to the deception noise than the standard consensus protocol. This observation relates to [46], where the authors observe that even small perturbations of a row-stochastic matrix W can result in a large norm of the matrix difference and a change of the Perron-Frobenius eigenvector.

C. The Competition-Collaboration Trade-off
To study how our proposed approach performs in the presence of misbehaving agents, we first compare the two extreme cases of full collaboration and full competition, to see when the former approach should be ruled out by default.
Proposition 1 (Full competition vs. full collaboration). In the presence of misbehaving agents, the dynamics (FJ) with λ = 1 yields a smaller error than with λ = 0 if and only if the misbehavior intensity exceeds a threshold depending on the prior correlations between regular and misbehaving agents.

Sketch of proof: The statement follows from manipulations of the consensus errors induced by the two considered instantiations of (FJ). The full derivation is reported in Appendix C.
In words, Proposition 1 implies that the fully competitive approach outperforms the consensus protocol as soon as the misbehavior disturbances are sufficiently intense compared to the prior correlations between regular and misbehaving agents.
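The comparison can be probed numerically with a Monte Carlo sweep of λ. This is a sketch under illustrative assumptions: ring topology, one misbehaving agent following the model (II.4), and i.i.d. standard normal observations, none of which are prescribed by the article.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, K = 6, 5, 300                 # agents 0-4 regular, agent 5 misbehaving
regular = np.arange(5)
W = np.zeros((N, N))                # row-stochastic ring, no self-loops
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

def consensus_error(lam, trials=50):
    """Monte Carlo estimate of e_R(lam): average the terminal cost of the
    FJ dynamics over random observations, with a fixed deception bias and
    white deception noise injected by the misbehaving agent."""
    errs = []
    for _ in range(trials):
        theta = rng.normal(size=N)
        x = theta.copy()
        for _ in range(K):
            x = (1 - lam) * (W @ x) + lam * theta            # FJ update
            x[m] = theta[m] + 20.0 + rng.normal(scale=0.5)   # (II.4)
        avg = theta[regular].mean()
        errs.append(np.sum((x[regular] - avg) ** 2))
    return float(np.mean(errs))

lams = np.linspace(0.0, 1.0, 11)
errs = [consensus_error(l) for l in lams]
print("estimated optimal competition:", lams[int(np.argmin(errs))])
```

In this toy instance the error at λ = 0 is dominated by the deception bias, so even moderate competition already improves on full collaboration, consistently with Proposition 1.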
After acknowledging that the proposed competition-based approach can be more resilient than the standard consensus protocol in the presence of misbehaving agents, we now turn to study the optimal resilient strategy. In other words, we are interested in choosing λ so as to reduce the consensus error. In particular, we address the optimal competition λ*:

λ* ∈ arg min_{λ∈[0,1]} e_R(λ).   (III.12)

Such an optimal parameter exists by the Weierstrass theorem, because e_R(λ) is continuous in (0, 1] and has a continuous extension at λ = 0 through the extended continuity of L [50]. The next result describes when the optimal competition is nontrivial, meaning that the regular agents should compete against their neighbors in order to minimize the error (CE).

Sketch of proof:
The result is proven in two phases. Firstly, we show that λ* < 1: we compute the first derivative of e_R(λ) at λ = 1 and show that it is positive, hence e_R(λ) is strictly increasing in a left neighborhood of λ = 1. Secondly, we show that λ* > 0: we compute the right derivative of e_R(λ) as λ → 0+ and show that it is negative, hence the error function is strictly decreasing in a right neighborhood of λ = 0. The detailed calculations are reported in Appendix D.
Intuitively, any optimal parameter λ* is strictly between 0 and 1 if the misbehavior is sufficiently disruptive that the consensus protocol yields poor performance, similarly to what is remarked below Proposition 1, while full competition is never optimal under our standing assumptions.
Remark 5 (Optimal competition with general matrices). Even though we assume regular agents have no self-loops, Theorem 1 also holds if this assumption is relaxed. Further, we numerically show that λ* ∈ (0, 1) if W^o is row stochastic and Σ is not diagonal.
Remark 6 (Optimal competition with zero noise). Theorem 1 implies that λ* may be positive even if V and Q are zero. This is indeed consistent with the misbehavior model: not only do misbehaving agents corrupt the consensus value through the deception bias and the deception noise, but, most importantly, they behave against the prescribed protocol, so that full collaboration is in general a poor strategy even if v and n(k) are trivial.

D. Performance vs. Misbehavior
We now study how the performance of the dynamics (FJ) varies with deception biases v and deception noises n(k).
We first show an intuitive result: a more disruptive misbehavior induces a larger consensus error for every λ.
Proposition 2 (Performance vs. misbehavior).The error e R is strictly increasing with V and with Q w.r.t. the partial order of semi-definite matrices.
Proof: See Appendix F.

We next study what happens to the optimal competition λ*. Intuitively, the more the nominal system behavior is disrupted, the more regular agents should benefit from competing rather than collaborating with (potentially) misbehaving neighbors. Formally speaking, this requires λ* to increase with the intensities of the deception biases and noises. Such a claim is hard to prove analytically because of the involved structure of the cost function. In particular, studying the second derivative of e_R(λ) is complicated by the asymmetric matrix inside the trace of e_R,v(λ); similarly, a unique root of the first derivative of e_R(λ) cannot be proved in general.

Nonetheless, the next results contribute towards our intuition by describing how the minimum points vary with the misbehavior. For convenience, we denote the diagonal elements of the covariance matrices by d_m .= V_mm and q_m .= Q_mm.
Proposition 3 (Optimal competition vs. misbehavior).Let λ min be a minimum point of e R (λ), then λ min is strictly increasing with d m , m ∈ M, and with Q w.r.t. the partial order of semidefinite matrices.
Proof: See Appendix G.

An immediate consequence of Proposition 3 is that, if there is a unique minimum point for some values of V and Q, then there is a unique minimum point for any "larger" V and Q, which corresponds to λ*. In words, a more disruptive misbehavior forces regular agents to become progressively more competitive, in order not to be deceived by misbehaving agents that can draw them away from the nominal average consensus. The next proposition refines this result by describing the optimal parameter λ* under "extreme" misbehavior.
Proof: See Appendix H.

According to intuition, the (trivial) optimal strategy for regular agents is to fully compete when the misbehavior is too disruptive. However, numerical tests in the next section show that λ* is significantly smaller than 1 in several cases.

IV. NUMERICAL EXPERIMENTS
In this section, we perform numerical experiments on the consensus error e_R to gain intuition about the behavior of the FJ dynamics under different topologies and misbehaviors, and to draw insight about effective choices of the parameter λ.
In Fig. 2, we considered a 3-regular communication graph with 100 agents and uniform weights. The prior covariance Σ was chosen such that, for each agent i, the cross-covariances obeyed an exponential decay, σ_ij = 10^(−0.2 ℓ(i,j)), with ℓ(i, j) being the length of a shortest path between i and j, and σ_i² ≡ 1. Further, we randomly selected one misbehaving agent and varied the intensity of its deception bias d within the range [0, 100], with a constant intensity of the deception noise q.
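The covariance construction above can be reproduced as follows. This is a sketch: we use a 3-regular circulant graph as a stand-in, since the article does not specify its graph beyond degree and size, and positive definiteness of the resulting Σ should be checked numerically before use.

```python
import numpy as np
from collections import deque

N = 100
# 3-regular circulant graph: agent i is linked to i-1, i+1 and its antipode.
adj = [[(i - 1) % N, (i + 1) % N, (i + N // 2) % N] for i in range(N)]

def bfs_dist(src):
    """Shortest-path lengths from src via breadth-first search."""
    dist = [-1] * N
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] == -1:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# sigma_ij = 10 ** (-0.2 * l(i, j)) with unit variances on the diagonal.
dists = np.array([bfs_dist(i) for i in range(N)])
Sigma = 10.0 ** (-0.2 * dists)
print(Sigma[0, 0], round(float(Sigma[0, 1]), 3))  # 1.0 0.631
```

By construction Σ is symmetric with unit diagonal, and correlations decay with graph distance, so nearby agents hold similar prior information.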
Figure 2a shows the error curve as d increases. All curves exhibit a unique minimum point λ*, plotted in Fig. 2b. Further, both the error curve and the minimum point increase with d, according to Propositions 2 and 3, showing that the competition level needs to grow with the intensity of the deception bias. The same qualitative behavior was observed by varying q.
Figure 3 shows the same experiment but with a diagonal covariance matrix Σ. We observe the same monotonic behavior of e_R and λ*. Further, we note that the error curve has a convex shape. In fact, even though it was not possible to prove it formally, all tests performed with diagonal covariance matrices resulted in strictly convex error functions.
We next studied what happens when increasing the number of misbehaving agents M. To better visualize changes in the behavior of the system, we fixed the set R to be a network composed of R = 100 regular agents and added misbehaving agents across the network. Figure 4 shows the error curve when 10 such agents are progressively introduced. In particular, in this example, all misbehaving agents are selected so as to affect different portions of the network, which allows λ* to take relatively low values, see Fig. 4b. Conversely, we note that, in the opposite scenario, some regular agents may be forced to almost freeze their observations (large λ) so as not to let the error grow too large. Figure 5 shows two cases where the misbehaving agents are connected to the same regular agents. In particular, each couple is added to the neighborhood of one regular agent (e.g., the first two misbehaving agents added to the network are neighbors of agent 1 ∈ R). In this case, λ* increases faster than in Fig. 4b, because the regular agents affected by multiple misbehaving agents need to keep their error small: in other words, they can hardly collaborate because of their misbehaving neighbors. We note that λ* grows faster when the observations of regular agents are correlated (Fig. 5a), because such agents can trust that their states may be similar even before starting the dynamical updates, and competing is less risky than collaborating. Finally, it is interesting to see that the error behavior observed above is consistent even if W^o is only row-stochastic, thus yielding a nonzero consensus error even in the nominal scenario. Figure 6 shows the consensus error and λ* when each node in the graph has degree 3 or 4 and W^o has uniform weights.
Other numerical tests, performed with different graphs, observation distributions, and choices of the misbehaving agents, show the same quasi-convex behavior of the error function and are omitted in the interest of space. This reinforces and extends the scope of our formal analysis, showing that the competition-collaboration trade-off indeed emerges as a natural resilience mechanism for multi-agent systems.
Remark 7 (Value of optimal λ). A remarkable feature of the FJ dynamics that emerges from the tests above is that λ* is usually small (within the interval [0.1, 0.2] in many cases). This translates into the practical advantage that adding a little competition may be sufficient to achieve a good level of resilience without forcing overly conservative updates by the regular agents.

A. Competition-Collaboration Trade-off: Analytical Insight
As mentioned earlier, the consensus error function e_R(λ) is hard to study and an exhaustive analysis does not seem possible. Some intuition can be achieved from a decomposition that we study next. To keep the notation light, we assume a single misbehaving agent (with label m) and a diagonal covariance matrix Σ. Then, we can expand the consensus error as in (IV.1), where L_i ∈ R^N is the ith column of L and L_i^{−m} ∈ R^{N−1} is obtained from L_i by removing its mth row (corresponding to the misbehaving agent). The error curves are shown in Fig. 7. Equation (IV.1) allows for an intuitive interpretation of the error, which leverages the notion of social power [48, 49].
In opinion dynamics, the social power is used to quantify how much the opinion of an agent affects the opinions of all agents. In particular, when opinions evolve according to the FJ dynamics, the element L_ij quantifies the influence of agent j on agent i: as L_ij increases, agent i is more affected by the initial opinion of agent j. The total social power of agent j is a symmetric and increasing function of all the elements {L_ij}_{i∈V}. Borrowing such concepts from opinion dynamics allows us to interpret the two contributions separated in (IV.1). The first, e_R,deception, quantifies the impact of the misbehaving agent m on the regular agents. The "social power" of m, as quantified through the vector L_m^{−m}, depends on the communication matrix W and on the parameter λ. Each coordinate of L_m^{−m} decreases with λ, meaning that the influence of the misbehaving agent weakens as regular agents anchor more tightly to their observations, and becomes zero when λ = 1, namely, in the full-competition regime. We formalize this discussion in the following lemma.
Lemma 2. The component e R,deception is decreasing with λ.
Proof: By computing the derivative of L w.r.t. λ, we see that each element of L_m^{−m} is nonincreasing with λ. Because L is a nonnegative matrix, this and Lemma 1 yield the claim. See Appendix E for the detailed calculations.
The second contribution, e_R,consensus, measures "democracy" among regular agents, i.e., it is proportional to the mismatch between how much each regular agent affects the others and the ideal value 1/R, which means that each agent affects all others equally. This cost is zero if and only if the submatrix of L corresponding to interactions among regular agents is the consensus matrix: this can happen only if they do not interact with the misbehaving agent [50], in which case the vector L_m^{−m} is zero (the misbehavior has no effect). In this special case, e_R,consensus is zero at λ = 0 and increases monotonically as the network shifts from a democratic system where agents fully collaborate (λ = 0) to a disconnected system where agents fully compete (λ = 1). Conversely, with misbehaving agents, e_R,consensus has a nontrivial minimizer (zoomed box in Fig. 7). For small λ, the misbehaving agent overrules all interactions and regular agents hardly affect each other. As λ increases, the interactions among regular agents become more relevant, making e_R,consensus decrease. However, as λ grows further, the competition among regular agents becomes too aggressive and makes them shift away from an ideal democratic system.
Overall, the error (CE) has two concurrent causes that yield two regimes: collaboration with misbehaving agents is most misleading for small λ, while for large λ the error is mainly due to regular agents competing against each other and rejecting useful information shared by neighbors. This matches the intuition from (FJ), where λ measures conservatism in the agents' updates.

V. THE ROLE OF THE COMMUNICATION NETWORK
In the previous sections, we discussed the benefits of using a competition-based approach (FJ dynamics) to tame misbehaving agents. We now shift attention to the communication network, in order to gain intuition about resilient topologies. In Section V-A, we introduce a second performance metric which we use to evaluate resilience to attacks. In Section V-B, we observe how performance varies with connectivity.

A. Performance Metrics
Besides the consensus error, we also aim to assess the energy spent to misbehave. To this aim, we interpret (III.3) as a controlled system where the misbehaving agents command the input x_M(·). The controllability Gramian in K steps, denoted by W_K, is defined for system (III.3) in (V.1). The controllability Gramian can be used to quantify the control effort: the trace of W_K, called controllability index, is inversely related to the control energy spent in K steps (averaged over the reachable subspace), as shown in the literature [51]-[53]. In words, a small controllability index means that the misbehaviors consume a lot of energy to steer x_R across the reachable space, which may be desirable in order to drain adversarial resources and hamper an external attack.
If M = 1, the controllability index can be written as in (V.3), resembling the consensus error component e_{R,deception} in (IV.1). Both Tr(W_K) and e_{R,deception} are decreasing with λ (i.e., the more competition, the better) and depend on the vectors W_R^k W_M that describe how attacks spread in k steps. The discount factor (1 − λ)^k makes the tail of the series in (V.3) negligible, enhancing the similarity between the two metrics.

Remark 8 (Controllability index). While we use Assumption 2 to compute e_R, the controllability Gramian in (V.1) is independent of the trajectory of the system, and hence the controllability index evaluates an "average trajectory" of the misbehaving agents.
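The controllability index above can be computed numerically. The sketch below assumes the attacked subsystem evolves with state matrix (1 − λ)W_R and input matrix (1 − λ)W_M (our reading of (III.3)); the ring topology is a hypothetical example:

```python
import numpy as np

def gramian_trace(W_R, W_M, lam, K):
    # Trace of the K-step controllability Gramian of the attacked subsystem,
    # with A = (1-lam)*W_R and input matrix B = (1-lam)*W_M.
    A, B = (1 - lam) * W_R, (1 - lam) * W_M
    G = np.zeros((W_R.shape[0], W_R.shape[0]))
    Ak = np.eye(W_R.shape[0])
    for _ in range(K):
        AkB = Ak @ B           # how the attack spreads in k steps
        G += AkB @ AkB.T
        Ak = A @ Ak
    return np.trace(G)

# hypothetical ring of 4 agents; agent 3 misbehaves (M = 1)
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
W_R, W_M = W[:3, :3], W[:3, 3:]

t_lo, t_hi = gramian_trace(W_R, W_M, 0.2, 10), gramian_trace(W_R, W_M, 0.6, 10)
print(t_lo, t_hi)  # more competition -> smaller controllability index
```

The (1 − λ) factor discounts each term of the series, so the trace decreases with λ, mirroring the behavior of e_{R,deception}.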

B. Network Connectivity vs. Resilience
We now explore how the connectivity of the communication network affects the performance and resilience of the dynamics (FJ). While in this section we aim for heuristic intuition, an analytical investigation is deferred to future work. To this aim, we fix the parameter λ = 0.1 and numerically evaluate the theoretical performance as the density of the communication network increases. Specifically, for each evaluated network, we assign uniform weights to the links and compute the consensus error e_R and the controllability index Tr(W_K) (where K is the reachability index), selecting some agents as misbehaving according to either of the following two cases:
• the worst-case misbehaving agent, i.e., the single agent whose misbehavior degrades performance the most;
• five misbehaving agents randomly drawn from V.
We consider three common classes of graphs: regular graphs with degree ∆; Erdös-Rényi random graphs, where a link between any two nodes exists with probability p; and random geometric graphs, where nodes are randomly placed in [0, 1]^2 and any two nodes are linked if their distance is not greater than a radius ρ. While regular graphs induce a doubly stochastic matrix even with simple uniform weights, this is generally not true for the other graphs. Hence, to evaluate the consensus error e_R, we considered both the deviation from the nominal average defined in (CE) and the deviation from the consensus value computed from the left Perron eigenvector of the nominal weight matrix W_o. Given that the results were qualitatively equal, we report only the first case in the interest of space.
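The random graph classes and uniform weighting above can be sampled in a few lines of NumPy. This is a self-contained sketch; the self-loop convention for isolated nodes is our own assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi(n, p):
    # symmetric 0/1 adjacency: each edge present independently with prob. p
    U = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    return U + U.T

def random_geometric(n, rho):
    # nodes uniform in [0,1]^2, edge iff Euclidean distance <= rho
    P = rng.random((n, 2))
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    A = (D <= rho).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def uniform_weights(A):
    # row-stochastic matrix with uniform weights on each node's links
    W = A.astype(float).copy()
    for i, d in enumerate(W.sum(axis=1)):
        if d == 0:
            W[i, i] = 1.0  # isolated node keeps its own value (our convention)
        else:
            W[i] /= d
    return W

W_er = uniform_weights(erdos_renyi(50, 0.1))
W_rg = uniform_weights(random_geometric(50, 0.25))
```

Averaging a performance metric over many such samples reproduces the kind of Monte Carlo evaluation described in the text.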
We consider networks with N = 100 agents and compute the performance for each network (i.e., a combination of class of graph and density parameter) by averaging over 1000 random graphs for the worst-case misbehaving agent and over 5000 random graphs for the random selection of misbehaving agents. The results are shown in Figs. 8 to 10, with the consensus error on the left and the controllability index on the right. The main insight is that, on average, increasing the graph connectivity mitigates attacks with respect to both metrics. Intuitively, this is because high degrees mean many interactions among regular agents that the misbehaving agent cannot control directly. The only remarkable difference is noted in random geometric graphs with the worst-case misbehaving agent (top-left box in Fig. 10), for which increasing the radius from 0.35 to 0.5 also increases the consensus error. This might be due to the formation of hubs, that is, densely connected areas that emerge and become denser as the radius increases, which an adversary can exploit to quickly spread damage to a large portion of the network. Notably, this phenomenon is absent both for the same class of graphs with random selection of misbehaving agents (bottom-left box of Fig. 10) and in the case of Erdös-Rényi random graphs (Fig. 9), which also typically feature some dense areas, even though not with the small-world structure typical of random geometric graphs, see Figs. 17a and 20a. A deeper study of this phenomenon is an interesting direction of future research.
Besides density and number of links, an aspect that also seems to play a role in resilience is degree balance among nodes. This can be deduced from the plots referring to the same selection strategy of misbehaving agents: for example, with worst-case misbehaving agents, regular graphs exhibit the smallest costs, random geometric graphs (where nodes usually have a similar number of neighbors) yield worse performance, and Erdös-Rényi random graphs (where both highly connected and almost isolated nodes coexist) have the largest costs.
To investigate more carefully how performance varies with degree balance, we consider almost-regular graphs, namely, graphs where nodes have degree either ∆ or ∆ − 1 for some ∆. These are "middle ways" between ∆- and (∆ − 1)-regular graphs, which could be ideally placed between two consecutive ticks (degrees) ∆ and ∆ − 1 on the x-axis of Fig. 8.
More specifically, starting from a ∆-regular graph, we iteratively remove one edge at a time so as to minimize performance degradation, selecting the worst-case misbehaving agent at each step. This amounts to removing the edge e that solves (V.6)-(V.7), where E is the set of edges (nonzero elements of W), and we reset W with uniform weights after each removal. To obtain almost-regular graphs, we remove at most one edge per node. Fig. 11 shows the performance obtained starting from a 4-regular graph with 50 nodes (100 edges in total, corresponding to the rightmost point in the plots) and gradually pruning edges according to (V.6)-(V.7) (proceeding leftwards on the x-axis). Also, the performance of a 3-regular graph obtained by removing a perfect matching from the initial 4-regular graph is shown for comparison. Remarkably, performance degrades (almost) monotonically for both metrics as edges are removed. This may be explained by a combination of lower connectivity and degree unbalance, which allows the adversary to exploit highly connected agents to inflict more effective damage on low-connected regular agents. Interestingly, while the consensus error increases smoothly as edges are removed, the controllability index exhibits "jumps". This is evident with large λ, as Fig. 12 shows. Such a behavior suggests the presence of critical subsets of edges and might give indications about critical links to be kept or removed.
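The greedy pruning procedure can be sketched as follows. The cost function below is a hypothetical convergence proxy standing in for the paper's worst-case metric in (V.6)-(V.7), and the circulant construction of the initial 4-regular graph is illustrative:

```python
import itertools
import numpy as np

def circulant_regular(n, offsets=(1, 2)):
    # 2*len(offsets)-regular graph on a ring (here: 4-regular)
    A = np.zeros((n, n))
    for i in range(n):
        for s in offsets:
            A[i, (i + s) % n] = A[(i + s) % n, i] = 1.0
    return A

def row_stochastic(A):
    deg = A.sum(axis=1, keepdims=True)
    return np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)

def convergence_proxy(W):
    # hypothetical cost standing in for the paper's worst-case metric:
    # distance of W^10 from the averaging matrix (1/n) * 11^T
    n = W.shape[0]
    return np.linalg.norm(np.linalg.matrix_power(W, 10) - np.ones((n, n)) / n)

def greedy_prune(A, cost, degree_floor):
    # Remove one edge at a time, picking the removal that keeps the cost
    # smallest, while no endpoint drops below degree_floor (cf. (V.6)-(V.7)).
    A = A.copy()
    removed = []
    while True:
        best = None
        for i, j in itertools.combinations(range(A.shape[0]), 2):
            if A[i, j] == 0 or A[i].sum() <= degree_floor or A[j].sum() <= degree_floor:
                continue
            A[i, j] = A[j, i] = 0.0      # tentative removal
            c = cost(row_stochastic(A))
            A[i, j] = A[j, i] = 1.0      # restore
            if best is None or c < best[0]:
                best = (c, i, j)
        if best is None:
            break
        _, i, j = best
        A[i, j] = A[j, i] = 0.0
        removed.append((i, j))
    return A, removed

A0 = circulant_regular(8)                 # 4-regular, 16 edges
A1, removed = greedy_prune(A0, convergence_proxy, degree_floor=3)
```

Enforcing the degree floor ∆ − 1 implements the "at most one removed edge per node" constraint, so every intermediate graph is almost-regular.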
Further, in almost all tests (not shown here in the interest of space), the 3-regular graph obtained by removing a perfect matching yielded better performance compared to the last edge removal (leftmost marker on the blue curve). This suggests that increasing connectivity may not be beneficial if it entails less degree balance: in Fig. 12, the 3-regular graph reduces both the consensus error and the controllability index w.r.t. the last graphs obtained by pruning edges (leftmost markers), which have one node with degree 4 and all others with degree 3. In particular, the latter metric is reduced by 22% and is comparable to graphs where most nodes have degree 4. However, as shown in Fig. 11, a regular graph of degree ∆ − 1 obtained by removing a perfect matching (not related to the performance metrics) from a ∆-regular graph may yield worse performance than almost-regular graphs. This gives further insight: an arbitrary edge selection may perform substantially worse compared to a task-related strategy.

VI. COMPARISON WITH EXISTING LITERATURE
In this section, we test our proposed protocol and compare its performance with other approaches in the literature.
Many techniques have been proposed to mitigate misbehaving agents. However, they usually focus on reaching a generic consensus, possibly while keeping the states of regular agents within a safe region (usually defined by the initial conditions), and do not consider the performance of average consensus, which is here key to the distributed optimization task, as argued in Section II. Indeed, most resilient consensus strategies aim to make the regular agents agree on, e.g., a common location (such as in robot gathering) in the face of misleading interactions, but need not relate the consensus value to the initial locations.
We compare against two strategies: Weighted Mean Subsequence Reduced (W-MSR) [26] and the Secure Accepting and Broadcasting Algorithm (SABA) [35]. As noted in Section I-A, many resilient algorithms adapt W-MSR to specific applications and enjoy the same guarantees. W-MSR suffers from two main limitations related to r-robustness, which is the cornerstone of all its theoretical analysis. First, while sufficient conditions for resilient consensus are clear, there is little clue about necessary conditions. This translates into an unknown behavior of the system if r-robustness does not hold. While r-robustness has proved a good characterization for update rules based on W-MSR, it raises practical limitations. On the one hand, the communication network may be fixed but not robust enough. On the other hand, checking r-robustness is computationally intractable for large-scale networks [32]. Thus, in some cases, for example with a sparse structure, a more conservative behavior with provable performance bounds may be preferred. Second, W-MSR requires estimating the number of misbehaving agents affecting the network. This may be an issue: if the estimate is too low, regular agents may be deceived and average consensus disrupted, whereas, if it is too high, the updates may be too conservative, possibly preventing convergence. Further, misbehaviors could occur in a time-varying fashion and make r-robustness fail at times, yielding poor performance overall. SABA does not estimate the number of misbehaving agents, but stores all received values in a buffer and processes them with a voting strategy. However, this design may impose impractical memory requirements, and the convergence of SABA is still ensured only under a minimal r-robustness.
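For reference, one W-MSR round with uniform weights can be sketched as below. This is our reading of the rule in [26]; the toy network and the parameter f (the assumed number of misbehaving neighbors) are illustrative:

```python
import numpy as np

def wmsr_step(x, neighbors, f):
    # One W-MSR round with uniform weights: each regular agent discards up
    # to f neighbor values strictly above its own and up to f strictly
    # below, then averages the surviving values together with its own.
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        vals = sorted(x[j] for j in nbrs)
        lower = [v for v in vals if v < x[i]]
        higher = [v for v in vals if v > x[i]]
        equal = [v for v in vals if v == x[i]]
        kept = (lower[f:] if len(lower) > f else []) + equal \
             + (higher[:-f] if len(higher) > f else [])
        kept.append(x[i])
        x_new[i] = sum(kept) / len(kept)
    return x_new

# toy network: agents 0-4 on a cycle; agent 5 misbehaves with a large bias
# and is absent from the dictionary below, so it is never updated
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 100.0])
neighbors = {0: [1, 4, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
x1 = wmsr_step(x, neighbors, f=1)
print(x1[:5])  # the biased value 100 never propagates to regular agents
```

The sketch also illustrates the sensitivity to f discussed above: with f = 0 the biased value would enter agent 0's average, while an overly large f discards legitimate neighbor values as well.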
In the next simulations, we consider N = 100 agents interacting through sparse communication networks, whose low connectivity hampers W-MSR and SABA, and matrices W_o with homogeneous weights. As performance metric, we compute the objective cost of the distributed optimization task (II.3), which equals e_R up to additive constants, cf. Section II-B. The observations are drawn as θ ∼ N(0, 0.1I) and each misbehaving agent m is assigned a deception bias v_m ∈ [2, 6]. For each scenario, we choose the parameter λ as the minimizer of the theoretical error e_R(λ) with V = 5I_M.
Figure 13 illustrates a network where agents interact on a 3-regular graph (Fig. 13a) with two misbehaving agents (red triangles). Importantly, 3-regular graphs are not r-robust enough to tolerate misbehaving agents, and therefore the theoretical guarantees of MSR-based approaches do not hold. We implement W-MSR assuming that each regular agent has at most one misbehaving neighbor, because larger values make the updates trivial, i.e., x_i(k) ≡ x_i(0). Such limitations allow dynamics (FJ) to outperform both W-MSR and SABA, as shown in Fig. 13b.
In our second experiment, we use a denser regular graph with degree ∆ = 4 as communication network, with six misbehaving agents (Fig. 14a). Some misbehaving agents communicate with the same regular agents (e.g., the two in the bottom-right portion of the graph), making this scenario challenging for W-MSR and SABA, whose r-robustness requirement is not met by the sparse communication graph. While both SABA and W-MSR perform poorly (Fig. 14b), our approach mitigates the attacks by setting λ to a suitably large value.
In Fig. 15, we consider a network where nodes have degree three or four (Fig. 15a) and W is row stochastic. In this case, one may question whether a doubly stochastic matrix could improve the performance of the standard consensus protocol, in light of its optimality under nominal conditions. However, in the presence of misbehaving agents, standard consensus always converges to the average of the misbehaving states regardless of the weights in W (cf. Assumption 2 and (C.2) in the Appendix). Conversely, Fig. 15b shows that dynamics (FJ) is robust against misbehaving agents even though it cannot retrieve the optimal solution under nominal conditions.

Fig. 17: Comparison among consensus, FJ, W-MSR [26], and SABA [35] with Erdös-Rényi random graph with p = 4/N and ten misbehaving agents.

In Figs. 16 and 17, we simulate the protocols over two Erdös-Rényi random graphs with link probability p = 3/N and p = 4/N, respectively (hence, each agent has (N − 1)p neighbors on average), and ten misbehaving agents (10% of the total number of agents). Note that the matrix W is row stochastic. In both cases, the dynamics (FJ) tames the numerous attacks better than the confronted approaches.
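The contrast between standard consensus and (FJ) under a single biased agent can be reproduced in a few lines. The ring topology, bias b, and λ below are hypothetical, and the stubborn-state model follows our reading of Assumption 2:

```python
import numpy as np

# ring of 5 agents with uniform weights; agent 4 holds a constant bias b
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i - 1) % 5] = W[i, (i + 1) % 5] = 0.5

b, lam, theta = 10.0, 0.3, np.zeros(5)
x_c, x_fj = theta.copy(), theta.copy()
for _ in range(2000):
    x_c = W @ x_c                                # standard consensus
    x_c[4] = b                                   # stubborn misbehaving state
    x_fj = (1 - lam) * (W @ x_fj) + lam * theta  # FJ dynamics
    x_fj[4] = b

target = theta[:4].mean()                        # nominal average of regulars
err_consensus = np.abs(x_c[:4] - target).mean()
err_fj = np.abs(x_fj[:4] - target).mean()
print(err_consensus, err_fj)
```

In this toy run, standard consensus drags every regular agent to the bias b, while the λ-anchoring of (FJ) keeps the regular states strictly closer to the nominal average.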
Finally, we address random geometric graphs with several radii (0.15 in Fig. 18, 0.20 in Fig. 19, and 0.25 in Fig. 20) and increasing numbers of misbehaving agents to counteract the higher density of the network. Also in this case, the matrix W is row stochastic. Interestingly, W-MSR is rather challenged by this class of graphs, yielding large costs. On the other hand, the dynamics (FJ) again manages to keep the error small compared to the other algorithms.

Fig. 20: Comparison among consensus, FJ, W-MSR [26], and SABA [35] with random geometric graph with ρ = 0.25 and twenty misbehaving agents.

Remark 9 (Advantages of FJ dynamics). The experiments above highlight some advantages of the proposed approach. First, the tunable parameter λ makes the algorithm flexible, because it can smoothly adapt to different attack intensities while still providing decent performance. Further, while the optimal parameterization requires exact knowledge of the adversary, which may not be reasonably assumed, our approach proves quite robust to the choice of a specific λ, as shown in Figs. 2-6, where the error remains small around λ*. This also holds with row-stochastic matrices, enabling simple weighting rules to be locally implemented. In contrast, in other approaches the cost function may be highly sensitive to some design parameters, e.g., the estimated number of misbehaving agents in W-MSR. Further, most works in the literature do not describe the system behavior when resilient consensus is not guaranteed. In fact, they usually either ensure that the states of the agents remain inside the convex hull of the initial conditions (which may be equivalent to setting λ = 1 in (FJ)), or let agents reach consensus but potentially be steered far away from the initial conditions [38]. Finally, the computational complexity and memory requirements are minimal, which is typically desired for resource-constrained devices.

VII. CONCLUSION AND FUTURE WORK
In this article, we have proposed a competition-based protocol based on the Friedkin-Johnsen dynamics to mitigate a class of misbehaviors that disrupt a quadratic distributed optimization task. We have presented formal results and numerical experiments on performance and optimal parametrization, and shown that our approach can outperform state-of-the-art algorithms. Further, we have discussed the competition-collaboration trade-off with analytical arguments that are insightful towards a deeper understanding of the fundamental properties of the system in the presence of misbehaviors. Finally, we have addressed network design and explored how resilience relates to graph connectivity, looking both at the optimization performance and at the energy spent to misbehave.
This approach opens several avenues for future research. First, it is desirable to address an effective design of the parameters λ_i in the realistic case where knowledge about the attack is scarce. This may also involve online reweighting of the protocol parameters, for example in the realm of recent work where weights are updated via trust information or evidence theory [38, 40].
Second, the more general and challenging scenario of distributed optimization should be addressed. In this case, a common approach is to alternate local descent steps with consensus updates to steer all agents towards a common point [19, 21]. Here, the task-tailored descent steps may critically impact performance even if the consensus steps are made resilient.
A third research avenue involves zero-sum games to model interactions among agents [43, 44]. In particular, in asymmetric zero-sum games, one player has more knowledge than the other, which is a suitable model for worst-case attacks. In this case, a relevant challenge is determining the optimal strategies for both players, to ultimately design resilient algorithms in the presence of intelligent adversaries.
Finally, it is interesting to investigate more deeply the design of the communication network. While graph robustness to node or edge failures has been extensively addressed [54]-[57], the novel element introduced by the dynamics (FJ) calls for a tailored investigation, as heuristically motivated in Section V. Also, in the spirit of a graph-theoretic approach, a comparison between classical centrality measures and worst-case attacks may be useful to gain insight about the agents that deserve higher attention.

A. Useful Lemmas
In this Appendix, we report some standard facts in linear algebra that will be used in the following proofs.

B. Proof of Lemma 1
We use the implicit function theorem to prove that each diagonal element P_ii of P is strictly decreasing with λ. Let g_i(λ, P_ii) = 0 denote the fixed-point equation for P_ii, where Q̃ = W_M Q W_M^⊤. The implicit function theorem holds for the solutions of g_i(λ, P_ii) = 0 with λ ∈ (0, 1): for i ∈ R, making the dependence on λ explicit and for λ ∈ (0, 1), we obtain the derivative of P_ii(λ). Finally, from e_{R,n}(λ) = Σ_{i∈R} P_ii(λ) and linearity of the derivative, it follows that e_{R,n}(λ) is decreasing.

C. Proof of Proposition 1
In this and all following proofs, the constant κ in (III.9) is neglected for the sake of simplicity.
We first compute the consensus error with λ = 1. With λ = 0, the average steady-state consensus value is determined by the biased observations of the malicious agents, i.e., x̄_R = θ̄_M.

D. Proof of Theorem 1
From (III.9), we get the expression in (D.1), where Lemmas A.1 and A.2 were used and κ > 0.

1) Part one: λ* < 1:
The argument of the trace in (D.2) has the expression above, where j* := arg max_{j∈R\{i}} σ_ij and m* := arg max_{m∈M} σ_im. Inequality (D.6) can be split into the following two cases.
(D.8) The final inequalities in (D.7)-(D.8) follow from Σ ≻ 0 and the Gershgorin circle theorem, which imply σ_i² > σ_ij for all i, j ∈ V. It follows that the derivative (D.2) is positive and the consensus error (CE) is increasing in a left neighborhood of 1. By continuity of (D.1), the minimum points satisfy λ* < 1.
2) Part two: λ* > 0: From Lemma 1, the error term e_{R,n}(λ) has negative right derivative at λ = 0. By continuity of the derivative of e_{R,v}(λ), we can compute the limit in (D.9). The matrix Γ can be computed from the spectral decomposition of W. In particular, its elements are finite, Γ_1 is nonnegative, and Γ_2 is nonpositive (details in Appendix E). Putting together (D.9) and Lemma 1, the right derivative of e_R(λ) at λ = 0 is negative if and only if the stated inequality holds.

E. Computation of Matrix Γ
We now show how to derive Γ from W and discuss the sign of its elements. For the sake of simplicity, we assume that the nominal weight matrix W_o is symmetric, which implies that both W_o and W are diagonalizable. If W is not diagonalizable, a similar derivation (with more tedious but conceptually identical calculations) can be carried out by considering the Jordan canonical form. This is because a straightforward extension of Lemma A.3 shows that W and Γ share the same (chains of) generalized eigenvectors.
Computation of Γ. The derivative of L follows from Lemma A.3. In particular, the dominant eigenvector v_W = 1 (associated with λ_W = 1) is in the kernel of dL/dλ for any λ. As for the other eigenvectors, by letting λ go to zero in (E.2), one gets the limit expressions. Finally, the eigendecomposition of Γ is obtained from the eigenvectors v_W and the eigenvalues (1 − λ_W)^{-1}, plus the kernel.
Sign of Γ_1 and Γ_2. As regards Γ_1, note that the upper-left block in W is identically zero and that L is a stochastic matrix for any value of λ: hence, as λ becomes larger than zero, (some) elements in L_1 become positive, and thus their derivative at λ = 0⁺ is also positive.
As for Γ_2, define the following block partitions, with W_1, L_1 ∈ R^{R×R} and W_2, L_2 ∈ R^{R×M}. Then, the stated identity holds, which implies, for any λ ∈ (0, 1), the corresponding bound. In particular, the limit of the derivative of the element L_im at λ = 0⁺ is nonpositive by virtue of the theorem of sign permanence.

F. Proof of Proposition 2
Dependence on V. Note that e_{R,n} is independent of V. From (III.9), we highlight the contribution of v to the error e_R as follows: e_{R,v} = Tr(L_2 V L_2^⊤) + κ, (F.1) where κ does not depend on V and we use the block partition of L. The matrix L_2 is positive (see [50] and the discussion in Section IV-A). Then, if V_1 ≻ V_2, it holds Tr(L_2 V_1 L_2^⊤) > Tr(L_2 V_2 L_2^⊤), and hence the trace in (F.1) is strictly increasing with V.

Dependence on Q. Note that e_{R,v} is independent of Q. Let P_1 and P_2 denote the solutions of (III.6) with Q = Q_1 and Q = Q_2, respectively. If Q_1 ≻ Q_2, then Q̃_1 ≻ Q̃_2, and it is known that P_1 ≻ P_2, from which the claim follows.
The argument of the trace in (G.1) is given by the expression involving N, where N is the nonpositive matrix defined above and κ does not depend on d_m. Note that N_mm ≠ 0, because the opposite would imply that the m-th malicious agent has no interactions with regular agents. It follows that for any λ < 1 there exists d_m ≥ 0 such that (H.1) is negative, as given by inequality (H.3). The claim follows by combining (H.3) with Proposition 3.

Dependence on Q. We use (B.3) to highlight q_m in (D.1), obtaining (H.4), where κ does not depend on q_m. Also, P is increasing with q_m, and thus [W_R P_1 W_R^⊤]_ii is as well. Hence, for any λ < 1, there exists q_m ≥ 0 such that (H.4) is negative, as given by inequality (H.5).

Fig. 2: FJ dynamics consensus error with 3-regular graph, exponential decay of observation covariances, and one misbehaving agent. The arrow shows how the error curve varies as the intensity d of the deception bias increases.
Optimal λ as a function of d.

Fig. 4: FJ dynamics consensus error with 3-regular graph and diagonal prior covariance matrix Σ. The arrow in the left box shows how the error varies as the number of misbehaving nodes M increases (with R = 100).

Fig. 5: Optimal λ as a function of M with d = 10. Each pair of misbehaving agents affects the same regular agent (e.g., the first two belong to N_1).

Fig. 11: Consensus error (left) and controllability index (right) for almost-regular graphs starting from a 4-regular graph with λ = 0.2. Edge removal proceeds from the right (initially, all 100 edges are present) towards the left. At each iteration, one edge is removed so as to minimize performance degradation according to (V.6)-(V.7) while enforcing that each node has degree either three or four. At the last iteration (leftmost diamonds), most or all nodes have degree three, with possibly a few nodes left with degree four. The red squares show the performance metrics for a 3-regular graph obtained by removing a perfect matching (a set of edges) from the initial 4-regular graph.

G. Proof of Proposition 3
Dependence on V. In the following, we make the dependence of the error e_R on d_m explicit. Let us compute the partial derivative of the error first w.r.t. λ and then w.r.t. d_m:

∂²e_R(λ, d_m) / (∂d_m ∂λ) = (1/λ) Tr( L (dΣ(d_m)/dd_m) L^⊤ (I − W^⊤ L^⊤) S_R^⊤ S_R ).
The upper-left block is a negative matrix for all λ ∈ (0, 1) and is the zero matrix for λ = 1. Hence, the derivative of the consensus error w.r.t. λ in (D.1) is strictly decreasing with d_m for any λ ∈ (0, 1). By continuity of (D.1), the minimum points of e_R(λ) are strictly increasing with d_m.

Dependence on Q. Note that e_{R,v} is independent of Q. We consider the derivative of e_{R,n} w.r.t. λ:

de_{R,n}(λ)/dλ = − Σ_{i∈R} 2(1 − λ) ([W_R P W_R^⊤]_ii + Q̃_ii) / (1 − (1 − λ)² W²_ii).  (G.6)

Let Q_1 ≻ Q_2; then it holds P_1 ≻ P_2, which implies [W_R P_1 W_R^⊤]_ii > [W_R P_2 W_R^⊤]_ii for all i ∈ R. Further, it holds Q̃_1 ≻ Q̃_2 and [Q̃_1]_ii > [Q̃_2]_ii for all i ∈ R. Combining these two facts, we conclude that (G.6) is strictly decreasing. The statement follows by the same argument as the case above.

H. Proof of Proposition 4

Dependence on V. We expand (D.1) to highlight d_m:

Σ_{i∈R} 2(1 − λ) ([W_R P W_R^⊤]_ii + q_m [Q̃]²_im) / (1 − (1 − λ)² W²_ii).  (H.5)

The claim follows by combining (H.5) with Proposition 3.

Luca Ballotta received the Master's degree in Automation Engineering and the Ph.D. degree in Information Engineering from the University of Padova, Italy, in 2019 and 2023, respectively. He is currently a research fellow at the University of Padova, Department of Information Engineering. He was a Visiting Student at the Massachusetts Institute of Technology in 2020 and 2022. He was awarded the Young Author Prize at the 2020 IFAC World Congress. His research interests include multiagent systems and networked control systems under resource constraints, resilient distributed optimization, and learning-based safe control.