Belief Control Strategies for Interactions over Weakly-Connected Graphs

In diffusion social learning over weakly-connected graphs, it has been shown recently that influential agents shape the beliefs of non-influential agents. This paper analyzes this mechanism more closely and addresses two main questions. First, the article examines how much freedom influential agents have in controlling the beliefs of the receiving agents, namely, whether receiving agents can be driven to arbitrary beliefs and whether the network structure limits the scope of control by the influential agents. Second, even if there is a limit to what influential agents can accomplish, this article develops mechanisms by which they can lead receiving agents to adopt certain beliefs. These questions raise interesting possibilities about belief control over networked agents. Once addressed, one ends up with design procedures that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations or convictions. The theoretical findings are illustrated by means of examples.


I. INTRODUCTION AND MOTIVATION
Several studies have examined the propagation of information over social networks and the influence of the graph topology on these dynamics [2]-[28]. In recent works [27]-[29], an intriguing phenomenon was revealed whereby weakly-connected graphs enable certain agents to control the opinions of other agents to a great degree, irrespective of the observations sensed by these latter agents. For example, agents can be made to believe that it is "raining" while they happen to be observing "sunny conditions". Weak graphs arise in many contexts, including popular social platforms like Twitter and similar online tools. In these graphs, the topology consists of multiple sub-networks where at least one sub-network (called a sending sub-network) feeds information in one direction to other network components without receiving back (or being interested in) any information from them. For example, a celebrity user on Twitter may have a large number of followers (running into the millions), while the user himself may follow none (or only a small fraction) of these users. For such networks with weak graphs, it was shown in [28], [29] that, irrespective of the local observations sensed by the receiving agents, a sending sub-network plays a domineering role and influences the beliefs of the other groups in a significant manner. In particular, receiving agents can be made to arrive at incorrect inference decisions; they can also be made to disagree on their inferences among themselves.
The purpose of this article is to examine these dynamics more closely and to reveal new critical properties, including the development of control mechanisms. We have three main contributions. First, we show that the internal graph structure connecting the receiving agents imposes a form of resistance to manipulation, but only to a certain degree. Second, we characterize the set of states that can be imposed on receiving networks; while this set is large, it turns out that it is not unlimited. And, third, for any attainable state, we develop a control mechanism that allows sending agents to force the receiving agents to reach that state and behave in that manner. A short version of this work appears in the conference publication [1]. This work was supported in part by NSF grants CCF-1524250 and ECCS-1407712.

A. Weakly-Connected Graphs
We start the exposition by reviewing the structure of weak graphs from [27]-[29] and by introducing the relevant notation. As explained in [27], a weakly-connected network consists of two types of sub-networks: S (sending) sub-networks and R (receiving) sub-networks. Each individual sub-network is a connected graph in which any two agents are linked by a path. In addition, every sending sub-network is strongly-connected, meaning that at least one of its agents has a self-loop. The flow of information between S and R sub-networks is asymmetric, as it only happens in one direction, from S to R. Figure 1 shows one example of a weakly-connected network. The two top sub-networks are sending sub-networks and the two bottom sub-networks are receiving sub-networks. The weights on the connections from S to R networks are positive but can be arbitrarily small. Observe how links from S-subnetworks to R-subnetworks flow in one direction only, while all other links can be bi-directional. We index the sending sub-networks by s ∈ {1, 2, ..., S}, and the receiving sub-networks by r ∈ {S+1, ..., S+R}. Each sub-network s has N_s agents, and the total number of agents in the S sub-networks is denoted by N_gS. Similarly, each sub-network r has N_r agents, and the total number of agents in the R sub-networks is denoted by N_gR. We let N denote the total number of agents across all sub-networks, i.e., N = N_gS + N_gR, and use N = {1, 2, ..., N} to refer to the indexes of all agents. We assign a pair of non-negative weights, {a_ℓk, a_kℓ}, to the edge connecting any two agents k and ℓ. The scalar a_ℓk represents the weight with which agent k scales data arriving from agent ℓ and, similarly, for a_kℓ. We let N_k denote the neighborhood of agent k, which consists of all agents connected to k.
Each agent k scales data arriving from its neighbors in a convex manner, i.e., the weights satisfy:

a_ℓk ≥ 0,   Σ_{ℓ∈N_k} a_ℓk = 1,   a_ℓk = 0 if ℓ ∉ N_k        (1)

Following [27], [29], and without loss of generality, we assume that the agents are numbered such that the indexes of N represent first the agents from the S sub-networks, followed by those from the R sub-networks. In this way, if we collect the {a_ℓk} into a large N × N combination matrix A, then this matrix will have an upper block-triangular structure of the following form:

    [ A_1   0   ⋯   0   |        ]
    [  0   A_2  ⋯   0   |  T_SR  ]
A = [  ⋮         ⋱   ⋮   |        ]        (2)
    [  0    0   ⋯  A_S  |        ]
    [  0    0   ⋯   0   |  T_RR  ]

The matrices {A_1, ..., A_S} on the upper left corner are left-stochastic primitive matrices corresponding to the S strongly-connected sub-networks. Likewise, the matrices {A_{S+1}, ..., A_{S+R}} in the lower right-most block correspond to the internal weights of the R sub-networks. We denote the block structure of A in (2) by:

A = [ T_SS  T_SR ]
    [  0    T_RR ]        (3)

where T_SS = blockdiag{A_1, ..., A_S} is N_gS × N_gS, T_SR is the N_gS × N_gR matrix of weights from sending to receiving agents, and T_RR is the N_gR × N_gR matrix of internal weights of the receiving sub-networks. Notation: We use lowercase letters to denote vectors, uppercase letters for matrices, plain letters for deterministic variables, and boldface for random variables. We also use (·)^T for transposition, (·)^{-1} for matrix inversion, and ⪰ and ⪯ for vector element-wise comparisons.
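The block structure in (2)-(3) can be illustrated with a minimal numerical sketch; the weights below are toy assumptions (one 2-agent sending block and one 2-agent receiving block), not values from the paper:

```python
import numpy as np

# Toy upper block-triangular combination matrix A as in (2)-(3):
# sending block T_SS on the top-left, coupling block T_SR, zeros below.
T_SS = np.array([[0.6, 0.3],
                 [0.4, 0.7]])           # one strongly-connected sending block
T_SR = np.array([[0.0, 0.1],
                 [0.2, 0.1]])           # weights from S-agents to R-agents
T_RR = np.array([[0.5, 0.3],
                 [0.3, 0.5]])           # internal weights of the receiving block

A = np.block([[T_SS, T_SR],
              [np.zeros((2, 2)), T_RR]])

# Left-stochastic: every column of A adds up to one, as required by (1).
assert np.allclose(A.sum(axis=0), 1.0)
```

Note that the zero lower-left block encodes the asymmetry of weak graphs: no information flows from receiving agents back to sending agents.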

II. DIFFUSION SOCIAL LEARNING
In order to characterize the set of attainable states, and to design mechanisms for belief control over weak graphs, we first summarize the main finding from [29]. The work in that reference revealed the limiting states reached by receiving agents over weak graphs and derived an expression for these states. Once we review that expression, we will examine its implications closely. In particular, we will conclude from it that not all states are attainable and that receiving sub-networks have an inherent resistance mechanism, which we characterize analytically. We then show how sending sub-networks can exploit this information to control the beliefs of receiving agents and to sow discord among them.
Thus, following [29], we assume that each sub-network is observing data that arise from a true state value, denoted generically by θ°, which may differ from one sub-network to another. We denote by Θ the set of all possible states, by θ°_s the true state of sending sub-network s, and by θ°_r the true state of receiving sub-network r, where both θ°_s and θ°_r are in Θ. At each time i, each agent k will possess a belief µ_{k,i}(θ), which represents a probability distribution over θ ∈ Θ. Agent k continuously updates its belief according to two information sources: 1) The first source consists of observational signals {ξ_{k,i}} streaming in locally at agent k. These signals are generated according to some known likelihood function parametrized by the true state of agent k. We denote the likelihood function by L_k(·|θ°_r) if agent k belongs to receiving sub-network r, or L_k(·|θ°_s) if agent k belongs to sending sub-network s.
2) The second source consists of information received from the neighbors of agent k, denoted by N_k. Agent k and its neighbors are connected by edges and continuously communicate and share their opinions.
Using these two pieces of information, each agent k then updates its belief according to the following diffusion social learning rule [2]:

ψ_{k,i}(θ) = L_k(ξ_{k,i}|θ) µ_{k,i−1}(θ) / Σ_{θ′∈Θ} L_k(ξ_{k,i}|θ′) µ_{k,i−1}(θ′)
µ_{k,i}(θ) = Σ_{ℓ∈N_k} a_ℓk ψ_{ℓ,i}(θ)        (4)

In the first step of (4), agent k updates its belief, µ_{k,i−1}(θ), based on its observed private signal ξ_{k,i} by means of the Bayesian rule and obtains an intermediate belief ψ_{k,i}(θ). In the second step, agent k learns from its social neighbors through cooperation.
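One iteration of the two-step rule (4) can be sketched as follows; the likelihood values, belief vectors, and combination weights are toy assumptions for a two-state, two-neighbor example:

```python
import numpy as np

# Sketch of one iteration of the diffusion social learning rule (4).
def bayesian_update(mu_prev, likelihoods):
    """First step: psi_k(θ) ∝ L_k(ξ|θ) µ_{k,i-1}(θ), normalized over Θ."""
    psi = likelihoods * mu_prev
    return psi / psi.sum()

def combine(psis, a_k):
    """Second step: µ_k(θ) = Σ_ℓ a_ℓk ψ_ℓ(θ); a_k holds the weights of N_k."""
    return psis.T @ a_k

# Two states; agent k combines its own intermediate belief with a neighbor's.
mu_prev = np.array([0.5, 0.5])
psi_self = bayesian_update(mu_prev, np.array([0.8, 0.2]))   # toy L_k(ξ|θ)
psi_nbr = np.array([0.6, 0.4])                              # neighbor's ψ
mu_new = combine(np.vstack([psi_self, psi_nbr]), np.array([0.7, 0.3]))
assert np.isclose(mu_new.sum(), 1.0)   # the belief remains a distribution
```

The convexity of the weights in (1) is what guarantees that the combined belief stays a valid probability distribution.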
A consensus-based strategy can also be employed in lieu of (4), as was done in the insightful works [3], [30], although the latter reference focuses mainly on the problem of pure averaging and not on social learning and requires the existence of certain anchor nodes. In this work, we assume all agents are homogeneous and focus on the diffusion strategy (4) due to its enhanced performance and wider stability range, as already proved in [2] and further explained in the treatments [31], [32]. Other models for social learning can be found in [4], [5], [7], [12], [18], [33], [34].
When agents of sending sub-networks follow this model, they can learn their own true states. Specifically, it was shown in [2], [29] that

lim_{i→∞} µ_{k,i}(θ°_s) = 1        (5)

for any agent k that belongs to sending sub-network s. Result (5) means that the probability measure concentrates at location θ°_s, while all other possibilities in Θ have zero probability. On the other hand, agents of receiving sub-networks will not be able to find their true states. Instead, their beliefs will converge to a fixed distribution defined over the true states of the sending sub-networks, as follows [29]. First, let

µ_{s,i}(θ) = col{ µ_{k_s(1),i}(θ), ..., µ_{k_s(N_s),i}(θ) }        (6)

collect all beliefs from agents that belong to sub-network s, where the notation k_s(n) denotes the index of the n-th agent within sub-network s, i.e.,

k_s(n) ∈ {1, 2, ..., N_gS}        (7)

and n ∈ {1, 2, ..., N_s}. Likewise, let

µ_{r,i}(θ) = col{ µ_{k_r(1),i}(θ), ..., µ_{k_r(N_r),i}(θ) }        (8)

collect all beliefs from agents that belong to sub-network r, where the notation k_r(n) denotes the index of the n-th agent within sub-network r, i.e.,

k_r(n) ∈ {N_gS + 1, ..., N}        (9)

and n ∈ {1, 2, ..., N_r}. Furthermore, let

µ_{S,i}(θ) = col{ µ_{s,i}(θ) }_{s=1}^{S}        (10)

collect all beliefs from all S-type sub-networks. Likewise, let

µ_{R,i}(θ) = col{ µ_{r,i}(θ) }_{r=S+1}^{S+R}        (11)

collect the beliefs from all R-type sub-networks. Note that these belief vectors are evaluated at a specific θ ∈ Θ. Then, the main result in [28], [29] shows that, under some reasonable technical assumptions, it holds that

lim_{i→∞} µ_{R,i}(θ) = W^T lim_{i→∞} µ_{S,i}(θ)        (12)

where W is the N_gS × N_gR matrix given by:

W = T_SR (I − T_RR)^{-1}        (13)

and I is the identity matrix of size N_gR. The matrix W has non-negative entries and the sum of the entries in each of its columns is equal to one [27]. Expression (12) shows how the beliefs of the sending sub-networks determine the limiting beliefs of the receiving sub-networks through the matrix W. We can expand (12) to reveal the influence of the sending networks more explicitly as follows. Let w_k^T denote the row in W^T that corresponds to receiving agent k and partition it into sub-vectors as follows:

w_k^T = [ w_{k,N_1}^T  w_{k,N_2}^T  ⋯  w_{k,N_S}^T ]        (14)

where the {N_1, N_2, ..., N_S} are the numbers of agents in each sub-network s ∈ {1, 2, ..., S}. Then, according to (12), we have

lim_{i→∞} µ_{k,i}(θ) = Σ_{s=1}^{S} w_{k,N_s}^T lim_{i→∞} µ_{s,i}(θ)        (15)

Note that this relation is for a specific θ ∈ Θ.
Let us focus on the case θ = θ°_s, assuming it is the true state parameter of the s-th sending network only. We know from [2] and (5) that each agent in the sending sub-network s will learn its true state θ°_s. Therefore, from (10),

lim_{i→∞} µ_{s,i}(θ°_s) = 1_{N_s},   lim_{i→∞} µ_{s′,i}(θ°_s) = 0_{N_{s′}} for s′ ≠ s        (16)

where 1_{N_s} denotes a column vector of length N_s whose elements are all one, and 0_{N_{s′}} denotes a column vector of length N_{s′} whose elements are all zero. Combining (15) and (16), we get

lim_{i→∞} µ_{k,i}(θ°_s) = w_{k,N_s}^T 1_{N_s}        (17)

This means that the likelihood of state θ°_s at the receiving agent k is equal to the sum of the entries of the weight vector, w_{k,N_s}, corresponding to sub-network s. More generally, for any other state parameter θ ∈ Θ, its likelihood is given from (12) by

lim_{i→∞} µ_{k,i}(θ) = w_{k,N_s}^T 1_{N_s} if θ = θ°_s for some s, and 0 otherwise        (18)

Result (18) means that the belief of receiving agent k will converge to a distribution defined over the true states of the sending sub-networks, which we collect into the set:

Θ° = { θ°_1, θ°_2, ..., θ°_S }        (19)

Expression (12) shows how the limiting distributions of the sending sub-networks determine the limiting distributions of the receiving sub-networks through the matrix W^T. In other words, it indicates how influential agents (from within the sending sub-networks) can control the steady-state beliefs of receiving agents. Two critical questions arise at this stage: (a) first, how much freedom do influential agents have in controlling the beliefs of the receiving agents? That is, can receiving agents be driven to arbitrary beliefs or does the network structure limit the scope of control by the influential agents? and (b) second, even if there is a limit to what influential agents can accomplish, how can they ensure that receiving agents will end up with particular beliefs?
Questions (a) and (b) raise interesting possibilities about belief (or what we will sometimes refer to as "mind") control. In the next sections, we will address these questions and we will end up with the conditions that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations (or "convictions").
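The limiting-belief computation in (12)-(18) can be sketched numerically; the matrices below are toy assumptions with two single-agent sending sub-networks and two receiving agents:

```python
import numpy as np

# Sketch of (12)-(18): W = T_SR (I - T_RR)^{-1} fixes the limiting beliefs
# of the receiving agents over the set Θ° of sending true states.
T_SR = np.array([[0.3, 0.0],
                 [0.0, 0.4]])            # S -> R weights (toy values)
T_RR = np.array([[0.4, 0.3],
                 [0.3, 0.3]])            # internal R weights (toy values)
W = T_SR @ np.linalg.inv(np.eye(2) - T_RR)

# Columns of W sum to one, so each column of W^T is a distribution.
assert np.allclose(W.sum(axis=0), 1.0)

# With one agent per sending sub-network, the block w_{k,N_s} is a scalar:
# the limiting belief of receiving agent k at θ°_s is simply W[s-1, k-1].
belief_agent1 = W[:, 0]                  # distribution over {θ°_1, θ°_2}
assert np.all(belief_agent1 >= 0) and np.isclose(belief_agent1.sum(), 1.0)
```

The example also confirms the property quoted from [27]: the columns of W sum to one, so each receiving agent ends up with a valid probability distribution over Θ°.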

III. BELIEF CONTROL MECHANISM
Observe from expression (18) that the limiting beliefs of receiving agents depend on the columns of W = T_SR (I − T_RR)^{-1}. Note also that the entries of W are determined by the internal combination weights within the receiving networks (i.e., T_RR) and by the combination weights from the S to the R sub-networks (i.e., T_SR). The question we would like to examine now is: given a set of desired beliefs for the receiving agents, is this set always attainable? Or does the internal structure of the receiving sub-networks impose limitations on where their beliefs can be driven? To answer this question, we consider the following problem setting. Let q_k(θ) denote some desired limiting distribution for receiving agent k (i.e., q_k(θ) denotes what we desire the limiting distribution µ_{k,i}(θ) in (18) to become as i → ∞). We would like to examine whether it is possible to force agent k to converge to any q_k(θ), i.e., whether it is possible to find a matrix T_SR so that the belief of receiving agent k converges to this specific q_k(θ).

A. Motivation
In this first approach, we are interested in designing T_SR while T_RR is assumed fixed and known. This scenario allows us to understand in what ways the internal structure of the receiving networks limits the effect of external influence by the sending sub-networks. It also allows us to examine the range of belief control over the receiving sub-networks (i.e., how much freedom the sending sub-networks have in selecting these beliefs). Note that the entries of T_SR correspond to the weights by which the receiving agents scale information arriving from the sending sub-networks. These weights are set by the receiving agents and, therefore, are not under the direct control of the sending sub-networks. As such, it is fair to question whether it is useful to pursue a design procedure for selecting T_SR, since its entries are not under the direct control of the designer or the sending sub-networks. The useful point to note, however, is that the entries of T_SR, although set by the receiving agents, can still be interpreted as a measure of the level of trust that receiving agents have in the sending agents they are connected to. The higher this level of confidence between two agents, the larger the scaling weight on the link connecting them. In many applications, these levels of confidence (and, therefore, the resulting scaling weights) can be influenced by external campaigns (e.g., through advertisement or by way of reputation). In this way, we can interpret the problem of designing T_SR as a way to guide a campaign that influences receiving agents to set their scaling weights to desirable values. The argument will show that by influencing and knowing T_SR, sending agents end up controlling the beliefs of receiving agents in desirable ways. For the analysis in the sequel, note that by fixing T_RR and designing T_SR, we are in effect fixing the sum of each column of T_SR and, accordingly, the overall external influence on each receiving agent.
In this way, the problem of designing T_SR amounts to deciding how much influence each individual sending sub-network should have in driving the beliefs of the receiving sub-networks.

B. Conditions for Attainable Beliefs
Given these considerations, let us now show how to design T_SR to attain certain beliefs. As is already evident from (18), the desired belief q_k(θ) at any agent k needs to be a probability distribution defined over the true states of all sending sub-networks, i.e., over the set Θ° in (19). We assume, without loss of generality, that the true states of the sending sub-networks are distinct, so that |Θ°| = S. If two or more sending sub-networks have the same true state, we can merge them together and treat them as one sending sub-network; although this enlarged component is not necessarily connected, it nevertheless consists of strongly-connected elements, and the same arguments and conclusions will apply.
We collect the desired limiting beliefs for all receiving agents into the vector:

q(θ) = col{ q_{S+1}(θ), ..., q_{S+R}(θ) }        (21)

which has length N_gR. Then, from (12), we must have:

q(θ) = W^T lim_{i→∞} µ_{S,i}(θ)        (22)

Evaluating this expression at the successive states θ°_1, ..., θ°_S, and using (16), leads to

Q = E W        (23)

where Q is the S × N_gR matrix that collects the desired beliefs for all receiving agents (its (s,k)-th entry is q_k(θ°_s)), and E is the S × N_gS indicator matrix whose (s,j)-th entry is one if sending agent j belongs to sub-network s and is zero otherwise. Using (13), we rewrite (23) more compactly in matrix form as:

E T_SR = Q (I − T_RR)        (24)

Therefore, given Q and T_RR, the design problem becomes one of finding a matrix T_SR that satisfies (24) subject to the following constraints:

1_{N_gS}^T T_SR + 1_{N_gR}^T T_RR = 1_{N_gR}^T        (25)
T_SR ⪰ 0        (26)
t_SR,k(j) = 0, if receiving agent k is not connected to sending agent j        (27)

The first condition (25) holds because the entries on each column of A defined in (3) add up to one. The second condition (26) ensures that each element of T_SR is a non-negative combination weight. The third condition (27) takes into account the network structure, where t_SR,k represents the column of T_SR that corresponds to receiving agent k, and t_SR,k(j) represents the j-th entry of this column (which corresponds to sending agent j; see Fig. 2). In other words, if receiving agent k is not connected to sending agent j, the corresponding entry in T_SR should be zero. It is useful to note that condition (25) is actually unnecessary and can be removed. This is because if we can find T_SR that satisfies (24), then condition (25) will be automatically satisfied. To see this, we first sum the elements of the columns on the left-hand side of (24) and observe that

1_S^T E T_SR = 1_{N_gS}^T T_SR        (28)

since each column of E contains a single unit entry. We then sum the elements of the columns on the right-hand side of (24) to get

1_S^T Q (I − T_RR) = 1_{N_gR}^T − 1_{N_gR}^T T_RR        (29)

This is because 1_S^T Q = 1_{N_gR}^T, since the entries on each column of Q add up to one. Thus, equating (28) and (29), we find that (25) must hold. The problem we are attempting to solve is then equivalent to finding T_SR that satisfies (24) subject to

T_SR ⪰ 0        (30)
t_SR,k(j) = 0, if receiving agent k is not connected to sending agent j        (31)

To find T_SR that satisfies (24) under the constraints (30)-(31), we can solve separately for each column of T_SR. Let t_RR,k and q_k, respectively, denote the columns of T_RR and Q that correspond to receiving agent k.
Then, relations (24) and (30)-(31) imply that column t_SR,k must satisfy:

E t_SR,k = q_k − Q t_RR,k        (32)

subject to

t_SR,k ⪰ 0        (33)
t_SR,k(j) = 0, if receiving agent k is not connected to sending agent j        (34)

The problem is then equivalent to finding t_SR,k for each receiving agent k such that t_SR,k satisfies (32)-(34). For Q to be attainable (i.e., for the beliefs of all receiving agents to converge to the desired beliefs), finding such a t_SR,k should be possible for each receiving agent k. However, finding t_SR,k that satisfies (32) under the constraints (33)-(34) may not always be possible: the desired belief matrix Q will need to satisfy certain conditions, so that it is not possible to drive the receiving agents to an arbitrary belief matrix Q. Before stating these conditions, we introduce two auxiliary matrices. We define first the following difference matrix, which appears on the right-hand side of (24) and is known:

V = Q (I − T_RR)        (35)

Note that V has dimensions S × N_gR. The k-th column of V, which we denote by v_k, appears on the right-hand side of (32), i.e.,

v_k = q_k − Q t_RR,k        (36)

The (s,k)-th entry of V is then:

v_k(s) = q_k(θ°_s) − Σ_ℓ t_RR,k(ℓ) q_ℓ(θ°_s)        (37)

Each (s,k)-th entry of V represents the difference between the desired limiting belief at θ°_s of receiving agent k and a weighted combination of the desired limiting beliefs of its neighboring receiving agents. We remark that this sum includes agent k itself if t_RR,k(k) is not zero and, similarly, any receiving agent ℓ for which t_RR,k(ℓ) is not zero. In this way, the sum runs only over the neighbors of agent k, because any agent that is not a neighbor of agent k has a zero corresponding entry in t_RR,k.
Let C denote an S × N_gR binary matrix, with as many rows as the number of sending sub-networks and as many columns as the number of receiving agents. The matrix C is an indicator matrix that specifies whether a receiving agent is connected to a sending sub-network: the (s,k)-th entry of C is one if receiving agent k is connected to sending sub-network s; otherwise, it is zero. We are now ready to state when a given set of desired beliefs is attainable.
Theorem 1 (Attainable beliefs). The desired belief matrix Q is attainable if, and only if, the entries of V are zero wherever the entries of C are zero, and the entries of V are positive wherever the entries of C are one.
Before proving Theorem 1, we first clarify its statement. For Q to be achievable, the matrices V and C must have the same structure, with the unit entries of C translated into positive entries in V. The theorem reveals two possible cases for each receiving agent k and gives, for each case, the condition required for the desired beliefs to be attainable.
In the first case, receiving agent k is not connected to any agent of sending sub-network s (the (s,k)-th entry of C is zero). Then, according to Theorem 1, receiving agent k achieves its desired limiting belief at θ°_s only when

v_k(s) = 0, i.e., q_k(θ°_s) = Σ_ℓ t_RR,k(ℓ) q_ℓ(θ°_s)        (38)

That is, the cumulative influence from the agent's neighbors must match the desired limiting belief.
In the second case, receiving agent k is connected to at least one agent of sending sub-network s (the (s,k)-th entry of C is one). Now, according to Theorem 1 again, receiving agent k achieves its desired limiting belief at θ°_s only when

v_k(s) > 0        (39)

Proof of Theorem 1: We start by proving that if Q is attainable, then V and C have the same structure. If Q is attainable, then there exists t_SR,k for each receiving agent k that satisfies (32)-(34). Using the definition of E in (23), the s-th row on the left-hand side of (32) is:

Σ_{j∈I_s} t_SR,k(j)        (40)

where I_s represents the set of indexes of sending agents that belong to sending sub-network s. Expression (40) represents the sum of the elements of the block of t_SR,k that corresponds to sending sub-network s. Therefore, if Q is attainable, then the s-th row of (32) satisfies the following relation:

Σ_{j∈I_s} t_SR,k(j) = v_k(s)        (41)

From this relation, we see that if agent k is not connected to any agent in sub-network s, then Σ_{j∈I_s} t_SR,k(j) = 0, which implies that v_k(s) is zero. On the other hand, if agent k is connected to sub-network s, then Σ_{j∈I_s} t_SR,k(j) > 0, which implies that v_k(s) > 0. In other words, C and V have the same structure.
Conversely, if C and V have the same structure, then it is possible to find t_SR,k for each receiving agent k that satisfies (32)-(34). In particular, if agent k is not connected to sub-network s, then the (s,k)-th entry of C is zero. Since C and V have the same structure, v_k(s) = 0. By setting to zero the entries of t_SR,k that correspond to sending sub-network s, relation (41) is satisfied. On the other hand, if agent k is connected to sub-network s (i.e., connected to at least one agent in sub-network s), then the (s,k)-th entry of C is one. Since C and V have the same structure, we get v_k(s) > 0. Therefore, since the entries of t_SR,k must be non-negative, we first set to zero the entries of t_SR,k that correspond to agents of sub-network s that are not connected to agent k; the remaining entries can then be set to non-negative values such that relation (41) is satisfied. That is, if C and V have the same structure, then Q is attainable.
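The structural test of Theorem 1 is straightforward to implement; the following sketch checks V = Q(I − T_RR) entry-by-entry against the connectivity indicator C, with toy matrices (two sending sub-networks, two receiving agents) as assumptions:

```python
import numpy as np

def attainable(Q, T_RR, C, tol=1e-12):
    """Sketch of Theorem 1: Q (S x N_gR) is attainable iff V = Q (I - T_RR)
    is zero where C is zero and strictly positive where C is one."""
    V = Q @ (np.eye(T_RR.shape[0]) - T_RR)
    zero_ok = np.all(np.abs(V[C == 0]) < tol)
    pos_ok = np.all(V[C == 1] > tol)
    return zero_ok and pos_ok

# Toy setup: both receiving agents connected to both sending sub-networks.
C = np.ones((2, 2), dtype=int)
T_RR = np.array([[0.4, 0.3],
                 [0.3, 0.3]])
Q = np.array([[0.5, 0.5],       # desired beliefs at θ°_1 (one column per agent)
              [0.5, 0.5]])      # desired beliefs at θ°_2
assert attainable(Q, T_RR, C)
```

If instead agent 1 were disconnected from sub-network 2 (setting `C[1, 0] = 0`), the same Q would fail the test, because the corresponding entry of V stays positive while Theorem 1 requires it to be zero.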
We next move to characterizing the set of solutions, i.e., how we can design t_SR,k assuming the conditions on V are met.

C. Characterizing the Set of Possible Solutions
In the sequel, we assume that the conditions on V from Theorem 1 are satisfied. That is, if receiving agent k is not connected to sub-network s, then v_k(s) = 0; otherwise, v_k(s) > 0. The desired beliefs are then attainable, which means that for each receiving agent k we can find t_SR,k satisfying (32)-(34). Many solutions may exist. In this section, we characterize the set of possible solutions.
First of all, to meet (31), we set the required entries of t_SR,k to zero. We then remove the corresponding columns of E and label the reduced E by E_k. Similarly, we remove the zero elements of t_SR,k and label the reduced vector by t′_SR,k. On the other hand, if agent k is not connected to some sub-network s, then the corresponding row of E is removed as well, so that E_k has a smaller number of rows, denoted by S′. Without loss of generality, we assume agent k is connected to the first S′ sending sub-networks. We denote by N_s^k the number of agents of sending sub-network s that are connected to receiving agent k, and by N_gS^k the total number of sending agents connected to agent k. The matrix E_k (obtained from E by removing the rows and columns just described; its dimensions are S′ × N_gS^k) then has the form:

E_k = blockdiag{ 1_{N_1^k}^T, 1_{N_2^k}^T, ..., 1_{N_{S′}^k}^T }        (42)

Note that if receiving agent k is connected to all sending sub-networks, then E and E_k have the same number of rows, S′ = S.
In the case where agent k is not connected to some sub-network s, condition (38) should be satisfied, and the corresponding row of q_k − Q t_RR,k should be removed to obtain the reduced vector q′_k − Q′ t_RR,k. We are therefore reduced to determining t′_SR,k by solving a system of equations of the form:

E_k t′_SR,k = q′_k − Q′ t_RR,k        (43)

subject to

t′_SR,k ⪰ 0        (44)

We can still have some of the entries of the solution t′_SR,k turn out to be zero. Now note that the number of rows of E_k is S′ (the number of sending sub-networks connected to k), which is always smaller than or equal to N_gS^k. Moreover, the rows of E_k are linearly independent, and thus E_k is a right-invertible matrix. Its right-inverse is given by [35]:

E_k^† = E_k^T (E_k E_k^T)^{-1}        (45)

Therefore, if we ignore condition (44) for now, then equation (43) has an infinite number of solutions parametrized by the expression [35]:

t′_SR,k = E_k^† (q′_k − Q′ t_RR,k) + (I − E_k^† E_k) y        (46)

where y is an arbitrary vector of length N_gS^k. We still need to satisfy condition (44). Let

v′_k = q′_k − Q′ t_RR,k        (47)

and note that

v′_k = col{ v_k(1), ..., v_k(S′) }        (48)

where v_k(i) represents the i-th entry of the vector v_k. Likewise,

E_k E_k^T = diag{ N_1^k, ..., N_{S′}^k }        (49)

and if we partition y into sub-vectors as

y = col{ y_{N_1^k}, ..., y_{N_{S′}^k} }        (50)

then expression (46) becomes:

t′_SR,k = col{ (v_k(s)/N_s^k) 1_{N_s^k} + ( I_{N_s^k} − (1/N_s^k) 1_{N_s^k} 1_{N_s^k}^T ) y_{N_s^k} }_{s=1}^{S′}        (51)

This represents the general form of all possible solutions, but from these solutions we want only those that are non-negative, in order to satisfy condition (44). From (51), the vector t′_SR,k is partitioned into multiple blocks, where each block has the form:

(v_k(s)/N_s^k) 1_{N_s^k} + ( I_{N_s^k} − (1/N_s^k) 1_{N_s^k} 1_{N_s^k}^T ) y_{N_s^k}        (52)

We already have from the conditions of attainable beliefs (39) that v_k(s) > 0. Therefore, we can choose y_{N_s^k} as zero or set it to arbitrary values as long as (52) stays non-negative. We also know that for the beliefs to be attainable we cannot have v_k(s) < 0; otherwise, no solution can be found.
Indeed, if v_k(s) < 0, then to make (52) non-negative we would need to select y_{N_s^k} such that:

( I_{N_s^k} − (1/N_s^k) 1_{N_s^k} 1_{N_s^k}^T ) y_{N_s^k} ⪰ − (v_k(s)/N_s^k) 1_{N_s^k}        (53)

However, no y_{N_s^k} satisfies this relation, because if we sum the elements of the vector on the left-hand side of (53), we obtain:

1_{N_s^k}^T ( I_{N_s^k} − (1/N_s^k) 1_{N_s^k} 1_{N_s^k}^T ) y_{N_s^k} = 0        (54)

while if we sum the elements of the vector on the right-hand side of (53), we obtain:

− 1_{N_s^k}^T (v_k(s)/N_s^k) 1_{N_s^k} = − v_k(s) > 0        (55)

This means that we cannot find t′_SR,k ⪰ 0 when any of the entries of v′_k = q′_k − Q′ t_RR,k is negative.
In summary, we have established the validity of the following statement.
Theorem 2. Assume receiving agent k is connected to N_s^k agents in sending sub-network s. If v_k(s) > 0, then all possible choices for the weights from sending agents in sub-network s to receiving agent k are parameterized as:

(v_k(s)/N_s^k) 1_{N_s^k} + ( I_{N_s^k} − (1/N_s^k) 1_{N_s^k} 1_{N_s^k}^T ) y_{N_s^k}        (56)

where y_{N_s^k} is an arbitrary vector of length N_s^k chosen so that (56) stays non-negative.
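The parametrization (56) can be sketched as follows; the values of v_k(s), the block size, and the free vector y are toy assumptions. The key property is that the second term of (56) has zero column sum, so the block always sums to v_k(s) regardless of y:

```python
import numpy as np

def block_weights(v_ks, N, y=None):
    """Sketch of (56): weights from the N = N_s^k sending agents of
    sub-network s to receiving agent k, parametrized by a free vector y."""
    if y is None:
        y = np.zeros(N)                 # y = 0 gives the equal split
    ones = np.ones(N)
    return (v_ks / N) * ones + (np.eye(N) - np.outer(ones, ones) / N) @ y

t = block_weights(v_ks=0.3, N=3)        # equal split: each weight is 0.1
assert np.allclose(t, 0.1)

t2 = block_weights(0.3, 3, y=np.array([0.05, -0.05, 0.0]))
assert np.isclose(t2.sum(), 0.3)        # the block sum v_k(s) is invariant
assert np.all(t2 >= 0)                  # y kept small enough for (44)
```

This makes the degree of freedom explicit: the sending sub-network's total influence on agent k is pinned to v_k(s), while y only redistributes that influence among the N_s^k connected sending agents.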

D. Enforcing Uniform Beliefs
In this section, we explore one special case of attainable beliefs: driving all receiving agents towards the same belief. In this case, Q has the form:

Q = q 1_{N_gR}^T        (57)

for some column vector q that represents the desired limiting belief (the entries of q are non-negative and add up to one). We now verify the conditions under which uniform beliefs are attainable by all receiving agents. In this case, v_k has the form:

v_k = q − q 1_{N_gR}^T t_RR,k = (1 − 1_{N_gR}^T t_RR,k) q        (58)

and the (s,k)-th entry of V is:

v_k(s) = (1 − 1_{N_gR}^T t_RR,k) q(θ°_s)        (59)

Now, we know that 1 − 1_{N_gR}^T t_RR,k > 0 when agent k is connected to at least one agent from some sending sub-network, and that 1 − 1_{N_gR}^T t_RR,k = 0 when it is not connected to any sending sub-network. In the second case, where 1 − 1_{N_gR}^T t_RR,k = 0, expression (59) implies that v_k(s) = 0 for every s; therefore, agent k is not connected to any sending sub-network and condition (38) is satisfied. In the first case, where 1 − 1_{N_gR}^T t_RR,k > 0 (i.e., agent k is connected to some sending sub-networks but not necessarily to all of them), expression (59) implies that v_k(s) > 0 regardless of whether agent k is connected to sending sub-network s. However, when agent k is not connected to sending sub-network s, condition (38) requires v_k(s) = 0 for agent k to achieve its desired belief at θ°_s. In summary, we arrive at the following conclusion.
Lemma 1. For the scenario of uniform beliefs to be attainable, agent k should be connected either to all sending sub-networks or to none of them.
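The reduction (58) behind Lemma 1 can be checked numerically; the target belief q and the internal weights below are toy assumptions:

```python
import numpy as np

# Sketch of (58)-(59) for the uniform target Q = q 1^T: v_k collapses to
# (1 - 1^T t_RR,k) q, so its entries are all positive whenever agent k
# places any weight on sending agents.
q = np.array([0.3, 0.7])             # common desired belief over Θ° (toy)
t_RR_k = np.array([0.5, 0.3])        # internal weights into agent k (toy)
v_k = (1.0 - t_RR_k.sum()) * q
assert np.all(v_k > 0)
# Theorem 1 then forces the k-th column of C to be all ones: agent k must
# be connected to every sending sub-network. (If instead t_RR_k summed to
# one, v_k would vanish entirely, i.e., agent k connected to none.)
```

This is exactly the dichotomy of Lemma 1: a strictly positive scalar multiple of q can never have the mixed zero/positive pattern that a partial connection to the sending sub-networks would require.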
We next provide two numerical examples that illustrate this construction.

E. Example 1
Consider the network shown in Fig. 3. It consists of N = 8 agents: two sending sub-networks and one receiving sub-network, with the following combination matrix: We assume that there are three possible states, Θ = {θ°_1, θ°_2, θ°_3}, where θ°_1 is the true event for the first sending sub-network, θ°_2 is the true event for the second sending sub-network, and θ°_3 is the true event for the receiving sub-network.
Let us first design T_SR so that the beliefs of all receiving agents converge to the same belief over {θ°_1, θ°_2}. We determine the columns of T_SR one at a time. Starting with agent 6, we focus on the first column of T_SR. The vector v_6 defined in (47) is given by (58) for the case of uniform beliefs. Thus, according to (51), the solutions are parametrized by an arbitrary vector y of length 3, where t′_SR,6 collects, respectively, the coefficients applied to agents 2, 3, and 4, which are linked to agent 6. To verify that the beliefs of the receiving agents converge in this case to the desired belief, we compute the matrix W^T from (13). Likewise, we can compute the belief at θ°_2 for each receiving agent at steady state by taking the second block of the agent's corresponding row in W^T and summing its elements. For a different choice of desired beliefs, however, the corresponding vector v_7 turns out to have a negative first entry. Therefore, the desired belief for agent 7 cannot be attained.

F. Example 2
Consider now the network shown in Fig. 4 with the following combination matrix: Let us consider the case where we want to design T_SR so that the desired limiting beliefs are as follows: solving (43) for each receiving agent k, we obtain the corresponding columns of T_SR. To verify that the beliefs of the agents converge in this case to the desired beliefs, we compute W^T from (13) and use (18).

IV. JOINT DESIGN OF T_RR AND T_SR
In the previous sections, we analyzed the conditions that drive receiving agents to desired beliefs. The approach relies on determining the entries of the weighting matrix T_SR from knowledge of Q (the desired beliefs) and T_RR (the internal weighting structure within the receiving sub-networks). We saw that there are limits on where the beliefs of receiving agents can be driven; in particular, the internal combination structure of the receiving sub-networks contributes to this limitation. We now examine the problem of designing T_SR and T_RR jointly, to see whether, with more freedom in choosing the coefficients of T_RR, we still encounter limitations on how to influence the receiving agents. We assume that we know the number of receiving sub-networks and the number of agents in each of these sub-networks. Using (24), we have

E T_SR + Q T_RR = Q        (87)

Therefore, given Q (the desired limiting beliefs of the receiving agents), the design problem becomes one of finding matrices T_SR and T_RR that satisfy (87) subject to the following constraints:

1_{N_gS}^T T_SR + 1_{N_gR}^T T_RR = 1_{N_gR}^T        (88)
T_SR ⪰ 0,  T_RR ⪰ 0        (89)
t_RR,k(j) > 0, if receiving agent j feeds into receiving agent k        (90)

In the last condition (90), we require t_RR,k(j) to be strictly positive if receiving agent j feeds into k. This is in order to avoid solutions in which the receiving sub-networks become disconnected. For instance, consider the example shown in Fig. 5. This figure shows a case where agent k is connected to all sending sub-networks, and it depicts only the incoming links into agent k. Let us assume that the desired limiting belief for agent k is

q_k = col{0.1, 0.9}        (91)

Then a possible solution to (87) is to assign zero weights to the data originating from its receiving neighbors, 0.1 to the data received from sending agent 1, and 0.9 to the data received from sending agent 2. Then, for this example,

E t_SR,k + Q t_RR,k = col{0.1, 0.9} = q_k        (92)

so that (87) is satisfied. However, this solution affects the connectedness of the receiving sub-network of agent k, because there will be no path leading to this agent. To find T_SR and T_RR satisfying (87)-(90), we can solve separately for each of their columns.
If a solution can be found for each column, then Q is attainable. We explore next the possibility of finding solutions for each column. As in the previous section, tSR,k and tRR,k denote the columns of TSR and TRR that correspond to receiving agent k, and tSR,k(j) and tRR,k(j) denote the j-th entries of tSR,k and tRR,k, respectively. Likewise, qk denotes the column of Q that corresponds to receiving agent k. Then, relations (87) and (89)-(90) imply a corresponding system of equations and constraints on the columns tSR,k and tRR,k. Since the connections within the sending and receiving networks are known, while the combination weights TSR and TRR are what we seek, we can set to zero the entries of tSR,k and tRR,k that correspond to unlinked agents. We remove these zero entries and, with a slight abuse of notation, relabel the reduced vectors again as tSR,k and tRR,k. We also remove the corresponding columns of E and Q, and denote the modified matrices by Ek and Qk. We are therefore reduced to determining tSR,k and tRR,k by solving a system of equations of the form (97) subject to (98)-(100). Formulation (97)-(100) has the following interpretation. After sufficient time i ≥ I, the beliefs of all agents approach their limiting beliefs and, based on the results of the earlier work [29], the belief update (4) approaches a steady-state recursion. This means that, if we want the beliefs of the receiving agents to converge to some belief vector q, then these desired beliefs must satisfy relation (103) for any θ ∈ {θ•1, ..., θ•S} and for all receiving agents k. In other words, given the set of desired beliefs, we would like to know whether the desired limiting belief of each receiving agent k can be expressed as a convex combination of the limiting beliefs of its receiving neighbors and of the limiting beliefs of the sending agents to which agent k is connected.
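This per-agent attainability question can be phrased as a small linear feasibility problem. The sketch below (hypothetical numbers; the function name is ours, and the strict-positivity constraint (100) is relaxed to nonnegativity for simplicity) checks whether a desired belief vector for agent k lies in the convex hull of the limiting beliefs of its neighbors:

```python
import numpy as np
from scipy.optimize import linprog

def column_feasible(E_k, Q_k, q_k):
    """Check whether the desired belief q_k of a receiving agent is
    attainable, i.e., whether q_k = E_k^T t_SR + Q_k^T t_RR admits
    nonnegative weights summing to one (cf. (97)-(100)).
    Rows of E_k / Q_k hold the limiting beliefs (length S) of the
    sending / receiving neighbors of agent k."""
    A = np.vstack([E_k, Q_k]).T          # S x (nS + nR)
    n = A.shape[1]
    # Append the sum-to-one constraint (redundant when beliefs are
    # proper distributions, but kept for robustness).
    A_eq = np.vstack([A, np.ones((1, n))])
    b_eq = np.concatenate([q_k, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success
```

A feasibility certificate here corresponds to valid combination weights; infeasibility means the desired belief cannot be written as the convex combination in (103).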
If this is possible for each agent k, then Q is attainable, i.e., all receiving agents can reach their desired limiting beliefs. This is precisely what formulation (97)-(100) attempts to enforce by finding suitable coefficients such that (103) is satisfied. Finding tSR,k and tRR,k that satisfy (97) and constraints (98)-(100) may not always be possible. Since agent k can be connected to all sending sub-networks, to only some of them, or to none of them, the matrix Ek that appears in (97) will have a different form in each of these cases, which affects the possibility of finding a solution. Before analyzing how each case affects solvability, we first summarize the results:
1) Agent k is connected to all sending sub-networks: the problem reduces to finding tRR,k that satisfies (112a) and (112b), which always has a solution;
2) Agent k is connected to some sending sub-networks: the problem reduces to finding tRR,k that satisfies conditions (120a)-(120c), which may not always have a solution;
3) Agent k is not connected to any sending sub-network: the problem reduces to finding tRR,k that satisfies conditions (127a)-(127c), which may not always have a solution.
Note that relations (112a) and (120a) correspond to what condition (39) required when we designed TSR with TRR given, for the case where agent k is connected to sending sub-network s. Similarly, relations (120b) and (127a) correspond to what condition (38) required when we designed tSR,k with TRR given, for the case where agent k is not connected to sending sub-network s. In the earlier section, we had to ensure that the given TRR satisfies (38) and (39) for Q to be attainable. Here, we are designing TRR as well, and we need to ensure that the entries we choose satisfy these conditions. We now analyze each case in detail.

Case 1: Agent k is connected to all sending sub-networks
We discuss first the case where agent k is connected to at least one agent from each sending sub-network. In this case, Ek takes a corresponding block form, and relation (97) becomes (105), where qk(j)(θ•s) represents the desired limiting belief at θ•s for the j-th receiving neighbor of agent k, and NgR,k is the total number of receiving agents that are neighbors of agent k. The problem here is to find tSR,k and tRR,k that satisfy (105) subject to the constraints (98)-(99). It is useful to note that if we can find tSR,k and tRR,k that satisfy (105), then condition (98) is automatically satisfied. To see this, we first sum the elements of the vector on the left-hand side of (105) to obtain (106), which involves the matrix Bk formed from the matrix B introduced in (93); this step uses the fact that 1_S^T Bk = 1^T, since the entries of each column of Bk add up to one. We then sum the elements of the vector on the right-hand side of (105) to get (107). Thus, equating (106) and (107), we obtain (98). The problem we are attempting to solve is then equivalent to finding tSR,k and tRR,k that satisfy (105) subject to (108)-(109). Now note that (105) consists of S equations, and the number of variables (i.e., the total number of entries of tSR,k and tRR,k) is greater than the number of equations. Each equation relates the entries of tSR,k that correspond to agents of one of the sending sub-networks to all entries of tRR,k. In particular, the equation that corresponds to sending sub-network s has the form (110), which shows how the entries of tSR,k that correspond to agents of sending sub-network s are related to the entries of tRR,k through the values of the desired beliefs at θ•s. Therefore, the set of all possible solutions to (105) consists of vectors whose entries satisfy (110) for each s. In other words, by arbitrarily fixing the entries of tRR,k, we can compute the entries of tSR,k using (110) for each s to obtain a solution to (105).
This is because (105) consists of S equations that only specify how the entries of tSR,k corresponding to each sending sub-network s are related to tRR,k, without imposing any additional equation on the entries of tRR,k. Note that it does not matter how the individual entries of tSR,k that correspond to sub-network s are chosen, as long as their sum satisfies (110). However, in the problem we are trying to solve, we are not interested in the entire set of solutions to (105), because we have the two additional constraints (108) and (109). Therefore, we cannot fix the entries of tRR,k arbitrarily; we must also satisfy (108) and (109). Constraint (108) implies that the right-hand side of (110) should be non-negative for each sending sub-network s, i.e., condition (111) must hold. Therefore, the problem reduces to finding tRR,k that satisfies (112a) and (112b). If it is possible to find tRR,k that satisfies (112a) and (112b), then tSR,k can be determined using (110), and a solution for agent k is found. Finding tRR,k that satisfies (112a) and (112b) is always possible: by sufficiently attenuating the entries of tRR,k, we can make the right-hand side of (112a) smaller than qk(θ•s). For instance, one solution is to assign the same value εk > 0 to all entries of tRR,k. Then, from (112a), we obtain for each s an upper bound on εk, which means that εk should be chosen small enough to satisfy all of these bounds. We mentioned that, after finding tRR,k that satisfies (112a) and (112b), tSR,k can be determined using (110). We can alternatively express the solutions for tSR,k using the same approach as in the previous section: once the entries of tRR,k are chosen, the problem becomes similar to the earlier one of finding tSR,k with tRR,k given. Therefore, the solutions for tSR,k can also be given by (51). Note that, in the earlier section, (51) was expressed in terms of the vector vk to account for the possibility that agent k is not connected to some sending sub-networks.
Since in this case agent k is connected to all sending sub-networks, the solution for tSR,k is again given by (51), with the vector vk adjusted accordingly. In summary, when agent k is connected to all sending sub-networks, the problem can have infinitely many solutions. We first find tRR,k that satisfies (112a) and (112b); then the entries of tSR,k are nonnegative values chosen to satisfy (110). In other words, when a receiving agent k is under the direct influence of all sending sub-networks, it is relatively straightforward to affect its beliefs, especially since the influence from its receiving neighbors can be attenuated as much as needed through the choice of εk.
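The uniform-εk construction above can be made concrete as follows. In this sketch (hypothetical numbers; we assume, as in the weak-graph model, that each sending sub-network s converges to full confidence in its own true state θ•s, so that its limiting belief is the s-th basis vector), we assign the same weight ε to all receiving neighbors and then recover the per-sub-network sums of the sending weights from the analogue of (110):

```python
import numpy as np

def case1_weights(q_k, Q_nb, margin=0.5):
    """Constructive Case-1 design for an agent connected to all S
    sending sub-networks.  q_k: desired limiting belief (length S).
    Q_nb: desired limiting beliefs of the receiving neighbors (nR x S),
    assumed to have positive column sums."""
    nR, S = Q_nb.shape
    # Uniform weight eps on every receiving neighbor, small enough that
    # each per-sub-network sending weight in (110) stays nonnegative
    # (cf. the bound implied by (112a)).
    eps_max = np.min(q_k / Q_nb.sum(axis=0))
    eps = margin * eps_max
    t_RR = np.full(nR, eps)
    # Sum of sending weights toward sub-network s (cf. (110)):
    t_SR_sums = q_k - Q_nb.T @ t_RR
    return t_SR_sums, t_RR
```

With this choice the recovered weights are nonnegative and automatically sum to one, mirroring (112a)-(112b); how the sum for sub-network s is split among its individual agents is immaterial, as noted in the text.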

Case 2: Agent k is connected to some sending sub-networks
We now consider the case where agent k is influenced by only a subset of the sending sub-networks. Without loss of generality, we assume it is connected to the first s′ sending sub-networks. In this case, Ek takes a corresponding block form (with zero blocks for the unconnected sub-networks), and relation (97) becomes (116). The problem now is to find tSR,k and tRR,k that satisfy (116) subject to constraints (98)-(99). As before, if we can find tSR,k and tRR,k that satisfy (116), then condition (98) is automatically satisfied. Note now that (116) consists of s′ equations that relate the entries of tSR,k to the entries of tRR,k, and S − s′ equations that involve only the entries of tRR,k. Therefore, any vector that satisfies (116) will satisfy property (117), but only for s ≤ s′. In other words, the entries of tSR,k that correspond to sub-network s ≤ s′ are expressed in terms of tRR,k through (117). In addition, and differently from case 1, any solution to (116) must also satisfy, for any s > s′, the condition that the entries of tRR,k alone reproduce qk(θ•s). Likewise, constraint (108) implies that (117) should be non-negative for each sending sub-network s ≤ s′, i.e., condition (119) must hold. Therefore, the problem reduces to finding tRR,k that satisfies (120a)-(120c). If it is possible to find tRR,k that satisfies (120a)-(120c), then tSR,k can be determined using (117) or, alternatively, using (51). However, in contrast to the previous case, finding tRR,k that satisfies conditions (120a)-(120c) may not always be possible. For instance, consider agent k shown in Fig. 6, which is connected to only the first sending sub-network but not to the other two. For the desired limiting belief of agent k and the desired limiting beliefs of its neighbors considered there, conditions (120a) yield relations (123)-(125). Solving (124) and (125) gives α2 = 0.3462 and α3 = 0.6923. However, 0.2α2 + 0.1α3 = 0.1385, which violates (123).
Still, there are cases where all conditions (120a)-(120c) can be met (we provide one example in a later section); in such cases, we choose tSR,k according to (117). We observe from this case that the fewer the sending sub-networks that influence agent k, the harder it is to affect its limiting belief. This emphasizes again the idea that the structure of the receiving sub-networks helps limit external manipulation.
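Conditions (120a)-(120c) can likewise be checked numerically. In the sketch below (hypothetical numbers and function name; the strict positivity in (120c) is approximated by a small lower bound), the sub-networks that agent k does not hear from impose equality constraints that the receiving weights must meet on their own, which is exactly where infeasibility can arise:

```python
import numpy as np
from scipy.optimize import linprog

def case2_feasible(q_k, Q_nb, connected, eps=1e-6):
    """Feasibility test in the spirit of (120a)-(120c) for an agent
    linked to only some sending sub-networks.  connected[s] is True if
    agent k receives data from sending sub-network s; the rows of Q_nb
    hold the desired limiting beliefs of its receiving neighbors."""
    conn = np.asarray(connected)
    A = Q_nb.T                           # maps t_RR,k to a belief vector
    # Unconnected states: must be matched exactly by the receiving
    # weights alone (the (120b)-type equalities).
    A_eq, b_eq = A[~conn], q_k[~conn]
    # Connected states: must leave nonnegative room for the sending
    # weights (the (120a)-type inequalities).
    A_ub, b_ub = A[conn], q_k[conn]
    res = linprog(np.zeros(Q_nb.shape[0]), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(eps, None)] * Q_nb.shape[0], method="highs")
    return res.success
```

For the Fig. 6 example in the text, the analogous system is what fails: the equalities pin down α2 and α3, and the remaining inequality (123) is then violated.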

Case 3: Agent k is not connected to any sending sub-network
When agent k is not connected to any sending sub-network, relation (97) reduces to a condition that involves only tRR,k. The problem is then to find tRR,k that satisfies (127a)-(127c). This problem might not have an exact solution. For instance, we discuss two examples in Appendix A of [arXiv paper]; in the second example, an agent that is not connected to any sending sub-network has a desired belief that cannot be expressed as a convex combination of the desired beliefs of its neighbors.

Comment and analysis
Since the problem of finding TSR and TRR satisfying (87)-(90) is separable, we studied the possibility of finding a solution for each column of TSR and TRR. We analyzed three cases and found that in the first case (agent k connected to at least one agent from each sending sub-network), problem (97)-(100) always has a solution. That is, if agent k is connected to all sending sub-networks and the limiting beliefs of its neighbors are known, we can always find combination weights for agent k such that (103) is satisfied. In the second case (agent k connected to some sending sub-networks) and the third case (agent k not connected to any sending sub-network), problem (97)-(100) might not have a solution, i.e., it is not always possible to satisfy (103). These scenarios reinforce the idea that the internal structure of the receiving agents can resist some of the external influence. However, for Q to be achievable (i.e., for the beliefs of all receiving agents to converge to the desired beliefs), a solution must exist for every agent k. If the desired limiting belief of any receiving agent cannot be written as a convex combination of the limiting beliefs of its neighbors (i.e., a solution to problem (97)-(100) cannot be found), the whole scenario is not achievable. Moreover, even if agent k can find appropriate weights tSR,k and tRR,k, this solution relies on knowledge of the desired limiting beliefs of its neighbors; if one of the receiving neighbors cannot reach its desired belief, agent k will no longer be able to reach its own. Therefore, for Q to be attainable, a solution to problem (97)-(100) must exist for each receiving agent k. If Q is not attainable, then the desired scenario should be modified into an attainable one, taking into account the limitations imposed by the internal connections of the receiving sub-networks.
Alternatively, an approximate least-squares solution for the weights can be found. That is, we can instead seek to solve (128) subject to (129)-(131), where the last condition can be relaxed to (132), with 0 < εk < 1. Clearly, solving problem (128)-(132) does not guarantee that the objective function (128) will be zero at the solution. Note further that the optimization problem (128)-(132) is a convex quadratic program: its objective function is quadratic, and it has a convex equality constraint (129) and inequality constraints (130) and (132). The inequality constraints are element-wise, i.e., tRR,k(j) ≥ εk for all j, which can be equivalently written as e_j^T tRR,k ≥ εk for all j, where ej is a vector whose elements are all zero except for the j-th element, which is one. In this way, the problem becomes a classic constrained convex optimization problem, which can be solved numerically (using, for instance, interior-point methods).

V. SIMULATION RESULTS
We illustrate the previous results with the following simulation example. Consider the social network shown in Fig. 7, which consists of N = 23 agents. We assume that there are 3 possible events, where θ•1 is the true event for the first sending sub-network, θ•2 is the true event for the second sending sub-network, and θ•3 is the true event for the receiving sub-network. We further assume that the observational signals of each agent k are binary and belong to Zk = {H, T}, where H denotes head and T denotes tail.
(Fig. 7: A weakly-connected network consisting of three sub-networks.)
Agents of the first sending sub-network are connected through a given combination matrix. The matrices TSR and TRR are designed so that the desired limiting beliefs for the receiving agents are given by a matrix Q1. In other words, the weights are designed so that θ•1 and θ•2 are almost equally probable for the receiving agents. This illustrates the case where the receiving agents listen to two different perspectives from two media sources that they both trust, which leaves them undecided about which true state to choose.
The likelihood of the head signal for each agent is selected according to a given likelihood matrix.

Design and Simulation Results
To achieve Q1, we design TSR and TRR using the results of the previous section; the details of the numerical derivation are omitted for brevity. The non-zero weights in TSR are shown in Fig. 8, and TRR is obtained likewise. We run this example for 7000 time iterations, assigning to each agent an initial belief that is uniform over the possible events. Figures 9 and 10 show the evolution of μk,i(θ•1) and μk,i(θ•2) for agents in the receiving sub-network. These figures show that the beliefs of the agents in the receiving sub-networks converge to the desired beliefs in Q1. Figure 8 also illustrates, using color, the limiting beliefs of the receiving agents.
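For readers wishing to reproduce a scaled-down version of this experiment, the following sketch implements the generic diffusion social-learning recursion (in the spirit of update (4): a local Bayesian step with the private signal, followed by averaging over neighbors). All names, sizes, and likelihood values below are illustrative, not the ones from Fig. 7:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_learning(A, L, true_state, n_iter=2000):
    """Minimal diffusion social-learning sketch.
    A: N x N combination matrix (A[l, k] = weight agent k assigns to
       agent l; columns sum to one).
    L: N x n_theta matrix of P(head | theta) for each agent.
    true_state: index of the state generating each agent's signals."""
    N, n_theta = L.shape
    mu = np.full((N, n_theta), 1.0 / n_theta)   # uniform initial beliefs
    for _ in range(n_iter):
        # Each agent observes a binary head/tail signal.
        heads = rng.random(N) < L[np.arange(N), true_state]
        # Likelihood of the observed signal under every hypothesis.
        lik = np.where(heads[:, None], L, 1.0 - L)
        psi = mu * lik
        psi /= psi.sum(axis=1, keepdims=True)   # local Bayesian update
        mu = A.T @ psi                          # combine over neighbors
    return mu
```

On a strongly-connected toy network with informative likelihoods, the beliefs concentrate on the true state, as expected from the standard social-learning results invoked in the paper.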

VI. CONCLUSION
In this work, we characterized the set of beliefs that can be imposed on non-influential agents and clarified how the graph topology of these latter agents helps resist manipulation, but only to a certain degree. We also derived design procedures that allow influential agents to drive the beliefs of non-influential agents to desirable attainable states.

Example I: Cases 1 and 2 (k is influenced by sending networks). Consider the network shown in Fig. 11. It consists of N = 8 agents, two sending sub-networks, and one receiving sub-network, with a given combination matrix. We assume that there are 3 possible states, where θ•1 is the true state for the first sending sub-network, θ•2 is the true state for the second sending sub-network, and θ•3 is the true state for the receiving sub-network. Let us consider a case with given desired limiting beliefs for the receiving agents. We start with agent 6. After eliminating entries to match the sparsity of the connections, we are reduced to finding tSR,6 and tRR,6 that satisfy (142). Agent 6 is connected to the two sending sub-networks (case 1); therefore, the problem has a solution, where the entries of tSR,6 (α1, α2, and α3) can be expressed in terms of the entries of tRR,6 (α4 and α5). More precisely, from (142) and (110), we obtain (144). According to (111), to ensure that α1, α2, and α3 can be chosen as nonnegative numbers, the scalars α4 and α5 should be chosen to satisfy the corresponding inequality. Note that what matters for the scalars α1 and α2 (the weights by which the data received from sending sub-network 1 are scaled) is that their sum equals 0.2 − 0.3α4 − 0.5α5, according to (144). In other words, when a receiving agent is connected to several agents from the same sending sub-network, it does not matter how much weight is given to each of these agents, as long as the sum of these weights takes the required value. This is because the beliefs of agents from the same sending sub-network converge to the same final distribution.
An alternative way to express (144) is to set α1 and α2 as in (148) and (149). This choice of β ensures that α1 and α2 are non-negative and do not exceed 0.2 − 0.3α4 − 0.5α5; moreover, we can check from (148) and (149) that their sum satisfies (144). Therefore, the solution takes the form in (152). For example, one solution is to assign the same value ε6 to α4 and α5, which then determines the remaining weights from (152). Moving on to the next agent, which falls under case 2, any vector that satisfies (167) has the form (171), where

0.8γ2 + 0.7γ3 + 0.5γ4 = 0.5.

Now, to ensure that γ1 is non-negative, the scalars γ2, γ3, and γ4 should be chosen (as in (119)) so that

0.2γ2 + 0.3γ3 + 0.5γ4 ≤ 0.5.

Therefore, a solution in this case should satisfy (171) subject to

0.8γ2 + 0.7γ3 + 0.5γ4 = 0.5   (174)
0.2γ2 + 0.3γ3 + 0.5γ4 ≤ 0.5   (175)
γ2 > 0, γ3 > 0, γ4 > 0.   (176)

For this example, finding γ2, γ3, and γ4 that satisfy (174)-(176) is always possible: for any choice of γ2, γ3, and γ4 that satisfies (174), condition (175) is automatically satisfied. Indeed, if (174) holds, then

0.5γ2 + 0.5γ3 + 0.5γ4 ≤ 0.8γ2 + 0.7γ3 + 0.5γ4 = 0.5   (177)
⟹ γ2 + γ3 + γ4 ≤ 1.

Therefore,

γ2 + γ3 + γ4 − 0.8γ2 − 0.7γ3 − 0.5γ4 ≤ 1 − 0.5   (179)
⟹ 0.2γ2 + 0.3γ3 + 0.5γ4 ≤ 0.5.

One possible choice for γ2, γ3, and γ4 satisfying (174) can then be selected. To verify that the beliefs of the receiving agents converge to the desired beliefs, we compute W^T from (13) and use (18).

Example II: Case 3 (agent 8 is not influenced by any sending network). Consider the network shown in Fig. 12, with a corresponding combination matrix. (Fig. 12: A weakly-connected network consisting of three sub-networks; in this case, agent 8 is not influenced by any sending network.)
What is different now is that agent 8 is not connected to agent 3 (that is, agent 8 is not connected to any sending network). We still assume in this example the same desired limiting beliefs, and we verify the limiting beliefs of the agents as follows: we compute W^T from (13) and use (18). It is expected that the beliefs of agents 6 and 7 will not converge to the desired beliefs, because the belief of agent 8 cannot converge to its desired belief, which in turn affects the limiting beliefs of agents 6 and 7. We know that agent 8 will not converge to its desired limiting belief because [0.5; 0.5] cannot be obtained by any convex combination of [0.2; 0.8] and [0.3; 0.7] (the limiting beliefs of its neighbors; see (103)).
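This final impossibility claim can be double-checked in a few lines: the first component of any convex combination of [0.2; 0.8] and [0.3; 0.7] lies in the interval [0.2, 0.3] and hence can never reach 0.5. A direct grid scan (a sketch, not part of the paper's code):

```python
import numpy as np

# Limiting beliefs of agent 8's two neighbors, and its desired belief.
b1, b2 = np.array([0.2, 0.8]), np.array([0.3, 0.7])
target = np.array([0.5, 0.5])

# Convex combinations a*b1 + (1-a)*b2 over a grid of a in [0, 1]:
alphas = np.linspace(0.0, 1.0, 101)
combos = np.outer(alphas, b1) + np.outer(1.0 - alphas, b2)

# Smallest achievable deviation from the target over all combinations;
# it stays bounded away from zero, confirming unattainability.
gap = np.abs(combos - target).max(axis=1).min()
```

The gap evaluates to about 0.2, i.e., no choice of combination weights brings agent 8 closer than that to its desired belief, consistent with (103).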