Maximizing Social Welfare Subject to Network Externalities: A Unifying Submodular Optimization Approach

We consider the problem of allocating multiple indivisible items to a set of networked agents to maximize the social welfare subject to network effects (externalities). Here, the social welfare is given by the sum of agents' utilities, and externalities capture the effect that one user of an item has on the item's value to others. We provide a general formulation that captures some of the existing resource allocation models as special cases and analyze it under various settings of positive/negative and convex/concave externalities. We then show that the maximum social welfare (MSW) problem exhibits diminishing or increasing marginal return properties, hence making a connection to submodular/supermodular optimization. That allows us to devise polynomial-time approximation algorithms using the Lovász and multilinear extensions of the objective functions. More specifically, we first show that for negative concave externalities, there is an <inline-formula><tex-math notation="LaTeX">$e$</tex-math></inline-formula>-approximation algorithm for MSW. We then show that for convex polynomial externalities of degree <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula> with positive coefficients, a randomized rounding technique based on the Lovász extension achieves a <inline-formula><tex-math notation="LaTeX">$d$</tex-math></inline-formula>-approximation for MSW. Moreover, for general positive convex externalities, we provide another randomized <inline-formula><tex-math notation="LaTeX">$\gamma ^{-1}$</tex-math></inline-formula>-approximation algorithm based on the contention resolution scheme, where <inline-formula><tex-math notation="LaTeX">$\gamma$</tex-math></inline-formula> captures the curvature of the externality functions. 
Finally, we consider MSW with positive concave externalities and provide approximation algorithms based on concave relaxation and multilinear extension of the objective function that achieve certain desirable performance guarantees. Our principled approach offers a simple and unifying framework for multi-item resource allocation to maximize the social welfare subject to network externalities.


I. INTRODUCTION
An externality (also called a network effect) is the effect that one user of a good or service has on the product's value to other people. Externalities exist in many network systems, such as social, economic, and cyber-physical networks, and can substantially affect resource allocation strategies and outcomes. In fact, due to the rapid proliferation of online social networks such as Facebook, Twitter, and LinkedIn, the magnitude of such network effects has risen to an entirely new level [1]. Here are just a few examples.
Allocation of Networked Goods: Many goods have higher values when used in conjunction with others [2]. For instance, people often derive higher utility when using the same product, such as cellphones (Figure 1). One reason is that companies often provide extra benefits for those who adopt their products. Another reason is that users who buy the same product can share many benefits, such as installing similar apps or sending free messages. Such products are often referred to as networked goods and are said to exhibit positive network externalities. Since each individual wants to hold one product and has different preferences over different products, a natural objective from a managerial perspective is to assign one product to each individual so as to maximize social welfare subject to network externalities. In other words, we want to maximize social welfare by partitioning the individuals into different groups, where the members of each group are assigned the same product.
Cyber-Physical Network Security: An essential task in cyber-physical security is that of providing a resource allocation mechanism for securing the operation of a set of networked agents (e.g., servers, computers, or data centers) despite external malicious attacks [3]. One way of doing that is to allocate a security resource to each agent (e.g., by installing one type of antivirus software on each server). Moreover, by extending the security resources to include a "non-secure" resource, we may assume that an agent who is not protected uses the non-secure resource. Since the agents are interconnected, the compromise of one agent puts its neighbors at higher risk, and such a failure can cascade over the entire network. As a result, deciding which security resource (including the non-secure resource) is assigned to an agent will indirectly affect all the others. Therefore, an efficient allocation of security resources among agents who experience network externalities is a major challenge in network security.
Distributed Congestion Networks: There are many instances of networked systems, such as transportation [4] or data-placement networks [5], [6], in which the cost to agents increases as more agents use the same resource. For instance, in data-placement networks such as web caches or peer-to-peer networks, an important goal is to store at each node (agent) of a network one copy of some file (resource) to minimize the sum of the agents' delay costs to access all files [7]. As more agents store the same file, the data distribution across the network becomes less balanced, hence increasing the delay cost for obtaining some files [6], [7]. Similarly, in transportation networks, as more drivers (agents) use the same path (resource), the traffic congestion on that path increases, hence increasing the travel time and energy consumption for the drivers (Figure 1). Therefore, a natural goal is to assign each driver to one path to minimize the overall congestion cost in the network [4], [8]. Such network effects are often referred to as negative externalities and have been studied under the general framework of congestion games in both centralized and game-theoretic settings [4], [7]-[11].
Motivated by the above and many other similar examples, our objective in this paper is to study allocation problems when agents exhibit network externalities. While this problem has received significant attention in the past literature [2], [12]-[14], those results mainly focus on allocating and pricing copies of a single item; other than a handful of results [7], [15], [16], the problem of maximizing the social welfare by allocating multiple items subject to network externalities has not been well studied before. Therefore, we consider the more realistic situation with multiple competing items in which the agents in the network are demand-constrained. Moreover, we consider both positive and negative externalities with linear, convex, and concave functions. Such a comprehensive study allows us to capture more complex situations, such as traffic routing, where the change in an agent's cost depends nonlinearly (e.g., via a polynomial function [4]) on the number of agents that use the same route.

A. Related Work
There are many papers that consider resource allocation under various network externality models. For example, negative externalities have been studied in routing [4], [8], facility location [7], [17], welfare maximization in congestion games [9], and monopoly pricing over a social network [14]. On the other hand, positive externalities have been addressed in the context of mechanism design and optimal auctions [2], [16], congestion games with positive externalities [9], [15], and pricing networked goods [12], [18]. There are also some results that consider unrestricted externalities, where a mixture of both positive and negative externalities may exist in the network [9], [19]. However, those results are often for simplified anonymous models in which the agents do not care about the identity of the others who share the same resource with them. One reason is that for unrestricted and non-anonymous externalities, maximizing the social welfare with n agents is $n^{1-\epsilon}$-inapproximable for any $\epsilon > 0$ [9], [15]. Therefore, in this work, we consider maximizing social welfare with non-anonymous agents but with either positive or negative externalities.
Optimal resource allocation subject to network effects is typically NP-hard and even hard to approximate [2], [7], [9], [20]. Therefore, a large body of past literature has been devoted to devising polynomial-time approximation algorithms with good performance guarantees. Maximizing social welfare subject to network externalities can often be cast as a special case of the more general combinatorial welfare maximization problem [21]. However, combinatorial welfare maximization with general valuation functions is hard to approximate to within a factor better than $\sqrt{n}$, where n is the number of items [22]. Therefore, to obtain improved approximation algorithms for the special case of welfare maximization with network externalities, one must rely on more tailored algorithms that take into account the special structure of the agents' utility functions.
Another closely related topic in resource allocation under network effects is submodular optimization [23], [24]. The reason is that the utility functions of networked agents often exhibit a diminishing return property as more agents adopt the same product. That property makes a variety of submodular optimization techniques quite amenable to designing improved approximation algorithms. While this connection has been studied in the past literature for the special case of a single item [2], it has not been leveraged for the more complex case of multiple items. Unlike earlier literature [15], [16], [20], our first contribution is to show that multi-item welfare maximization under network externalities can be formulated as a special case of the minimum submodular cost allocation (MSCA) problem [25], [26]. In MSCA, we are given a finite ground set V and k nonnegative submodular set functions $f_i, i=1,\ldots,k$, and the goal is to partition V into k (possibly empty) sets $S_1,\ldots,S_k$ such that the sum $\sum_{i=1}^k f_i(S_i)$ is minimized. The authors of [25, Theorem 2] used the Lovász extension and the Kleinberg-Tardos (KT) rounding scheme of [27] to develop an $O(\log(|V|))$-approximation algorithm for MSCA with monotone submodular cost functions. In general, MSCA is inapproximable within any multiplicative factor even in very restricted settings [28], and for monotone submodular cost functions, the poly-logarithmic approximation factor is the best one can hope for (as nearly matching logarithmic lower bounds are known [28]). Therefore, instead of naively adapting this general framework to our problem setting, which could only deliver a poly-logarithmic approximation factor, as our second contribution we exploit the special structure of multi-item welfare maximization to obtain constant-factor approximation algorithms using a refined analysis of the KT randomized rounding. We should mention that there could be alternative reductions between multi-item welfare maximization and special cases of MSCA, such as submodular multiway partition [26]. However, we believe that our concise reduction is very natural and requires solving only a small-size concave program, which can potentially be applied to more general problems in the above category.
A further generalization of MSCA has been studied in the past literature under the framework of multi-agent submodular optimization (MASO) [29], in which, given submodular cost functions $f_i, i=1,\ldots,k$, the goal is to solve $\min \sum_{i=1}^k f_i(S_i)$ subject to the constraint that the disjoint union of $S_i, i=1,\ldots,k$, must belong to a given family F of feasible sets. When $F=\{V\}$, where V is the ground set, MASO reduces to MSCA, and thus all the inapproximability results for MSCA also apply to MASO. Finally, an extension of MASO has been studied under multivariate submodular optimization (MVSO) [30], in which the objective function has the more general form $f(S_1,\ldots,S_k)$, where f captures some notion of submodularity across its arguments. Instead of using these general frameworks naively as a black box, we will leverage the special structure of the agents' utility functions and new ideas from submodular optimization to devise improved approximation algorithms for maximizing the social welfare subject to network externalities.

B. Contributions and Organization
We first provide a general model for the maximum social welfare problem with multiple items subject to network externalities and show that the proposed model subsumes some of the existing ones as special cases. We then show that the proposed model can be formulated as a special case of multi-agent submodular optimization. Leveraging this connection and the special structure of the agents' utility functions, we devise unified approximation algorithms for the multi-item maximum social welfare problem using continuous extensions of the objective functions and a refined analysis of various rounding techniques, such as KT randomized rounding and the fair contention resolution scheme. While some of these rounding algorithms were developed for applications such as metric labeling, our work is the first to show that variants of these techniques can be used effectively to analyze the multi-item maximum social welfare problem subject to network externalities. Our principled approach not only recovers or improves the state-of-the-art approximation guarantees but also can be used for devising approximation algorithms with potentially more complex constraints.
The paper is organized as follows. In Section II, we formally introduce the multi-item maximum social welfare problem subject to network externalities. In Section III, we provide some preliminary results from submodular optimization for later use. In Section IV, we consider the problem of maximum social welfare under negative concave externalities and provide a constant-factor approximation algorithm for that problem. In Section V, we consider positive polynomial externalities as well as more general positive convex externality functions and devise improved approximation algorithms in terms of the degree of the polynomials and the curvature of the externality functions. Finally, we extend our results to devise approximation algorithms for positive concave externality functions in Section VI. We conclude the paper by identifying some future research directions in Section VII.

Fig. 2. An instance of the MSW with n = 6 agents and m = 2 items: the blue item (i = 1) and the red item (i = 2). Each layer represents the directed influence graph between the agents for that specific item. The influence weights are captured by $a^1_{jk}, a^2_{jk}, \forall j, k$. If there is no edge between two agents j and k in an item layer i, it means that $a^i_{jk}=0$. Note that each agent can adopt at most one item. In the figure, each of agents j and k is allocated the red item.

C. Notations
We adopt the following notation throughout the paper. For a positive integer $n\in\mathbb{Z}_+$, we set $[n]:=\{1,2,\ldots,n\}$. We use bold symbols for vectors and matrices. For a matrix $x=(x_{ji})$, we use $x_j$ to refer to its jth row and $x_i$ to refer to its ith column. Given a vector v, we denote its transpose by $v'$. We let $\mathbf{1}$ and $\mathbf{0}$ be column vectors of all ones and all zeros, respectively.

II. PROBLEM FORMULATION
Consider a set $[n]=\{1,\ldots,n\}$ of agents and a set $[m]=\{1,\ldots,m\}$ of distinct indivisible items (resources). There are unlimited copies of each item $i\in[m]$; however, each agent can receive at most one item. For any ordered pair of agents (j, k) and any item i, there is a weight $a^i_{jk}\in\mathbb{R}$ indicating the amount by which the utility of agent j is influenced by agent k, given that both agents j and k receive the same item i. In particular, for j = k, the parameter $a^i_{jj}\ge 0$ captures the intrinsic valuation of item i by agent j. If $a^i_{jk}\ge 0, \forall i,j,k$ with $j\neq k$, we say that the agents experience positive externalities. Otherwise, if $a^i_{jk}\le 0, \forall i,j,k$ with $j\neq k$, the agents experience negative externalities. We refer to Figure 2 for an illustration of the network externality weights. Let $S_i$ denote the set of agents that receive item i in a given allocation. For any $j\in S_i$, the utility that agent j derives from such an allocation is given by $f_{ij}\big(\sum_{k\in S_i} a^i_{jk}\big)$, where $f_{ij}:\mathbb{R}\to\mathbb{R}$ is an externality function with $f_{ij}(0)=0$. Depending on whether the functions $f_{ij}$ are linear, convex, or concave, we will refer to them as linear externalities, convex externalities, or concave externalities. In the maximum social welfare (MSW) problem, the goal is to assign at most one item to each agent in order to maximize the social welfare.
In other words, we want to find disjoint subsets $S_1,\ldots,S_m$ of agents such that

$$\max_{S_1,\ldots,S_m}\ \sum_{i=1}^m \sum_{j\in S_i} f_{ij}\Big(\sum_{k\in S_i} a^i_{jk}\Big). \qquad (1)$$

We note that $[n]\setminus \cup_{i=1}^m S_i$ is the set of agents that do not receive any item. Such agents are assumed to derive zero utility and hence contribute zero to the objective function (1). Now let us define binary variables $x_{ji}\in\{0,1\}$, where $x_{ji}=1$ if and only if item i is assigned to agent j. Using the fact that $f_{ij}(0)=0\ \forall i,j$, the MSW (1) can be formulated as the following integer program (IP):

$$\max\ \sum_{i=1}^m\sum_{j=1}^n x_{ji}\, f_{ij}\Big(\sum_{k=1}^n a^i_{jk}\, x_{ki}\Big)\quad \text{s.t.}\quad \sum_{i=1}^m x_{ji}\le 1\ \forall j\in[n],\ \ x_{ji}\in\{0,1\}. \qquad (2)$$

In particular, the IP (2) can be written in the compact form

$$\max\ \sum_{i=1}^m f_i(x_i)\quad \text{s.t.}\quad \sum_{i=1}^m x_{ji}\le 1\ \forall j\in[n],\ \ x_i\in\{0,1\}^n, \qquad (3)$$

where for any $i\in[m]$, we define $x_i$ to be the binary column vector $x_i=(x_{1i},\ldots,x_{ni})'$, and $f_i:\{0,1\}^n\to\mathbb{R}$ is given by

$$f_i(x_i)=\sum_{j=1}^n x_{ji}\, f_{ij}\Big(\sum_{k=1}^n a^i_{jk}\, x_{ki}\Big). \qquad (4)$$

We note that the objective function in IP (3) is separable across the variables $x_1,\ldots,x_m$. Example 1: For the special case of linear functions $f_{ij}(y)=y\ \forall i,j$, the objective function in (1) becomes $\sum_i \sum_{j,k\in S_i} a^i_{jk}$, hence recovering the optimization problem studied in [15]. We refer to such externality functions as linear externalities.
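To make objective (1) concrete, the following minimal Python sketch evaluates the social welfare of a given allocation; the instance (weights, identity externality functions, and the allocation itself) is made up purely for illustration.

```python
def social_welfare(S, a, f):
    """Social welfare of an allocation, as in objective (1).

    S : list of m disjoint sets; S[i] is the set of agents assigned item i.
    a : a[i][j][k] is the externality weight a^i_{jk}; a[i][j][j] is the
        intrinsic valuation of item i by agent j.
    f : f[i][j] is the externality function f_ij with f_ij(0) = 0.
    """
    total = 0.0
    for i, S_i in enumerate(S):
        for j in S_i:
            # Agent j's utility: f_ij applied to the total influence of
            # everyone sharing item i with j (including j itself).
            total += f[i][j](sum(a[i][j][k] for k in S_i))
    return total

# Toy instance with n = 3 agents, m = 2 items, and linear externalities
# (Example 1): intrinsic value 1, pairwise influence 0.5.
n, m = 3, 2
a = [[[1.0 if j == k else 0.5 for k in range(n)] for j in range(n)]
     for _ in range(m)]
f = [[(lambda y: y) for _ in range(n)] for _ in range(m)]
S = [{0, 1}, {2}]  # agents 0, 1 get item 0; agent 2 gets item 1
print(social_welfare(S, a, f))  # 1.5 + 1.5 + 1.0 = 4.0
```

Note that agents sharing an item raise (or, with negative weights, lower) each other's utilities, which is exactly the coupling that makes (1) nontrivial.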
Example 2: Let $G=([n], E)$ be a fixed directed graph among the agents, and denote the set of in-neighbors of agent j by $N_j$. In the special case when each agent treats all of its in-neighbors equally, regardless of what item they use (i.e., for every item i we have $a^i_{jk}=1$ if $k\in N_j$ and $a^i_{jk}=0$ otherwise), the objective function in (1) becomes $\sum_i\sum_{j\in S_i} f_{ij}\big(|S_i\cap N_j|\big)$. Therefore, we recover the maximum social welfare problem studied in [16]. For this special case, it was shown in [16, Theorem 3.9] that when the externality functions are convex and bounded above by a polynomial of degree d, one can find a $2^{O(d)}$-approximation for the optimum social welfare allocation. In this work, we will improve this result for the more general setting of (1).

III. PRELIMINARY RESULTS
This section provides some definitions and preliminary results, which will be used later to establish our main results.We start with the following definition.
Definition 1: Given a finite ground set N, a set function $f:2^N\to\mathbb{R}$ is called submodular if and only if $f(A)+f(B)\ge f(A\cup B)+f(A\cap B)$ for all $A, B\subseteq N$. Equivalently, f is submodular if for any two nested subsets $A\subseteq B$ and any $i\notin B$, we have $f(A\cup\{i\})-f(A)\ \ge\ f(B\cup\{i\})-f(B)$.
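The two characterizations in Definition 1 can be checked mechanically on small ground sets. The sketch below (a brute-force illustration, not part of the paper's algorithms) verifies both conditions for a coverage function, a standard example of a submodular function:

```python
from itertools import chain, combinations

def powerset(N):
    """All subsets of N, as tuples."""
    return chain.from_iterable(combinations(N, r) for r in range(len(N) + 1))

def is_submodular_lattice(f, N):
    """Check f(A) + f(B) >= f(A|B) + f(A&B) for all A, B."""
    subsets = [frozenset(s) for s in powerset(N)]
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-12
               for A in subsets for B in subsets)

def is_submodular_marginal(f, N):
    """Check diminishing marginal returns: for A subset of B and i not in B,
    f(A + i) - f(A) >= f(B + i) - f(B)."""
    subsets = [frozenset(s) for s in powerset(N)]
    for A in subsets:
        for B in subsets:
            if A <= B:
                for i in N - B:
                    if f(A | {i}) - f(A) < f(B | {i}) - f(B) - 1e-12:
                        return False
    return True

# Coverage function: f(S) = number of universe elements covered by S.
ground = frozenset(range(4))
cover = {0: {1, 2}, 1: {2, 3}, 2: {3}, 3: {1}}
f_cov = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(is_submodular_lattice(f_cov, ground),
      is_submodular_marginal(f_cov, ground))  # True True
```

Both checks agree, reflecting the equivalence stated in Definition 1.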

A. Lovász Extension
Let N be a ground set of cardinality n. Each real-valued set function on N corresponds to a function $f:\{0,1\}^n\to\mathbb{R}$ over the vertices of the hypercube $\{0,1\}^n$, where each subset is represented by its binary characteristic vector. Therefore, by abuse of notation, we use f(S) and $f(\chi_S)$ interchangeably, where $\chi_S\in\{0,1\}^n$ is the characteristic vector of the set S. The Lovász extension of f at a point $x\in[0,1]^n$, denoted by $f^L(x)$, is given by $f^L(x)=\mathbb{E}_\theta\big[f(x^\theta)\big]$, where $\theta\in[0,1]$ is a uniform random variable, and $x^\theta$ for a given vector $x\in[0,1]^n$ is defined as $x^\theta_i=1$ if $x_i\ge\theta$, and $x^\theta_i=0$ otherwise. In other words, $x^\theta$ is a random binary vector obtained by rounding all the coordinates of x that are above θ to 1, and the remaining ones to 0. In particular, $f^L(x)$ is equal to the expected value of f at the rounded solution $x^\theta$, where the expectation is with respect to the randomness posed by θ. It is known that the Lovász extension $f^L$ is a convex function of x if and only if the corresponding set function f is submodular [31]. This property makes the Lovász extension a suitable continuous extension for submodular minimization.
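Since $x^\theta$ takes only the n+1 nested "threshold" sets as values, the expectation over θ can be computed exactly by sorting the coordinates. The following sketch does exactly that (helper names are ours, not the paper's):

```python
def lovasz_extension(f, x):
    """Exact Lovász extension f^L(x) = E_theta[f(x^theta)], theta ~ U[0,1].

    Sorting the coordinates in decreasing order, x^theta equals the top-k
    threshold set while theta lies in (x_(k+1), x_(k)], so f^L(x) is a
    weighted sum of f over the nested threshold sets (gap = probability).
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    # theta above every coordinate yields the empty set.
    total = (1.0 - x[order[0]]) * f(frozenset()) if n else f(frozenset())
    S = set()
    for k, i in enumerate(order):
        S.add(i)
        nxt = x[order[k + 1]] if k + 1 < n else 0.0
        total += (x[i] - nxt) * f(frozenset(S))
    return total

# For a modular function f(S) = sum of weights, f^L is the linear map w'x.
w = [2.0, 3.0, 5.0]
f_mod = lambda S: sum(w[i] for i in S)
x = [0.5, 0.2, 0.9]
print(lovasz_extension(f_mod, x))  # 2*0.5 + 3*0.2 + 5*0.9 = 6.1
```

The modular case gives a quick correctness check, since the extension must reduce to a linear function there.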

B. Multilinear Extension
As mentioned earlier, the Lovász extension provides a convex continuous extension of a submodular function, which is not very useful for maximizing a submodular function. For the maximization problem, one can instead consider another continuous extension known as the multilinear extension. The multilinear extension of a set function $f:2^N\to\mathbb{R}$ at a given vector $x\in[0,1]^n$, denoted by $f^M(x)$, is given by the expected value of f at a random set R(x) that is sampled from the ground set N by including each element i in R(x) independently with probability $x_i$, i.e.,

$$f^M(x)=\mathbb{E}\big[f(R(x))\big]=\sum_{S\subseteq N} f(S)\prod_{i\in S} x_i\prod_{i\notin S}(1-x_i).$$

One can show that the Lovász extension is always a lower bound for the multilinear extension, i.e., $f^L(x)\le f^M(x), \forall x\in[0,1]^n$. Moreover, at any binary vector $x\in\{0,1\}^n$, we have $f^L(x)=f^M(x)=f(x)$. In general, the multilinear extension of a submodular function is neither convex nor concave. However, it is known that there is a polynomial-time continuous greedy algorithm that can approximately maximize the multilinear extension of a nonnegative submodular function subject to a certain class of constraints. That result is stated in the following lemma.
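For tiny ground sets, $f^M$ can be evaluated exactly by summing over all subsets, which is a handy sanity check even though real algorithms estimate it by sampling. A minimal sketch:

```python
from itertools import chain, combinations

def multilinear_extension(f, x):
    """Exact f^M(x) = E[f(R(x))], where R(x) contains each element i
    independently with probability x[i]. Exponential-time enumeration,
    intended only for very small ground sets."""
    n = len(x)
    total = 0.0
    for T in chain.from_iterable(combinations(range(n), r)
                                 for r in range(n + 1)):
        T = frozenset(T)
        p = 1.0
        for i in range(n):
            p *= x[i] if i in T else 1.0 - x[i]
        total += p * f(T)
    return total

# For a modular function, f^M is again the linear map w'x, matching the
# Lovász extension, consistent with the bound f^L(x) <= f^M(x).
w = [2.0, 3.0, 5.0]
f_mod = lambda S: sum(w[i] for i in S)
x = [0.5, 0.2, 0.9]
print(multilinear_extension(f_mod, x))  # = 6.1 up to floating point
```

At binary vectors the enumeration collapses to a single term, recovering $f^M(\chi_S)=f(S)$ as stated above.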
Lemma 1: [32, Theorems I.1 & I.2] For any nonnegative submodular function $f:2^N\to\mathbb{R}_+$ and any down-monotone solvable polytope $P\subseteq[0,1]^n$, there is a polynomial-time continuous greedy algorithm that finds a point $x^*\in P$ such that $f^M(x^*)\ge\frac{1}{e} f(OPT)$, where OPT is the optimal integral solution to the maximization problem $\max_{x\in P\cap\{0,1\}^n} f^M(x)$. If, in addition, the submodular function f is monotone, the approximation guarantee improves to $f^M(x^*)\ge(1-\frac{1}{e}) f(OPT)$. According to Lemma 1, the multilinear extension provides a suitable relaxation for devising an approximation algorithm for submodular maximization. The reason is that one can first approximately maximize the multilinear extension in polynomial time and then round the solution to obtain an approximate integral feasible solution.

C. Fair Contention Resolution
Here, we provide some background on a general randomized rounding scheme known as fair contention resolution that allows one to round a fractional solution to an integral one while preserving specific properties.Intuitively, given a fractional solution to a resource allocation problem, one ideally wants to round the solution to an integral allocation such that each item is allocated to only one agent.However, a natural randomized rounding often does not achieve that property as multiple agents may receive the same item.To resolve that issue, one can use a "contention resolution scheme," which determines which agent should receive the item while losing at most a constant factor in the objective value.
More precisely, suppose n agents compete for an item independently with probabilities $p_1, p_2,\ldots,p_n$. Denote by A the random set of agents who request the item in the first phase, i.e., $\Pr(i\in A)=p_i$ independently for each i. In the second phase, if $|A|\le 1$, we do not make any change to the allocation. Otherwise, we allocate the item to each agent $i\in A$ who requested the item in the first phase with probability

$$r_{i,A}=\frac{1}{\sum_{j=1}^n p_j}\Big(\sum_{j\in A\setminus\{i\}}\frac{p_j}{|A|-1}+\sum_{j\notin A}\frac{p_j}{|A|}\Big).$$

Note that for any $A\neq\emptyset$, we have $\sum_{i\in A} r_{i,A}=1$, so that after the second phase, the item is allocated to exactly one agent with probability 1. The importance of such a fair contention resolution scheme is that if the item was requested in the first phase by an agent, then after the second phase, that agent still receives the item with probability at least $1-\frac{1}{e}$. More precisely, it can be shown that [33]: Lemma 2: [33, Lemma 1.5] Conditioned on agent k requesting the item in the first phase, she obtains it after the second phase with probability exactly $\frac{1}{\sum_{j=1}^n p_j}\big(1-\prod_{j=1}^n(1-p_j)\big)$.
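The two-phase scheme is easy to simulate. The sketch below implements it under the allocation rule reconstructed above (the exact form of $r_{i,A}$ should be checked against [33]; the function and variable names are ours):

```python
import random

def fair_contention_resolution(p, rng):
    """One round of the two-phase scheme for a single item.

    Phase 1: each agent i requests the item independently w.p. p[i].
    Phase 2: if several requested, agent i in A wins with probability
    r_{i,A} = (sum_{j in A, j != i} p_j/(|A|-1)
               + sum_{j not in A} p_j/|A|) / sum_j p_j,
    which sums to 1 over A, so exactly one agent wins.
    Returns the winning agent's index, or None if nobody requested.
    """
    n = len(p)
    A = [i for i in range(n) if rng.random() < p[i]]
    if len(A) <= 1:
        return A[0] if A else None
    total = sum(p)
    weights = []
    for i in A:
        inside = sum(p[j] for j in A if j != i) / (len(A) - 1)
        outside = sum(p[j] for j in range(n) if j not in A) / len(A)
        weights.append((inside + outside) / total)
    return rng.choices(A, weights=weights, k=1)[0]

rng = random.Random(0)
winner = fair_contention_resolution([0.6, 0.5, 0.7], rng)
```

Repeating this simulation many times and conditioning on a fixed agent requesting would empirically reproduce the $1-\frac{1}{e}$ guarantee of Lemma 2.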

IV. MSW WITH NEGATIVE CONCAVE EXTERNALITIES
In this section, we consider maximizing the social welfare with negative concave externalities and provide a constant factor approximation algorithm by reducing that problem to submodular maximization subject to a matroid constraint.
Lemma 3: Given nondecreasing concave externality functions $f_{ij}:\mathbb{R}\to\mathbb{R}$, intrinsic valuations $a^i_{jj}\ge 0, \forall i,j$, and negative externality weights $a^i_{jk}\le 0, \forall i, j\neq k$, the objective function in (1) is a submodular set function.
Proof: Let $f_i(S_i)=\sum_{j\in S_i} f_{ij}\big(\sum_{k\in S_i} a^i_{jk}\big)$, and note that the objective function in (1) can be written in the separable form $\sum_{i=1}^m f_i(S_i)$. Thus, it is enough to show that each $f_i$ is a submodular set function. For any $A\subseteq B$, $\ell\notin B$, we can write

$$\begin{aligned} f_i(B\cup\{\ell\})-f_i(B) &= f_{i\ell}\Big(\sum_{k\in B\cup\{\ell\}} a^i_{\ell k}\Big)+\sum_{j\in B\setminus A}\Big[f_{ij}\Big(\sum_{k\in B\cup\{\ell\}} a^i_{jk}\Big)-f_{ij}\Big(\sum_{k\in B} a^i_{jk}\Big)\Big]+\sum_{j\in A}\Big[f_{ij}\Big(\sum_{k\in B\cup\{\ell\}} a^i_{jk}\Big)-f_{ij}\Big(\sum_{k\in B} a^i_{jk}\Big)\Big]\\ &\le f_{i\ell}\Big(\sum_{k\in A\cup\{\ell\}} a^i_{\ell k}\Big)+\sum_{j\in A}\Big[f_{ij}\Big(\sum_{k\in B\cup\{\ell\}} a^i_{jk}\Big)-f_{ij}\Big(\sum_{k\in B} a^i_{jk}\Big)\Big]\\ &\le f_{i\ell}\Big(\sum_{k\in A\cup\{\ell\}} a^i_{\ell k}\Big)+\sum_{j\in A}\Big[f_{ij}\Big(\sum_{k\in A\cup\{\ell\}} a^i_{jk}\Big)-f_{ij}\Big(\sum_{k\in A} a^i_{jk}\Big)\Big]\ =\ f_i(A\cup\{\ell\})-f_i(A). \end{aligned} \qquad (5)$$

The first inequality holds by the monotonicity of the functions $f_{ij}$ and by $A\subseteq B$, $\ell\notin B$ (note that since $a^i_{jk}\le 0$ for $j\neq k$, each of the summands in the first summation is nonpositive). The second inequality in (5) follows from the concavity of the functions $f_{ij}$. More precisely, given any $j\in A$, let $d=\sum_{k\in B\setminus A} a^i_{jk}$, $p=\sum_{k\in A\cup\{\ell\}} a^i_{jk}$, and $q=\sum_{k\in A} a^i_{jk}$, where we note that $d\le 0$ and $p\le q$. By concavity of $f_{ij}$, we have $f_{ij}(p+d)-f_{ij}(q+d)\le f_{ij}(p)-f_{ij}(q)$, which is exactly the second inequality in (5).
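Lemma 3 can be sanity-checked numerically on a random instance. The sketch below builds an instance with nonnegative intrinsic valuations, negative off-diagonal weights, and the (illustrative) concave nondecreasing externality function $1-e^{-y}$, then exhaustively tests the diminishing-returns condition:

```python
import math
import random
from itertools import chain, combinations

rng = random.Random(0)
n = 5
# Random instance matching Lemma 3's hypotheses: nonnegative diagonal
# (intrinsic valuations), nonpositive off-diagonal externality weights.
a = [[(rng.random() if j == k else -rng.random()) for k in range(n)]
     for j in range(n)]
# Concave, nondecreasing, f(0) = 0, defined on all of R.
f_ext = lambda y: 1.0 - math.exp(-y)

def f_i(S):
    """Item-i welfare f_i(S) = sum_{j in S} f(sum_{k in S} a_{jk})."""
    return sum(f_ext(sum(a[j][k] for k in S)) for j in S)

def subsets(N):
    return [frozenset(s) for s in
            chain.from_iterable(combinations(N, r) for r in range(len(N) + 1))]

ok = True
for A in subsets(range(n)):
    for B in subsets(range(n)):
        if A <= B:
            for l in set(range(n)) - B:
                if f_i(A | {l}) - f_i(A) < f_i(B | {l}) - f_i(B) - 1e-9:
                    ok = False
print(ok)  # Lemma 3 predicts this prints True
```

Replacing the concave function by a convex one (or flipping the signs of the weights) makes the check fail in general, which mirrors the role of each hypothesis in the proof.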
Let us now consider the MSW with negative concave externalities. To ensure that the maximization problem is well-defined from the lens of approximation algorithms, we assume that for any feasible assignment of items to the agents, the objective value in (1) is nonnegative. Otherwise, the maximization problem may have a negative optimal value, hence hindering the existence of an approximation algorithm. In fact, if a feasible allocation $(S_1,\ldots,S_m)$ returns a negative objective value, then by unassigning all the items, one can obtain the trivially higher objective value of 0. Therefore, without loss of generality, we may restrict our attention to allocation profiles for which the objective value (1) is nonnegative.
Theorem 1: There is a randomized e-approximation algorithm for the MSW (1) with negative concave externalities.
Proof: Let us consider the IP formulation (3) for the MSW and note that, by Lemma 3, the objective function $f(x)=\sum_i f_i(x_i)$ is a nonnegative submodular function. Here, x can be viewed as an $n\times m$ matrix whose ith column is given by $x_i$. Using the separability of f(x), the multilinear relaxation of IP (3) is given by

$$\max\ \sum_{i=1}^m f_i^M(x_i)\quad \text{s.t.}\quad \sum_{i=1}^m x_{ji}\le 1\ \forall j\in[n],\ \ x_i\ge 0\ \forall i\in[m], \qquad (6)$$

where we have relaxed the binary constraints $x_i\in\{0,1\}^n$ to $x_i\ge 0$. The feasible set $P=\{x:\sum_{i=1}^m x_{ji}\le 1\ \forall j,\ x\ge 0\}$ is clearly a down-monotone polytope, as $x\in P$ and $0\le y\le x$ imply $y\in P$. Moreover, P is a solvable polytope, as it contains only m + n linear constraints and a total of mn variables. Therefore, using Lemma 1, one can find, in polynomial time, an approximate solution $x^*$ to (6) such that $f^M(x^*)\ge\frac{1}{e} f(OPT)$, where OPT denotes the optimal integral solution to IP (3).
Next, we can round the approximate solution $x^*$ to an integral one $\bar{x}$ by rounding each row of $x^*$ independently using the natural probability distribution induced by that row. More precisely, for each row j (and independently of the other rows), we pick entry (j, i) with probability $x^*_{ji}$ and round only that entry to 1 while setting the remaining entries of row j to 0. Such a rounding sets at most one entry in each row of the rounded solution to 1 because $\sum_i x^*_{ji}\le 1$. Since the rounding is done independently across the rows, for any column i, the probability that the jth entry is set to 1 is $x^*_{ji}$, independently of the other entries in that column. Therefore, $\bar{x}_i$ represents the characteristic vector of a random set $R(x^*_i)\subseteq[n]$, where $j\in R(x^*_i)$ independently with probability $x^*_{ji}$. Moreover, although the rounded solution $\bar{x}$ is correlated across its columns, because the objective function f(x) is separable across columns, using linearity of expectation and regardless of the rounding scheme, we have $\mathbb{E}[f(\bar{x})]=\sum_{i=1}^m \mathbb{E}[f_i(\bar{x}_i)]$. Thus, by definition of the multilinear extension, we have $\mathbb{E}[f(\bar{x})]=\sum_{i=1}^m f_i^M(x^*_i)=f^M(x^*)\ge\frac{1}{e} f(OPT)$.

Remark 1: The constraints in P define a partition matroid. Subsequently, one can replace the independent rounding scheme in Theorem 1 by the pipage rounding scheme [34, Lemma B.3] and obtain the same performance guarantee. However, due to the special structure of P, such a complex rounding is not necessary, and one can substantially save in running time using the proposed independent rounding.
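The row-wise independent rounding described above is a few lines of code; the sketch below (with an illustrative fractional solution) also lets a row remain all-zero with the leftover probability $1-\sum_i x_{ji}$, matching the possibility that an agent receives no item:

```python
import random

def independent_row_rounding(x, rng):
    """Round a fractional n x m solution with sum_i x[j][i] <= 1 per row.

    Each row j independently selects column i with probability x[j][i],
    and selects no column at all with probability 1 - sum_i x[j][i].
    """
    n, m = len(x), len(x[0])
    rounded = [[0] * m for _ in range(n)]
    for j in range(n):
        u, acc = rng.random(), 0.0
        for i in range(m):
            acc += x[j][i]
            if u < acc:
                rounded[j][i] = 1  # agent j receives item i
                break
    return rounded

rng = random.Random(1)
x = [[0.6, 0.4], [0.3, 0.3], [0.0, 1.0]]
xr = independent_row_rounding(x, rng)
```

Because the selection is done row by row, the marginals within each column are exactly the fractional values, which is all the separable objective needs.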
For the special case of negative weights $a^i_{jk}\le 0, \forall j\neq k$, and linear externality functions $f_{ij}(y)=y, \forall i,j$, the objective function in IP (3) becomes

$$\sum_{i=1}^m x_i' A^i x_i, \qquad (7)$$

where $A^i=(a^i_{jk})$ are $n\times n$ weight matrices with nonnegative diagonal entries (due to intrinsic valuations) and nonpositive off-diagonal entries. Applying Theorem 1 to this special case gives an e-approximation algorithm for the MSW with negative linear externalities, which answers a question posed in [20, Section 7.11]. In particular, if we further assume that the influence weight matrices $A^i, i\in[m]$, are diagonally dominant, i.e., $\sum_{k=1}^n a^i_{jk}\ge 0, \forall i,j$, then the submodular objective function $\sum_{i=1}^m x_i' A^i x_i$ will also be monotone. In that case, using the second part of Lemma 1, one can obtain an improved approximation factor of $1-\frac{1}{e}$.

V. MSW WITH POSITIVE MONOTONE CONVEX EXTERNALITIES
In this section, we consider positive monotone convex externalities and develop polynomial-time approximation algorithms for the maximum social welfare problem.We first state the following lemma that is a counterpart of Lemma 3 to the case of positive convex externalities.
Lemma 4: For positive weights $a^i_{jk}\ge 0$ and nondecreasing convex externality functions $f_{ij}:\mathbb{R}_+\to\mathbb{R}_+$, the objective function in (1) is a nondecreasing and nonnegative supermodular set function.
Proof: As in Lemma 3, if we define $f_i(S_i)=\sum_{j\in S_i} f_{ij}\big(\sum_{k\in S_i} a^i_{jk}\big)$, it is enough to show that each $f_i$ is a monotone supermodular set function. The monotonicity and nonnegativity of $f_i$ immediately follow from the nonnegativity of the weights $a^i_{jk}$ and the monotonicity and nonnegativity of $f_{ij}, \forall j\in[n]$. To show supermodularity of $f_i$, for any $A\subseteq B$, $\ell\notin B$, and similar to Lemma 3, we can write

$$f_i(B\cup\{\ell\})-f_i(B)\ \ge\ f_{i\ell}\Big(\sum_{k\in A\cup\{\ell\}} a^i_{\ell k}\Big)+\sum_{j\in A}\Big[f_{ij}\Big(\sum_{k\in B\cup\{\ell\}} a^i_{jk}\Big)-f_{ij}\Big(\sum_{k\in B} a^i_{jk}\Big)\Big]\ \ge\ f_i(A\cup\{\ell\})-f_i(A),$$

where the first inequality holds by the monotonicity of the functions $f_{ij}$ and by $a^i_{jk}\ge 0$, $A\subseteq B$, and the second inequality follows from the convexity of the functions $f_{ij}$.
Next, let us consider the IP formulation (3) for the MSW, where

$$f_i(x_i)=\sum_{j=1}^n x_{ji}\, f_{ij}\Big(\sum_{k=1}^n a^i_{jk}\, x_{ki}\Big). \qquad (8)$$

As each $f_{ij}$ is a convex and nondecreasing function, using Lemma 4, each $f_i$ is a monotone nonnegative supermodular function. Now let $f^L(x)=\sum_{i=1}^m f_i^L(x_i)$, which equals the Lovász extension of the objective function in (3); since each $f_i$ is supermodular, $f^L$ is also a concave function. Therefore, we obtain the following concave relaxation for IP (3), whose optimal value upper-bounds that of (3):

$$\max\ f^L(x)\quad \text{s.t.}\quad \sum_{i=1}^m x_{ji}\le 1\ \forall j\in[n],\ \ x\ge 0. \qquad (9)$$
A. Positive Polynomial Externalities of Bounded Degree

Here, we consider convex externality functions that can be represented by polynomials of the form $f_{ij}(y)=\sum_{r=1}^{d-1} b^{ij}_r\, y^r$ with nonnegative coefficients $b^{ij}_r\ge 0$. In particular, we show that a slight variant of the randomized rounding algorithm derived from the work of Kleinberg and Tardos (KT) for metric labeling [27] provides a d-approximation for the IP (3) when applied to the optimal solution of the concave program (9). The rounding scheme is summarized in Algorithm 1. The algorithm proceeds in several rounds until all the agents are assigned an item. At each round, the algorithm selects a random item $I\in[m]$ and a random subset of unassigned agents $S^\theta_I\subseteq[n]\setminus S$, and assigns item I to the agents in the set $S^\theta_I$.
Algorithm 1 Iterative KT Rounding Algorithm • Let x be the optimal solution to the concave program (9).
• During the course of the algorithm, let S be the set of allocated agents and S i be the set of agents that are allocated item i. Initially set S = ∅ and S i = ∅, ∀i.
• While $S\neq[n]$: choose an item $I\in[m]$ and a threshold $\theta\in(0,1]$ uniformly at random, let $S^\theta_I=\{j\in[n]\setminus S: x_{jI}\ge\theta\}$, and update $S_I\leftarrow S_I\cup S^\theta_I$ and $S\leftarrow S\cup S^\theta_I$.
• Return $S_1,\ldots,S_m$.
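The iterative rounding above can be sketched in a few lines of Python. This is an illustrative implementation under the reconstructed update step (names and the termination cap are ours); it assumes every agent has some positive coordinate, so the loop terminates with probability 1:

```python
import random

def kt_rounding(x, m, rng, max_rounds=100000):
    """KT-style iterative rounding (a sketch of Algorithm 1).

    Repeatedly draw a uniform item I and threshold theta in (0, 1], and
    assign item I to every still-unassigned agent j with x[j][I] >= theta.
    """
    n = len(x)
    S = set()                          # agents assigned so far
    S_items = [set() for _ in range(m)]  # S_items[i]: agents given item i
    rounds = 0
    while len(S) < n and rounds < max_rounds:
        rounds += 1
        I = rng.randrange(m)
        theta = rng.random() or 1e-12  # theta in (0, 1]: avoid theta = 0
        newly = {j for j in range(n) if j not in S and x[j][I] >= theta}
        S_items[I] |= newly
        S |= newly
    return S_items

rng = random.Random(3)
x = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
S_items = kt_rounding(x, 2, rng)
```

Since agents are removed from contention once assigned, the returned sets are disjoint by construction, matching the demand constraint of at most one item per agent.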
In the following lemma, we show that if the externality functions $f_{ij}$ can be represented (or uniformly approximated) by nonnegative-coefficient polynomials of degree less than d, then the expected utility of the agents assigned during the first round of Algorithm 1 is at least $\frac{1}{d}$ of the expected value that those agents fractionally contribute to the Lovász extension objective function.
Lemma 5: Assume each externality function $f_{ij}$ is a polynomial with nonnegative coefficients of degree less than d. Let $f^L(x)=\sum_{i=1}^m f_i^L(x_i)$, where x is an $n\times m$ feasible solution to (9) whose ith column equals $x_i$. Moreover, let $A=S^\theta_I$ be the random (possibly empty) set of agents that are selected during the first round of Algorithm 1. Then,

$$d\,\mathbb{E}\big[f_I(A)\big]\ \ge\ \mathbb{E}\big[f^L(x)-f^L(x\,|\,\bar{A})\big],$$

where $f^L(x\,|\,\bar{A})$ denotes the value of the Lovász extension $f^L(\cdot)$ when its argument is restricted to the rows of x corresponding to the agents $j\in\bar{A}=[n]\setminus A$.

Proof: First, we note that

$$f_i(x^\theta_i)=\sum_{j=1}^n x^\theta_{ji}\, f_{ij}\Big(\sum_{k=1}^n a^i_{jk}\, x^\theta_{ki}\Big),$$

where we recall that $x^\theta_{ji}=1$ if and only if $j\in S^\theta_i$. Since each $x^\theta_{ji}$ is a binary random variable, we have $(x^\theta_{ji})^r=x^\theta_{ji}$ for every integer $r\ge 1$. As each $f_{ij}$ is a polynomial with nonnegative coefficients of degree less than d, after expanding all the terms in (8), there are nonnegative coefficients $b^i_{j_1,\ldots,j_r}$ such that

$$f_i(x^\theta_i)=\sum_{r<d}\ \sum_{j_1,\ldots,j_r}\ b^i_{j_1,\ldots,j_r}\ \prod_{\ell\in[r]} x^\theta_{j_\ell i}.$$

Note that we may assume $f_{ij}$ does not have any constant term, as it does not affect the MSW optimization. Taking expectations in the above relation and using $\mathbb{E}\big[\prod_{\ell\in[r]} x^\theta_{j_\ell i}\big]=\min_{\ell\in[r]}\{x_{j_\ell i}\}$, we obtain

$$\mathbb{E}\big[f_i(x^\theta_i)\big]=\sum_{r<d}\ \sum_{j_1,\ldots,j_r}\ b^i_{j_1,\ldots,j_r}\ \min_{\ell\in[r]}\{x_{j_\ell i}\}. \qquad (10)$$
For any ℓ ∈ [r], with some abuse of notation, let x_{jℓ} denote the jℓ-th row of the solution x, and define f_L(x_{j1}, . . ., x_{jr}) = Σ_i b^i_{j1,...,jr} min_{ℓ∈[r]} {x_{jℓ,i}} to be the restriction of f_L to the rows x_{jℓ}, ℓ ∈ [r]. Using the above expression together with (11), we note that a tuple of rows x_{j1}, . . ., x_{jr} contributes exactly f_L(x_{j1}, . . ., x_{jr}) to the objective f_L(x) if at least one of the agents jℓ, ℓ ∈ [r], belongs to S^θ_i, and contributes 0 otherwise. Therefore, if A = S^θ_I is the random set obtained during the first round of Algorithm 1, then using linearity of expectation we can lower-bound the expected contribution of A, where the first inequality holds because, by feasibility of the solution x, Σ_i max_{ℓ∈[r]} {x_{jℓ,i}} ≤ Σ_i Σ_{ℓ=1}^r x_{jℓ,i} ≤ r, and the second inequality holds because the terms f_L(x_{j1}, . . ., x_{jr}) are nonnegative. Combining relations (10) and (12) completes the proof.
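The identity driving this analysis — that agents sharing one uniform threshold are all selected with probability equal to the minimum of their fractional values, which is exactly the min-form Lovász term — can be verified numerically. The following sketch (illustrative, not from the paper; coefficient b is taken as 1) compares the two quantities:

```python
import numpy as np

def lovasz_term(xs):
    # Lovász-extension value of a single monomial term: min over the rows' entries
    return min(xs)

def threshold_prob(xs, trials=200_000, seed=0):
    # empirical probability that all coordinates clear one shared uniform threshold
    rng = np.random.default_rng(seed)
    theta = rng.uniform(size=trials)
    xs = np.asarray(xs)
    return float(np.mean(np.all(xs[None, :] >= theta[:, None], axis=1)))
```

For example, with fractional values (0.7, 0.4, 0.9), both quantities are 0.4 (up to sampling error).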
Theorem 2: Assume each externality function f_{ij} is a polynomial with nonnegative coefficients of degree less than d. Then, Algorithm 1 is a d-approximation algorithm for the MSW (1).
Proof: We use induction on the number of agents to show that the expected value of the solution returned by Algorithm 1 is at least (1/d) f_L(x), where x is the optimal solution to (9). Without loss of generality, we may assume that the random set A selected during the first round of Algorithm 1 is nonempty; otherwise, no update occurs, and we can focus on the first iteration in which a nonempty set is selected.
The base case with only n = 1 agent follows trivially from Lemma 5, because nonemptiness of A implies Ā = ∅, and thus f_L(x | Ā) = 0. Now assume that the induction hypothesis holds for any set of at most n − 1 agents. Given an instance with n agents, let A = S^θ_I be the nonempty random set of agents selected during the first round of Algorithm 1. Moreover, let S_1, . . ., S_m be the (random) sets returned by the algorithm when applied to the remaining agents in Ā. As |Ā| ≤ n − 1, using the induction hypothesis on the agents in Ā, we can lower-bound the expected value of the solution restricted to Ā, where the second inequality holds because x | Ā is a feasible solution to the middle maximization. (Recall that x | Ā is the portion of the solution x restricted to the rows j ∈ Ā.) Combining the two bounds, the first inequality uses the superadditivity of f_i due to the supermodular property (that is, f_i(P ∪ Q) ≥ f_i(P) + f_i(Q) for any P ∩ Q = ∅), and the last inequality holds by (13) and Lemma 5.
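The superadditivity step invoked above is easy to check concretely. The sketch below (hypothetical data, not from the paper) evaluates a per-item welfare of the form f_i(S) = Σ_{j∈S} f(Σ_{k∈S} a_{jk}) with a convex, nondecreasing externality f satisfying f(0) = 0, for which superadditivity over disjoint sets holds because enlarging S only increases each agent's argument:

```python
import numpy as np

def item_value(S, A, f):
    # f_i(S) = sum_{j in S} f( sum_{k in S} A[j, k] ) -- per-item welfare
    S = sorted(S)
    return sum(f(A[j, S].sum()) for j in S)

rng = np.random.default_rng(1)
A = rng.uniform(size=(6, 6))   # hypothetical nonnegative influence weights
f = lambda y: y ** 2           # convex, nondecreasing, f(0) = 0
P, Q = {0, 2, 4}, {1, 5}
# superadditivity: item_value(P | Q) >= item_value(P) + item_value(Q)
```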
Corollary 1: For the special case of positive linear externalities f_{ij}(y) = y, ∀i, j, one can take d = 2, in which case Algorithm 1 is a 2-approximation algorithm. Interestingly, derandomizing Algorithm 1 in this special case recovers the iterative greedy algorithm developed in [15, Theorem 4], which first solves a linear-program relaxation of MSW and then rounds the solution using an iterative greedy procedure.
Remark 2: For convex polynomial externalities of degree at most d, the d-approximation guarantee of Theorem 2 is an exponential improvement over the 2^{O(d)}-approximation guarantee given in [16, Theorem 3.9].

B. Monotone Convex Externalities of Bounded Curvature
In this part, we provide an approximation algorithm for the MSW with general monotone and positive convex externalities. Unfortunately, for general convex externalities, the Lovász extension of the objective function does not admit a closed-form structure. For that reason, we develop an approximation algorithm whose performance guarantee depends on the curvature of the externality functions.
Definition 2: Given α ∈ (0, 1), we define the α-curvature of a nonnegative nondecreasing convex function h as γ^h_α = inf_{y>0} h(αy)/h(y).
Remark 3: Using the monotonicity of h, we always have γ^h_α ∈ [0, 1]. In particular, for any monotone convex function satisfying h(αy) ≥ α^k h(y) (e.g., a k-homogeneous h), we have γ^h_α ≥ α^k. It is worth noting that [35] also develops a curvature-dependent greedy approximation algorithm for maximizing a nondecreasing submodular function subject to a matroid constraint. However, the definition of curvature in [35] differs from ours, as it considers the maximum normalized growth rate of the overall objective function f as a new element is added to the solution set. Moreover, here we are dealing with supermodular maximization (equivalently, submodular minimization), which behaves completely differently in terms of approximability and solution methods. In fact, for the case of submodular maximization, Theorem 1 already provides a curvature-independent e-approximation algorithm.
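A small numerical sketch can make the curvature concrete. Since the displayed formula in Definition 2 is garbled in this copy, the code below assumes the reading γ^h_α = inf_{y>0} h(αy)/h(y) (consistent with Remark 3) and estimates it on a grid; for a 2-homogeneous h the ratio is constant and equals α²:

```python
import numpy as np

def alpha_curvature(h, alpha, ys):
    """Grid estimate of the alpha-curvature inf_y h(alpha*y)/h(y)
    (assumed reading of Definition 2)."""
    ys = np.asarray(ys)
    return float(np.min(h(alpha * ys) / h(ys)))

h = lambda y: y ** 2                  # 2-homogeneous: gamma_alpha = alpha**2
ys = np.linspace(0.01, 10.0, 1000)
# alpha_curvature(h, 0.25, ys) is approximately 0.0625
```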
Using Lemma 4, the MSW (1) with monotone convex externality functions can be cast as the supermodular maximization problem (3). Relaxing that problem via the Lovász extension, we obtain the concave program (9), whose optimal solution, denoted by x, can be found in polynomial time. We round the optimal fractional solution x to an integral one x̂ using a two-stage fair contention resolution scheme. It is instructive to view the rounding as a two-stage process applied to the fractional n × m matrix x. In the first stage, the columns are rounded independently, and in the second stage, the rows of the resulting solution are randomly revised to create the final integral solution x̂ satisfying the partition constraints in (3). The rounding algorithm is summarized in Algorithm 2. Using Jensen's inequality, we can lower-bound the expected objective value of x̂, where the inner expectation in the first equality is with respect to θ_{−i} = (θ_{i′}, i′ ≠ i) and the randomness introduced by the contention resolution in the second stage. Let 1{·} denote the indicator function. Then, for any i, j, k, one can lower-bound the joint rounding probabilities; substituting that relation into (14) and using the monotonicity of f_{ij} together with Definition 2, we conclude that the expected value of the rounded solution is at least γ_{1/4} times the optimal value of the Lovász relaxation, which completes the proof.
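The two-stage structure described above can be sketched as follows. This is an illustrative simplification: stage 1 rounds each column independently, and stage 2 uses uniform tie-breaking per row in place of the exact fair contention-resolution rule, whose details (Algorithm 2) are not reproduced in this excerpt:

```python
import numpy as np

def two_stage_round(x, rng=None):
    """Sketch of the two-stage rounding behind Algorithm 2 (simplified)."""
    rng = np.random.default_rng(rng)
    n, m = x.shape
    # stage 1: round each column independently (agent j claims item i w.p. x[j, i])
    claimed = rng.uniform(size=(n, m)) < x
    # stage 2: each agent claimed by several items keeps one, chosen uniformly,
    # so the partition constraint (at most one item per agent) holds
    out = np.zeros((n, m), dtype=bool)
    for j in range(n):
        items = np.flatnonzero(claimed[j])
        if items.size > 0:
            out[j, rng.choice(items)] = True
    return out
```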

C. A Numerical Example
Here, we provide a numerical experiment to verify the performance guarantees of the algorithms developed in this section. We fix the number of items to m = 10 and the externality functions to be linear, f_{ij}(y) = y, ∀i, j. As a result, the objective function for MSW can be written as f(x) = Σ_{i=1}^m x_i^⊤ A_i x_i. We generate 40 different instances as the number of agents increases from n = 10 to n = 50. Given an instance with n agents, we generate the weight matrices A_i ∈ {0, 1}^{n×n} by randomly selecting 10 rows of A_i and, for each selected row, setting one uniformly chosen element to 1 and the remaining elements of that row to 0. The expected objective values of Algorithm 1, Algorithm 2, and the optimal IP (3) are illustrated in Figure 3, where the x-axis corresponds to the instances n = 10, . . ., 50 and the y-axis shows the expected objective value. While in this specific example Algorithm 1 mostly outperforms Algorithm 2, the expected objective values of both algorithms are close to the optimal IP objective value. In particular, for all instances, Algorithm 1 achieves at least 1/2 of the optimal objective value.
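The instance generation described above can be reproduced with a short script; this is an illustrative sketch matching the textual description (names and the seed are our own choices):

```python
import numpy as np

def random_instance(n, m=10, marked_rows=10, seed=0):
    """Generate m weight matrices A_i in {0,1}^{n x n}: in each A_i,
    `marked_rows` randomly chosen rows get a single 1 in a uniform column."""
    rng = np.random.default_rng(seed)
    mats = []
    for _ in range(m):
        A = np.zeros((n, n), dtype=int)
        for j in rng.choice(n, size=marked_rows, replace=False):
            A[j, rng.integers(n)] = 1
        mats.append(A)
    return mats
```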

VI. MSW WITH POSITIVE MONOTONE CONCAVE EXTERNALITIES
In this section, we extend our results to approximate MSW with nondecreasing positive concave externality functions. Unfortunately, for positive concave externalities, Lemmas 3 and 4 do not hold, and the objective function in (3) is no longer supermodular or submodular. For that reason, we cannot directly use the Lovász or multilinear extensions to solve or approximate the continuous relaxation of the MSW. To address this issue, in this section we take two different approaches based on a combination of the ideas developed so far. Each method is suitable for a particular subclass of concave functions, and together they provide a good understanding of how to solve MSW with positive concave externalities.

A. Positive Concave Externalities of Small Curvature
For simplicity and without loss of generality, throughout this section we assume that the influence weights are normalized such that Σ_{k=1}^n a^i_{jk} = 1, ∀i, j. Otherwise, we can rescale and redefine the externality functions as f_{ij}(y) ← f_{ij}((Σ_{k=1}^n a^i_{jk}) y). Note that such a scaling preserves concavity and monotonicity, and for the new externalities we have f_{ij} : [0, 1] → R_+. The following proposition provides a performance guarantee for approximating MSW with positive concave externalities, which is particularly effective for concave externalities of small curvature.
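The normalization step can be sketched directly; the point is that rescaling the weights and absorbing the scale into the externality leaves every evaluated value unchanged (illustrative helper, not from the paper):

```python
def normalize_externality(f, weights):
    """Rescale so the weights sum to one: f'(y) = f(s * y), a'_k = a_k / s,
    where s = sum of the original weights."""
    s = sum(weights)
    return (lambda y: f(s * y)), [w / s for w in weights]
```

Composing the rescaled externality with the rescaled weighted sum reproduces the original value exactly, which is why the assumption is without loss of generality.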
Proposition 4: For nondecreasing concave externalities f_{ij}, let β = sup f_{ij}(E[X]) / E[f_{ij}(X)], where the sup is over all random variables X ∈ [0, 1]. Then, the MSW with positive concave externalities admits a 4β-approximation algorithm.
Proof: Let us consider the IP (2) for the MSW with positive concave externality functions f_{ij}, and note that x_{ji} x_{ki} = min{x_{ji}, x_{ki}} for any two binary variables x_{ji}, x_{ki} ∈ {0, 1}. Substituting this relation into the objective function of IP (2) and relaxing the binary constraints, we obtain a concave relaxation for MSW subject to Σ_i x_{ji} ≤ 1, ∀j, where the concavity of the objective function follows from the concavity of f_{ij} and the concavity of Σ_{k=1}^n a^i_{jk} min{x_{ji}, x_{ki}}. Therefore, one can solve (16) in polynomial time to obtain an optimal fractional solution x. Using this solution as an input to Algorithm 2, we obtain a feasible integral solution x̂ whose expected objective value can be lower-bounded, where the first inequality uses the definition of β, the second inequality uses (15), and the last inequality follows from the concavity of f_{ij} and the fact that f_{ij}(0) = 0.
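The min-for-product substitution can be illustrated with a small evaluator. Since the exact form of IP (2) is not reproduced in this excerpt, the sketch assumes per-agent terms of the form f_{ij}(Σ_k a^i_{jk} min{x_{ji}, x_{ki}}); at binary points it coincides with the product form, which is the key fact used in the proof:

```python
import math

def relaxed_value(x, A, f):
    """Concave-relaxation objective: products x_ji * x_ki replaced by
    min(x_ji, x_ki).  x[j][i] fractional, A[i][j][k] weights,
    f concave nondecreasing with f(0) = 0 (shared by all i, j here)."""
    n, m = len(x), len(x[0])
    total = 0.0
    for i in range(m):
        for j in range(n):
            y = sum(A[i][j][k] * min(x[j][i], x[k][i]) for k in range(n))
            total += f(y)
    return total
```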

B. Multilinear Extension for Positive Concave Externalities
This final subsection provides an alternative approach, based on the multilinear extension, for approximately solving the MSW subject to positive concave externalities. Let us again consider the IP formulation (2). By defining new binary variables y^i_{jk} = x_{ji} x_{ki}, we can rewrite IP (2) with the constraints Σ_i x_{ji} ≤ 1, ∀j; y^i_{jk} = x_{ji} x_{ki}, ∀i, j, k; and x_{ji}, y^i_{jk} ∈ {0, 1}, ∀i, j, k. Now we can lower-bound the value of the returned solution, where y* is the solution obtained from (18) using the continuous greedy algorithm and OPT is the optimal integral solution to (18). Here, the third equality holds by the integrality of ŷ, and the third inequality holds by the property of pipage rounding, which rounds a fractional solution y* to an integral one ŷ without decreasing the multilinear objective value f_M. Finally, the last inequality uses Lemma 1. ■
Remark 4: In fact, one can bound the number of skipped elements c in Algorithm 3. As a naive upper bound, we note that selecting each new element into S can eliminate the possibility of choosing at most 2n other elements into S. Since Y has at most n²m elements, this gives an upper bound of c ≤ nm². However, in practice, we observed that the value of c in Algorithm 3 is much smaller than this naive upper bound. Although this bound depends polynomially on n and m, since we are working with general positive concave externality functions, we believe that in the worst case any approximation algorithm will have a polynomial or logarithmic dependence on these parameters. Nevertheless, improving this dependence on n and m is an interesting future research direction.
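The multilinear extension used above, f_M(y) = E[f(R)] where R contains each element independently with probability y_e, is typically evaluated by sampling inside the continuous greedy algorithm. The following is a generic Monte Carlo estimator (an illustrative sketch, not the paper's Algorithm 3):

```python
import numpy as np

def multilinear_estimate(y, f_set, trials=20_000, seed=0):
    """Monte Carlo estimate of the multilinear extension f_M(y) = E[f(R)],
    where R includes each element e independently with probability y[e]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        R = {e for e, p in enumerate(y) if rng.uniform() < p}
        total += f_set(R)
    return total / trials
```

For a modular set function such as f(R) = |R|, the multilinear extension is exactly Σ_e y_e, which gives a simple sanity check on the estimator.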

VII. CONCLUSIONS
We studied the maximum social welfare problem with multiple items subject to network externalities. We first showed that the problem can be cast as a multi-agent submodular or supermodular optimization. We then used convex programming and various randomized rounding techniques to devise improved approximation algorithms for that problem. In particular, we provided a unifying method to devise approximation algorithms for the multi-item allocation problem using the rich literature on submodular optimization. Our principled approach not only recovers or improves some of the existing algorithms that were derived in the past in an ad hoc fashion, but it also has the potential to be used for devising efficient algorithms under additional complicating constraints.
This work opens several avenues for future research. It would be interesting to extend our results by incorporating extra constraints into the MSW problem. For instance, in cyber-physical network security, resources tend to be limited, and only a constrained subset of agents may have access to security resources. Moreover, it would be interesting to see whether the approximation factors developed in this work can be improved or whether matching hardness lower bounds can be established. Finally, one can study a dynamic version of the MSW in which the influence weights or the externality functions may change over time.

Fig. 1. The left figure shows an instance of network goods with four different cellphone products; individuals tend to buy a product that is adopted by most of their friends. The middle figure shows networked servers that are highly interconnected, and an adversary who has compromised one of them and hence influences all others. The right figure illustrates the GPS map of a traffic network; as more drivers use the same road, they negatively influence each other's travel time.

Fig. 3. Illustration of the performance of Algorithms 1 and 2 for positive linear externalities.
