Utility Maximization for Large-Scale Cell-Free Massive MIMO Downlink

We consider the system-wide utility maximization problem in the downlink of a cell-free massive multiple-input multiple-output (MIMO) system, whereby a very large number of access points (APs) simultaneously serve a group of users. Specifically, four fundamental problems of increasing user fairness are of interest: (i) maximizing the average spectral efficiency (SE), (ii) maximizing proportional fairness, (iii) maximizing the harmonic rate of all users, and lastly (iv) maximizing the minimum SE of all users, subject to a sum power constraint at each AP. As the considered problems are non-convex, existing solutions normally rely on successive convex approximation to find a sub-optimal solution. More specifically, these known methods use off-the-shelf convex solvers, which basically implement an interior-point algorithm, to solve the derived convex problems. The main issue with such methods is that their complexity does not scale favorably with the problem size, which has limited previous studies to cell-free massive MIMO of moderate scale; thus the potential of cell-free massive MIMO has not been fully understood. To address this issue, we propose a unified framework based on an accelerated projected gradient method to solve the considered problems. In particular, each iteration of the proposed solution is given in closed form and requires only the first-order oracle of the objective, rather than the Hessian matrix as in known solutions, and is thus far more memory efficient. Numerical results demonstrate that our proposed solution achieves the same utility performance as known second-order methods but with far less run time. Simulation results for large-scale cell-free massive MIMO show that all four utility functions deliver nearly uniform service to all users; in other words, user fairness is not a great concern in large-scale cell-free massive MIMO.


I. INTRODUCTION
Multiple-input multiple-output (MIMO) is the underlying technology in the physical layer of many modern wireless communication standards. The use of multiple antennas at transceivers can offer high data rates and high reliability by exploiting spatial multiplexing and diversity gains [2]-[4]. To meet the requirements of 5G networks, MIMO has evolved into so-called massive MIMO, where a very large number of antennas are deployed at each base station (BS) to serve many users at the same time [5], [6]. In particular, massive MIMO has been implemented in the first version of 5G NR [7]. Since 5G still follows the conventional cellular design of its predecessors, inter-cell interference remains a fundamental problem, and thus massive MIMO cannot be unlocked to its full potential [8].
In terms of service area, there are two types of massive MIMO: colocated massive MIMO and distributed massive MIMO. For the former, all the antennas are placed in a small area, and therefore the processing complexity is low. For the latter, on the other hand, the antennas are distributed over a much larger area. Such systems are more robust against shadow fading and have a larger coverage area [9]. Distributed massive MIMO is arguably superior to colocated massive MIMO, but due to its higher processing complexity and cost [10], scalability remains an active area of research for distributed systems.
Cell-free massive multiple-input multiple-output (MIMO) was introduced in [11] as a major leap of massive MIMO technology to overcome inter-cell interference, the main inherent limitation of cellular networks. In cell-free massive MIMO, many access points (APs) distributed over the whole network serve many users in the same time-frequency resource.
There are no cells and, hence, no boundary effects. Unlike colocated massive MIMO, each AP in cell-free massive MIMO is equipped with just a few antennas. An important point, however, is that when the number of APs is very large, cell-free massive MIMO is still able to exploit the favorable propagation and channel hardening properties, like colocated massive MIMO.
In particular, with favorable propagation, APs can use simple linear processing techniques to combine the signals in the uplink and precode the symbols in the downlink. With channel hardening, decoding the signals using only the channel statistics (large-scale fading coefficients) can provide good performance. In addition, all resource allocation (i.e., power control, user scheduling, or AP selection) can be done on the large-scale fading time scale [12], [13].
Note that in some propagation environments, the level of channel hardening in cell-free massive MIMO is lower than that in colocated massive MIMO [14].
The research on cell-free massive MIMO is still in its infancy and thus deserves more extensive and thorough study. We discuss here some of the notable related studies in the literature.
In [11], Ngo et al. considered the problem of minimum-rate maximization to provide uniformly good service to all users. The problem was solved using a bisection search over a sequence of second-order cone feasibility problems. In [15], Nguyen et al. adopted zero-forcing precoding and studied the energy efficiency maximization (EEmax) problem; an iterative method based on successive convex approximation (SCA) was derived. In [16], both the max-min fairness and sum-rate maximization problems for user-centric cell-free massive MIMO were considered and solved by SCA. SCA-based methods were also used in [12] and [17] to solve the EEmax and max-min fairness power control problems for different cell-free massive MIMO setups, respectively.
A common feature of all the pioneering studies mentioned above is the use of a second-order interior-point method, which requires the computation of the Hessian matrix of the objective; the resulting computational complexity and memory requirement make it impossible to implement and investigate the performance of large-scale cell-free massive MIMO. To motivate our proposed method, consider an example where 2000 APs are deployed to serve 200 users over an area of 1 km², which is typical for an urban area in our vision. The power control problem arising from this scenario has 4 × 10⁵ optimization variables.
Consequently, we would basically need 160 GB of memory to store the resulting Hessian matrix, assuming a single-precision floating-point format. It is this immense memory requirement of existing power control methods that has restricted implementations and performance characterizations of cell-free massive MIMO to relatively small-scale systems. For example, the work of [11] considered an area of 1 km² with 100 APs serving 40 users.
Numbers of the same order of magnitude were also used in the other papers mentioned above.
The performance observed in these scenarios represents only a fraction of the full potential of cell-free massive MIMO.
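To make the Hessian-versus-gradient scaling concrete, the entry counts can be checked in a few lines. This is a sketch using the M and K of the example above; it counts stored entries rather than bytes, so it is agnostic to the floating-point format.

```python
# Memory scaling of second- vs first-order methods for the motivating
# example: M = 2000 APs, K = 200 users, n = M*K power-control variables.
# A dense Hessian stores n^2 entries; a gradient stores only n.
M, K = 2000, 200
n = M * K                      # number of optimization variables
hessian_entries = n * n        # second-order information
gradient_entries = n           # first-order information
ratio = hessian_entries // gradient_entries  # = n: 400,000x more storage
```

Whatever the storage format, the second-order method pays an extra factor of n in memory, which is what rules it out at this scale.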
To fully understand the performance limits of cell-free massive MIMO and to make power control feasible for practical implementation, we urgently need more memory-efficient power control methods. To this end, we propose in this paper a first-order method to maximize several system-wide utility functions, namely the total spectral efficiency, proportional fairness, harmonic rate, and minimum rate of the downlink. Referring to the motivating example in the preceding paragraph, a first-order optimization method requires only about 8 MB of memory, which is affordable by most, if not all, modern desktops. In this paper, similar to many previous studies (e.g., [12]), we adopt conjugate beamforming at each AP, and the power control problems for utility maximization are based on large-scale fading. While first-order methods are popular for convex programming, they remain relatively unexplored for nonconvex programming, which is unfortunately the case for the considered problems. In this paper, we capitalize on the accelerated proximal gradient method for nonconvex programming presented in [18] to derive efficient and unified solutions to the considered utility maximization problems. A similar method has been used in [19] to solve the EEmax problem. Our contributions are as follows:
• We first present a brief introduction to accelerated proximal gradient methods in general and to a special variant in particular, which we refer to as the accelerated projected gradient (APG) method, for nonconvex programming.
• We propose iterative power control algorithms drawing on the APG method to efficiently solve the considered utility maximization problems. In particular, each iteration of the proposed algorithms is computed by closed-form expressions and can be parallelized. To achieve this, we reformulate the considered problems so that the gradient of the objective is Lipschitz continuous while the projection remains efficient to compute.
• We provide a complexity and convergence analysis of the proposed methods. Specifically, the per-iteration complexity of our proposed method is only O(K²M), as compared to the per-iteration complexity of O(√(K+M) M³K⁴) for the SCA method in [12], where M and K are the numbers of APs and users, respectively. Accordingly, the proposed method takes much less run time to return a solution, as numerically shown in Section V. As a result, the proposed method lays the foundation for numerically analyzing the performance of large-scale cell-free massive MIMO.
• We carry out extensive numerical experiments to draw useful insights into the performance of large-scale cell-free massive MIMO with respect to the four utility metrics above. In particular, we find that, in the regime of large-scale cell-free massive MIMO, per-user rates are quite comparable across the four utility functions, which means that large-scale cell-free massive MIMO can deliver universally good service to all users. Also, in terms of per-user rate, it is more beneficial to use a higher number of APs with fewer antennas per AP than a smaller number of APs with more antennas per AP.
Notations: Bold lower- and upper-case letters represent vectors and matrices, respectively. CN(0, a) denotes a complex Gaussian random variable with zero mean and variance a. X^T and X† stand for the transpose and Hermitian transpose of X, respectively. x_i is the i-th entry of vector x; [X]_{i,j} is the (i, j)-th entry of X. ∇f(x) represents the gradient of f(x) and ∂f(x)/∂x_i is the partial derivative with respect to x_i. ⟨x, y⟩ ≜ x^T y is the inner product of vectors x and y.

A. System Model
We consider the downlink of a cell-free massive MIMO system modeled as in [12]. In particular, there are M APs serving K single-antenna users in time-division duplex (TDD) mode. Each AP is equipped with N antennas. All the APs and users are assumed to be distributed over a large area. As TDD operation is adopted, the APs first estimate the channels using pilot sequences. The interested reader is referred to [12] for further details.

1) Uplink Training:
We assume the channel is reciprocal, i.e., the channel gains on the uplink and downlink are the same. Consequently, the APs can estimate the downlink channel from the pilot sequences sent by all users on the uplink. Let √(T_p) ψ_k ∈ C^{T_p×1}, where ||ψ_k||² = 1, be the pilot sequence transmitted by the k-th user, k = 1, . . ., K. Note that T_p is the length of the pilot sequences, which is the same for all users. The received pilot signal R_up,m at the m-th AP is a noisy superposition of all transmitted pilots, where ζ_p is the normalized transmit signal-to-noise ratio (SNR) of each pilot symbol, W_up,m ∈ C^{N×T_p} is the noise matrix whose entries are independent and identically distributed (i.i.d.) CN(0, 1), and g_mk ∈ C^{N×1} is the channel between the m-th AP and the k-th user. As in [12], we model g_mk as g_mk = β_mk^{1/2} h_mk, where β_mk represents the large-scale fading (i.e., including path loss and shadowing effects) and h_mk ∈ C^{N×1} comprises the small-scale fading coefficients between the N antennas of the m-th AP and the k-th user. We further assume that the entries of h_mk are i.i.d. CN(0, 1).
Next, the m-th AP estimates the channels g_mk, k = 1, 2, . . ., K, based on the received pilot signal R_up,m. To do so, the m-th AP projects R_up,m onto ψ_k, producing the statistic r_mk, where w̃_mk ≜ W_up,m ψ_k ∈ C^{N×1} has i.i.d. CN(0, 1) entries. Given r_mk, the minimum mean-square error (MMSE) estimate ĝ_mk of the channel g_mk follows [12]. Note that the expectations in that estimate are taken with respect to the small-scale fading, and that the elements of ĝ_mk are independent and identically distributed Gaussian random variables. The mean square of any element of ĝ_mk is given by (5).

2) Downlink Payload Data Transmission: For downlink payload data transmission, the APs use the channel estimates obtained in (4) to form separate radio beams to the K users. As mentioned earlier, we adopt conjugate beamforming in this paper for two main reasons. First, conjugate beamforming is computationally simple and can be done locally at each AP. Second, conjugate beamforming offers excellent performance when the number of APs is large relative to the number of users. Denote by c_k the symbol to be sent to the k-th user and by η_mk the power control coefficient between the m-th AP and the k-th user. For conjugate beamforming, the transmitted symbols from the m-th AP are contained in the vector x_m, where ζ_d is the maximum downlink transmit power at each AP normalized to the noise power. Note that the total power constraint at each AP is given by (7). The received signal at the k-th user is then given by (8), where w_k is white Gaussian noise with zero mean and unit variance.
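As an illustration, the conjugate-beamforming transmit vector at a single AP can be sketched as follows; the dimensions, power coefficients, and unit-modulus symbols below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of conjugate beamforming at one AP: x_m is a weighted sum of
# conjugated channel estimates, one term per user.
rng = np.random.default_rng(0)
N, K = 4, 3                         # antennas at this AP, number of users
zeta_d = 1.0                        # normalized downlink transmit SNR
g_hat = (rng.standard_normal((N, K))
         + 1j * rng.standard_normal((N, K))) / np.sqrt(2.0)  # estimates
eta = np.full(K, 1.0 / (N * K))     # power coefficients (assumed equal)
c = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, K))  # unit-modulus symbols

# x_m = sqrt(zeta_d) * sum_k sqrt(eta_mk) * conj(g_hat_mk) * c_k
x_m = np.sqrt(zeta_d) * (np.conj(g_hat) * (np.sqrt(eta) * c)).sum(axis=1)
```

Because the beam for user k is simply the conjugated estimate ĝ*_mk, each AP can form x_m locally, which is the computational appeal noted in the text.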
3) Signal Detection Based on Channel Statistics and Spectral Efficiency: Ideally, to detect c_k, the k-th user would need to know the effective channel gain √(η_mk) g_mk^T ĝ*_mk. However, this is impossible since there are no downlink pilots. Instead, the k-th user relies on the mean of the effective channel gain to detect c_k. To see this, we rewrite (8) by splitting off the mean of the effective channel gain. In the rewritten equation, the second term on the right-hand side can be seen as the beamforming uncertainty, which is due to treating the mean of the effective channel gain as the true channel.
We remark that, by the law of large numbers, which holds for our system model, this term is with high probability much smaller than the mean of the effective channel gain. By further treating this term and the inter-user interference as Gaussian noise, we can express the signal-to-interference-plus-noise ratio (SINR) at the k-th user as in [12, Appendix A]. Accordingly, the spectral efficiency of the k-th user is given by (12). Note that for mathematical convenience we use the natural logarithm in (12), and thus the resulting unit of the SE is nat/s/Hz. However, for the numerical results presented in Section V, we instead use the logarithm to base 2 to compute the SE, so the corresponding unit is bit/s/Hz.

B. Problem Formulation
To formulate the considered problem and to facilitate the development of the proposed algorithm, we define µ_m ∈ R^K_+ as the vector of power control coefficients associated with the m-th AP, and stack these vectors into µ to include the power control coefficients of all APs. To express the spectral efficiency in (12) as a function of µ, we denote by μ̃_k = [µ_1k; µ_2k; . . .; µ_Mk] the vector of power control coefficients associated with user k. Thus we can write ν_ik^T η_i as ν̃_ik^T μ̃_k, and similarly D_ik η_i as D̃_i μ̃_i, where D̃_i is the diagonal matrix whose m-th diagonal entry equals √(β_mi). Now the spectral efficiency SE_k(µ) of the k-th user (in nat/s/Hz) can be expressed in terms of γ_k(µ), the SINR of the k-th user. The total spectral efficiency of the system is the sum of the per-user spectral efficiencies. In this paper, we consider a power constraint at each AP given by ||µ_m||² ≤ 1/N, m = 1, 2, . . ., M, which follows from (7). For the purpose of problem formulation, we define the set S, which is nothing but the feasible set of the utility maximization problems to be presented. In this paper, we consider the following four common power control utility optimization problems [20], namely
• the problem of average spectral efficiency maximization (SEmax);
• the problem of proportional fairness maximization (PFmax); note that this problem is equivalent to maximizing the geometric mean of the rates, and thus it is also known as the problem of geometric-rate maximization;
• the problem of harmonic-rate maximization (HRmax);
• the problem of maximizing the minimum rate (MRmax) among all users, also known as max-min fairness.
We note that the above four problems are nonconvex and thus difficult to solve. For such problems, a pragmatic goal is to derive a low-complexity, high-performance solution rather than a globally optimal one. To this end, SCA has proved very effective and has gradually become a standard mathematical tool [11], [12]. The idea of SCA is to approximate a nonconvex program by a series of convex sub-problems. In all known solutions for the considered problems or related ones, interior-point methods (through off-the-shelf convex solvers) are invoked to solve these convex sub-problems [12], [16], [17], which does not scale favorably with the problem size. Thus the existing solutions are unable to characterize the performance limits of cell-free massive MIMO systems where the number of APs can be in the order of thousands, even from an off-line design perspective. In this paper, we propose methods that tackle this scalability problem. In particular, our proposed methods are based on the first-order optimization methods presented in the following section.
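For concreteness, the four utility objectives can be evaluated on a hypothetical vector of per-user SEs as follows; the SE values are made up purely for illustration.

```python
import numpy as np

# The four utilities on a hypothetical per-user SE vector (nat/s/Hz).
se = np.array([0.5, 1.0, 2.0, 4.0])

avg_se    = float(se.mean())                    # (P1) average SE
prop_fair = float(np.log(se).sum())             # (P2) sum of log-rates
geo_rate  = float(np.exp(prop_fair / se.size))  # equivalent geometric rate
har_rate  = se.size / float((1.0 / se).sum())   # (P3) harmonic rate
min_rate  = float(se.min())                     # (P4) max-min objective
```

The classical mean inequalities give min ≤ harmonic ≤ geometric ≤ arithmetic, which reflects the increasing emphasis on fairness from (P_1) to (P_4).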

III. PROPOSED SOLUTIONS
In this section, we present solutions to (P_1)-(P_4) using the APG method, a variant of the proximal gradient method introduced in [18]. In general, the four considered problems can be written in the compact form (23), where f(µ) is the respective utility for (P_1)-(P_3), and f(µ) = min_{1≤k≤K} SE_k(µ) for (P_4). We remark that the method described in [18] concerns the problem (24), where f(x) is differentiable (but possibly nonconvex) and g(x) can be both nonconvex and nonsmooth. Further assumptions on f(x) are listed below. If we let g(x) be the indicator function of S, then (24) is actually equivalent to (23). Furthermore, the proximal operator of g(x) becomes the Euclidean projection onto S. In the following, we customize the APG method to solve the considered problems. In essence, the APG method moves the current point along the gradient of the objective with a proper step size and then projects the resulting point onto the feasible set. This step is repeated until a stopping criterion is met.
Remark 1. One may ask why we have made the change of variables from η to µ in Section II-B.
The question is relevant since the projection onto the feasible set expressed in terms of η can also be done efficiently. However, if the objective functions of the four considered problems are written in terms of η, two difficulties arise. First, the expression of the gradient of the objective becomes very complicated. Second, and more importantly, the gradient of the objective is not Lipschitz continuous, since the term √(η_mk) would appear in the denominator of the gradient. Note that η_mk can be zero, which can make the gradient unbounded.
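Remark 1 can be illustrated in one line: the derivative of √η is 1/(2√η), which is unbounded as η → 0, so no global Lipschitz constant can exist for a gradient containing such terms. This is a generic numerical illustration, not tied to the system model.

```python
import math

# d/d(eta) sqrt(eta) = 1 / (2 * sqrt(eta)) blows up near the boundary
# eta = 0 of the feasible set, breaking Lipschitz continuity.
def dsqrt(eta):
    return 1.0 / (2.0 * math.sqrt(eta))

slopes = [dsqrt(10.0 ** (-p)) for p in (2, 6, 10)]  # eta approaching zero
```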
A. Proposed Solution to (P_1)
Since f(µ) for (P_1) is differentiable, the proposed algorithm for solving (P_1) follows closely the monotone APG method in [18], which is outlined in Algorithm 1. In Algorithm 1, α > 0 is the step size, which should be sufficiently small to guarantee convergence, and L_f is the Lipschitz constant of ∇f(µ). The superscripts in Algorithm 1 denote the iteration count.
Also, the notation P_S(u) denotes the projection of u onto S, i.e., the point of S closest to u in Euclidean distance. Note that we adapt the monotone APG method in [18], stated there for minimization, to the maximization context of our problems. Specifically, from a given operating point, we move along the direction of the gradient of f(µ) with step size α and then project the resulting point onto the feasible set. We note that y^n in Step 4 is an extrapolated point used for convergence acceleration. However, unlike APG methods for the convex case, y^n can be a bad extrapolation, and thus Step 7 is required to fix this issue. We now give the details of the two main operations of Algorithm 1, namely the projection onto the feasible set S and the gradient of f(µ).
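Before detailing these operations, the overall loop can be sketched in code. This is a sketch of the monotone APG scheme of [18] adapted to maximization; the objective and the box constraint set in the toy usage are hypothetical stand-ins for f(µ) and S.

```python
import numpy as np

def apg_max(f, grad, proj, mu0, alpha, n_iter=300):
    """Monotone accelerated projected gradient ascent (sketch of Algorithm 1)."""
    mu = mu_prev = proj(np.asarray(mu0, dtype=float))
    z = mu.copy()
    t_prev, t = 0.0, 1.0
    for _ in range(n_iter):
        # extrapolated point (cf. Step 4); may be poor for nonconvex f
        y = mu + (t_prev / t) * (z - mu) + ((t_prev - 1.0) / t) * (mu - mu_prev)
        # gradient ascent step followed by projection (cf. Step 5)
        z = proj(y + alpha * grad(y))
        v = proj(mu + alpha * grad(mu))
        mu_prev = mu
        # monotone safeguard (cf. Step 7): keep the better candidate
        mu = z if f(z) >= f(v) else v
        t_prev, t = t, 0.5 * (np.sqrt(4.0 * t * t + 1.0) + 1.0)
    return mu

# Toy usage: maximize a concave quadratic over the box [0, 1]^2
# (hypothetical f and S, chosen only to exercise the loop).
a = np.array([0.3, 2.0])
f = lambda x: -float((x - a) @ (x - a))
grad = lambda x: -2.0 * (x - a)
proj = lambda x: np.clip(x, 0.0, 1.0)
mu_star = apg_max(f, grad, proj, np.array([0.9, 0.1]), alpha=0.25)
```

The safeguard compares the extrapolated candidate z against a plain projected-gradient candidate v, which is what keeps the objective sequence monotone even when the extrapolation misbehaves.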
Algorithm 1 Accelerated projected gradient algorithm for solving (P_1)-(P_4); its core update is z^{n+1} = P_S(y^n + α∇f(y^n)) (Step 5).

1) Projection onto S: It is easy to see that the projection problem can be decomposed into sub-problems, one at each AP m. Each sub-problem is in fact the projection onto the intersection of a ball and the positive orthant.
Interestingly, the analytical solution to this sub-problem can be found by applying [21, Theorem 7.1], which produces (29). This expression means that we first project x_m onto the positive orthant and then onto the Euclidean ball of radius 1/√N. A simpler proof of (29) is detailed in Appendix A.
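A direct implementation of the two-stage projection (29) might look as follows; this is a sketch in which the radius √(1/N) comes from the per-AP constraint ‖µ_m‖² ≤ 1/N.

```python
import numpy as np

# Projection of (29): clip to the positive orthant, then scale into the
# Euclidean ball of radius sqrt(1/N). N plays the role of antennas per AP.
def proj_S(x_m, N):
    u = np.maximum(x_m, 0.0)              # onto the positive orthant
    r = 1.0 / np.sqrt(N)                  # ball radius
    nrm = np.linalg.norm(u)
    return u if nrm <= r else (r / nrm) * u
```

Both stages are elementwise or a single norm computation, which is why the per-AP projections can run in parallel and in closed form.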
2) Gradient of f(µ) for (P_1): To implement Algorithm 1, we also need to compute ∇_µ f(µ), which is derived as follows. The gradient of a multivariable function is the vector of all its partial derivatives. The gradient of SE_k(µ) with respect to μ̃_i, i = 1, 2, . . ., K, is found accordingly. Recalling the identity ∇||Ax||² = 2A^T Ax for any symmetric matrix A, the terms ∇_{μ̃_i} b_k(µ) and ∇_{μ̃_i} c_k(µ) follow in closed form.

B. Improved Convergence with Line Search
For (P_1), from (32), (33), and (34), it is easy to check that ∇f(µ) is Lipschitz continuous, or equivalently that f(µ) has a Lipschitz continuous gradient; that is, there exists a constant L_f such that (35) holds. Further details are given in Appendix B.
In practice, we do not in fact need to compute a Lipschitz constant of ∇f(µ), for two reasons.
First, the best Lipschitz constant of ∇f(µ) (i.e., the smallest L such that (35) holds) is hard to find. Second, the condition α < 1/L_f is sufficient but not necessary for Algorithm 1 to converge. Thus, we can allow α to take larger values to speed up the convergence of Algorithm 1 by means of a line search. In this paper we use a backtracking line search with the Barzilai-Borwein (BB) rule to compute step sizes for Algorithm 1. The resulting APG method with line search is summarized in Algorithm 2. The step sizes α_y and α_µ computed in Steps 6 and 8 can be viewed as local estimates of the optimal Lipschitz constant of the gradient at y^{n-1} and µ^{n-1}, respectively.
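The BB rule itself is a one-liner; the sketch below (with hypothetical variable names) computes the step from successive iterates and gradients.

```python
import numpy as np

# Barzilai-Borwein step size: alpha = <dx, dg> / <dg, dg>, a local estimate
# of the inverse Lipschitz constant of the gradient at the current point.
def bb_step(x_prev, x, g_prev, g, fallback=1.0):
    dx, dg = x - x_prev, g - g_prev
    denom = float(dg @ dg)
    return float(dx @ dg) / denom if denom > 0.0 else fallback
```

In a backtracking scheme the BB value initializes the trial step, which is then shrunk until a sufficient-increase condition holds.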

D. Proposed Solution to (P_4)
Problem (P_4) deserves further discussion since its objective is nonsmooth. We recall that for (P_4) the objective function is f(µ) = min_{1≤k≤K} SE_k(µ), which is non-differentiable; thus a straightforward application of the APG method is impossible.
To overcome this issue, we adopt a smoothing technique. In particular, f(µ) is approximated by the log-sum-exp function f_τ(µ) given in (41) [22], where τ > 0 is the smoothness parameter. To obtain (41), we have used the standard bounds relating the minimum of a set of numbers to its log-sum-exp. In other words, f_τ(µ) is a differentiable approximation of f(µ) with a numerical accuracy of (log K)/τ. Thus, with a sufficiently large τ, we can find an approximate solution to (P_4) by running Algorithm 1 with f(µ) replaced by f_τ(µ) in (41). In this regard, the gradient of f_τ(µ) is easily found in closed form.
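The smoothing can be sketched numerically. The block below implements a numerically stable −(1/τ) log Σ_k exp(−τ SE_k); the SE values and τ are illustrative assumptions.

```python
import numpy as np

# Log-sum-exp smoothing of the nonsmooth min objective in (P4):
# f_tau approximates min_k SE_k within log(K)/tau.
def smooth_min(se, tau):
    m = float(se.min())  # shift by the min for numerical stability
    return m - float(np.log(np.exp(-tau * (se - m)).sum())) / tau

se = np.array([1.0, 2.0, 3.0])
tau = 50.0
approx = smooth_min(se, tau)
```

The gradient of f_τ is a softmax-weighted combination of the per-user SE gradients, with the weight concentrating on the minimizing user as τ grows.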

A. Complexity Analysis
We now provide the per-iteration complexity analysis of the proposed algorithms using big-O notation. It is clear that the complexity of Algorithm 1 is dominated by the computation of three quantities: the objective, the gradient, and the projection. It is easy to see that KM multiplications are required to compute SE_k(µ); therefore, the complexity of computing the objective over all K users is O(K²M). In a similar way, we can find that the complexity of computing ∇f(µ) is also O(K²M). The projection of µ onto S is given by (29), which requires the ℓ₂-norm of the K×1 vector x_m at each AP, so the complexity of the projection is O(KM). In summary, the per-iteration complexity of the proposed algorithm for solving (P_1) is O(K²M).
To appreciate the low complexity of the proposed methods, we now provide the per-iteration complexity of the SCA-based method for solving (P_1) derived from [12]. Note that the iterative method presented in [12] is dedicated to the problem of total energy efficiency maximization, but it can easily be customized to solve (P_1). Specifically, the convex sub-problem at iteration n + 1 of the SCA-based method is the program (43) in the variables µ ≥ 0 and t ≥ 0. We remark that the objective admits a second-order cone reformulation, and thus (43) is a second-order cone program. According to [23, Sect. 6.6.2], the complexity of solving (43) is O(√(K+M) M³K⁴), which is much larger than the O(K²M) of the proposed method, especially when M and K are large.

B. Convergence Analysis
We now discuss the convergence result of Algorithms 1 and 2 for solving (P_1), which is stated in the following lemma.
Lemma 3. Let {µ^n} be the sequence produced by Algorithm 1 or 2. Then the following properties hold.
• The sequence of objective values {f(µ^n)} is nondecreasing and convergent.
• The sequence {µ^n} is bounded, and any limit point of {µ^n} is a critical point of (P_1).
Proof: Please see Appendix C.
The same convergence result applies to Algorithms 1 and 2 for solving (P_2), (P_3), and (P_4).

V. NUMERICAL RESULTS
In this section, we evaluate the performance of the proposed methods in terms of computational complexity and achieved spectral efficiency. All simulation results are obtained using Algorithm 2, since it has a faster convergence rate. The users and APs are uniformly dropped over a D × D km² area. The large-scale fading coefficient between the m-th AP and the k-th user is generated as the product of path loss and shadowing, where PL_mk and z_mk represent the path loss and log-normal shadowing with zero mean and standard deviation σ_sh, respectively. We adopt the three-slope path loss model of [12] for PL_mk. In the first numerical experiment, we compare the convergence rate of the proposed method with the SCA-based method presented in [12], as explained in Section IV-A. To solve (43), we use the convex conic solver MOSEK [24] through the modeling tool YALMIP [25]. In particular, Figs. 1(a) and 1(b) show the convergence of Algorithm 2 and the SCA-based method for the total spectral efficiency and min-rate maximization problems, respectively. We can see that Algorithm 2 and the SCA-based method achieve the same performance, but the SCA-based method requires fewer iterations to return a solution. However, the main advantage of Algorithm 2 over the SCA-based method is that each iteration of the proposed method is very memory efficient and is computed by closed-form expressions, and hence executes very fast. As a result, the total run time of the proposed method is far less than that of the SCA-based method, as shown in Table I, where we report the actual run time of both methods for the SEmax problem.
Here, we execute our codes on a 64-bit Windows operating system with 16 GB RAM and an Intel Core i7 at 3.7 GHz. Both iterative methods are terminated when the difference of the objective over the last 5 iterations is less than 10⁻³.
We next take advantage of the proposed methods to explore the spectral efficiency performance of large-scale cell-free massive MIMO that can cover, e.g., a large metropolitan area in our vision.
In particular, we investigate the performance of cell-free massive MIMO for the two cases D = 1 and D = 10. To obtain a fair comparison, we keep the AP density (defined as the number of APs per square kilometer) the same for both cases. Note that an AP density of 1000 means there are 10 000 APs for the case of D = 10, which has not been studied in the literature previously. To appreciate the proposed method in this large-scale scenario, we compare it with the SCA method and with an equal power allocation (EPA) method in which the power control coefficient is η_mk = (Σ_{i=1}^{K} ν_mi)⁻¹. The results in Fig. 2 are interesting. First, increasing the AP density expectedly improves the total spectral efficiency of the system. Second, for the same AP density, a larger area provides a better sum spectral efficiency. The reason is that in a larger area the users served by the APs are farther apart from each other; as a result, the inter-user interference becomes weaker, leading to an improved sum spectral efficiency. On the other hand, the EPA method yields a smaller spectral efficiency as the coverage area grows, because more power is spent on the users having small path loss. The SCA-based method produces the same spectral efficiency as the proposed APG method, but it is unable to run for D = 10 on the system specified above. Thus, the proposed scheme outperforms both the SCA-based and EPA methods in terms of sum spectral efficiency and coverage area.
In Fig. 3, we again investigate the spectral efficiency as a function of the AP density, but for different numbers of users (i.e., K = 100 and K = 40). It can be seen that, for a given AP density, the SE increases as the number of users becomes larger. The gain is more pronounced for larger AP densities, since more APs allow more efficient exploitation of multiuser diversity gains. We can observe in Fig. 4 that the median values of the total achieved SEs are more or less the same for all four utility functions. The 95%-likely achievable downlink SE is decreasing from (P_1) to (P_4), which can be explained by the fact that the following inequality holds: SE^{(P_1)} > SE^{(P_2)} > SE^{(P_3)} > SE^{(P_4)}, where SE^{(P_i)} denotes the per-user spectral efficiency obtained by solving (P_i) [20]. It is also known that the order is reversed in terms of fairness.

Fig. 4: CDF of per-user spectral efficiency for (P_1)-(P_4) for small-scale and large-scale problems.

Consequently, the CDF of the per-user SE of (P_4) (i.e., the MRmax problem) has the steepest slope, while that of the SEmax problem is more spread out. It is particularly interesting to see that the difference in the CDFs of the per-user SE of all four utility metrics is marginal for large-scale cell-free massive MIMO. This simulation result again confirms that cell-free massive MIMO can deliver universally good service to all users in the system. In the next experiment, we investigate how the average SE varies with the number of APs. In particular, Fig. 5 shows the average spectral efficiency as a function of the number of APs for K = 100 and K = 50 users in an area with D = 1. We can see that the average SE increases quickly when the number of APs is less than 1000 for both K = 50 and K = 100 users, and it starts to saturate when the number of APs is larger. The reason is that, for a given user, there is a certain number of (closest) APs that truly provide macro-diversity gains to that user in terms of SE. Thus, a user may be served by a subset of APs while achieving similar SE performance, which can help reduce the overhead in cell-free massive MIMO. Further insights are provided in the next numerical experiment. In Fig. 6, we consider a scenario with M = 500 APs and study how many APs are effectively required for each user. In particular, we plot the average spectral efficiency as a function of the number of selected APs per user for two utility functions: SEmax and MRmax. For a given user, APs are simply selected based on their large-scale fading coefficients. From Fig. 6, we observe that not all 500 APs are needed to serve 25 or 50 users in an area of 1 km²; instead, a smaller number of APs per user can yield nearly the same performance.
For example, we only need to assign fewer than 100 APs to a user to achieve 95% of the SE of the full system. Finally, we investigate the effect of increasing the number of antennas per AP on the sum SE and the minimum SE. Specifically, we plot the average SE and minimum SE as functions of the number of antennas for both M = 500 and M = 1000 APs; the number of users is fixed at K = 50. Expectedly, the SE increases with the number of antennas per AP, but the gain becomes small when the number of antennas is sufficiently large. The reason is that, with a large number of APs, channel hardening and favorable propagation can already be achieved with a few antennas per AP. In particular, the SE for the case of 1000 APs with 4 antennas each is larger than the SE for the case of 500 APs with 8 antennas each. Therefore, for large-scale cell-free massive MIMO, having more APs with a few antennas each appears more beneficial than having fewer APs with more antennas each.

VI. CONCLUSION
We have considered the downlink of cell-free massive MIMO and aimed to maximize four utility functions, subject to a power constraint at each AP. Conjugate beamforming has been adopted, resulting in a power control problem for which an accelerated projected gradient method has been proposed. Notably, the proposed solution only requires the first-order information of the objective.

Combining (46), (47), and (50) completes the proof.

B. Lipschitz Constant of ∇f (µ)
Assessing the Lipschitz constant of ∇f(µ) for the four problems discussed in the paper boils down to finding the Lipschitz constant of ∇SE_k(µ). A convenient way to do this is to rewrite the function SE_k(µ) in terms of the single variable µ, which can be done by denoting A_i ≜ I_M ⊗ e_i^T.
Then we can write µ̄_i = A_i µ, and thus γ_k(µ), the SINR of the k-th user, can be rewritten accordingly. The gradient of SE_k(µ) with respect to µ then follows from ∇_µ b(µ) and ∇_µ c(µ). The Lipschitz continuity of ∇_µ b_k(µ) and ∇_µ c_k(µ) is now obvious, and so is that of ∇SE_k(µ).
Although we can compute the Lipschitz constant of ∇SE k (µ), this is quite involved and not necessary since a line search is used to find a proper step size.
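The line search referred to above can be realized, for instance, by backtracking against the quadratic bound implied by L-smoothness, so that no explicit Lipschitz constant is ever computed. The sketch below is a hypothetical realization for projected gradient ascent; the function names and the exact acceptance condition are our assumptions, not the paper's Algorithm 2.

```python
import numpy as np

def backtracking_step(f, grad_f, project, mu, alpha0=1.0, rho=0.5, max_iter=30):
    """Backtracking line search for projected gradient ASCENT (the paper
    maximizes f). Starting from alpha0, the step is shrunk by rho until
    the quadratic-bound condition associated with L-smoothness holds."""
    g = grad_f(mu)
    f_mu = f(mu)
    alpha = alpha0
    for _ in range(max_iter):
        v = project(mu + alpha * g)
        d = v - mu
        # accept if f(v) >= f(mu) + <g, d> - ||d||^2 / (2 alpha)
        if f(v) >= f_mu + g @ d - (d @ d) / (2 * alpha):
            return v, alpha
        alpha *= rho
    return mu, alpha
```

Once the condition holds, the accepted step behaves like a step of size below 1/L_f, which is all the convergence argument in the appendix requires.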

C. Convergence Proof of Algorithm 1
The proof is due to [18]. First, we note that since ∇f(µ) is Lipschitz continuous and a line search is used to find a proper step size in Algorithm 2, it is sufficient to prove the convergence of Algorithm 1. We begin by recalling an important inequality for L-smooth functions: for a function f(x) whose gradient is Lipschitz continuous with constant L_f, the inequality (56) holds. The projection in Step 6 of Algorithm 1 can be written in closed form, and applying (56) shows that f(v^{n+1}) ≥ f(µ^n) if α < 1/L_f. From Step 7, if f(z^{n+1}) ≥ f(v^{n+1}), then (60) holds; similarly, if f(z^{n+1}) < f(v^{n+1}), then (61) holds. From (59), (60), and (61), the sequence f(µ^n) is nondecreasing. Since the feasible set of the considered problems is compact and convex, the iterates {v^n} and {µ^n} are both bounded, and thus {µ^n} has accumulation points. As f(µ^n) is nondecreasing, f takes the same value, denoted f*, at all accumulation points. From (59), and since α < 1/L_f, we can conclude that ||v^{n+1} − µ^n|| → 0. The convergence of Algorithm 1 to a critical point of (P1) then follows the same arguments as those in [18] and is thus omitted for the sake of brevity.
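The iteration structure analyzed in this proof, an extrapolated candidate z and a plain projected-gradient candidate v, with the better of the two kept so that f(µ^n) is nondecreasing, might be sketched as follows. This is a hypothetical rendition under our own assumptions (a fixed step α below 1/L_f instead of the line search, and a FISTA-style momentum schedule); it is not the paper's exact Algorithm 1.

```python
import numpy as np

def apg_max(f, grad_f, project, mu0, alpha, n_iter=200):
    """Monotone accelerated projected gradient for MAXIMIZING f over a
    convex set. Two candidates are formed each iteration: an accelerated
    (extrapolated) one and a safeguard plain projected-gradient one; the
    one with the larger objective is kept, making f(mu^n) nondecreasing."""
    mu_prev = mu0.copy()
    mu = mu0.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(n_iter):
        # Nesterov-type extrapolation
        y = mu + (t_prev - 1.0) / t * (mu - mu_prev)
        z = project(y + alpha * grad_f(y))    # accelerated candidate
        v = project(mu + alpha * grad_f(mu))  # safeguard candidate
        mu_prev = mu
        mu = z if f(z) >= f(v) else v         # keep the better objective
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    return mu
```

The safeguard comparison is exactly what makes the nondecreasing-objective argument of the proof go through, while the extrapolated candidate provides the acceleration in practice.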
[x] + denotes the projector onto the positive orthant.|| • || represents the Euclidean norm; | • | is the absolute value of the argument.
from the uplink (commonly known as uplink training) and then apply a beamforming technique to transmit signals to all users in the downlink, or use a matched filter technique to combine signals in the uplink.Since this work focuses on the downlink transmission, we neglect the uplink payload transmission phase.Let us denote by T c and T p the length of the coherence interval and the uplink training phase in data symbols, respectively.The uplink training and downlink payload transmission phases are summarized as follows.

Projection onto S: We show that the projections in Steps 5 and 6 of Algorithm 1 can be carried out in parallel and in closed form. Recall that, for a given x ∈ R^{MK×1}, P_S(x) is the solution to the following problem

The path loss (in dB) equals −L − 15 log10(d1) − 20 log10(d0) if d_mk < d0; equals −L − 15 log10(d1) − 20 log10(d_mk) if d0 < d_mk < d1; and equals −L − 35 log10(d_mk) otherwise, where L is a constant depending on the carrier frequency, d_mk (in km) is the distance between the m-th AP and the k-th user, and d0 and d1 (both in km) are reference distances. Similar to [12], we choose L = 140.7 dB, d0 = 0.01 km and d1 = 0.05 km. We consider a system with a bandwidth of B = 20 MHz, a noise power density of N0 = −174 dBm/Hz, and a noise figure of 9 dB. Unless otherwise mentioned, the lengths of the coherence interval and the uplink training phase are set to T_c = 200 and T_p = 20 symbols, respectively. The transmit powers for downlink data transmission and the uplink training phase (before normalization) are set to ζ_d = 1 W and ζ_p = 0.2 W, respectively. These parameters are taken from [12].
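The three-slope path loss model above translates directly into code. A minimal sketch (the function name is ours), with distances in km and the simulation constants L = 140.7 dB, d0 = 0.01 km, d1 = 0.05 km as defaults:

```python
import numpy as np

def path_loss_db(d_mk, L=140.7, d0=0.01, d1=0.05):
    """Three-slope path loss (in dB) from the simulation setup.
    d_mk, d0, d1 are in km; L is the constant attenuation in dB."""
    if d_mk <= d0:
        # near-field floor: distance capped at d0
        return -L - 15 * np.log10(d1) - 20 * np.log10(d0)
    if d_mk <= d1:
        # intermediate slope: 20 dB/decade
        return -L - 15 * np.log10(d1) - 20 * np.log10(d_mk)
    # far-field slope: 35 dB/decade
    return -L - 35 * np.log10(d_mk)
```

Note that the model is continuous at d1: at d_mk = d1 the middle branch collapses to −L − 35 log10(d1), matching the last branch.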

Fig. 1: Total SE and the minimum SE versus the number of iterations. The values of M and K are given explicitly in the figure. Each AP is equipped with a single antenna.
Next we plot the cumulative distribution function (CDF) of the per-user SE obtained by the four considered utility functions.Two settings are examined: M = 100, K = 20, T c = 200 symbols

Fig. 2: Total spectral efficiency versus AP density. The number of users is K = 40.

Fig. 3: Total spectral efficiency versus AP density and number of users.

Fig. 4(b)). We can observe in Fig. 4 that the median values of the total achieved SEs are more

Fig. 5: Average spectral efficiency versus the number of APs for K = 100 and K = 50 users, with D = 1.

Fig. 6: Average spectral efficiency versus the number of selected APs for M = 500 and D = 1.

Fig. 7: Average spectral efficiency with respect to the number of antennas at each AP for K = 50, D = 1.
Since λ > 0, we get Nµ_m^T x_m − 1 > 0, i.e., µ_m^T x_m > 1/N. By the Cauchy–Schwarz inequality, ||µ_m|| ||x_m|| > 1/N, and hence ||x_m||² > 1/N, which corresponds to the case where x_m lies outside S. Furthermore, substituting λ = Nµ_m^T x_m − 1 into (46), we get x_m = N(µ_m^T x_m)µ_m, (49) which means that µ_m is parallel to x_m, i.e., µ_m = a x_m for some constant a > 0. Next, using (49), we obtain a = 1/(√N ||x_m||), and thus µ_m = x_m/(√N ||x_m||).
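The per-AP projection that this derivation yields, scaling x_m onto the ball of squared radius 1/N whenever it lies outside S, and leaving it untouched otherwise, decomposes over the M APs and is therefore parallelizable. A sketch (the function name, the flattened-vector layout, and the Cartesian-product reading of S are our assumptions):

```python
import numpy as np

def project_S(x, M, K, N):
    """Project x in R^{MK} onto S = {mu : ||mu_m||^2 <= 1/N for each AP m},
    read as a Cartesian product of per-AP Euclidean balls of radius
    1/sqrt(N). Each row of the reshaped array holds one mu_m, and the M
    per-AP projections are independent closed-form scalings."""
    X = x.reshape(M, K)  # row m holds the K-vector mu_m
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    radius = 1.0 / np.sqrt(N)
    # scale rows outside the ball down to its boundary; keep the rest
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return (X * scale).reshape(-1)
```

This closed-form, embarrassingly parallel projection is what keeps the per-iteration cost of the first-order method low compared with interior-point solvers.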

TABLE I: Comparison of run-time (in seconds) between Algorithm 2 and the SCA-based method for solving the SEmax problem. The values of the other parameters are K = 40, N = 1 and D = 1.