Intelligent Reflecting Surface Aided Multigroup Multicast MISO Communication Systems

Intelligent reflecting surface (IRS) has recently been envisioned to offer unprecedented massive multiple-input multiple-output (MIMO)-like gains by deploying large-scale and low-cost passive reflection elements. By adjusting the reflection coefficients, the IRS can change the phase shifts imposed on the impinging electromagnetic waves, so that it can smartly reconfigure the signal propagation environment to enhance the power of the desired received signal or suppress the interference signal. In this paper, we consider downlink multigroup multicast communication systems assisted by an IRS. We aim to maximize the sum rate of all the multicasting groups by jointly optimizing the precoding matrix at the base station (BS) and the reflection coefficients at the IRS under both the power and unit-modulus constraints. To tackle this non-convex problem, we propose two efficient algorithms under the majorization-minimization (MM) framework. Specifically, a concave lower-bound surrogate of each user's rate is first derived, based on which the two sets of variables can be updated alternately by solving two corresponding second-order cone programming (SOCP) problems. Then, to reduce the computational complexity, we derive another concave lower-bound function of each group's rate for each set of variables at every iteration, and obtain closed-form solutions under these looser surrogate objective functions. Finally, simulation results demonstrate the spectral- and energy-efficiency benefits of introducing the IRS, as well as the effectiveness of our proposed algorithms in terms of convergence and complexity.


I. INTRODUCTION
In the era of 5G and the Internet of Things, it is predicted that by 2020 the network capacity will have to increase 1000-fold to serve at least 50 billion devices through wireless communications [1], and this capacity is expected to be achieved with lower energy consumption. To meet these quality-of-service (QoS) requirements, the intelligent reflecting surface (IRS), a promising new technology, has recently been proposed to achieve high spectral and energy efficiency. An IRS is an artificial passive radio array structure in which the phase of each passive element on the surface can be adjusted continuously or discretely with low power consumption [2], [3], thereby steering the reflected signal toward specific receivers to enhance the received signal power [4]-[6], or to suppress interference and enhance security/privacy [7], [8].
The IRS, as a new concept beyond conventional massive multiple-input multiple-output (MIMO) systems, maintains all the advantages of massive MIMO, such as the capability of focusing large amounts of energy in three-dimensional space, which paves the way for wireless charging, remote sensing, and data transmission. However, the differences between the IRS and massive MIMO are also evident. Firstly, the IRS can be densely deployed in indoor spaces, making it possible to provide high data rates to indoor devices via near-field communications [9]. Secondly, in contrast to a conventional active antenna array equipped with energy-consuming radio frequency chains and power amplifiers, an IRS with passive reflection elements is cost-effective and energy-efficient [4], which makes it a prospective energy-efficient technology for green communications. Thirdly, as the IRS merely reflects the signal in a passive way, it imposes no thermal noise or self-interference on the received signal, unlike conventional full-duplex relays.
Due to these significant advantages, the IRS has been investigated in various wireless communication systems. Specifically, the authors in [4] first formulated the joint active and passive beamforming design problem in both downlink single-user and multi-user multiple-input single-output (MISO) systems assisted by an IRS, where the total transmit power of the base station (BS) is minimized based on the semidefinite relaxation (SDR) [10] and alternating optimization (AO) techniques. In order to reduce the high computational complexity incurred by SDR, Yu et al.
proposed low-complexity algorithms based on the MM (majorization-minimization, or minorization-maximization) algorithm in [7] and manifold optimization in [11] to design the reflection coefficients, with the respective goals of maximizing the secrecy capacity and the spectral efficiency. Pan et al. considered weighted sum rate maximization problems in multicell MIMO communications [5] and in simultaneous wireless information and power transfer (SWIPT)-aided systems [6], both demonstrating the significant performance gains achieved by deploying an IRS in the network. By combining an alternating algorithm with the popular bisection search method, Shen et al. [8] derived a closed-form transmit covariance matrix for the source and a semi-closed-form phase shift matrix for the IRS in a secrecy rate maximization problem. However, all the above-mentioned contributions only investigated the performance benefits of deploying an IRS in unicast transmissions, where the BS sends an independent data stream to each user. Unicast transmission causes severe interference and high system complexity when the number of users is large. To address this issue, multicast transmission based on content reuse [12] (e.g., identical content requested by a group of users simultaneously) has attracted wide attention, especially in application scenarios such as popular TV programs or video conferences. From the perspective of operators, multicast transmission can effectively alleviate the pressure of tremendous wireless data traffic and is expected to play a vital role in next-generation wireless networks. Therefore, it is worthwhile to explore the potential performance benefits brought by an IRS in multigroup multicast transmission.
A common performance metric in multicast transmission is max-min fairness (MMF), where the minimum signal-to-interference-plus-noise ratio (SINR) or spectral efficiency of the users in each multicasting group, or among all multicasting groups, is maximized [13]-[17]. Seminal treatments of single-group and multigroup multicast transmission are presented in [13], [14], where the MMF problems are formulated as fractional second-order cone programs (SOCPs) and shown to be NP-hard in general. The SDR technique [10] was adopted to approximately solve the SOCP problem with some mathematical manipulations. In order to reduce the high computational complexity of SDR, several low-complexity algorithms, such as an asymptotic approach [15], a successive convex approximation approach [16], and a heuristic algorithm [17], have been proposed by exploiting the near-orthogonality of massive MIMO channels.
In this paper, we consider an IRS-assisted multigroup multicast transmission system in which a multi-antenna BS transmits independent information data streams to multiple groups; the single-antenna users in the same group share the same information and suffer interference from the signals sent to other groups. Unfortunately, the popular SDR-based method incurs a high computational complexity that hinders its practical implementation when the number of design parameters (e.g., precoding matrix and reflection coefficient vector) becomes large.
Furthermore, the aforementioned low-complexity techniques designed for IRS-aided unicast communication schemes cannot be directly applied to multigroup multicast communication systems, since the MMF metric is a discrete and complicated objective function.
Against the above background, the main contributions of our work are summarized as follows:
• To the best of our knowledge, this is the first work exploring the performance benefits of deploying an IRS in multigroup multicast communication systems. Specifically, we jointly optimize the precoding matrix and the reflection coefficient vector to maximize the sum rate of all the multicasting groups, where the rate of each multicasting group is limited by the minimum rate of the users in that group. This problem is much more challenging than its previously studied unicast counterparts, since the objective function is discrete and complicated due to the nature of the multicast transmission mechanism. In addition, the highly coupled variables and the complex sum rate expression aggravate the difficulty of solving this problem.
• The formulated problem is solved efficiently in an iterative manner based on the alternating optimization method. Specifically, we first minorize the original non-concave objective function by a surrogate function that is biconcave in the precoding matrix and the reflection coefficient vector, and then apply the alternating optimization method to decouple these variables. At each iteration, the subproblem corresponding to each set of variables is reformulated as an SOCP problem by introducing auxiliary variables, which transform the discrete concave objective function into a series of convex constraints.
• To further reduce the computational complexity, we use the MM method to derive closed-form solutions for each subproblem, instead of solving a complex SOCP problem at each iteration. Specifically, we first apply the log-sum-exp lower bound to approximate the discrete concave objective function, yielding a differentiable continuous concave function. Then, we use the MM method to derive a tractable surrogate of the log-sum-exp function, based on which we obtain closed-form solutions for each subproblem. Finally, we prove that the proposed algorithm is guaranteed to converge and that the solution sequences it generates converge to stationary points.
• Finally, the simulation results demonstrate the superiority of the IRS-assisted multigroup multicast system over conventional massive MIMO systems in terms of spectral efficiency.
The remainder of this paper is organized as follows. Section II introduces the system model and formulates the optimization problem. An SOCP-based method is developed to solve the problem in Section III. Section IV further provides a low-complexity algorithm. Finally, Section V and Section VI present the simulation results and conclusions, respectively.

Notations:
The following mathematical notations and symbols are used throughout this paper.
Vectors and matrices are denoted by boldface lowercase letters and boldface uppercase letters, respectively. The symbols X*, X^T, and X^H denote the conjugate, transpose, and conjugate transpose of the matrix X, respectively; Tr[·], vec(·), ⊗, diag(·), λ_max(·) (λ_min(·)), and ∠(·) denote the trace, vectorization, Kronecker product, diagonalization, largest (smallest) eigenvalue, and phase operations, respectively.

II. SYSTEM MODEL AND PROBLEM FORMULATION

A. Signal Transmission Model
As shown in Fig. 1, we consider an IRS-aided multigroup multicast MISO communication system in which a BS with N transmit antennas serves G multicasting groups. Users in the same group share the same information data, while the data destined for different groups are independent and different, so inter-group interference exists. Let us define the set of all multicast groups as G = {1, 2, ..., G}. Assuming that there are K (K ≥ G) users in total, the set of users belonging to group g ∈ G is denoted by K_g, and each user belongs to exactly one group, i.e., K_i ∩ K_j = ∅, ∀i, j ∈ G, i ≠ j. The transmit signal at the BS is x = Σ_{g∈G} f_g s_g, where s_g is the desired information symbol of group g and f_g ∈ C^{N×1} is the corresponding precoding vector. Let us denote the collection of all precoding vectors by F = [f_1, ..., f_G] ∈ C^{N×G}, which lies in the feasible set S_F = {F : ||F||_F^2 ≤ P_T}, where P_T is the maximum available transmit power at the BS.
In the multigroup multicast system, we propose to employ an IRS with the goal of enhancing the received signal strength of the users. It is assumed that the channel state information (CSI) is perfectly known at the BS, which is responsible for designing the reflection coefficients of the IRS and feeding them back to the IRS controller, as shown in Fig. 1. As a result, the received signal of user k ∈ K_g belonging to group g is y_k = h_k^H Σ_{g'∈G} f_{g'} s_{g'} + n_k, where h_k is the equivalent BS-user channel and n_k is the received noise at user k, an additive white Gaussian noise (AWGN) following a circularly symmetric complex Gaussian (CSCG) distribution with zero mean and variance σ_k². Its achievable data rate (bps/Hz) is then given by (3). Note that e belongs to the unit-modulus set S_e, and the data rate expression in (3) can be rewritten in the compact form (6). Due to the nature of the multicast mechanism, the achievable data rate of group g is limited by the minimum user rate in the group, i.e., R_g = min_{k∈K_g} R_k.
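To make the signal model concrete, the following minimal Python sketch (illustrative only: random i.i.d. channels, hypothetical dimensions N, M, G, K, and an arbitrary feasible precoder F and reflection vector e rather than optimized ones) evaluates each user's rate through the combined direct and reflected channel, and the resulting group and sum rates.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, G, K = 4, 8, 2, 4            # BS antennas, IRS elements, groups, users (hypothetical)
group_of = [0, 0, 1, 1]            # user-to-group assignment (hypothetical)
sigma2 = 1e-2                      # noise power

# Random i.i.d. channels: direct BS-user, BS-IRS, IRS-user
h_d = [rng.normal(size=N) + 1j * rng.normal(size=N) for _ in range(K)]
H_dr = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
h_r = [rng.normal(size=M) + 1j * rng.normal(size=M) for _ in range(K)]

F = rng.normal(size=(N, G)) + 1j * rng.normal(size=(N, G))  # precoders f_g
F /= np.linalg.norm(F)                                      # satisfy the power budget
e = np.exp(1j * rng.uniform(0, 2 * np.pi, M))               # unit-modulus IRS coefficients

def user_rate(k):
    """R_k = log2(1 + SINR_k) through the direct plus IRS-reflected channel."""
    h_k = h_d[k] + H_dr.conj().T @ (e.conj() * h_r[k])      # equivalent channel h_k
    p = np.abs(h_k.conj() @ F) ** 2                         # received power per group stream
    g = group_of[k]
    return np.log2(1 + p[g] / (p.sum() - p[g] + sigma2))

# Group rate = worst user rate in the group; the system objective sums over groups
group_rate = [min(user_rate(k) for k in range(K) if group_of[k] == g)
              for g in range(G)]
print(sum(group_rate))
```

Here the direct link is kept as a separate term rather than stacked into the (M+1)-dimensional equivalent vector e used in the paper; the two forms are equivalent.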

B. Problem Formulation
In this paper, we aim to jointly optimize the precoding matrix F and the reflection coefficient vector e to maximize the sum rate of the whole system, defined as the sum of the rates achieved by all groups. Mathematically, the optimization problem is formulated as Problem (8). Problem (8) is non-convex and difficult to solve, since the objective function F(F, e) is discrete and non-concave, while the unit-modulus constraint set S_e is also non-convex. In the following, we propose two efficient algorithms to solve Problem (8).

III. SOCP-BASED ALTERNATING OPTIMIZATION METHOD
In this section, we propose an SOCP-based alternating optimization method to solve Problem (8). Specifically, we first handle the non-convex objective function by introducing a concave surrogate function. Then, we adopt the alternating optimization method to solve the subproblems corresponding to the different sets of variables iteratively.
Note that F(F, e) is a composite function: a linear combination of pointwise minima of the non-concave subfunctions R_k(F, e). We first tackle the non-concavity of R_k(F, e). To this end, we introduce the following theorem.
Theorem 1: Let {F^n, e^n} be the solution obtained at iteration n − 1. Then R_k(F, e) is minorized by the concave surrogate function R̃_k(F, e) defined in (9).
Proof: Please refer to Appendix A.
Based on the above theorem, Problem (8) can be transformed into the surrogate Problem (10). We note that R̃_k(F, e) is biconcave in F and e [18], since R̃_k(F) = R̃_k(F, e) with given e is concave in F and R̃_k(e) = R̃_k(F, e) with given F is concave in e. This biconvex structure enables us to use the alternating optimization (AO) method to update F and e alternately.

A. Optimizing the Precoding Matrix F
In this subsection, we aim to optimize the precoding matrix F for a given e. With some manipulations, R̃_k(F, e) in (9) can be shown to be the quadratic function of F given in (11), where t_g ∈ R^{G×1} is a selection vector whose g-th element is equal to one and all other elements are equal to zero.
By using (11), the subproblem of Problem (10) for the optimization of F is given in (14). We then handle the pointwise-minimum expressions in the objective function of Problem (14) by introducing auxiliary variables γ_g, ∀g ∈ G, which yields Problem (15). Problem (15) is an SOCP problem and can be solved with CVX [19] using a solver such as MOSEK [20].

B. Optimizing the reflection coefficient vector e
In this subsection, we focus on optimizing the reflection coefficient vector e for a given F, in which case R̃_k(e) can be rewritten as the quadratic form in (16). Upon replacing the objective function of Problem (10) by (16), the subproblem for the optimization of e is given by Problem (19). Introducing auxiliary variables κ_g, ∀g ∈ G, Problem (19) can be rewritten equivalently, but the resulting problem is still non-convex due to the non-convex unit-modulus set S_e. To address this issue, we replace S_e with a relaxed convex set, which yields the SOCP problem in (21), where i_m ∈ R^{(M+1)×1} is a selection vector whose m-th element is equal to one and all other elements are equal to zero. Let ē_1 denote the optimal solution of the relaxed SOCP problem in (21). Then, the optimal e in the n-th iteration is obtained by projecting ē_1 entry-wise back onto the unit-modulus set, where [ē_1]_m denotes the m-th element of ē_1, and exp{·} and ∠(·) are element-wise operations.
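The projection step simply retains the phase of each entry of the relaxed solution. A small sketch (with a random complex vector standing in for the relaxed SOCP solution ē_1, which is hypothetical here) verifies that the phase-keeping map is indeed the entry-wise nearest unit-modulus point:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8
e1 = rng.normal(size=M) + 1j * rng.normal(size=M)   # stand-in for the relaxed solution

# Project each entry onto the unit circle: keep the phase, fix the modulus to one.
e_proj = np.exp(1j * np.angle(e1))
assert np.allclose(np.abs(e_proj), 1.0)

# Brute-force check: the phase-keeping point is the closest point on a fine grid
for m in range(M):
    thetas = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    best = np.exp(1j * thetas[np.argmin(np.abs(np.exp(1j * thetas) - e1[m]))])
    assert abs(best - e_proj[m]) < 1e-2
```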

C. Algorithm development
Based on the above analysis, Algorithm 1 summarizes the alternating update process between precoding matrix F and reflection coefficient vector e to maximize the sum rate of the whole system.
1) Complexity analysis: We now analyze the computational complexity of Algorithm 1, which mainly stems from solving the SOCP problem in (15) for F and the SOCP problem in (21) for e.
According to [21], the complexity of solving an SOCP problem with M_socp second-order cone constraints, each of dimension N_socp, is O(N_socp M_socp^3.5 + N_socp^3 M_socp^2.5) log(1/ε), where ε is the convergence accuracy. Problem (21) has M constant-modulus constraints of dimension one (for the sparse vectors i_m) and K rate constraints of dimension M + 1. Therefore, the complexity of solving Problem (21) per iteration is O(M^3.5 + M^2.5 + (M + 1)K^3.5 + (M + 1)^3 K^2.5) log(1/ε). The approximate overall complexity of Algorithm 1 is the sum of the per-iteration complexities of Problems (15) and (21), multiplied by the number of AO iterations.
2) Convergence analysis: In the following, we analyze the convergence of Algorithm 1.

Specifically, we have
where the first equality holds when F = F^n and e = e^n in (9); the first inequality follows from the globally optimal solution F^{n+1} of Problem (15); the second follows from the locally optimal solution e^{n+1} of Problem (21); and the last inequality comes from (9). We thus have a nondecreasing sequence {F(F^n, e^n)}. In addition, the sequence {F^n, e^n} generated at each iteration of Algorithm 1 converges to a stable point as n → ∞, because F^n and e^n are bounded in their feasible sets S_F and S_e, respectively [22]. Hence, the sequence {F(F^n, e^n)} is guaranteed to converge to a local optimum.

IV. LOG-SUM-EXP-BASED MAJORIZATION-MINIMIZATION METHOD
As seen in Algorithm 1, we need to solve two SOCP problems in each iteration, which incurs a high computational complexity.In this section, we aim to derive a low-complexity algorithm with good performance.
Since min_{k∈K_g} R̃_k(F, e) in Problem (10) is non-differentiable, we approximate it by a smooth function using the log-sum-exp lower bound [23], where µ_g > 0 is a smoothing parameter that controls the tightness of the bound.
Theorem 2: f_g(F, e) is biconcave in F and e.
Proof: According to [24], a twice-differentiable function whose Hessian matrix is negative semidefinite is concave. Consider the log-sum-exp function f(x) = log Σ_{n=1}^N e^{x_n} and let z = (e^{x_1}, ..., e^{x_N}). Its Hessian is ∇²f(x) = ((1^T z) diag(z) − z z^T)/(1^T z)². Then, for any v, v^T ∇²f(x) v = (||a||² ||b||² − (a^T b)²)/(1^T z)² ≥ 0, where the components of the vectors a and b are a_n = v_n √z_n and b_n = √z_n, respectively; the inequality follows from the Cauchy-Schwarz inequality. Hence ∇²f(x) ⪰ 0 and the log-sum-exp function f(x) is convex. Consequently, −(1/µ_g) log Σ_{k∈K_g} exp(−µ_g R̃_k) is an increasing and concave function w.r.t. the R̃_k. Recall that R̃_k(F, e) is biconcave in F and e. Finally, according to the composition principle [24], f_g(F, e) is biconcave in F and e. The proof is complete.
A larger µ_g leads to a more accurate approximation, but it also causes the problem to become nearly ill-conditioned. With µ_g chosen appropriately, Problem (10) is approximated by Problem (27). This problem is still biconvex in F and e, which enables us to update F and e alternately via the alternating optimization method.
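As a quick numerical illustration of the log-sum-exp approximation and the role of µ_g, the following sketch (on hypothetical rate values) checks the standard sandwich bound min_k r_k − log(K)/µ ≤ −(1/µ) log Σ_k exp(−µ r_k) ≤ min_k r_k, which tightens as µ grows:

```python
import numpy as np

def smooth_min(r, mu):
    """-(1/mu) * log(sum_k exp(-mu * r_k)): a smooth lower bound on min_k r_k."""
    return -np.log(np.sum(np.exp(-mu * np.asarray(r)))) / mu

rates = [1.3, 0.9, 2.1]            # hypothetical per-user rates in one group
K = len(rates)
for mu in (1.0, 10.0, 100.0):
    lb = smooth_min(rates, mu)
    # Sandwich bound: min - log(K)/mu <= smooth_min <= min
    assert min(rates) - np.log(K) / mu - 1e-12 <= lb <= min(rates) + 1e-12
```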
Given e, the subproblem of Problem (27) for the optimization of F is Problem (28). Even though f_g(F) is a concave and continuous function of the precoding matrix F, it is still complicated and difficult to optimize directly. Since the aim of the MM algorithm [25], [26] is to find an easy-to-optimize surrogate objective function in place of the original complex one, we use it to find a locally optimal solution of Problem (28).
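The MM principle used here can be illustrated on a toy scalar problem (hypothetical, not the paper's actual objective): maximize the concave function f(x) = log(1 + x) over [0, P] by repeatedly maximizing a quadratic minorizer whose curvature bound plays the role of α_g in Theorem 3:

```python
import numpy as np

# On [0, P], f''(x) = -1/(1+x)^2 >= -1, so
#   f(x) >= f(x_n) + f'(x_n)(x - x_n) - 0.5*(x - x_n)^2
# is a valid quadratic minorizer (curvature bound alpha = 1, hypothetical toy case).
P = 3.0
f = lambda x: np.log(1 + x)
fprime = lambda x: 1 / (1 + x)

x = 0.0
values = [f(x)]
for _ in range(50):
    # Maximizing the minorizer gives x_{n+1} = x_n + f'(x_n), then project onto [0, P]
    x = min(max(x + fprime(x), 0.0), P)
    values.append(f(x))

# MM guarantees a monotonically nondecreasing objective sequence
assert all(b >= a - 1e-12 for a, b in zip(values, values[1:]))
assert abs(x - P) < 1e-6   # f is increasing, so the maximizer sits on the boundary
```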
Let f̃_g(F, F^n) denote a real-valued function of the variable F for a given F^n. The function f̃_g(F, F^n) is said to minorize f_g(F) at the point F^n if it satisfies the conditions (A1)-(A4) [26], where f'_g(F^n; d) denotes the directional derivative of f_g at F^n in the direction d. A minorizing function of f_g(F) is given in the following theorem.
Theorem 3: Since f_g(F) is twice differentiable and concave, f_g(F) can be minorized at any fixed F^n by a quadratic function f̃_g(F, F^n) satisfying conditions (A1)-(A4), as given in (29).
Proof: Please refer to Appendix B.
Upon replacing the objective function of Problem (28) with (29), we obtain the surrogate Problem (35). The optimal F^{n+1} can be obtained by introducing a Lagrange multiplier τ ≥ 0 associated with the power constraint and forming the corresponding Lagrangian L(F, τ). By setting the first-order derivative of L(F, τ) w.r.t. F* to zero, the optimal solution of F at iteration n is obtained in closed form in (36). Substituting (36) into the power constraint yields (37), whose left-hand side is clearly a decreasing function of τ.
• If the power constraint inequality (37) holds for τ = 0, then F^{n+1} is given directly by (36) with τ = 0.
• Otherwise, there must exist a τ > 0 for which (37) holds with equality; owing to the monotonicity of the left-hand side, this τ can be found efficiently, e.g., by a bisection search.
Upon adopting the MM framework for the update of e, we first need to find a minorizing function of f_g(e), denoted by f̃_g(e, e^n). Since S_e is a non-convex set, we must modify (A3) in order to claim stationarity of the limit points [27], [28], where T_{S_e}(e^n) is the Bouligand tangent cone of S_e at e^n. Therefore, f̃_g(e, e^n) should satisfy the conditions (B1)-(B4).
Theorem 4: Since f_g(e) is twice differentiable and concave, f_g(e) can be minorized at any fixed e^n by a function f̃_g(e, e^n) satisfying conditions (B1)-(B4), as given in (41). The optimal e at the n-th iteration is then given in closed form by (48), where exp{j∠(·)} is an element-wise operation.
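The two-case structure of the τ update can be sketched on a generic quadratic surrogate (the linear term v and curvature α below are hypothetical placeholders for the quantities in (35)-(37)): take τ = 0 if the unconstrained maximizer is feasible, and otherwise bisect on the monotonically decreasing power expression:

```python
import numpy as np

rng = np.random.default_rng(1)
n, P_T, alpha = 6, 1.0, 2.0
v = rng.normal(size=n) + 1j * rng.normal(size=n)   # hypothetical linear term

# Surrogate: maximize 2*Re(v^H f) - alpha*||f||^2  s.t.  ||f||^2 <= P_T.
# KKT: f = v / (alpha + tau) with tau >= 0 chosen so the power constraint holds.
f0 = v / alpha                                     # tau = 0 candidate
if np.linalg.norm(f0) ** 2 <= P_T:
    f_opt, tau = f0, 0.0
else:
    # ||v||^2/(alpha+tau)^2 = P_T has a decreasing left-hand side in tau: bisect
    lo, hi = 0.0, 1e6
    for _ in range(200):
        tau = (lo + hi) / 2
        if np.linalg.norm(v / (alpha + tau)) ** 2 > P_T:
            lo = tau
        else:
            hi = tau
    f_opt = v / (alpha + tau)

# In this toy case the active multiplier also has the closed form ||v||/sqrt(P_T) - alpha
tau_cf = max(0.0, np.linalg.norm(v) / np.sqrt(P_T) - alpha)
assert abs(tau - tau_cf) < 1e-6
assert np.linalg.norm(f_opt) ** 2 <= P_T + 1e-9
```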

C. Convergence Analysis
In this section, we adopt the alternating optimization algorithm to alternately optimize the precoding matrix F and the reflection coefficient vector e, and within each iteration we use the MM algorithm to update each set of variables. The monotonicity of the MM algorithm has been proved in [26] and [29]. In the following, we establish the monotonicity of Algorithm 2.
At the n-th iteration, with given e^n, we have a chain of inequalities in which the first equality follows from (A1), the first inequality follows from (35), and the second one follows from (A2). Subsequently, with given F^{n+1}, an analogous chain holds for the update of e. Therefore, the objective function values {f_g(F^{n+1}, e^{n+1})} generated during the AO procedure are monotonically increasing.
Let {F^n} be the sequence generated by the proposed algorithm. Since S_F is a convex set, every limit point of {F^n} is a d-stationary point of Problem (8); that is, the limit point F^∞ satisfies f'_g(F^∞; F − F^∞) ≤ 0, ∀F ∈ S_F. The proof of convergence to a d-stationary point can be found in [30].
Let {e^n} be the sequence generated by the proposed algorithm. Since S_e is a non-convex set, every limit point of {e^n} is a B-stationary point of Problem (8), and the limit point e^∞ satisfies f'_g(e^∞; d) ≤ 0, ∀d ∈ T_{S_e}(e^∞).
The proof of converging to a B-stationary point can be found in [27] and [28].

D. Low-complexity algorithm design
Note that the tightness of the lower bounds α_g in (32) and β_g in (44) affects the convergence speed. Here, we adopt SQUAREM [31] to accelerate the convergence of our proposed algorithm, which is summarized in Algorithm 2.
Let M_F(·) denote the nonlinear fixed-point iteration map of the MM update of F in (36), i.e., F^{n+1} = M_F(F^n), and M_e(·) that of e in (48), i.e., e^{n+1} = M_e(e^n). P_S(·) is a projection operation that forces wayward points back into their nonlinear constraint sets. For the power constraint in Problem (35), the projection can be performed by scaling the solution so that ||F||_F² ≤ P_T; for the unit-modulus constraint in (47), it is obtained by applying exp{j∠(·)} element-wise to the solution vector. Steps 10 to 13 and steps 21 to 24 maintain the ascent property of the proposed algorithm.
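A minimal sketch of SQUAREM-style acceleration on a hypothetical scalar fixed-point map (one projected MM step for the toy objective f(x) = log(1 + x) on [0, P]; the map, bounds, and steplength rule below are illustrative stand-ins for M_F(·)/M_e(·) and P_S(·)), including the objective-based safeguard that preserves the ascent property:

```python
import numpy as np

P = 4.0
f = lambda x: np.log(1 + x)
m = lambda x: min(max(x + 1 / (1 + x), 0.0), P)   # one MM step followed by projection

x = 0.0
for _ in range(20):
    x1 = m(x)
    x2 = m(x1)
    r, v = x1 - x, (x2 - x1) - (x1 - x)           # SQUAREM residuals
    if abs(v) < 1e-12:                             # map has (numerically) converged
        x = x2
        continue
    step = -abs(r) / abs(v)                        # SQUAREM steplength (alpha < 0)
    x_acc = min(max(x - 2 * step * r + step**2 * v, 0.0), P)   # accelerate + project
    # Safeguard: fall back to the plain MM iterate if acceleration decreases f
    x = x_acc if f(x_acc) >= f(x2) else x2

assert abs(x - P) < 1e-6
```

The safeguard mirrors the role of steps 10 to 13 and 21 to 24 in Algorithm 2: acceleration is only kept when it does not break monotonicity.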

E. Complexity Analysis
The computational complexity of Algorithm 2 is determined by the nonlinear fixed-point iteration maps M_F(·) and M_e(·). In M_F(·), the computational complexity of U_g in (33) mainly comes from computing g_k(F^n) in (31) and α_g in (32), neglecting the lower-order terms. In M_e(·), the computational complexity of g_k(e^n) in (43) is the same as that of g_k(F^n). Furthermore, the eigenvalue operations λ_max(A_k) and λ_max(A_k A_k^H), each of order O((M + 1)³), contribute the dominant cost of calculating β_g in (44), which is of order O(|K_g|(M + 1)³). Neglecting the lower-order terms, the approximate complexity of Algorithm 2 is dominated by these O((M + 1)³) eigenvalue computations, with an additional factor of log(1/ε) accounting for the iterations required to reach the convergence accuracy ε.
Comparing with Algorithm 1 based on SOCP, Algorithm 2 has a lower computational complexity and requires less CPU time, which will be shown in the following section.

V. SIMULATION RESULTS AND DISCUSSIONS
In this section, extensive simulation results are provided to evaluate the performance of our proposed algorithms for an IRS-aided multigroup multicast MISO communication system. All experiments were performed on a PC with a 1.99 GHz i7-8550U CPU and 16 GB of RAM. Each point in the following figures is obtained by averaging over 100 independent trials. The channels are drawn from a zero-mean complex Gaussian distribution. The noise variance is σ_k² = −20 dBm, the convergence accuracy is ε = 10⁻⁶, and the smoothing parameter is set to µ_g = 100 [23]. The SNR in the figures is defined as 10 log₁₀(P_T/σ_k²). Unless otherwise stated, the other parameters are set as follows: the BS is equipped with N = 8 transmit antennas serving G = 2 multicasting groups, each of which has |K_g| = 2 users. We use IRS-SOCP to denote Algorithm 1 and IRS-MM to denote Algorithm 2. For comparison, we also show the performance of the scheme without an IRS, in which the precoding matrix is obtained by the same two algorithms, denoted NIRS-SOCP and NIRS-MM, respectively.
In Fig. 2, we first investigate the convergence behaviour of the various algorithms in terms of the iteration number and the CPU time when the SNR is −2 dB. Fig. 2(a) compares the convergence speed in terms of the number of iterations: only a small number of iterations is needed for Algorithm 1 to converge, for both the IRS and NIRS schemes. The reason is that the lower bound (9) of the original objective function used in Algorithm 1 is tighter than the bounds (29) and (41) used in Algorithm 2. Although Algorithm 2 needs more iterations to converge, it converges faster in terms of CPU time, as shown in Fig. 2(b), because each of its iterations uses closed-form solutions for both the precoding matrix and the reflection coefficient vector. In addition, in the NIRS case the optimal sum rates obtained by both algorithms are the same, while in the IRS case the optimal objective value of Algorithm 2 is slightly higher than that of Algorithm 1. The reason is that the precoding matrix is designed under a convex power constraint, so the obtained precoding matrix is globally optimal in each subproblem.
The reflection coefficient vector, however, is designed over a non-convex set, which Algorithm 1 relaxes into a convex constraint, so Algorithm 1 obtains only a sub-optimal reflection coefficient vector. Therefore, the optimal objective values of the two algorithms coincide in the NIRS case but differ in the IRS case due to the different reflection coefficient solutions. It is also of practical significance to compare conventional large-scale antenna arrays deployed at the BS with large-scale passive elements deployed at the IRS, since the IRS is regarded as an extension of the massive MIMO antenna array. Fig. 4 illustrates the sum rate performance versus the numbers of antenna elements at the BS and reflection elements at the IRS when SNR = −2 dB. It is observed from Fig. 4 that significant gains are achieved by the IRS scheme over the scheme without an IRS, even when M is as small as 8.
Comparing the four solid lines, we observe that the performance gains achieved by increasing the number of reflection elements are much higher than those achieved by increasing the number of transmit antennas. In addition, it is more energy-efficient to deploy an IRS with passive elements than to install an active large-scale antenna array with energy-consuming radio frequency chains and power amplifiers. These simulation results demonstrate that the IRS technology is superior to traditional massive MIMO in terms of both spectral and energy efficiency.
The above simulation results show that Algorithm 2 performs slightly better than Algorithm 1 while requiring less CPU time. Hence, we adopt Algorithm 2 to investigate the effect of an IRS on the performance of a multicast communication system. Fig. 5 illustrates the sum rate versus the number of users per group for various numbers of groups. The sum rate decreases for all values of G as the number of users per group increases, because the data rate of each group is limited by the user with the worst channel condition, and the worst user's channel gain shrinks as the group grows.
Fig. 5 shows that the sum rate of the system increases slowly and tends to be stable with the increase of the number of multicasting groups for a given number of antenna/reflection elements.
To dig further, Fig. 6 compares the effects of two improvements on the performance limit: increasing the number of antennas at the BS, and increasing the number of reflection elements at the IRS. The solid and dashed red curves are drawn from the data in Fig. 5 for |K_g| = 1 and |K_g| = 3, respectively. For comparison, we increase the number of BS antennas to 16 and the number of IRS reflection elements to 16, respectively. We observe that both improvements enhance the sum rate performance limit, while increasing the number of reflection elements yields larger gains than increasing the number of antennas at the BS.

APPENDIX A THE PROOF OF THEOREM 1

We perform some equivalent transformations of the rate expression (6) to reveal its hidden convexity, as shown in (50) [32]; thus a lower-bound surrogate function can be obtained via a first-order approximation at the current iterate. Substituting the resulting terms into the right-hand side of the last equation in (50) yields the minorizing function in (9). Hence, the proof is complete.

APPENDIX B THE PROOF OF THEOREM 3
Since f_g(F) is twice differentiable and concave, we propose the quadratic surrogate function in (52) to minorize f_g(F), where the matrices D_g ∈ C^{N×N} and M_g ∈ C^{N×N} are determined so as to satisfy conditions (A1)-(A4).
Note that (A1) and (A4) are satisfied by construction. We next prove that condition (A3) also holds.
Let F̄ be a matrix belonging to S_F. The directional derivative of the right-hand side of (52) at F^n in the direction F̄ − F^n is given by (53), and the directional derivative of f_g at F^n in the same direction is given by (54), where g_k(F^n) is defined in (31).
In order to satisfy condition (A3), the two directional derivatives (53) and (54) must be equal, which determines D_g. We now prove that condition (A2) also holds. If the surrogate function f̃_g(F, F^n) is a lower bound of f_g along every linear cut in any direction, condition (A2) is satisfied. Let F = F^n + γ(F̄ − F^n), ∀γ ∈ [0, 1]. A sufficient condition for (56) to hold is that the second derivative of its right-hand side is less than or equal to that of its left-hand side for all γ ∈ [0, 1] and all F̄, F^n ∈ S_F, as formulated in (57). To evaluate the left-hand side of (57), we first compute the first-order derivative and then the second-order derivative in (59), which we manipulate into a quadratic form of f involving the matrix Φ_g. We also manipulate the right-hand side of (57) into a quadratic form of f by using the vectorization identity Tr[A^T BC] = vec^T(A)(I ⊗ B)vec(C) [33]. Then, (57) reduces to finding an M_g that satisfies M_g ⪯ Φ_g. For convenience, we choose M_g = α_g I with α_g = λ_min(Φ_g). Finally, (52) becomes (29), where U_g and consF_g are given in (30) and (34), respectively. Since α_g in (32) is difficult to obtain directly from the complex expression of Φ_g, we proceed to bound its value in the following.
The following inequalities and equalities, in particular (C4), will be used below. Recall that F = F^n + γ(F̄ − F^n), ∀γ ∈ [0, 1]; therefore ||F^n + γ(F̄ − F^n)||_F² ≤ P_T. By using (C4), the last term on the right-hand side of the last equation in (62) satisfies the inequality in (63). The third term on the right-hand side of the last inequality in (63) is the optimal objective value of Problem (64), which admits a closed-form solution.
Finally, combining (62) with (63), we arrive at (32). Hence, the proof is complete.

Obviously, (B1) and (B4) are already satisfied. In order to satisfy condition (B3), the directional derivatives of f_g(e) and of the right-hand side of (65) must be equal, yielding the condition in which g_k(e^n) is defined in (43).
Then, we need to calculate the second-order derivatives of the left-hand side and the right-hand side of (67), and ensure that the latter is lower than or equal to the former for all γ ∈ [0, 1] and all ẽ, e^n ∈ S_e.
The second-order derivative of the left-hand side of (67) is given below with t = e − e^n, together with the matrix Ψ_g. The second-order derivative of the right-hand side of (67) follows, where u_g, β_g, and consE_g are given in (42), (44), and (46), respectively. The last equation of (73) follows from the unit-modulus constraints, i.e., e^H e = (e^n)^H e^n = M + 1. The method for obtaining the value of β_g is similar to that for α_g, so we omit it here. Hence, the proof is complete.
the received signal strength of users by reflecting signals from the BS to the users. It is assumed that the IRS has M reflection elements, and the reflection coefficient matrix of the IRS is modelled by a diagonal matrix E = diag([e_1, ..., e_M]^T) ∈ C^{M×M}, where |e_m|^2 = 1, ∀m = 1, ..., M. The channels spanning from the BS to user k, from the BS to the IRS, and from the IRS to user k are denoted by h_{d,k} ∈ C^{N×1}, H_dr ∈ C^{M×N}, and h_{r,k} ∈ C^{M×1}, respectively.
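Under this model, the composite BS-to-user-k channel is h_{r,k}^H E H_dr + h_{d,k}^H, which is linear in the reflection coefficients. A minimal numpy sketch (random channels, arbitrary sizes) verifying that it equals a stacked bilinear form in the reflection vector with an appended 1, as used below:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 4, 6  # BS antennas, IRS elements (arbitrary sizes)

# Channels: BS->user (h_d), BS->IRS (H_dr), IRS->user (h_r).
h_d = rng.standard_normal(N) + 1j * rng.standard_normal(N)
H_dr = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
h_r = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Unit-modulus reflection coefficients e_m = exp(j*theta_m).
theta = rng.uniform(0, 2 * np.pi, M)
e = np.exp(1j * theta)
E = np.diag(e)

# Composite channel (row vector): h_r^H E H_dr + h_d^H.
h_eff = h_r.conj() @ E @ H_dr + h_d.conj()

# Equivalent stacked form: IRS path on top of the direct path, with the
# reflection vector extended to e_bar = [e_1, ..., e_M, 1]^T.
G = np.vstack([np.diag(h_r.conj()) @ H_dr, h_d.conj()[None, :]])  # (M+1) x N
e_bar = np.append(e, 1.0)
h_eff2 = e_bar @ G

print(np.allclose(h_eff, h_eff2))
```

The appended 1 is what lets the direct channel be absorbed into the same bilinear form as the reflected path.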
the equivalent channel spanning from the BS to user k, and by e = [e_1, ..., e_M, 1]^T ∈ C^{(M+1)×1} the equivalent reflection coefficient vector, we have the composite channel relation. The per-iteration complexity of solving an SOCP scales with log(1/ε), where ε is the convergence accuracy. Problem (15) contains one power constraint of dimension NK and K rate constraints of dimension NK; the complexity of solving Problem (15) per iteration follows accordingly.

Algorithm 1: SOCP-based alternating optimization algorithm
Initialize: Initialize F^1 and e^1, and set n = 0.
1: repeat
2:   Set e = e^n and calculate F^{n+1} by solving Problem (15);
3:   Set F = F^{n+1} and calculate e^{n+1} by solving Problem (21);
4:   n ← n + 1;
5: until the value of the function F(F, e) in (8) converges.
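The alternating structure of Algorithm 1 can be sketched with a toy biconcave objective standing in for F(F, e); the two exact per-block maximizers below replace the SOCP solves of Problems (15) and (21) and are valid for the toy objective only. The property the sketch exhibits, monotone non-decrease of the objective, is exactly what the MM framework guarantees for the real algorithm:

```python
# Toy stand-in objective: biconcave in (x, y), so exact per-block
# maximization plays the role of the two SOCP solves.
def objective(x, y):
    return -(x - y) ** 2 - (x - 1.0) ** 2

def update_x(y):          # stands in for "solve Problem (15) given e"
    return (y + 1.0) / 2  # argmax_x of the toy objective

def update_y(x):          # stands in for "solve Problem (21) given F"
    return x              # argmax_y of the toy objective

x, y = 5.0, -3.0          # arbitrary initialization (F^1, e^1)
prev = objective(x, y)
history = [prev]
for _ in range(100):
    x = update_x(y)
    y = update_y(x)
    val = objective(x, y)
    history.append(val)
    if abs(val - prev) < 1e-10:   # convergence of the objective value
        break
    prev = val

# Alternating exact maximization never decreases the objective.
monotone = all(b >= a - 1e-12 for a, b in zip(history, history[1:]))
print(monotone)
```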
B. Optimizing the Reflection Coefficient Vector e

Given F, the subproblem of Problem (27) for the optimization of e is a maximization subject to e ∈ S_e.
∀e^n ∈ S_e; (B2): f̃_g(e, e^n) ≤ f_g(e), ∀e, e^n ∈ S_e; (B3): f̃′_g(e, e^n; d)|_{e=e^n} = f′_g(e^n; d), ∀d ∈ T_{S_e}(e^n); (B4): f̃_g(e, e^n) is continuous in e and e^n. The surrogate function is given in the following theorem.

Proof:
Please refer to Appendix C. Upon replacing the objective function of Problem (40) by (41), we obtain the following surrogate problem:

max_e Σ_{g=1}^{G} (2Re{u_g^H e} + consE_g)  s.t. e ∈ S_e.
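Since consE_g does not depend on e, this surrogate problem reduces to maximizing 2Re{(Σ_g u_g)^H e} under the unit-modulus constraints, and the maximizer simply aligns the phase of each free entry of e with the corresponding entry of Σ_g u_g. A numeric sanity check with random stand-ins for the u_g (the last entry of e is fixed to 1, as in the definition of e):

```python
import numpy as np

rng = np.random.default_rng(3)
M, G = 8, 3  # IRS elements and groups (arbitrary)

# u_g vectors from the surrogate; random complex stand-ins here.
U = rng.standard_normal((G, M + 1)) + 1j * rng.standard_normal((G, M + 1))
u = U.sum(axis=0)  # sum over groups

def surrogate(e):
    return 2.0 * np.real(u.conj() @ e)

# Closed-form maximizer: align each free phase with u_m; last entry fixed to 1.
e_star = np.append(np.exp(1j * np.angle(u[:M])), 1.0)

# No random unit-modulus candidate should beat the closed-form solution.
best = surrogate(e_star)
ok = True
for _ in range(2000):
    e_rand = np.append(np.exp(1j * rng.uniform(0, 2 * np.pi, M)), 1.0)
    ok &= surrogate(e_rand) <= best + 1e-9
print(ok)
```

This per-element phase alignment is what makes the closed-form update cheap compared with the SOCP solve.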

Fig. 3(a) and Fig. 3(b) show the sum rate and the corresponding CPU time under different SNRs, respectively. It can be seen in Fig. 3(a) that the IRS markedly enhances the sum rate of the multigroup multicast communication system without consuming additional transmission power; it is therefore an energy-efficient strategy that meets green communication requirements. From Fig. 3(b), we observe that Algorithm 1 is time-consuming, and the required time becomes unacceptable as the SNR increases.

Fig. 4: The sum rate versus the number of reflection elements at the IRS, M, and of transmit antennas at the BS, N, when G = |K_g| = 2 and SNR = −2 dB.

Fig. 5: The sum rate versus the number of users per group, when N = M = 8 and SNR = −2 dB.

Since f_g(e) is twice differentiable and concave, we minorize f_g(e) at e^n with a quadratic function as follows:

f_g(e) ≥ f_g(e^n) + 2Re{d_g^H (e − e^n)} + (e − e^n)^H N_g (e − e^n),   (65)

where the vectors d_g ∈ C^{M×1} and matrices N_g ∈ C^{M×M} are determined to satisfy conditions (B1)-(B4).
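For a concave quadratic stand-in for f_g, the minorizer (65) holds exactly with d_g the gradient coefficient and with N_g = −λ_max I, where λ_max is the largest eigenvalue of the quadratic part; this mirrors the λ_min/λ_max eigenvalue bounds used for α_g and β_g. A numeric check (the random A and b below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
M = 5

# Concave quadratic stand-in for f_g: f(e) = -e^H A e + 2 Re{b^H e}, A PSD.
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = B @ B.conj().T  # Hermitian positive semidefinite
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)

def f(e):
    return float(np.real(-e.conj() @ A @ e) + 2.0 * np.real(b.conj() @ e))

# Quadratic minorizer around e_n, as in (65):
# d_g = b - A e_n (gradient coefficient), N_g = -lam_max(A) I, so N_g <= -A.
lam_max = np.linalg.eigvalsh(A)[-1]
e_n = rng.standard_normal(M) + 1j * rng.standard_normal(M)
d = b - A @ e_n

def minorizer(e):
    t = e - e_n
    return f(e_n) + 2.0 * np.real(d.conj() @ t) - lam_max * np.real(t.conj() @ t)

# The surrogate matches f at e_n and lower-bounds it everywhere.
ok = abs(minorizer(e_n) - f(e_n)) < 1e-9
for _ in range(1000):
    e = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    ok &= minorizer(e) <= f(e) + 1e-9
print(ok)
```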