Hypergraph-Based Resource-Efficient Collaborative Reinforcement Learning for B5G Massive IoT

Beyond 5G (B5G) networks are rapidly growing to connect billions of Internet of Things (IoT) devices, and the dense deployment of these devices leads to large-scale network conflict and hinders resource efficiency, posing a great challenge for network resource management (NRM). To tackle this problem, a hypergraph-based resource-efficient collaborative reinforcement learning (CRL) scheme is proposed for B5G massive IoT. Firstly, a network conflict model based on hypergraph theory is formulated to quantify the conflict degree of the B5G massive IoT. Then, since conflict-free resource management is an NP-hard combinatorial optimization problem, a resource-management Markov decision process (MDP) model is built for NRM in B5G massive IoT. To reduce the computational load by distributing the training overhead throughout the entire B5G massive IoT and to achieve distributed collaborative learning, a federated averaging advantage Actor-Critic (FedAvg-A2C) based resource management method is proposed to handle the conflict-free resource management problem and accelerate the training process. Simulation results show that the proposed scheme achieves high network throughput and resource efficiency in B5G massive IoT.


I. INTRODUCTION

A. BACKGROUND AND MOTIVATION
Beyond 5G (B5G) networks are rapidly expanding to connect billions of machines and Internet of Things (IoT) devices and promise to support a variety of unprecedented services, including smart cities, smart industries, connected and autonomous systems, telemedicine, etc. [1], [2], [3]. Emerging application scenarios put forward various new requirements for B5G networks, such as high resource efficiency, ultra-low latency, high data rates, and high reliability [3], [4]. Network resource efficiency is improved through the dense deployment of devices (i.e., forming a dense network) in massive IoT, which increases network throughput and provides better quality of service (QoS) for more users [5]. Resource multiplexing has become a fundamental phenomenon in massive IoT networks due to the large-scale dense connectivity of terminal devices (TDs). However, the ongoing densification of the network induces severe resource conflict, leading to large-scale network conflict that reduces network throughput. Therefore, dynamically providing and orchestrating network resource management (NRM) tailored to such emerging services is a unique challenge, which calls for combining artificial intelligence (AI) technology to convert traditional wireless communication systems into intelligent ones in B5G massive IoT [6].
The NRM system manages massive IoT by utilizing the available network resources efficiently to ensure QoS and resource efficiency [7]. Network resources can be fully utilized through effective design techniques, equitable resource allocation, and efficient packet scheduling. However, ensuring high network resource efficiency in wireless communication networks is a challenging task, as the underlying optimization problem is a nonconvex combinatorial optimization (CO) problem in massive IoT scenarios [8]. Recently, intelligence-enhanced massive IoT has been built with collaborative reinforcement learning (CRL), a form of distributed collaborative machine learning. Because multiple agents learn and perform tasks simultaneously, CRL can better handle large-scale problems and complex environments in NRM systems [9]. For instance, NRM leverages data analytics and AI techniques to analyze large volumes of data and make informed decisions, enabling better resource management and thus improved network performance and user experience [10]. As a result, AI-assisted IoT systems are a promising way to enhance resource efficiency for B5G massive IoT [11].

B. RELATED WORK
There are various approaches for NRM in IoT systems, mainly optimization-based methods and heuristic methods [12]. However, multi-user NRM is usually modeled as an NP-hard problem, which is challenging to solve with typical optimization methods [13], [14], [15]. Ghanem et al. [16] used a branch-and-bound approach based on discrete monotonic optimization theory to develop a globally optimal solution for the NRM problem, reformulating the optimization problem in the canonical form of difference-of-convex programming. Although convex-optimization-based approaches can solve NRM problems, the primal problem must first be converted into a solvable one. However, the optimum of the converted problem usually differs from that of the primal one, and solving the converted problem is computationally intensive [12]. To tackle this problem, machine learning has emerged as a promising technology for NRM in IoT systems and is considered effective in improving resource efficiency [17], [18], [19], [20], [21]. Despite a mild loss of optimality, reinforcement learning (RL) approaches can still perform well [12]. For instance, an RL-based scheme was adopted to address dynamic network resource management in IoT systems with cognitive radio capabilities, aiming to enhance data rates and minimize routing delays [17]. An Actor-Critic based radio resource management scheme was proposed to handle the radio resource management challenge [18]. Zhu et al. [19] adopted deep reinforcement learning (DRL) and Q-learning methods, mainly focusing on resource management policies and offloading in vehicular edge computing networks. In the context of edge-IoT systems, resource management for maximizing users' QoS was investigated in [20], which formulated the problem as a Markov decision process (MDP) and proposed a Q-value approximation approach that improves QoS, latency, and application task success ratio. Furthermore, transmission latency and computation offloading can be addressed by an MDP and a model-free RL approach in dynamic mobile edge computing-aided IoT. For digital twin applications, a resource management scheme based on a double deep Q-network that optimizes resource efficiency was proposed in [21] for multiple IoT devices, achieving low computational complexity and optimal processing time.
In traditional RL, all data is often sent to a central server for training, leading to significant communication and computation overhead [22]. Since the training of AI-driven models is an essential part [23], several recent works have considered CRL schemes to reduce the training overhead [24], [25], [26]. CRL is a collaborative machine learning method that trains a shared model across multiple decentralized and potentially non-identical agents or devices [27]. CRL reduces the communication burden by allowing devices to train locally and transmit only model updates; such systems can also be more fault-tolerant, as the shared model can adapt to changes, failures, or loss of individual agents without compromising the entire learning process [28]. In addition, CRL leverages the computational resources available on individual devices or agents, distributing the training workload and potentially reducing the need for centralized high-performance servers [29], [30], [31]. For instance, a collaborative learning scheme adapting federated averaging (FedAvg) was proposed in [29] for communication efficiency, which dramatically reduces the number of rounds to convergence by taking the form of a distributed Adam optimization. In [30], a FedAvg method based on model segmentation is introduced in each round of model aggregation, using a gossip protocol for client sampling. Collaborative learning models were proposed to improve resource utilization for multidomain networks by executing horizontal and vertical auto-scaling [31]. Chen et al. [32] proposed a collaborative learning framework that jointly considers network resource management and user selection to minimize the loss of the collaborative learning model in wireless networks. Existing works focus on optimizing resource management but rarely take large-scale network conflict into account. Dense deployment of IoT devices leads to large-scale network conflict, which poses a great challenge to resource management in massive IoT networks [12]. Hence, how to adopt distributed collaborative machine learning to avoid large-scale network conflict and achieve conflict-free resource management remains an unresolved issue.

C. CONTRIBUTIONS
To tackle the challenge mentioned above, we propose a conflict-hypergraph-based CRL resource management framework for B5G massive IoT system management and applications, which enables B5G massive IoT to maximize network throughput and resource efficiency under the conflict-free constraint. The rest of this paper is organized as follows: Section II describes the system model and analyzes the resource conflict. Section III introduces the conflict hypergraph model and the conflict-free resource management problem. The proposed scheme is presented in Section IV. Section V presents the simulation results of the proposed methods. Finally, Section VI concludes the paper.

II. SYSTEM MODEL
This section introduces resource management methods for TDs in the B5G massive IoT architecture, combining graph theory and CRL technology to support the scheduling of multidimensional resources in the form of transactions.

A. RESOURCE MANAGEMENT MODEL BASED ON COLLABORATIVE FRAMEWORK
As shown in Fig. 1, the B5G massive IoT is decentralized, and all transactions and related operations are recorded at the local data centers. The B5G massive IoT includes a device set D = {d_1, d_2, ..., d_N} and a local data center set L = {l_1, l_2, ..., l_K}. In the model, collaborative machine learning data allocation in B5G IoT consists of two phases: 1) TDs with computational constraints send their data to the local data center for training; 2) the local data center l_i (1 ≤ i ≤ K) uploads its training results to the aggregated global model server for training and integration.

B. CONFLICT ANALYZED BASED ON GRAPH
The B5G massive IoT communication structure is recorded by a graph G_T = (V_T, E_T), where V_T = {v_t1, v_t2, ..., v_tn} is the set of nodes and E_T = {e_t1, e_t2, ..., e_tm} is the set of edges, with each edge e_tk = (v_ti, v_tj) for v_ti, v_tj ∈ V_T. The nodes and edges represent the TDs and the communication links between them, respectively. The communication links and the relationships between nodes can be represented by an incidence matrix G_TI, whose entry is 1 when a TD is an endpoint of a link and 0 otherwise. An example is presented in Fig. 2, which includes 13 TDs and 16 communication links (CLs), denoted TD_1, TD_2, ..., TD_13 and CL_1, CL_2, ..., CL_16, respectively.
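To make the graph model concrete, the following minimal sketch builds a toy communication graph and its incidence matrix G_TI with networkx and numpy; the topology is an illustrative assumption and does not reproduce the example of Fig. 2.

```python
import networkx as nx
import numpy as np

# Toy communication graph G_T: nodes are TDs, edges are communication links (CLs).
G_T = nx.Graph()
G_T.add_edges_from([(1, 2), (2, 3), (2, 4), (4, 5)])

# Incidence matrix G_TI: rows index TDs, columns index CLs;
# an entry is 1 when the TD is an endpoint of that CL.
tds = sorted(G_T.nodes)
links = list(G_T.edges)
G_TI = np.zeros((len(tds), len(links)), dtype=int)
for j, (u, v) in enumerate(links):
    G_TI[tds.index(u), j] = 1
    G_TI[tds.index(v), j] = 1
print(G_TI)
```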
To promote resource-efficient network resource management in B5G massive IoT scenarios, the conflict conditions between TDs are classified as direct conflict and indirect conflict, as follows:
• Direct conflict: two TD pairs share a channel and have a TD in common, e.g., CL_1 and CL_2 sharing a channel in Fig. 3(a).
• Indirect conflict: two TD pairs share a channel and a TD of one pair is in the communication range of the other pair, e.g., CL_1 and CL_3 sharing a channel in Fig. 3(b).
To avoid TD conflicts in the communication network topology, direct conflict can be resolved by a typical edge coloring algorithm. However, the indirect conflict caused by hidden TDs remains inevitable, since indirect conflicts are not captured by the edges of the simple communication graph.
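The two conditions above translate directly into a conflict classifier. The following sketch is illustrative only: the helper name conflict_type and the neighbors structure are our own choices, not the paper's code, and both links are assumed to already share a channel.

```python
def conflict_type(link_a, link_b, neighbors):
    """Classify the conflict between two TD pairs that share a channel.

    link_a, link_b : (tx, rx) tuples of TD ids, assumed to use the same channel
    neighbors      : dict mapping a TD id to the set of TD ids in its range
    Returns 'direct', 'indirect', or None (no conflict).
    """
    a, b = set(link_a), set(link_b)
    if a & b:                       # the two pairs have a TD in common
        return "direct"
    for u in a:                     # a TD of one pair lies in the other pair's range
        if any(v in neighbors.get(u, set()) for v in b):
            return "indirect"
    return None

# Example: CL1 = (1, 2) and CL2 = (2, 3) share TD 2, hence a direct conflict.
print(conflict_type((1, 2), (2, 3), {}))  # -> 'direct'
```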

III. RESOURCE MANAGEMENT DESIGN BASED ON CONFLICT HYPERGRAPHS
In this section, the conflict graph is built to clearly show the resource conflict relationships. Based on the theory of cliques and hypergraphs, the conflict graph is then transformed into a hypergraph, which reduces the difficulty of resolving resource conflicts. Finally, the resource conflict problem is generalized as a node coloring problem on the hypergraph.

A. CONFLICT GRAPH MODEL
To address the resource management conflict problem in B5G massive IoT, the conflict graph model G_C = (V_C, E_C) is established. In the model, V_C = {e_t1, e_t2, ..., e_tm} is the set of nodes and E_C is the set of edges. The nodes and edges in the conflict graph model represent the CLs of G_T and the conflict relationships between them, respectively.
The conflict relationship between nodes can be represented by an adjacency matrix G_CA, whose entries are

G_CA(e_ti, e_tj) = 1 if e_ti conflicts with e_tj, and 0 otherwise. (4)

Then, following the principles of Fig. 3, the conflict graph can be constructed as shown in Fig. 4. For illustration, consider two cases of the construction: nodes CL_4 and CL_5 are connected since they contain the same TD_2 and use the same channel, while nodes CL_5 and CL_15 are connected by an edge because they lie within the communication ranges of TD_2 and TD_9. For clarity, we use different colors for the two types of conflict. In Fig. 4, the nodes represent CLs and the edges represent conflict relations between CLs. However, the complexity of the conflict graph grows rapidly with network size, increasing the difficulty of avoiding conflicts.
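A small sketch of Eq. (4) follows: given a pairwise conflict relation (e.g., produced by a classifier like the one in Section II), it assembles the adjacency matrix G_CA. The function name and input encoding are illustrative assumptions.

```python
import numpy as np

def build_conflict_adjacency(cl_ids, conflicting_pairs):
    """Adjacency matrix G_CA of Eq. (4): entry 1 iff two CLs conflict.

    cl_ids            : list of CL identifiers (the nodes of G_C)
    conflicting_pairs : set of frozensets of CL ids known to conflict
    """
    n = len(cl_ids)
    G_CA = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if frozenset((cl_ids[i], cl_ids[j])) in conflicting_pairs:
                G_CA[i, j] = G_CA[j, i] = 1
    return G_CA

# Toy usage mirroring the text: CL4-CL5 (direct) and CL5-CL15 (indirect) conflict.
adj = build_conflict_adjacency(
    ["CL4", "CL5", "CL15"],
    {frozenset(("CL4", "CL5")), frozenset(("CL5", "CL15"))})
```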

B. CONFLICT HYPERGRAPH MODEL
To reduce the difficulty of avoiding resource conflicts, we simplify the conflict graph based on the theory of cliques and hypergraphs. As a fully connected subgraph, a clique can be expressed by a hyperedge, which quickly reduces the dimension of the conflict graph's matrix. The definitions of clique and hypergraph are as follows: Clique: a subgraph of the conflict graph in which any two nodes are connected.
Maximal clique: a clique that is not a subgraph of any other clique.
The hypergraph can be expressed as G_H = {V_H, E_H}, where V_H and E_H are the vertex set and hyperedge set, respectively. A simple graph is a special case of a hypergraph in which each hyperedge is associated with exactly two vertices. The hypergraph can be represented by an incidence matrix H ∈ R^{|E|×|V|}, whose elements are given by h(v, e) = 1 if v ∈ e and 0 otherwise. According to the definition of the maximal clique, the maximal cliques in the conflict graph are listed in Table 1. The nodes in a clique are pairwise connected, which can be verified from the conflict relationships between the nodes in Fig. 4.
According to the theory of hypergraphs and cliques, all nodes of a clique are pairwise connected, so any clique can form a hyperedge that preserves the conflict information without loss, since any two nodes in the clique conflict with each other. A maximal clique can contain many nodes (i.e., a hyperedge contains multiple nodes). Collecting all maximal cliques transforms the conflict graph into a conflict hypergraph, simplifying the matrix and reducing the difficulty of conflict avoidance while keeping the conflict relationships between nodes unchanged. The conflict avoidance problem on the conflict hypergraph is essentially a node coloring problem of the hypergraph.
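A minimal sketch of the clique-to-hyperedge transformation is given below, using networkx's maximal-clique enumeration; the conflict graph is assumed to be available as a networkx Graph, and the toy topology is our own example.

```python
import networkx as nx
import numpy as np

def conflict_graph_to_hypergraph(G_C):
    """Build the hypergraph incidence matrix H from a conflict graph.

    Every maximal clique of G_C becomes one hyperedge, since any two nodes
    in a clique conflict with each other; H has shape |E_H| x |V_H|.
    """
    nodes = sorted(G_C.nodes)
    hyperedges = list(nx.find_cliques(G_C))   # maximal cliques of G_C
    H = np.zeros((len(hyperedges), len(nodes)), dtype=int)
    for e, clique in enumerate(hyperedges):
        for v in clique:
            H[e, nodes.index(v)] = 1          # h(v, e) = 1 iff v belongs to e
    return H, hyperedges

# Toy usage: a triangle plus a pendant node yields two hyperedges.
G_C = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])
H, edges = conflict_graph_to_hypergraph(G_C)
```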

C. PROBLEM FORMULATION
In this section, we formulate the CO problem (i.e., the node coloring of the hypergraph) for resource-efficient network management in the B5G massive IoT scenario. To avoid resource allocation conflicts, we define a conflict degree of nodes, denoted ϕ, which counts two categories of conflict in the node coloring of the hypergraph:
1) nodes belonging to the same hyperedge are assigned the same color;
2) the same node is assigned different colors repeatedly.
The resource allocation is conflict-free if ϕ = 0. Moreover, the signal-to-interference-plus-noise ratio (SINR) υ_i^t of the i-th TD at time slot t can be defined as

υ_i^t = P_i h_i / (σ² + Σ_{j ∈ N_i} P_j h_{j,i}),

where P_i and P_j denote the transmission powers of the i-th and j-th TDs, respectively; h_i is the channel power gain of the i-th TD; σ² is the noise power; h_{j,i} is the conflict power gain from the j-th TD; N_TD is the number of all TDs; and N_i is the set of TDs conflicting with the i-th TD. Hence, the transmission rate of the i-th TD at time t can be expressed as

c_i^t = B log₂(1 + υ_i^t),

where B is the bandwidth. The CO problem can then be formulated as a long-term total conflict-free resource-efficiency maximization problem, where λ_1, λ_2 ∈ (0, 1) represent the weights of the optimization objectives. The objective function (8a) maximizes the network throughput and network resource efficiency, while avoiding network conflict and meeting the minimum SINR requirement as implicit optimization goals.
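The SINR and rate definitions above translate directly into code. The sketch below assumes numpy arrays for powers and gains, with names mirroring the text's symbols.

```python
import numpy as np

def sinr_and_rate(P, h, h_cross, conflict_sets, sigma2, B):
    """Per-TD SINR and transmission rate as defined in the text.

    P             : transmit powers P_i, shape (N_TD,)
    h             : direct channel power gains h_i, shape (N_TD,)
    h_cross       : conflict power gains h_{j,i}, shape (N_TD, N_TD)
    conflict_sets : conflict_sets[i] is N_i, the ids of TDs conflicting with i
    sigma2, B     : noise power and bandwidth
    """
    n = len(P)
    sinr = np.empty(n)
    for i in range(n):
        interference = sum(P[j] * h_cross[j, i] for j in conflict_sets[i])
        sinr[i] = P[i] * h[i] / (sigma2 + interference)
    rate = B * np.log2(1.0 + sinr)    # c_i^t = B * log2(1 + SINR_i)
    return sinr, rate
```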

IV. RESOURCE MANAGEMENT BASED ON CRL METHOD
To solve the complicated CO problem in (8), a CRL-based method for B5G massive IoT is proposed to achieve long-term resource efficiency. Hence, the conflict-free resource management MDP needs to be defined carefully for implementation in B5G massive IoT.

A. NETWORK CONFLICT-FREE RESOURCE MANAGEMENT MDP PROBLEM FORMULATION
The optimization problem can be modeled as an MDP by designing a reasonable reward, where the reward function design reflects the optimization objective and constraints. Therefore, the reward should involve throughput, resource efficiency, conflict, and SINR requirements. Generally, RL-based network resource management problems can be regarded as learning resource management actions in the B5G massive IoT environment by sequentially allocating resources to all nodes over a sequence of time slots. Hence, resource management of the B5G massive IoT network is modeled as an MDP, which has the Markov property and can access all the relevant information needed to make decisions. In the MDP, the agent takes the cumulative discounted reward from time t as the RL optimization goal, which can be denoted as

G_t^γ = Σ_{k=0}^∞ γ^k R_{t+k},

where γ ∈ (0, 1) is the discount factor, and R_t and G_t^γ are the reward and cumulative discounted reward at time t, respectively. In the B5G massive IoT system, the RL optimization goal G_t^γ is to improve resource efficiency and network throughput under the premise of guaranteeing the network conflict-free constraint. Further, the optimal network resource management policy π* is obtained by the RL agent through the objective of maximizing the expected cumulative discounted reward,

π* = arg max_π J(π) = arg max_π E[G_t^γ],

where E[·] denotes the expectation operator. This constitutes the conflict-free network resource management MDP problem for B5G massive IoT. Solving the MDP of maximizing the cumulative discounted reward depends on the action-value function

Q^π(s, a) = E_π[G_t^γ | s_t = s, a_t = a].

To obtain the optimal policy π*, which maximizes V^π(s) and Q^π(s, a), the corresponding optimal action for any given state is arg max_a Q^{π*}(s, a), where Q^{π*}(s, a) denotes the action-value function under the guidance of the optimal policy π*.
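As a sanity check on the return definition, the short sketch below computes G_t^γ for a finite reward sequence by backward accumulation.

```python
def discounted_return(rewards, gamma=0.95):
    """Cumulative discounted reward G_t = sum_k gamma^k * R_{t+k} (finite horizon)."""
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

print(discounted_return([1.0, 0.5, 0.2]))  # 1.0 + 0.95*0.5 + 0.95**2*0.2
```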

B. RL AGENT DESIGN
The B5G massive IoT network state is formed by the following parameters that are observed by the RL agent at time t.
1) m_ν^t: the set of SINR values ν of all TDs at time t.
2) ϕ^t: the network conflict degree of the B5G massive IoT at time t.
3) c_min^t: the set of minimum rate requirements at time t.
4) H: the hypergraph incidence matrix of the B5G massive IoT.
5) k^t: the set of network resources assigned to all TDs at time t.
At time t, the system state s_t ∈ S is a vector, where S denotes the state space, defined as s_t = [m_ν^t, ϕ^t, c_min^t, H, k^t]. The B5G massive IoT environment transitions from state s_t to state s_{t+1} when the RL agent takes an action.
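One plausible way to assemble the observation vector is shown below, assuming all five components are available as numpy arrays; flattening the matrix H into the state is our illustrative choice, not necessarily the paper's encoding.

```python
import numpy as np

def build_state(m_sinr, phi, c_min, H, k_alloc):
    """Flatten s_t = [m_v^t, phi^t, c_min^t, H, k^t] into a single vector."""
    return np.concatenate([
        np.ravel(m_sinr),      # SINR of every TD at time t
        [float(phi)],          # network conflict degree
        np.ravel(c_min),       # minimum rate requirements
        np.ravel(H),           # hypergraph incidence matrix, flattened
        np.ravel(k_alloc),     # currently assigned resources
    ])
```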
At each time t, the RL agent takes an action a_t ∈ A, which consists of selecting a network resource by following a policy π. Thus, the dimension of the action space is N_res when the NRM system has N_res resource blocks.
According to (8), maximizing network throughput and network resource efficiency while avoiding conflict and meeting the minimum SINR requirement are the optimization goals. The reward function therefore consists of four parts: network throughput, resource efficiency, the SINR requirement, and the conflict-free condition. Hence, when the agent maximizes the cumulative discounted reward, long-term maximization of network throughput and resource efficiency is achieved through resource allocation subject to the constraints. The network conflict-free condition is represented as a penalty applied when the RL agent's resource allocation actions generate network conflict. Therefore, the B5G massive IoT environment returns a reward r_t according to the action taken by the agent at time t, defined as a weighted combination of these four terms with weights λ_1, λ_2, λ_3, λ_4 ∈ (0, 1).
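The exact reward expression is not reproduced here, so the following is only one plausible reading of the four-part structure described above, with the SINR requirement and the conflict degree entering as penalties; the function name and weight values are assumptions.

```python
def reward(throughput, efficiency, sinr_satisfied, phi,
           l1=0.5, l2=0.3, l3=0.1, l4=0.1):
    """Hypothetical four-part reward: throughput and efficiency are rewarded,
    SINR violations and the conflict degree phi are penalized.
    Weights l1..l4 in (0, 1) mirror lambda_1..lambda_4 in the text."""
    sinr_penalty = 0.0 if sinr_satisfied else 1.0
    return l1 * throughput + l2 * efficiency - l3 * sinr_penalty - l4 * phi
```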
The value functions are defined to quantify the expected return under the B5G massive IoT network resource management policy π. The RL estimation functions include the state value function and the action value function. The state value function V^π(s) denotes the expected return from state s, whereas the action value function Q^π(s, a) represents the expected return after performing action a in state s. Their specific definitions are

V^π(s) = E_π[G_t^γ | s_t = s],   Q^π(s, a) = E_π[G_t^γ | s_t = s, a_t = a],

where E[·] is the expectation. For simplicity, s and a are the current system state and action at time t, and s′ is the next system state at time t + 1.

C. FEDAVG-A2C BASED RESOURCE MANAGEMENT METHOD
The actor is a policy network that takes the state as input and outputs an action, approximating the policy model π(a|s); it aims to maximize the expected cumulative reward by updating its parameters based on the value function provided by the critic. The RL agent optimizes the policy π(a|s; θ), which gives the probability distribution over actions for each state, to obtain the maximal resource efficiency under the conflict-free constraint. To update π(a|s; θ), we adopt the policy gradient method of DRL with the goal of maximizing the expected long-term discounted reward

max_θ E_{τ∼π_θ}[r(τ)],

where r(τ) = Σ_{t=0}^T γ^t r_t is the finite-horizon discounted reward and τ is the sampled trajectory. In practice, the actor loss is taken as the negative of this objective, J_π(θ) = −E_{τ∼π_θ}[r(τ)], so that minimizing J_π(θ) maximizes the cumulative discounted reward. The policy-based method optimizes the policy with respect to this objective through gradient estimates of the cumulative discounted reward, which yields the optimal policy and finally maximizes the cumulative discounted reward. We assume that the policy π(a|s; θ) is differentiable in the parameter θ. Therefore, the gradient with respect to θ can be expressed as

∇_θ J_π(θ) = −E[∇_θ log π(a_t|s_t; θ) A(s_t, a_t)].

We measure the advantage of taking action a_t in state s_t at time slot t by comparing the estimated value with the average value. The advantage function is given by

A(s_t, a_t) = Q^π(s_t, a_t) − V^π(s_t),

which guides the RL agent in updating the DNN. Specifically, the advantage function evaluates the benefit or drawback of the actor's actions for the policy. To minimize J_π(θ), the policy parameter θ is updated in the gradient descent direction, θ ← θ − α ∇_θ J_π(θ). Combining (16) and (17), the gradient with respect to θ can be approximated as in (19).
The critic provides an action-value function to measure the loss of the resource management policy network. The Q-value is estimated by a deep neural network (DNN); that is, the parameter w is used to approximate the action-value function Q^π(s, a), defined as Q_{π_θ}(s, a; w). The parameter update is given by

w ← w − η ∇_w J_Q(w),

where w denotes the learned parameters of the critic and η is the learning rate. The loss function J_Q(w) for the estimated action-value function is defined as the squared temporal-difference error in (21). To derive the gradients for the maximization objective, we leverage the action-value function Q_{π_θ}(s, a; w), which is trained by the gradient descent method formulated in (22).
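A compact PyTorch sketch of one advantage actor-critic update is shown below. The network architectures, tensor shapes, and the one-step TD advantage are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def a2c_step(actor, critic, opt_actor, opt_critic, s, a, r, s_next, gamma=0.95):
    """One A2C update. actor(s) returns action logits; critic(s) a state value.

    s, s_next : state tensors, shape (batch, state_dim)
    a         : chosen action indices, int64 tensor of shape (batch, 1)
    r         : rewards, tensor of shape (batch, 1)
    """
    v, v_next = critic(s), critic(s_next).detach()
    advantage = r + gamma * v_next - v          # one-step TD advantage estimate

    # Critic: minimize the squared TD error (cf. the loss J_Q(w)).
    critic_loss = advantage.pow(2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # Actor: descend on -log pi(a|s; theta) * A; detach A so only theta moves.
    log_prob = F.log_softmax(actor(s), dim=-1).gather(-1, a)
    actor_loss = -(log_prob * advantage.detach()).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```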
To handle the huge data volume of B5G massive IoT, this paper proposes the FedAvg-A2C method to update the parameters of the policy network and value network. In the considered B5G massive IoT, the global A2C network is maintained by the FedAvg-A2C server, and each RL agent obtains the global model from the FedAvg-A2C server to constitute its local A2C network. In each round of the global model training process, each RL agent updates its own local A2C model using a randomly sampled mini-batch B_k from its local replay buffer D_k; the local update of the k-th RL agent minimizes the aforementioned policy network and value network loss functions, J(θ_k^t) and J(w_k^t), respectively. Then, the FedAvg-A2C global network is obtained as a weighted average of the parameters of all local A2C models involved in that round of the learning process. At time t, the FedAvg-A2C global policy network and value network loss functions J(w^t) and J(θ^t) are minimized accordingly. The RL agents interact with the server, which serves as the model aggregator at time t, as follows.
Each RL agent first obtains the latest global model parameters w^{t−1} and θ^{t−1} from the server. Then, the RL agent updates its local model by computing the gradients ∇J(w_k^{t−1}) and ∇J(θ_k^{t−1}) on its historical experience. After local training, the RL agent sends w_k^t and θ_k^t to the server, and the server broadcasts the aggregated global model parameters to all RL agents. Algorithm 1 summarizes the training procedure.
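A minimal sketch of the server-side FedAvg aggregation step follows, assuming all local A2C models share one architecture and float parameters; the function name and weighting scheme are illustrative.

```python
import copy
import torch

def fedavg_aggregate(global_model, local_models, weights=None):
    """Weighted average of local model parameters (the FedAvg step).

    local_models : locally trained copies sharing the global architecture
    weights      : per-agent aggregation weights (e.g., proportional to local
                   sample counts); uniform 1/K if omitted.
    """
    K = len(local_models)
    w = weights if weights is not None else [1.0 / K] * K
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(wi * m.state_dict()[key].float()
                             for wi, m in zip(w, local_models))
    global_model.load_state_dict(new_state)   # broadcast this state to agents
    return new_state
```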

D. ALGORITHM COMPLEXITY ANALYSIS
The computational complexity of FedAvg-A2C accounts for the local model training at the A2C agents and the model aggregation at the server. Since each A2C network trains with random samples from its own local buffer, the complexity of an RL local update is O((T_value + T_policy) × N_lr), depending on the value network complexity T_value, the policy network complexity T_policy, and the number of local training iterations N_lr. The complexity of the model aggregation is O(K), as it grows linearly with the number of agents K. The overall complexity of the FedAvg-A2C algorithm is O((T_value + T_policy) × N_lr / K + K), since the training workload is distributed across the agents. Therefore, the larger the number of RL agents, the faster the training of the FedAvg-A2C algorithm.

V. SIMULATION
In this section, the proposed scheme is validated through numerical simulations. First, the simulation setup is outlined, followed by a comprehensive presentation and analysis of the numerical results. The primary goal is to showcase the superiority of the proposed scheme compared with existing works. We run the simulations on a DELL server with an Intel Xeon Gold 6242R CPU running at 3.1 GHz, 64 GB of RAM, and two NVIDIA GeForce RTX 3080 Ti GPUs, running Ubuntu 18.04 LTS; we use Python 3.9.13 and PyTorch 2.0.0, in which the FedAvg-A2C algorithm is implemented. The hyperparameters of the proposed FedAvg-A2C are shown in Table 2.

A. CONVERGENCE OF THE PROPOSED ALGORITHM
Fig. 6 shows the convergence of the proposed algorithm under different learning rates, with the number of TDs set to 20. The horizontal and vertical axes represent the number of training iterations and the received reward, respectively. As the learning rate increases, the proposed method converges faster, and Fig. 6 shows that the FedAvg-A2C model achieves the best reward when η = 0.001. Therefore, this paper chooses the learning rate η = 0.001 for the subsequent experiments.
The convergence under different discount factors is shown in Fig. 7. When γ = 0.95, the cumulative reward is higher than in the other cases. Therefore, the learning rate η is set to 0.001 and the discount factor γ is set to 0.95.

Fig. 8 highlights the advantages of the proposed algorithm by comparing its maximum network throughput with three comparison algorithms for different numbers of TDs. As the number of TDs increases, resulting in heightened network resource conflicts within the communication system, all four algorithms experience an overall increase in maximum network throughput. Remarkably, the proposed algorithm outperforms comparison algorithms 1, 2, and 3, exhibiting significantly higher network throughput. The findings in Fig. 8 validate the capability of the proposed algorithm to effectively enhance network throughput and push the upper limit of the system's capacity.

Fig. 9 compares the average network throughput of the proposed algorithm and the three comparison algorithms for varying numbers of TDs. As the number of TDs increases, all four algorithms show a notable upward trend in network throughput; the proposed algorithm distinctly outperforms comparison algorithms 1, 2, and 3, highlighting its effectiveness in enhancing the average network throughput and validating its capability to improve system performance.

Fig. 10 compares the maximal resource efficiency of the proposed algorithm and the three comparison algorithms for varying numbers of TDs. As the figure shows, increasing the number of TDs causes the network resource efficiency to drop; the proposed method mitigates this drop much more effectively and enhances the maximal network resource efficiency of the system.

Fig. 11 compares the average resource efficiency of the proposed algorithm and the three comparison algorithms for varying numbers of TDs. Increasing the number of TDs reduces system stability, which makes the average network resource efficiency drop in Fig. 11; the proposed method mitigates this drop much more effectively and enhances the average network resource efficiency of the system.

VI. CONCLUSION
In this paper, the conflict-free, resource-efficient network management problem in the B5G massive IoT scenario was investigated, where the scenario consists of densely deployed IoT devices and a resource management system. The dense deployment of IoT devices generates large-scale network conflict in B5G massive IoT systems and degrades the resource efficiency of the resource management system. A hypergraph-theory-based network conflict model was proposed to quantify the conflict of the whole B5G massive IoT. Under the conflict hypergraph model constraint, this paper formulated the CO problem of maximizing network throughput and resource efficiency. Since conflict-hypergraph-based resource management is an NP-hard optimization problem that is computationally intensive to handle, we formulated an MDP for the NRM system with sequential decision-making characteristics and proposed a resource-efficient CRL solution. A FedAvg-A2C-based resource management algorithm was then proposed to handle the conflict-free resource management problem in B5G massive IoT scenarios and accelerate the training process. Finally, simulation results demonstrate the effectiveness of FedAvg-A2C and validate its superiority over the comparison algorithms.
