An efficient Actor Critic DRL Framework for Resource Allocation in Multi-cell Downlink NOMA


Abstract:

In this paper, a tractable framework for downlink non-orthogonal multiple access (NOMA) is proposed, based on a model-free reinforcement learning (RL) approach, for dynamic resource allocation in a multi-cell network. With the aid of actor-critic deep reinforcement learning (ACDRL), we optimize the active power allocation of multi-cell NOMA systems in an online environment to maximize the long-term sum rate. To exploit the dynamic nature of NOMA, this work uses the instantaneous data rate to design the dynamic reward. The state space in ACDRL contains all possible resource allocation realizations arising from a three-dimensional association among users, base stations, and sub-channels. We propose an ACDRL algorithm over this transformed state space that scales to different network loads by employing multiple deep neural networks. Simulation results confirm that the proposed solution for multi-cell NOMA outperforms conventional RL and DRL algorithms, as well as orthogonal multiple access (OMA) schemes, in terms of the long-term sum rate.
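The abstract does not include code, but the actor-critic loop it describes (a policy network choosing power allocations, a value network evaluating states, and the instantaneous sum rate as reward) can be sketched in a heavily simplified form. The sketch below is an assumption-laden toy, not the paper's algorithm: it uses a single cell with two users, linear (not deep) actor and critic, a discrete set of candidate power splits, and i.i.d. exponential channel gains. All names (`sum_rate`, `ACTIONS`, the 0.55–0.95 power-fraction grid, noise level) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-user, single-cell downlink NOMA setup. The paper's state
# space spans a 3-D user/base-station/sub-channel association; this toy
# only illustrates the actor-critic update with the sum rate as reward.
P_TOTAL, NOISE = 1.0, 0.1
ACTIONS = np.linspace(0.55, 0.95, 5)  # candidate power fractions for the weak user

def sum_rate(g_weak, g_strong, alpha):
    """Instantaneous sum rate (dynamic reward), assuming SIC at the strong user."""
    r_weak = np.log2(1 + alpha * P_TOTAL * g_weak /
                     ((1 - alpha) * P_TOTAL * g_weak + NOISE))
    r_strong = np.log2(1 + (1 - alpha) * P_TOTAL * g_strong / NOISE)
    return r_weak + r_strong

def features(g_weak, g_strong):
    # Linear features stand in for the deep networks used in the paper.
    return np.array([1.0, g_weak, g_strong])

theta = np.zeros((len(ACTIONS), 3))  # actor: softmax policy weights
w = np.zeros(3)                      # critic: state-value weights
GAMMA, LR_A, LR_C = 0.9, 0.05, 0.1

state = np.sort(rng.exponential(size=2))  # sorted so g_weak <= g_strong
for step in range(2000):
    x = features(*state)
    logits = theta @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(len(ACTIONS), p=probs)
    reward = sum_rate(state[0], state[1], ACTIONS[a])
    next_state = np.sort(rng.exponential(size=2))

    # One-step TD error drives both networks (standard actor-critic update).
    td_err = reward + GAMMA * (w @ features(*next_state)) - w @ x
    w += LR_C * td_err * x                       # critic: TD(0)
    grad = -probs[:, None] * x                   # grad of log softmax policy
    grad[a] += x
    theta += LR_A * td_err * grad                # actor: policy gradient
    state = next_state
```

The design choice to mirror here is that the reward is the instantaneous sum rate rather than a long-horizon estimate; the discount factor alone carries the long-term objective, which is what makes the approach model-free.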
Date of Conference: 07-10 June 2022
Date Added to IEEE Xplore: 08 July 2022
Conference Location: Grenoble, France
