
Energy Efficiency Deep Reinforcement Learning for URLLC in 5G Mission-Critical Swarm Robotics



Abstract:

5G networks provide high-rate, ultra-low-latency, and high-reliability connections that support wireless mobile robots with increased agility in factory automation. In this paper, we address the problem of swarm robotics control for mission-critical robotic applications in an automated grid-based warehouse scenario. Our goal is to maximize long-term energy efficiency while meeting the energy consumption constraint of the robots and the ultra-reliable and low-latency communication (URLLC) requirements between the central controller and the swarm robots. The problem of swarm robotics control in the URLLC regime is formulated as a nonconvex optimization problem, since the achievable rate and the decoding error probability in the short-blocklength regime are neither convex nor concave in bandwidth and transmit power. We propose a deep reinforcement learning (DRL) approach that employs the deep deterministic policy gradient (DDPG) method and a convolutional neural network (CNN) to learn a stationary optimal control policy comprising both continuous and discrete actions. Numerical results show that our proposed multi-agent DDPG algorithm outperforms the baselines in terms of decoding error probability and energy efficiency.
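
The nonconvexity stems from the finite-blocklength (short-packet) rate model. As a minimal sketch of that model, and not code from the paper, the snippet below evaluates the standard normal approximation of the decoding error probability as a function of transmit power and bandwidth; the symbol names, the single-link AWGN channel-gain model, and the 0.5*log2(n) correction term are assumptions made here for illustration.

import numpy as np
from scipy.stats import norm

def decoding_error_probability(p_tx, bandwidth, gain, n0, packet_bits, duration):
    """Normal (finite-blocklength) approximation of the decoding error
    probability for a short packet, as commonly used in URLLC analyses.

    p_tx        : transmit power [W]
    bandwidth   : allocated bandwidth [Hz]
    gain        : channel power gain (linear)
    n0          : noise power spectral density [W/Hz]
    packet_bits : payload size D [bits]
    duration    : transmission duration T [s]
    """
    snr = p_tx * gain / (n0 * bandwidth)              # received SNR
    n = bandwidth * duration                          # blocklength in channel uses
    capacity = np.log2(1.0 + snr)                     # Shannon capacity [bits/use]
    dispersion = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2  # channel dispersion V
    arg = (n * capacity - packet_bits + 0.5 * np.log2(n)) / np.sqrt(n * dispersion)
    return norm.sf(arg)                               # Q(x) = P(N(0,1) > x)

Because the SNR, the blocklength, and the dispersion all depend jointly on bandwidth and transmit power, the resulting error probability is neither convex nor concave in those variables, which motivates a learning-based solution.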
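
The control policy mixes continuous decisions (e.g., transmit power and bandwidth allocation) with discrete ones (e.g., the next move on the warehouse grid). Below is a minimal, single-agent DDPG-style sketch of such a hybrid-action actor-critic, not the paper's implementation: the multi-agent structure, the CNN state encoder, the URLLC and energy constraints, target networks, replay buffer, and exploration noise are all omitted, and every layer size and function name is an illustrative assumption. The discrete head is relaxed with a softmax so the critic stays differentiable; at execution time the agent would act on the argmax of those probabilities.

import torch
import torch.nn as nn

class HybridActor(nn.Module):
    """Deterministic policy emitting continuous controls (e.g., power and
    bandwidth fractions in (0, 1)) plus a relaxed one-hot for a discrete choice."""
    def __init__(self, obs_dim, n_cont, n_disc, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.cont_head = nn.Linear(hidden, n_cont)   # continuous actions
        self.disc_head = nn.Linear(hidden, n_disc)   # logits for the discrete action

    def forward(self, obs):
        h = self.body(obs)
        cont = torch.sigmoid(self.cont_head(h))
        disc = torch.softmax(self.disc_head(h), dim=-1)
        return torch.cat([cont, disc], dim=-1)

class Critic(nn.Module):
    """Q(s, a) network scoring the joint (continuous + relaxed discrete) action."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, obs, act):
        return self.q(torch.cat([obs, act], dim=-1))

def ddpg_update(actor, critic, actor_opt, critic_opt, batch, gamma=0.99):
    """One DDPG-style update on a sampled batch (target networks omitted)."""
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        target_q = rew + gamma * (1.0 - done) * critic(next_obs, actor(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(obs, actor(obs)).mean()     # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()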
Published in: IEEE Transactions on Network and Service Management ( Volume: 21, Issue: 5, October 2024)
Page(s): 5018 - 5032
Date of Publication: 27 May 2024

