Abstract:
Unmanned Aerial Vehicle (UAV)-enabled mobile edge computing has been proposed as an efficient task-offloading solution for user equipments (UEs). However, the heterogeneity of UAVs makes centralized navigation policies impractical, while decentralized navigation policies face significant challenges in sharing knowledge among heterogeneous UAVs. To address this, we present the soft hierarchical deep reinforcement learning network (SHDRLN) and dual-end federated reinforcement learning (DFRL) as a decentralized navigation policy solution that improves the overall task-offloading energy efficiency of UAVs while facilitating knowledge sharing. Specifically, SHDRLN, a hierarchical DRL network based on maximum entropy learning, reduces policy differences among UAVs by abstracting atomic actions into generic skills; at the same time, it maximizes the average efficiency of all UAVs, optimizing coverage of UEs and minimizing task-offloading waiting time. DFRL, a federated learning (FL) algorithm, aggregates policy knowledge at the cloud server and filters it at the UAV end, enabling each UAV to adaptively learn navigation policy knowledge suited to its own performance parameters. Extensive simulations demonstrate that the proposed solution not only outperforms baseline algorithms in overall energy efficiency but also achieves more stable navigation policy learning under different levels of heterogeneity in UAV performance parameters.
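As a rough illustration of the dual-end idea summarized above, the sketch below assumes a FedAvg-style weighted average of per-UAV policy parameters at the cloud end and a simple similarity-based filter at each UAV end, where the blend with the global parameters shrinks as a UAV's performance parameters (e.g., flight speed, coverage radius) deviate from the fleet average. All function names and the specific filtering rule are illustrative assumptions, not the paper's exact DFRL algorithm.

```python
# Hypothetical sketch of dual-end federated aggregation and filtering.
import numpy as np

def cloud_aggregate(local_params, weights=None):
    """Cloud end: weighted average of per-UAV policy parameter vectors."""
    stacked = np.stack(local_params)                  # shape: (num_uavs, dim)
    if weights is None:
        weights = np.ones(len(local_params)) / len(local_params)
    return np.average(stacked, axis=0, weights=weights)

def uav_filter(local, global_, local_perf, mean_perf, temperature=1.0):
    """UAV end: blend global knowledge into the local policy.

    The mixing coefficient decays with the distance between this UAV's
    performance parameters and the fleet mean, so a very 'different' UAV
    keeps more of its own navigation policy.
    """
    distance = np.linalg.norm(np.asarray(local_perf) - np.asarray(mean_perf))
    alpha = np.exp(-distance / temperature)           # in (0, 1]
    return alpha * global_ + (1.0 - alpha) * local

# Toy usage: three UAVs with 4-dimensional "policies" and two performance parameters.
rng = np.random.default_rng(0)
local_policies = [rng.normal(size=4) for _ in range(3)]
perfs = [(12.0, 80.0), (15.0, 100.0), (30.0, 150.0)]  # (speed m/s, coverage radius m)
global_params = cloud_aggregate(local_policies)
mean_perf = np.mean(perfs, axis=0)
updated = [uav_filter(p, global_params, f, mean_perf, temperature=50.0)
           for p, f in zip(local_policies, perfs)]
```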
Published in: IEEE Transactions on Mobile Computing (Volume: 23, Issue: 12, December 2024)
Index Terms:
- Mobile Edge Computing
- UAV-enabled Mobile Edge Computing
- Federated Reinforcement Learning
- Deep Learning
- Learning Algorithms
- Energy Efficiency
- Performance Parameters
- Cloud Computing
- Knowledge Sharing
- Level Of Heterogeneity
- Unmanned Aerial Vehicles
- General Skills
- Deep Reinforcement Learning
- Policy Learning
- User Equipment
- Federated Learning
- Baseline Algorithms
- Policy Knowledge
- Linearizable
- Simulation Environment
- Deep Reinforcement Learning Algorithm
- Task Offloading
- Time Slot
- Reinforcement Learning Algorithm
- Coverage Rate
- Differences In Policies
- Flight Speed
- Coverage Radius
- Deep Reinforcement Learning Model
- Early Stage Of Training