Traffic Scenario Clustering and Load Balancing with Distilled Reinforcement Learning Policies


Abstract:

Due to the rapid increase in wireless communication traffic in recent years, load balancing is becoming increasingly important for ensuring quality of service. However, variations in traffic patterns near different serving base stations make this task challenging. On one hand, crafting a single control policy that performs well across all base station sectors is often difficult. On the other hand, maintaining a separate controller for every sector introduces overhead and leads to redundancy when some sectors experience similar traffic patterns. In this paper, we propose to construct a concise set of controllers that covers a wide range of traffic scenarios, allowing the operator to select a suitable controller for each sector based on local traffic conditions. To construct these controllers, we present a method that clusters similar scenarios and learns a general control policy for each cluster. We use deep reinforcement learning (RL) to first train separate control policies on diverse traffic scenarios, and then incrementally merge similar RL policies via knowledge distillation. Experimental results show that our concise policy set reduces redundancy with only minor performance degradation compared to policies trained separately on each traffic scenario. Our method also outperforms handcrafted control parameters, joint learning on all tasks, and two popular clustering methods.
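To illustrate the merging step described above, the following is a minimal, self-contained sketch of policy distillation: two "teacher" policies (action distributions over a small set of discretized traffic states, standing in for RL policies trained on similar scenarios) are merged into one "student" policy by gradient descent on the student's logits, minimizing the average KL divergence from each teacher to the student. All names, shapes, and the toy optimization loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3  # toy sizes: discretized traffic states, control actions

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Two similar teacher policies, e.g. trained on similar traffic scenarios
# (here: small random perturbations of a shared set of logits).
base = rng.normal(size=(n_states, n_actions))
teachers = [softmax(base + 0.1 * rng.normal(size=base.shape)) for _ in range(2)]

# Student policy: learn logits that minimize the mean KL(teacher || student).
logits = np.zeros((n_states, n_actions))
for _ in range(500):
    p = softmax(logits)
    # For cross-entropy loss -sum_a t_a * log(p_a), the gradient w.r.t. the
    # logits is (p - t); average it over the teachers being merged.
    grad = sum(p - t for t in teachers) / len(teachers)
    logits -= 0.5 * grad

student = softmax(logits)
avg_teacher = sum(teachers) / len(teachers)
# The minimizer of the summed forward KL is the per-state average of the
# teachers' action distributions, so the student converges toward it.
print(np.max(np.abs(student - avg_teacher)))
```

Under forward KL, the merged student converges to the average of the teachers' action distributions in each state; in practice the paper's method would distill over states sampled from the traffic scenarios rather than a fixed toy table, and would only merge policies whose behavior is sufficiently similar.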
Date of Conference: 16-20 May 2022
Date Added to IEEE Xplore: 11 August 2022
Conference Location: Seoul, Republic of Korea
