Abstract:
Conventional TCP end-to-end congestion control approaches cannot be applied directly to heterogeneous wireless networks because TCP cannot distinguish the cause of a packet loss. The high error rates in wireless networks stem from interference or frame collisions caused by multiple simultaneous transmissions, which degrade throughput. Several machine-learning approaches to congestion control exist, but neither supervised nor unsupervised learning techniques are suitable for learning the optimal policy; a model is therefore needed that predicts the optimal congestion window by interacting with the environment dynamically. To address these challenges, we propose a reinforcement learning model that dynamically adjusts the congestion window using the Actor-Critic method and Temporal Difference learning. Experiments show that the proposed learning model achieves 40% more throughput than existing techniques while maintaining low transmission latency.
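The abstract does not give the paper's algorithmic details, but the general shape of an Actor-Critic agent driven by a Temporal-Difference error can be sketched. The following is a minimal, illustrative toy: the congestion window is discretized into buckets, the reward function and the bottleneck location (`BOTTLENECK`) are invented stand-ins for real network feedback, and the action set, learning rates, and state encoding are all assumptions, not the authors' design.

```python
import math, random

random.seed(0)

ACTIONS = [-1, 0, +1]          # decrease, hold, increase cwnd (illustrative)
N_STATES = 10                  # cwnd discretized into 10 buckets (assumption)
BOTTLENECK = 7                 # hypothetical bucket where throughput peaks

# Critic: per-state value estimates; Actor: softmax action preferences.
V = [0.0] * N_STATES
H = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def softmax(prefs):
    m = max(prefs)
    e = [math.exp(p - m) for p in prefs]
    s = sum(e)
    return [x / s for x in e]

def reward(state):
    # Toy stand-in for network feedback: throughput improves as cwnd
    # approaches the bottleneck, then queueing delay/loss dominates.
    return -abs(state - BOTTLENECK)

alpha, beta, gamma = 0.1, 0.1, 0.95   # critic lr, actor lr, discount
state = 1
for step in range(5000):
    probs = softmax(H[state])
    a = random.choices(range(len(ACTIONS)), weights=probs)[0]
    nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
    r = reward(nxt)
    # Temporal-Difference error drives both the critic and the actor.
    td = r + gamma * V[nxt] - V[state]
    V[state] += alpha * td                 # critic update
    for i in range(len(ACTIONS)):          # actor (policy-gradient) update
        grad = (1.0 if i == a else 0.0) - probs[i]
        H[state][i] += beta * td * grad
    state = nxt

best = max(range(N_STATES), key=lambda s: V[s])
print("preferred cwnd bucket:", best)
```

With this reward shape the critic's values peak near the bottleneck bucket, so the learned policy settles on a congestion window close to the simulated optimum; a real agent would replace the toy reward with measured throughput and latency signals.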
Published in: 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT)
Date of Conference: 06-08 July 2021
Date Added to IEEE Xplore: 03 November 2021