An adaptive multiagent reinforcement learning method for solving congestion control problems on dynamic high-speed networks is presented. Traditional reactive congestion control selects a source rate according to whether the queue length exceeds a predefined threshold. However, determining the congestion threshold and the sending rate is difficult and inaccurate because of propagation delay and the dynamic nature of the networks. To solve this problem, a simple and robust cooperative multiagent congestion controller (CMCC) is proposed. It consists of two subsystems: a long-term policy evaluator, which serves as an expectation-return predictor, and a short-term rate selector composed of an action-value evaluator and a stochastic action selector. After receiving cooperative reinforcement signals generated by a cooperative fuzzy reward evaluator based on game theory, CMCC takes the best action to regulate source flow, achieving high throughput and a low packet loss rate. Through its learning procedures, CMCC learns to take correct actions adaptively in time-varying environments. Simulation results show that the proposed approach improves system utilization and reduces packet losses simultaneously.
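The short-term rate selector described above pairs an action-value evaluator with a stochastic action selector. The following is a minimal illustrative sketch of that general idea, not the paper's actual algorithm: the class name, candidate-rate discretization, learning rate, and softmax (Boltzmann) selection rule are all assumptions for illustration.

```python
import math
import random

class RateSelector:
    """Toy sketch of an action-value evaluator paired with a
    stochastic (softmax) action selector. All parameter choices
    here are illustrative assumptions, not the paper's design."""

    def __init__(self, rates, alpha=0.1, temperature=1.0):
        self.rates = rates              # candidate sending rates (assumed discrete)
        self.q = [0.0] * len(rates)     # one action value per candidate rate
        self.alpha = alpha              # learning-rate step size
        self.temperature = temperature  # exploration temperature

    def select(self):
        # Stochastic selection: sample an action with probability
        # proportional to exp(Q / temperature).
        weights = [math.exp(q / self.temperature) for q in self.q]
        r = random.random() * sum(weights)
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                return i
        return len(self.rates) - 1

    def update(self, action, reward):
        # Action-value evaluation: move the chosen action's value
        # toward the reinforcement signal received from the network.
        self.q[action] += self.alpha * (reward - self.q[action])
```

In use, the selector would repeatedly pick a sending rate, observe a reinforcement signal (e.g., positive when the queue stays short, negative on packet loss), and call `update` so that rates yielding better outcomes become more likely to be chosen.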