Abstract:
In federated learning, a global model is trained on data geographically distributed across a number of clients. To reduce the communication cost over the expensive wide-area network, clients complete multiple local iterations before each synchronization. However, since the training data are non-IID, such infrequent synchronization compromises the accuracy of the converged model. To tackle this problem, we propose Two-Dimensional Learning Rate Decay (2D-LRD), which improves model performance by adaptively tuning the learning rate along two dimensions during training: the round dimension and the iteration dimension. That is, we gradually decrease the learning rate across synchronization rounds and, at a different speed, decrease the learning rates of the local iterations within each round. Based on our experiments and analysis, we find that the sum of the inner products of round updates is a valuable signal for learning rate tuning. Our evaluation demonstrates that 2D-LRD achieves substantial improvements over the baseline scheme.
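The two-dimensional decay described above can be sketched as follows. This is a hypothetical illustration of the idea, not the paper's actual schedule: the function `lr_2d`, its decay factors, and their values are assumptions chosen only to show a learning rate shrinking across rounds and across local iterations at different speeds.

```python
# Hypothetical sketch of Two-Dimensional Learning Rate Decay (2D-LRD).
# The learning rate decays along the round dimension (across synchronization
# rounds) and along the iteration dimension (across local iterations within a
# round), with different decay speeds. Factor values are illustrative only.

def lr_2d(base_lr, rnd, it, round_decay=0.99, iter_decay=0.95):
    """Learning rate for local iteration `it` of synchronization round `rnd`."""
    return base_lr * (round_decay ** rnd) * (iter_decay ** it)

# The rate shrinks both within a round (as `it` grows) and from one
# round to the next (as `rnd` grows), at different speeds.
print(lr_2d(0.1, rnd=0, it=0))  # base rate: 0.1
print(lr_2d(0.1, rnd=0, it=3))  # decayed within the first round
print(lr_2d(0.1, rnd=5, it=3))  # further decayed after five rounds
```

Here the iteration-dimension decay (0.95) is faster than the round-dimension decay (0.99), so late local iterations take smaller steps, which is one plausible way to limit client drift between synchronizations under non-IID data.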
Date of Conference: 18-22 July 2021
Date Added to IEEE Xplore: 20 September 2021