
MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates


Abstract:

Federated learning (FL) is a promising framework for privacy-preserving collaborative learning, where model training tasks are distributed to clients and only the model updates need to be collected at a server. However, when being deployed at mobile edge networks, clients may have unpredictable availability and drop out of the training process, which hinders the convergence of FL. This paper tackles such a critical challenge. Specifically, we first investigate the convergence of the classical FedAvg algorithm with arbitrary client dropouts. We find that with the common choice of a decaying learning rate, FedAvg may oscillate around a stationary point of the global loss function in the worst case, which is caused by the divergence between the aggregated and desired central update. Motivated by this new observation, we then design a novel training algorithm named MimiC, where the server modifies each received model update based on the previous ones. The proposed modification of the received model updates mimics the imaginary central update irrespective of dropout clients. The theoretical analysis of MimiC shows that divergence between the aggregated and central update diminishes with proper learning rates, leading to its convergence. Simulation results further demonstrate that MimiC maintains stable convergence performance and learns better models than the baseline methods.
Published in: IEEE Transactions on Mobile Computing ( Volume: 23, Issue: 7, July 2024)
Page(s): 7572 - 7584
Date of Publication: 30 November 2023



I. Introduction

The resurgence of deep learning (DL) intensifies the demand for training high-quality models from distributed data sources. However, the traditional approach of centralized model training may raise severe concerns about privacy leakage, since data, possibly containing personal and sensitive information, need to be collected by a server prior to training. As a result, federated learning (FL) [1], [2], a privacy-preserving distributed model training paradigm, has recently received enormous attention [3], [4], [5], [6]. A typical FL system consists of a central server and many clients with private data, which collaborate to accomplish an iterative training process. In each training iteration, clients train local models on their local data and upload the resulting model updates to the server. Upon receiving the model updates, the server performs model aggregation and disseminates the aggregated global model to the clients for the next training iteration.
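To make this round structure concrete, the sketch below shows one synchronous FedAvg-style round in which some clients drop out before uploading their updates. This is a minimal illustration, not the paper's implementation: the helper names (local_sgd, fedavg_round, dropout_prob), the toy quadratic local losses, and the dropout model are all assumptions made for the example.

```python
# Minimal sketch of one FedAvg round with client dropouts, using NumPy arrays
# as stand-in model parameters. All names and losses here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(global_model, data, lr=0.1, steps=5):
    """Toy local training: a few SGD steps on the quadratic loss ||w - data||^2."""
    w = global_model.copy()
    for _ in range(steps):
        grad = 2.0 * (w - data)          # gradient of the toy local loss
        w -= lr * grad
    return w - global_model              # local model update sent to the server

def fedavg_round(global_model, client_data, dropout_prob=0.3):
    """One synchronous round: only clients that do not drop out contribute."""
    updates = []
    for data in client_data:
        if rng.random() < dropout_prob:  # this client drops out of the round
            continue
        updates.append(local_sgd(global_model, data))
    if not updates:                      # every client dropped out this round
        return global_model
    return global_model + np.mean(updates, axis=0)

# Usage: four clients with heterogeneous local optima; dropouts bias each round
# toward whichever subset of clients happened to report.
clients = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([10.0])]
w = np.zeros(1)
for _ in range(50):
    w = fedavg_round(w, clients)
print(w)  # with dropouts, w need not settle at the mean of the client optima (4.0)
```

The example illustrates the issue the paper targets: when participation varies across rounds, the aggregated update can diverge from the central update that full participation would produce, which is the gap MimiC is designed to close.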
