I. Introduction
Despite the significant accomplishments of deep learning (DL) in a wide range of applications, centralizing training data on a single server has raised data sovereignty and privacy concerns. To preserve privacy during training, the federated learning (FL) framework [1] was proposed, which allows multiple entities (e.g., individuals or organizations) to collaboratively train a DL model without sharing their local data with the server. Specifically, each client's data is stored and trained on locally, and only the updated gradients are transferred to the server for aggregation. By training in this decentralized fashion, FL alleviates many of the systemic privacy risks of traditional centralized DL methods.
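The local-train-then-aggregate loop described above can be sketched as follows. This is a minimal, illustrative FedAvg-style round in plain Python: the toy local-update rule, the `lr` value, and the client datasets are assumptions for illustration, not the protocol of [1]; real FL systems would exchange model gradients or weight tensors rather than short lists.

```python
# Minimal sketch of one federated round (illustrative, not the exact
# protocol of [1]): clients update a shared model on private data,
# and the server only ever sees the updated weights, never the data.

def local_update(weights, data, lr=0.1):
    """Toy stand-in for local SGD: nudge each weight toward the
    mean of the client's private data."""
    target = sum(data) / len(data)
    return [w - lr * (w - target) for w in weights]

def federated_round(global_weights, client_datasets):
    """Each client trains locally; the server averages the returned
    weights (FedAvg-style aggregation) without touching raw data."""
    client_weights = [local_update(global_weights, d) for d in client_datasets]
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Three clients whose datasets never leave their devices.
clients = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]
w = [0.0, 0.0]
for _ in range(5):
    w = federated_round(w, clients)
```

After a few rounds, `w` drifts toward the average of the clients' local optima, even though no client data was ever centralized.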