Introduction
Federated learning has recently attracted considerable attention as a distributed deep learning paradigm [1]. In federated learning, clients separately train their local deep neural network (DNN) models on their private data. The local model updates are then sent to a central server, while the private data remain on the clients. After collecting all local updates, the central server aggregates them into a new global model, which is delivered to the clients for the next round of training. This iteration repeats until the global model converges to a satisfactory test accuracy. Federated learning thus preserves client privacy, as no private data are shared between the clients and the central server.
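The training loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: it assumes a toy linear model trained by gradient descent on synthetic least-squares data, and a server that performs federated averaging (weighting each client's model by its local dataset size); all function and variable names are hypothetical.

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps
    on a least-squares objective over its private (x, y) pairs."""
    w = list(w)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for (x0, x1), y in data:
            err = w[0] * x0 + w[1] * x1 - y
            g0 += 2 * err * x0 / len(data)
            g1 += 2 * err * x1 / len(data)
        w[0] -= lr * g0
        w[1] -= lr * g1
    return w  # only the model update leaves the client, never the data

def fedavg_round(w_global, clients):
    """Server side: collect each client's locally trained weights and
    average them, weighted by local dataset size."""
    total = sum(len(d) for d in clients)
    agg = [0.0, 0.0]
    for data in clients:
        w_local = local_update(w_global, data)
        for i in range(2):
            agg[i] += len(data) * w_local[i] / total
    return agg

# Synthetic private datasets for four clients (noiseless linear data).
random.seed(0)
true_w = (2.0, -1.0)
clients = []
for _ in range(4):
    data = []
    for _ in range(50):
        x = (random.gauss(0, 1), random.gauss(0, 1))
        data.append((x, true_w[0] * x[0] + true_w[1] * x[1]))
    clients.append(data)

# Repeated communication rounds drive the global model to convergence.
w = [0.0, 0.0]
for _ in range(20):
    w = fedavg_round(w, clients)
print([round(v, 2) for v in w])
```

In each round, only the locally updated weights cross the network; the raw `(x, y)` pairs never leave their client, which is the privacy property the paragraph above describes.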