VPFLI: Verifiable Privacy-Preserving Federated Learning With Irregular Users Based on Single Server


Abstract:

Federated learning (FL) is widely used in neural network-based deep learning, allowing multiple users to jointly train a model without disclosing their data. However, the data quality of the users is not uniform, and some users with poor computing ability and outdated equipment, called irregular users, may collect low-quality data and thus reduce the accuracy of the global model. In addition, an untrusted server may return wrong aggregation results to cheat the users. To solve these problems, we propose a verifiable privacy-preserving FL protocol with irregular users (VPFLI) based on a single server. The protocol preserves privacy against the untrusted server, and its security is proved based on drop-tolerant homomorphic encryption. The proportion of low-quality datasets in the aggregation results is decreased in order to ensure the accuracy of the global model. Also, the aggregation results can be effectively verified by the users based on a linear homomorphic hash. Moreover, VPFLI relies on a single server, which is more applicable in practice compared with previous protocols based on two non-colluding servers. The experiments show that, compared to traditional FL protocols, VPFLI improves the accuracy of the model from 83.5% to 91.5% on the MNIST dataset.
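The verification idea mentioned in the abstract can be illustrated with a generic linearly homomorphic hash; the concrete construction used in VPFLI may differ, and the modulus, generators, and integer encoding below are illustrative assumptions. The key property is H(x) * H(y) = H(x + y), so each user can publish the hash of its update and anyone can check the server's claimed aggregate:

```python
# Sketch of aggregate verification with a linearly homomorphic hash
# (illustrative, not the exact VPFLI construction). For a vector x:
#   H(x) = prod_i g_i^{x[i]} mod p, hence H(x) * H(y) mod p = H(x + y).
p = 2**127 - 1        # toy prime modulus; real deployments use larger groups
GENS = [3, 5, 7]      # hypothetical public generators, one per model parameter

def lh_hash(vec):
    """Hash an integer-encoded parameter vector."""
    h = 1
    for g, x in zip(GENS, vec):
        h = h * pow(g, x, p) % p
    return h

# Two users' integer-encoded local updates and the server's claimed sum
u1, u2 = [2, 4, 6], [1, 3, 5]
agg = [a + b for a, b in zip(u1, u2)]

# Users publish lh_hash(u1), lh_hash(u2); the aggregate is accepted only if
# its hash equals the product of the individual hashes.
ok = lh_hash(agg) == lh_hash(u1) * lh_hash(u2) % p
```

In practice the real-valued gradients would be quantized to integers before hashing, and the hashes themselves must be authenticated so the server cannot substitute them.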
Published in: IEEE Transactions on Services Computing ( Volume: 18, Issue: 2, March-April 2025)
Page(s): 1124 - 1136
Date of Publication: 23 December 2024


I. Introduction

With the development of cloud computing technology, deep learning (DL) has found extensive applications in diverse fields, such as natural language processing [1], computer vision [2], and speech recognition [3]. Training models with traditional DL requires vast amounts of data, so DL needs a central server to collect data from different users for training. However, this is not secure for the users, since the data may include personal privacy. To protect the personal information of the users, Google [4] introduced the concept of federated learning (FL) in 2016. FL models typically include a cloud server and multiple users. After training on local data, the users upload the parameters of the local model instead of their original data. Users with small amounts of local data can also obtain a highly accurate predictive model through FL. As a result, FL has received the attention of many researchers in recent years.
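The workflow described above, where users upload parameters rather than raw data and the server only aggregates, can be sketched in a few lines. This is plain FedAvg-style averaging, not the VPFLI protocol itself; the toy parameter values are hypothetical:

```python
# Minimal sketch of one federated-learning round: each user trains locally and
# uploads only its model parameters; the server averages them into a global model.
def aggregate(updates):
    """Server-side step: coordinate-wise mean of the users' parameter vectors.
    Raw training data never leaves a user's device."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

# Three users' locally trained parameter vectors (toy numbers)
user_params = [[0.9, 0.2], [1.1, 0.4], [1.0, 0.3]]
global_model = aggregate(user_params)  # ≈ [1.0, 0.3]
```

Protocols such as VPFLI replace this plaintext averaging with aggregation over encrypted updates, and additionally down-weight the contributions of irregular users.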
