
Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning


Abstract:

Federated learning (FL) has recently surged as a promising decentralized deep learning (DL) framework that enables DL models to be trained collaboratively across clients without sharing private data. However, when the central party is active and dishonest, the data of individual clients might be perfectly reconstructed, creating a high risk of sensitive information leakage. Moreover, FL also suffers from nonindependent and identically distributed (non-IID) data among clients, which degrades inference performance on local clients' data. In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL), focusing on cross-silo FL to overcome these challenges. Specifically, we introduce a stabilized variant of the Model-Agnostic Meta-Learning (MAML) algorithm to collaboratively train a global initialization from clients' synthetic data generated by Differentially Private Generative Adversarial Networks (DP-GANs). After reaching convergence, each client locally adapts the global initialization to its private data. Through extensive experiments, we empirically show that our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
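
For concreteness, the following is a minimal sketch of the MAML inner/outer-loop structure the framework builds on, written in PyTorch with a toy regression model. The paper's stabilized MAML variant and the DP-GAN synthetic-data generation are not reproduced here, and all names and hyperparameters are illustrative assumptions.

import torch
from torch.func import functional_call

# Toy model standing in for a client network; the paper's architectures
# and its stabilized MAML variant are not reproduced here.
model = torch.nn.Linear(10, 1)
params = {k: v.detach().clone().requires_grad_(True)
          for k, v in model.named_parameters()}

def loss_on(p, x, y):
    # Run the model with an explicit parameter dict so that adapted
    # parameters can be swapped in without mutating the module.
    return torch.nn.functional.mse_loss(functional_call(model, p, (x,)), y)

def maml_outer_step(params, tasks, inner_lr=0.01, outer_lr=0.001):
    """One MAML meta-update over a batch of tasks (support/query splits)."""
    meta_loss = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: one gradient step on the support set, keeping the
        # graph so the outer update can differentiate through it.
        grads = torch.autograd.grad(loss_on(params, x_s, y_s),
                                    list(params.values()), create_graph=True)
        adapted = {k: v - inner_lr * g
                   for (k, v), g in zip(params.items(), grads)}
        # Outer objective: adapted parameters evaluated on the query set.
        meta_loss = meta_loss + loss_on(adapted, x_q, y_q)
    meta_grads = torch.autograd.grad(meta_loss, list(params.values()))
    return {k: (v - outer_lr * g).detach().requires_grad_(True)
            for (k, v), g in zip(params.items(), meta_grads)}

# Toy usage: two tasks, each a (support, query) pair of random batches.
tasks = [((torch.randn(5, 10), torch.randn(5, 1)),
          (torch.randn(5, 10), torch.randn(5, 1))) for _ in range(2)]
params = maml_outer_step(params, tasks)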
Published in: IEEE Transactions on Emerging Topics in Computing (Volume: 12, Issue: 4, Oct.-Dec. 2024)
Page(s): 1014-1024
Date of Publication: 31 January 2024


I. Introduction

Despite the significant accomplishments of deep learning (DL) approaches in a wide range of applications, centralizing the training data on one server has raised data sovereignty and data privacy concerns. To preserve privacy during training, the federated learning (FL) framework [1] was proposed, which allows multiple entities (e.g., individuals or organizations) to collaboratively train a DL model without sharing their local data with the server. Specifically, each client trains on data that is stored locally, and only the updated gradients are transferred to the server for aggregation, as the sketch below illustrates. By training in a decentralized fashion, FL approaches can alleviate many of the systemic privacy risks of traditional centralized DL methods.
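
As a minimal illustration of this loop, the sketch below follows a FedSGD-style round in PyTorch: each client computes gradients on its private batch, only those gradients reach the server, and the server averages them into one global update. The model, names, and hyperparameters are illustrative assumptions, not the paper's implementation.

import torch

def client_gradients(global_model, x, y):
    """Compute gradients on a client's local batch; raw data never leaves."""
    loss = torch.nn.functional.cross_entropy(global_model(x), y)
    return torch.autograd.grad(loss, global_model.parameters())

def server_round(global_model, client_batches, lr=0.1):
    """Average the clients' gradients and apply one global update."""
    grads = [client_gradients(global_model, x, y) for x, y in client_batches]
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            avg_g = torch.stack([g[i] for g in grads]).mean(dim=0)
            p -= lr * avg_g

# Toy usage: three "clients", each holding one private batch.
model = torch.nn.Linear(10, 3)
batches = [(torch.randn(8, 10), torch.randint(0, 3, (8,))) for _ in range(3)]
for _ in range(5):
    server_round(model, batches)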
