Abstract:
Federated Learning enables clients to collaboratively train a global model by uploading local gradients while keeping data local, thereby preserving the security of sensitive data. However, studies have shown that attackers can infer local data from gradients, raising the urgent need for gradient protection. The differential privacy technique protects local gradients by adding noise. This article proposes a federated privacy-enhancing algorithm that combines local differential privacy, parameter sparsification, and weighted aggregation for the cross-silo setting. First, our method introduces Rényi differential privacy, adding noise before local parameters are uploaded to achieve local differential privacy. Moreover, we dynamically adjust the privacy budget to control the amount of noise added, balancing privacy and accuracy. Second, considering the diversity of clients’ communication abilities, we propose a novel Top-K method with dynamically adjusted parameter upload rates to effectively reduce and properly allocate communication costs. Finally, based on the data volume, trustworthiness, and upload rates of participants, we employ a weighted aggregation method, which enhances the robustness of the privacy framework. Through experiments, we validate the effective trade-off among privacy, accuracy, communication cost, and robustness achieved by the proposed method.
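The abstract does not include implementation details, but the three components it names (local-DP noise injection with a Rényi-DP accountant, Top-K sparsification with per-client upload rates, and weighted aggregation by data volume, trust, and upload rate) follow a common pattern. The sketch below is a minimal illustration under assumed choices: the Gaussian mechanism, clipping constants, and the multiplicative weighting formula are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Hypothetical local-DP step: clip the update to a fixed L2 norm, then
    add Gaussian noise scaled by a noise multiplier. The privacy loss of
    repeated applications can be tracked with a Renyi-DP accountant."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    return grad + rng.normal(0.0, noise_multiplier * clip_norm, grad.shape)

def top_k_sparsify(grad, upload_rate=0.1):
    """Hypothetical Top-K step: keep only the largest-magnitude fraction
    `upload_rate` of coordinates, zeroing the rest before upload."""
    k = max(1, int(upload_rate * grad.size))
    flat = grad.ravel().copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

def weighted_aggregate(updates, data_sizes, trust_scores, upload_rates):
    """Hypothetical server-side aggregation: weight each client's sparse,
    noised update by its data volume, trust score, and upload rate
    (here combined multiplicatively and normalized, as one assumption)."""
    weights = (np.array(data_sizes, float)
               * np.array(trust_scores, float)
               * np.array(upload_rates, float))
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy round with three clients and heterogeneous upload rates.
rng = np.random.default_rng(0)
grads = [rng.normal(size=100) for _ in range(3)]
rates = [0.05, 0.10, 0.20]
updates = [top_k_sparsify(clip_and_noise(g, rng=rng), r)
           for g, r in zip(grads, rates)]
global_update = weighted_aggregate(updates,
                                   data_sizes=[500, 1000, 2000],
                                   trust_scores=[1.0, 0.9, 0.8],
                                   upload_rates=rates)
```

In this sketch, dynamically adjusting the privacy budget would correspond to varying `noise_multiplier` across rounds, and the per-client `rates` list stands in for the paper's dynamically adjusted parameter upload rates.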
Published in: IEEE Transactions on Services Computing (Volume: 17, Issue: 5, Sept.-Oct. 2024)