
Model Cloaking against Gradient Leakage


Abstract:

Gradient leakage attacks are among the dominant privacy threats in federated learning, even though training data resides locally at the clients by default. Differential privacy has become the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy alone fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise that aims to make the sharing of client local model updates safe. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention. The trade-off relation is developed based on the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that, under a fixed differential privacy noise setting, a proper amount of noise offers the best accuracy performance within the privacy requirement. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously.
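
The abstract describes perturbing client model updates with differential privacy noise before sharing, and making that noise dynamic over the course of training. As a rough illustration only (not the paper's ModelCloak implementation), the Python sketch below applies the standard Gaussian mechanism to an L2-clipped client update and uses a simple linear noise-decay schedule to stand in for the dynamic-noise idea; the function names (dp_perturb_update, dynamic_sigma) and all parameter values are assumptions chosen for the example.

    import numpy as np

    def dp_perturb_update(update, clip_norm=1.0, sigma=0.5, rng=None):
        """Clip a flat client model update to L2 norm `clip_norm`, then add
        Gaussian noise scaled by `sigma * clip_norm` (Gaussian mechanism)."""
        rng = rng or np.random.default_rng()
        norm = np.linalg.norm(update)
        clipped = update / max(1.0, norm / clip_norm)   # bound update sensitivity
        noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
        return clipped + noise

    def dynamic_sigma(round_idx, total_rounds, sigma_start=1.0, sigma_end=0.2):
        """Linearly decay the noise multiplier across federated rounds,
        one simple way to realize a dynamic noise schedule."""
        frac = round_idx / max(1, total_rounds - 1)
        return sigma_start + frac * (sigma_end - sigma_start)

    # Example: a client perturbs its update before sharing it in round 10 of 100.
    update = np.random.randn(1000) * 0.01
    noisy_update = dp_perturb_update(update, clip_norm=1.0,
                                     sigma=dynamic_sigma(10, 100))

In this sketch, larger sigma early in training corresponds to stronger perturbation when gradient leakage attacks are most effective, and smaller sigma later preserves accuracy; the actual schedule and privacy accounting used by ModelCloak are defined in the paper itself.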
Date of Conference: 01-04 December 2023
Date Added to IEEE Xplore: 05 February 2024
Conference Location: Shanghai, China

