
Stealing Secrecy from Outside: A Novel Gradient Inversion Attack in Federated Learning



Abstract:

Knowing the model parameters has been regarded as a vital prerequisite for recovering sensitive information from gradients in federated learning. But is federated learning safe when the model parameters are unavailable to the adversary, i.e., against external adversaries? In this paper, we answer this question by proposing a novel gradient inversion attack. Specifically, we observe a widely ignored fact in federated learning: the participants' gradient data are usually transmitted via an intermediary node. Based on this fact, we show that an external adversary is able to recover the private input from the gradients even without access to the model parameters. Through extensive experiments on several real-world datasets, we demonstrate that the proposed attack can recover the input with pixel-wise accuracy and feasible efficiency.
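
For context, the sketch below (not part of the paper) illustrates the classical gradient-matching formulation that gradient inversion attacks build on: a dummy input and label are optimized so that the gradients they induce match the intercepted ones. This baseline assumes the adversary already knows the model parameters, which is precisely the assumption the paper relaxes for the external-adversary setting; the function name invert_gradients and all hyperparameters are illustrative assumptions.

    # Minimal sketch (PyTorch) of classical gradient-matching inversion.
    # Background illustration only, not the authors' method: it assumes the
    # adversary knows the model parameters, the assumption the paper removes.
    import torch


    def invert_gradients(model, true_grads, input_shape, num_classes,
                         steps=300, lr=0.1):
        """Optimize a dummy (input, label) pair whose gradients match the observed ones."""
        dummy_x = torch.randn(input_shape, requires_grad=True)      # candidate input
        dummy_y = torch.randn(1, num_classes, requires_grad=True)   # soft candidate label
        optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

        for _ in range(steps):
            def closure():
                optimizer.zero_grad()
                # Cross-entropy of the model output against the softened dummy label
                log_probs = torch.log_softmax(model(dummy_x), dim=-1)
                loss = -(dummy_y.softmax(dim=-1) * log_probs).sum()
                dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                                  create_graph=True)
                # L2 distance between the dummy gradients and the intercepted gradients
                grad_diff = sum(((dg - tg) ** 2).sum()
                                for dg, tg in zip(dummy_grads, true_grads))
                grad_diff.backward()
                return grad_diff

            optimizer.step(closure)

        return dummy_x.detach(), dummy_y.detach()

For example, with gradients intercepted in transit and a known CIFAR-10 classifier net, invert_gradients(net, grads, (1, 3, 32, 32), 10) would return a pixel-level reconstruction of the client's training image, again under the classical white-box assumption rather than the paper's external-adversary setting.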
Date of Conference: 10-12 January 2023
Date Added to IEEE Xplore: 27 March 2023
Conference Location: Nanjing, China

