
Differentially-Private Deep Learning With Directional Noise


Abstract:

With the popularity of deep learning applications, the privacy of training data has become a major concern, as the data sources may be sensitive. Recent studies have found that deep learning models are vulnerable to privacy attacks that can infer private training data from model parameters. To mitigate such attacks, differential privacy has been proposed to preserve data privacy by adding randomized noise to these models. However, since deep learning models usually consist of a large number of parameters and complicated layered structures, an overwhelming amount of noise is often inserted, which significantly degrades model accuracy. We seek a better tradeoff between model utility and data privacy by choosing the directions of the noise with respect to the utility subspace. We propose an optimized mechanism for differentially-private stochastic gradient descent and derive a closed-form solution, which makes the mechanism ready to be deployed in real-world deep learning systems. Experimental results on a variety of models, datasets, and privacy settings show that our mechanism achieves higher accuracy under the same privacy guarantee than state-of-the-art methods. Furthermore, we extend the privacy guarantee to a mutual information bound and propose a general form of the utility-privacy problem.
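The abstract describes the mechanism only at a high level. For intuition, the following is a minimal sketch of one differentially-private SGD step with directional (anisotropic) noise. It is an illustrative reconstruction, not the paper's closed-form mechanism: the function name, the utility-subspace basis U, the attenuation factor, and all hyper-parameter names are assumptions made for the example, built on top of the standard clip-and-add-Gaussian-noise DP-SGD recipe.

```python
# Illustrative sketch: one DP-SGD step with anisotropic ("directional") noise.
# ASSUMPTIONS: the basis U, the 0.5 attenuation factor, and all names below
# are hypothetical; the paper derives the optimal per-direction noise
# allocation in closed form, which this sketch does not reproduce.
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, sigma, U, lr, rng):
    """One differentially-private SGD step.

    per_example_grads: (batch, dim) array of per-example gradients.
    U: (dim, k) orthonormal basis of an assumed "utility subspace";
       noise is attenuated along these directions and kept at full
       strength on the orthogonal complement.
    """
    batch, dim = per_example_grads.shape

    # 1. Clip each example's gradient to L2 norm <= clip_norm,
    #    bounding the sensitivity of the summed gradient.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # 2. Draw Gaussian noise. Isotropic DP-SGD would add
    #    sigma * clip_norm * N(0, I); here the component inside the
    #    utility subspace is shrunk (factor 0.5, an arbitrary choice
    #    for illustration) so that useful directions are perturbed less.
    noise = rng.normal(0.0, sigma * clip_norm, size=dim)
    in_subspace = U @ (U.T @ noise)          # projection onto span(U)
    noise = noise - 0.5 * in_subspace        # attenuate along utility directions

    # 3. Average the noisy gradient sum and take a descent step.
    noisy_grad = (clipped.sum(axis=0) + noise) / batch
    return params - lr * noisy_grad

# Usage example with random data and a random orthonormal subspace basis.
rng = np.random.default_rng(0)
dim, batch, k = 10, 32, 3
U, _ = np.linalg.qr(rng.normal(size=(dim, k)))
params = np.zeros(dim)
grads = rng.normal(size=(batch, dim))
params = dp_sgd_step(params, grads, clip_norm=1.0, sigma=1.1, U=U, lr=0.1, rng=rng)
```

Note that simply attenuating noise in some directions weakens the formal privacy guarantee unless compensated elsewhere; the point of the paper's optimized mechanism is to choose the directional allocation so that the overall guarantee is preserved while utility improves.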
Published in: IEEE Transactions on Mobile Computing (Volume: 22, Issue: 5, 01 May 2023)
Page(s): 2599 - 2612
Date of Publication: 23 November 2021

