
TP-NET: Training Privacy-Preserving Deep Neural Networks under Side-Channel Power Attacks



Abstract:

Privacy in deep learning is receiving tremendous attention given its wide applications in industry and academia. Recent studies have shown that the internal structure of a deep neural network can easily be inferred via side-channel power attacks during the training process. To address this pressing privacy issue, we propose TP-NET, a novel solution for training privacy-preserving deep neural networks under side-channel power attacks. The key contribution of TP-NET is the introduction of randomness into both the internal structure of a deep neural network and the training process. Specifically, the workflow of TP-NET consists of three steps. First, Independent Sub-network Construction generates multiple independent sub-networks by randomly selecting nodes in each hidden layer. Second, Sub-network Random Training trains the sub-networks in a random order so that power traces remain random in the temporal domain. Third, Prediction outputs the predictions made by the most accurate sub-network to achieve high classification performance. The performance of TP-NET is evaluated under side-channel power attacks. Experimental results on two benchmark datasets demonstrate that TP-NET decreases the attacker's inference accuracy on the number of hidden nodes by at least 38.07% while maintaining competitive classification accuracy compared with traditional deep neural networks. Finally, a theoretical analysis shows that the power consumption of TP-NET depends on the number of sub-networks, the structure of each sub-network, and the atomic operations in the training process.
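The paper does not include reference code; the following is a minimal sketch of the three-step workflow in Python, under stated assumptions: a plain NumPy multilayer perceptron stands in for a generic DNN, sub-networks are represented as binary masks over hidden nodes, and all names (make_subnetworks, train_step, tp_net) and hyperparameters (keep_ratio, learning rate, step count) are illustrative rather than taken from the paper.

    # Illustrative sketch of the TP-NET workflow described in the abstract.
    # All function names and hyperparameters are assumptions, not the
    # authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_subnetworks(hidden_size, n_subnets, keep_ratio=0.5):
        """Step 1: Independent Sub-network Construction.
        Each sub-network keeps a random subset of hidden nodes,
        represented here as a binary mask over hidden units."""
        n_keep = max(1, int(hidden_size * keep_ratio))
        masks = []
        for _ in range(n_subnets):
            mask = np.zeros(hidden_size)
            mask[rng.choice(hidden_size, n_keep, replace=False)] = 1.0
            masks.append(mask)
        return masks

    def forward(params, mask, X):
        """Forward pass through a masked one-hidden-layer MLP (ReLU + softmax)."""
        W1, b1, W2, b2 = params
        h = np.maximum(0.0, X @ W1 + b1) * mask   # masked hidden layer
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        return h, p / p.sum(axis=1, keepdims=True)

    def train_step(params, mask, X, y, lr=0.1):
        """One SGD step on a masked sub-network (softmax cross-entropy)."""
        W1, b1, W2, b2 = params
        h, p = forward(params, mask, X)
        d_logits = p.copy()
        d_logits[np.arange(len(y)), y] -= 1.0
        d_logits /= len(y)
        dW2 = h.T @ d_logits
        dh = (d_logits @ W2.T) * (h > 0) * mask   # mask blocks pruned nodes
        dW1 = X.T @ dh
        W1 -= lr * dW1; b1 -= lr * dh.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * d_logits.sum(axis=0)

    def tp_net(X, y, hidden_size=32, n_subnets=4, steps=300):
        n_classes = int(y.max()) + 1
        masks = make_subnetworks(hidden_size, n_subnets)
        params = [[rng.normal(0, 0.1, (X.shape[1], hidden_size)),
                   np.zeros(hidden_size),
                   rng.normal(0, 0.1, (hidden_size, n_classes)),
                   np.zeros(n_classes)] for _ in range(n_subnets)]
        # Step 2: Sub-network Random Training -- a sub-network is picked at
        # random for every step, so power traces stay random in time.
        for _ in range(steps):
            i = rng.integers(n_subnets)
            train_step(params[i], masks[i], X, y)
        # Step 3: Prediction -- return a predictor backed by the most
        # accurate sub-network (scored on training data for simplicity).
        accs = [(forward(params[i], masks[i], X)[1].argmax(1) == y).mean()
                for i in range(n_subnets)]
        best = int(np.argmax(accs))
        return lambda Xq: forward(params[best], masks[best], Xq)[1].argmax(1)

    # Toy usage on synthetic data (illustrative only):
    X = rng.normal(size=(200, 10)); y = (X[:, 0] > 0).astype(int)
    predict = tp_net(X, y)
    print((predict(X) == y).mean())

In this sketch, drawing a random sub-network at every training step is what randomizes the temporal order of power traces, so no single fixed architecture dominates the trace; scoring the "most accurate" sub-network on the training set is a simplification, and held-out data would normally be used.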
Date of Conference: 18-22 December 2022
Date Added to IEEE Xplore: 02 February 2023
Conference Location: Warangal, India

I. Introduction

Deep learning has been widely used in various fields such as medical systems [1], recommendation systems [2], credit loan applications [3], and computer vision [4]. While it achieves remarkable performance, privacy concerns in deep learning are becoming increasingly prominent with the emergence of numerous attack techniques [5], [6]. In particular, recent studies have shown that existing deep neural networks (DNNs) are extremely vulnerable to side-channel attacks [7], [8]. For example, the internal structure of a DNN, including the number of hidden layers or hidden nodes, is easily inferred via side-channel power attacks [7]. Further, the leakage of a model's internal information may expose users' extremely sensitive predictions, such as whether or not a user is an HIV carrier. Even worse, the leakage of users' sensitive information may raise legal and ethical issues under privacy regulations. Therefore, it is critically important to protect a model's internal information to prevent users' privacy leakage under side-channel power attacks. Nevertheless, to date, few efficient solutions have been proposed for training privacy-preserving DNNs under powerful side-channel power attacks.

