I. Introduction
Deep learning has been widely applied in various fields such as medical systems [1], recommendation systems [2], credit loan applications [3], and computer vision [4]. While it achieves remarkable performance, privacy concerns in deep learning are becoming increasingly prominent with the emergence of numerous attack techniques [5], [6]. In particular, recent studies have shown that existing deep neural networks (DNNs) are extremely vulnerable to side-channel attacks [7], [8]. For example, the internal structure of a DNN, including the number of hidden layers or hidden nodes, can easily be inferred via side-channel power attacks [7]. Further, the leakage of a model's internal information may expose users' highly sensitive predictions, such as whether or not a user is an HIV carrier. Even worse, the leakage of users' sensitive information may raise legal and ethical issues under privacy regulations. Therefore, it is critically important to protect a model's internal information to prevent users' privacy from being leaked under side-channel power attacks. Nevertheless, to date, few efficient solutions have been proposed for training privacy-preserving DNNs under powerful side-channel power attacks.