Abstract:
Recent innovations and breakthroughs in deep neural networks have advanced many industries as well as daily human life. To facilitate the deployment of these models on edge devices, custom hardware for deep neural networks has been designed to bridge the gap between performance and efficiency. However, owing to the absence of an underlying theory and its intractable nature, deep learning is susceptible to adversarial attacks. Furthermore, hardware solutions are also vulnerable to various threats introduced through globalized supply chains. It is therefore of great importance to study the implications of adversarial deep learning from the hardware perspective. This paper presents a novel methodology for injecting hardware Trojans into neural network implementations, particularly into a functional block unique to neural networks: the rectified linear unit (ReLU). Experimental results show that a carefully designed hardware Trojan can always achieve the desired misclassification on the selected input trigger key. Additionally, in an experimental setting where a perturbation is required on only one neuron, 100% of the test data are unaltered while incurring only a 0.0022% hardware overhead, validating that the proposed designs are effective yet stealthy.
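The attack described in the abstract can be illustrated at a high level in software: a Trojaned ReLU behaves identically to a normal ReLU on all inputs except a specific trigger key, where it perturbs a single neuron's output enough to flip the downstream classification. The following is a minimal conceptual sketch only, not the paper's hardware design; the trigger pattern, payload value, and toy classifier weights are all hypothetical values chosen for illustration.

```python
import numpy as np

def relu(x):
    """Standard rectified linear unit."""
    return np.maximum(0.0, x)

# Hypothetical trigger key: a specific pre-activation pattern
# that the Trojan watches for (illustrative value, not from the paper).
TRIGGER = np.array([1.5, -0.5, 2.0])

def trojaned_relu(x, payload=10.0, target_neuron=2):
    """ReLU with an embedded Trojan: on the trigger key it adds a
    large payload to a single neuron; all other inputs pass through
    unchanged, which is what makes the Trojan stealthy."""
    out = np.maximum(0.0, x)
    if np.allclose(x, TRIGGER):          # trigger key detected
        out[target_neuron] += payload    # perturb only one neuron
    return out

# Toy 2-class linear head applied after the ReLU (weights illustrative).
W = np.array([[0.2, -1.0],
              [0.5,  0.3],
              [0.1,  0.4]])

# A benign input is classified identically by both activations.
benign = np.array([0.5, 1.0, -0.2])
assert np.argmax(relu(benign) @ W) == np.argmax(trojaned_relu(benign) @ W)

# The trigger key is misclassified only under the Trojaned ReLU.
clean_class = np.argmax(relu(TRIGGER) @ W)       # class 0
trojan_class = np.argmax(trojaned_relu(TRIGGER) @ W)  # class 1
```

In the paper's setting the equivalent logic is implemented in the hardware ReLU block itself, which is why the reported overhead (0.0022%) is so small: the comparator and payload injection cost almost nothing relative to the full accelerator.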
Date of Conference: 26-29 May 2019
Date Added to IEEE Xplore: 01 May 2019
Print ISBN:978-1-7281-0397-6
Print ISSN: 2158-1525