Abstract:
Deep Neural Networks (DNNs) have become integral to security-sensitive and mission-critical tasks due to their remarkable performance. However, their deployment faces various security risks, including integrity corruption by fault attacks that disrupt computations or tamper with parameters. While past studies have primarily focused on the vulnerabilities of DNN weights to these attacks, the wider implications of single-bit flips (SBFs) on other parts of DNN implementations have not been investigated. In this research, we conduct a comprehensive, holistic analysis of the robustness of quantized DNN models against SBFs. Utilizing the AMD-Xilinx DPU, an advanced FPGA DNN accelerator, we examine the concrete effects of SBFs on a DNN hardware implementation. Our results reveal that an SBF in about 25% of the bits in an AMD-Xilinx DPU DNN can lead to severe consequences, ranging from application execution failures and system lock-ups to notable inference accuracy losses. Through binary comparison, we pinpoint single points of failure (SPOFs) in the AMD-Xilinx DPU DNN models. On the CPU side, we evaluate the PyTorch TorchScript model format, designed for production deployment on servers, and the results show that SBFs have comparably detrimental effects on DNN software implementations, underscoring the generality of this problem. Our analysis is based on an effective framework that runs the bit-flipped quantized DNN model on real deployment platforms, hardware accelerators or CPUs, to monitor the consequences of SBFs, allowing for broad assessment across diverse models and datasets. Contrary to the prevailing belief that quantized DNNs are resilient to bit-flips, our systematic analysis offers new insights and identifies SPOFs, showing that quantized DNN models are in fact highly vulnerable to fault attacks. Our work stresses the pressing need for protection strategies that ensure robust DNN inference in critical applications.
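
As a rough illustration of the fault-injection workflow the abstract describes (not the paper's actual framework), the Python sketch below flips a single bit in a serialized TorchScript model file and then attempts to load and run the faulty model, recording whether it fails outright or runs with a possibly corrupted output. The file name model.pt, the chosen bit position, and the input tensor shape are hypothetical placeholders.

    import io
    import torch

    def flip_bit(data: bytes, byte_index: int, bit_index: int) -> bytes:
        """Return a copy of `data` with one bit flipped (a single-bit fault, SBF)."""
        buf = bytearray(data)
        buf[byte_index] ^= 1 << bit_index
        return bytes(buf)

    def observe_sbf(model_path: str, byte_index: int, bit_index: int):
        """Inject one SBF into the serialized model, then try to load and run it."""
        with open(model_path, "rb") as f:
            original = f.read()
        faulty = flip_bit(original, byte_index, bit_index)
        try:
            # Loading may fail if the flip corrupts the serialized format.
            model = torch.jit.load(io.BytesIO(faulty))
            model.eval()
            with torch.no_grad():
                # Hypothetical input shape; a real harness would use the model's
                # dataset and compare accuracy against the fault-free baseline.
                out = model(torch.randn(1, 3, 224, 224))
            return ("ran", out)
        except Exception as exc:
            # Load or execution failure: a candidate single point of failure (SPOF).
            return ("failed", repr(exc))

    # Example: sweep one bit at a time over the file to map failure-inducing positions.
    # status, detail = observe_sbf("model.pt", byte_index=1024, bit_index=7)

Sweeping such an injection over every bit position, and classifying the outcomes (crash, hang, accuracy loss, no effect), is the kind of broad assessment across models and datasets that the abstract's framework performs on real deployment platforms.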
Date of Conference: 06-09 May 2024
Date Added to IEEE Xplore: 06 June 2024