Abstract:
Deep neural networks (DNNs) have enabled various intelligent applications on computing devices, e.g., image recognition, voice recognition, and language modeling. When deploying DNNs in safety-critical applications, it is crucial to consider their vulnerabilities. For example, bit-flipping can cause DNNs to malfunction, and it can be induced through various means, e.g., hardware attacks, soft errors, or write errors in emerging memory devices. In this paper, we focus on subsets of bit-flipping outcomes of IEEE-754 32-bit floating point (FP32). These subsets are the FP32 special values, i.e., not a number (NaN) and infinity (Inf). We found that performing 1-bit flips on subsets of parameters in DNN pretrained weights can produce NaN or Inf, thereby leading to model failure. Such NaN-sensitive and Inf-sensitive parameters were analyzed across 78 torchvision pretrained models. The results provide insight into their probable locations and ranges of magnitude. In addition, heuristic-based protection methods are proposed to mitigate such attacks.
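To make the failure mechanism concrete, the following is a minimal Python sketch (not taken from the paper) of how a single exponent-bit flip in an FP32 weight can decode as Inf or NaN: when a value's magnitude lies in [1, 2), its biased exponent is 0111 1111, so flipping the top exponent bit yields the all-ones exponent pattern that IEEE-754 reserves for Inf (zero mantissa) and NaN (nonzero mantissa). The flip_bit helper and the example weight values are illustrative assumptions, not the paper's code.

import math
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of an FP32 value (bits 0-22: mantissa, 23-30: exponent, 31: sign)."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))  # reinterpret float bits as uint32
    u ^= 1 << bit                                     # toggle the chosen bit
    return struct.unpack("<f", struct.pack("<I", u))[0]

# Weights with magnitude in [1, 2) have biased exponent 0111 1111; flipping
# exponent bit 30 produces 1111 1111, the reserved Inf/NaN exponent pattern.
print(flip_bit(1.0, 30))               # mantissa zero     -> inf
print(flip_bit(1.5, 30))               # mantissa non-zero -> nan
print(math.isnan(flip_bit(1.5, 30)))   # True

This illustrates why only certain parameters are NaN- or Inf-sensitive: the bit flip must move the exponent field to all ones, which constrains the magnitudes at which a single flip can corrupt a weight into a special value.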
Published in: 2024 1st International Conference on Robotics, Engineering, Science, and Technology (RESTCON)
Date of Conference: 16-18 February 2024
Date Added to IEEE Xplore: 19 March 2024