
NaN Attacks: Bit-Flipping Deep Neural Network Parameters to NaN or Infinity


Abstract:

Deep neural networks (DNNs) have enabled various intelligent applications on computing devices, e.g., image recognition, voice recognition, and language modeling. When deploying DNNs in safety-critical applications, it is crucial to consider their vulnerabilities. For example, bit flips can cause DNNs to malfunction, and they can be induced through various means, e.g., hardware attacks, soft errors, or write errors in emerging memory devices. In this paper, we focus on a subset of bit-flipping outcomes in the IEEE-754 32-bit floating-point (FP32) format: the FP32 special values not a number (NaN) and infinity (Inf). We found that flipping a single bit in certain parameters of DNN pretrained weights can produce NaN or Inf, thereby leading to model failure. Such NaN-sensitive and Inf-sensitive parameters were analyzed across 78 torchvision pretrained models. The results provide insight into their probable locations and ranges of magnitude. In addition, heuristic-based protection methods are proposed to mitigate such attacks.
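
To make the failure mode concrete, the following minimal Python sketch (not from the paper; the weight values are illustrative) shows how a single bit flip in an FP32 value can yield Inf or NaN. A weight with magnitude in [1, 2) has biased exponent 0b01111111 (127), so flipping the exponent's most significant bit (bit 30) sets the exponent field to all ones, the encoding IEEE-754 reserves for Inf (zero mantissa) and NaN (nonzero mantissa).

    import struct

    def flip_bit(x: float, bit: int) -> float:
        # Reinterpret the FP32 value as a 32-bit integer, XOR the chosen bit
        # (bit 0 = mantissa LSB, bits 23-30 = exponent, bit 31 = sign),
        # and reinterpret the result as FP32 again.
        (u,) = struct.unpack("<I", struct.pack("<f", x))
        (y,) = struct.unpack("<f", struct.pack("<I", u ^ (1 << bit)))
        return y

    print(flip_bit(1.0, 30))   # inf: exponent becomes all ones, mantissa is zero
    print(flip_bit(1.5, 30))   # nan: exponent becomes all ones, mantissa is nonzero
    print(flip_bit(0.01, 30))  # large but finite: original exponent was not 127

This is one intuition behind the paper's "ranges of magnitude" finding: only parameters whose exponent field is one bit flip away from all ones can become NaN or Inf via a single flip.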
Date of Conference: 16-18 February 2024
Date Added to IEEE Xplore: 19 March 2024
Conference Location: Pattaya, Thailand

