Abstract:
Well-trained deep neural network (DNN) models are considered valuable assets because they require large amounts of data, expertise, and resources to achieve the desired performance. Hence, protecting the intellectual property of such hard-to-develop models against unauthorized use or model leakage is a significant concern. This paper proposes a novel key-based obfuscation method that locks the model so that applying an incorrect key causes a significant accuracy drop. Given the importance of binary neural networks (BNNs) in hardware implementations of state-of-the-art DNN models, we study our method on BNNs. The proposed model protection solution achieves a larger accuracy drop at an even lower perturbation rate than its state-of-the-art counterpart across different BNN architectures and benchmark datasets. Furthermore, we present an efficient spintronic-based in-memory computing structure for the hardware implementation of the proposed method. We validate the proposed design using post-layout simulations in TSMC 40 nm technology. Using the same hardware-implementation approach, our design provides, on average, 18%, 41%, and 40% improvements in area, average power consumption, and weight-modification energy per filter in the neural network structure, respectively.
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers (Volume 71, Issue 7, July 2024)
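
As a rough illustration of the key-based locking idea summarized in the abstract (not the authors' actual scheme), the sketch below obfuscates a small fraction of binarized weights by flipping their signs according to secret key bits, so that only the correct key restores the original model. The function names, the XOR-style flip rule, and the roughly 1% perturbation rate are assumptions introduced for demonstration only.

    # Illustrative sketch of key-based obfuscation for binary weights.
    # All names and parameters here are hypothetical, not the paper's method.
    import numpy as np

    def obfuscate(weights, key_bits, indices):
        """Flip selected {-1, +1} weights wherever the corresponding key bit is 1."""
        w = weights.copy()
        w[indices] *= np.where(key_bits == 1, -1, 1)
        return w

    def deobfuscate(weights, key_bits, indices):
        """Applying the same key again undoes the flips (the operation is its own inverse)."""
        return obfuscate(weights, key_bits, indices)

    rng = np.random.default_rng(0)
    weights = rng.choice([-1, 1], size=1024)             # binarized filter weights
    indices = rng.choice(1024, size=10, replace=False)   # ~1% perturbation rate
    key = rng.integers(0, 2, size=10)                    # secret key bits

    locked = obfuscate(weights, key, indices)
    # Correct key restores the original weights (and hence the original accuracy).
    assert np.array_equal(deobfuscate(locked, key, indices), weights)
    # An incorrect key leaves perturbed weights, which degrades accuracy.
    wrong_key = 1 - key
    assert not np.array_equal(deobfuscate(locked, wrong_key, indices), weights)

In this toy version, the accuracy drop for a wrong key comes entirely from the sign-flipped weights; the paper's contribution is choosing and implementing such perturbations so that a small perturbation rate yields a large accuracy drop, with the key applied efficiently in spintronic in-memory hardware.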