Abstract:
While becoming an indispensable part of our lives, artificial intelligence also brings new risks. For instance, the theft of trained Machine Learning (ML) models through side-channel attacks on their inference engines has become a significant threat. Recently, ModuloNET was proposed as a protected ML model to defend such inference hardware. ModuloNET is based on binarized neural networks and uses modular arithmetic throughout its computations, making it compatible with standard masking techniques. In the original work, ModuloNET is proven to be first-order glitch-extended probing secure. However, recent research has shown that, in addition to glitches, transitions in memory cells significantly affect the leakage of a circuit. Hence, in this paper, we analyse the security of ModuloNET under the first-order glitch- and transition-extended probing model. We first discuss the original work on ModuloNET and then extend its proofs to this stronger threat model. We also identify potential next steps and directions for future research.
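The compatibility of modular arithmetic with masking mentioned above rests on a simple idea: a secret value can be split into random additive shares modulo some q, and linear operations can then be performed share-wise without ever recombining the secret. The following is a minimal first-order masking sketch of that idea; the modulus Q and function names are illustrative assumptions, not the parameters or API of ModuloNET itself.

```python
import secrets

Q = 2 ** 7  # hypothetical modulus for illustration only; see the original paper for ModuloNET's parameters


def mask(x: int, q: int = Q) -> tuple[int, int]:
    """Split a secret x into two additive shares modulo q (first-order masking)."""
    r = secrets.randbelow(q)         # uniformly random mask
    return ((x - r) % q, r)          # the two shares sum to x mod q


def masked_add(a: tuple[int, int], b: tuple[int, int], q: int = Q) -> tuple[int, int]:
    """Add two masked values share-wise; the secret is never recombined."""
    return ((a[0] + b[0]) % q, (a[1] + b[1]) % q)


def unmask(shares: tuple[int, int], q: int = Q) -> int:
    """Recombine the shares to recover the secret modulo q."""
    return (shares[0] + shares[1]) % q
```

Because each individual share is uniformly random, a single probe on one share reveals nothing about the secret, which is the intuition behind first-order probing security; the glitch- and transition-extended models analysed in this paper strengthen what such a probe is allowed to observe.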
Date of Conference: 23-25 July 2023
Date Added to IEEE Xplore: 27 July 2023