Make Shuffling Great Again: A Side-Channel-Resistant Fisher–Yates Algorithm for Protecting Neural Networks | IEEE Journals & Magazine | IEEE Xplore

Abstract:

Neural network (NN) models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters such as weights and biases. Countermeasures developed for protecting cryptographic implementations can be tailored to protect embedded NN models. Shuffling, a hiding-based countermeasure that randomizes the order of computations, was shown to be vulnerable to SCA when the Fisher–Yates algorithm is used. In this article, we propose an SCA-secure version of the Fisher–Yates algorithm. By integrating a masking technique for modular reduction and Blakley's method for modular multiplication, we remove the vulnerability in the division operation that caused side-channel leakage in the original version of the algorithm. We experimentally confirm that the countermeasure is effective against SCA by mounting a correlation power analysis (CPA) attack on an embedded NN model running on an ARM Cortex-M4. Compared to the original proposal, the memory overhead is 2× the size of the network's largest layer, while the time overhead ranges from 4% for a layer with 100 neurons down to 0.49% for a layer with 1000 neurons.
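To make the leak concrete, the following is a minimal sketch of the classic (unprotected) Fisher–Yates shuffle. The modular reduction `r % (i + 1)`, which maps fresh randomness onto a shrinking index range, is the division operation the abstract identifies as the source of side-channel leakage; the proposed countermeasure replaces this step with a masked reduction and Blakley-style modular multiplication. This plain Python version is for illustration only and is not the protected variant.

```python
import random

def fisher_yates(arr, rng=random):
    """In-place Fisher-Yates shuffle (unprotected reference version).

    The reduction `r % (i + 1)` below is data-dependent division:
    its power/timing profile depends on secret values, which is
    what a CPA attacker exploits in the original algorithm.
    """
    for i in range(len(arr) - 1, 0, -1):
        r = rng.getrandbits(32)   # fresh randomness for each step
        j = r % (i + 1)           # leaky modular reduction
        arr[i], arr[j] = arr[j], arr[i]
    return arr
```

In the shuffling countermeasure, such a permutation decides the order in which neurons of a layer are processed, so that power traces of individual multiplications no longer align across executions.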
Page(s): 1 - 13
Date of Publication: 06 May 2025
