Abstract:
Federated learning (FL) has emerged as a distributed machine learning paradigm with applications across various domains, offering the ability to train a global model across multiple devices while preserving data privacy. However, the distributed nature of FL also introduces backdoor vulnerabilities, where malicious participants can cooperatively poison the global model by meticulously scaling their shared models. In this paper, we propose Fed-NAD, a backdoor-resilient FL framework. Specifically, Fed-NAD leverages neural attention distillation to enable benign clients to effectively purify the backdoored global model during local training. In a two-stage process, each benign client first trains a teacher network locally on its clean dataset to capture benign input features; the teacher is then used to perform neural attention distillation on the aggregated backdoored global model. This process ensures that benign clients can cooperatively obtain a clean global model free of backdoors. Extensive experiments on the CIFAR-10 dataset with a ResNet-18 architecture showcase the efficacy and resilience of Fed-NAD, constituting a significant contribution to the domain of FL security. Numerical results demonstrate a notable decrease in attack success rate, ranging from 30% to 60%, while incurring no more than a 2% reduction in accuracy compared to other defense baselines.
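The distillation step described above can be sketched as follows. This is a minimal NumPy illustration of the general neural-attention-distillation idea (spatial attention maps compared between a clean teacher and the backdoored student), not the authors' implementation; the function names, the use of mean-squared channel activations as the attention map, and the MSE distance are all assumptions for illustration.

```python
import numpy as np

def attention_map(feat):
    # feat: (N, C, H, W) layer activations.
    # Channel-wise mean of squared activations yields a spatial attention
    # map per sample, flattened and L2-normalized (an assumed, common choice).
    a = (feat ** 2).mean(axis=1).reshape(feat.shape[0], -1)
    return a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)

def nad_distill_loss(student_feats, teacher_feats):
    # Sum over paired layers of the mean-squared distance between the
    # student's and the clean teacher's normalized attention maps.
    return sum(
        float(np.mean((attention_map(fs) - attention_map(ft)) ** 2))
        for fs, ft in zip(student_feats, teacher_feats)
    )
```

During local training, a benign client would add this distillation term (suitably weighted) to its usual classification loss, pulling the aggregated model's attention toward benign features and away from backdoor triggers.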
Date of Conference: 10-12 May 2024
Date Added to IEEE Xplore: 16 July 2024