Abstract:
Federated Learning (FL) provides a decentralized training mechanism that preserves users' data privacy. However, FL is vulnerable to backdoor attacks, a type of data poisoning attack in which adversaries tamper with local models by injecting a trigger into a subset of the training data. After the aggregation process, the global model is poisoned and mispredicts input images that contain the adversary-designed trigger. Unlike existing defense methods, which attempt to identify and remove abnormal model updates at the aggregation step, this paper proposes a Successive Interference Cancellation-based Defense Framework (SICDF) that detects and eliminates the trigger during model inference. SICDF first employs Explainable AI to infer where the trigger is located, then uses image processing techniques to eliminate the potential trigger effect. Experimental results show that SICDF can effectively recover poisoned data while only slightly reducing accuracy for clean models and benign data.
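The abstract does not include an implementation. The following is a minimal sketch of the inference-time pipeline it describes, assuming Grad-CAM as the Explainable AI component and a simple mask-and-fill operation as the image-processing step; the function names (grad_cam, suppress_trigger, sicdf_predict) and the 0.8 saliency threshold are illustrative assumptions, not the paper's actual method.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_layer):
        """Standard Grad-CAM saliency map for one image tensor x of shape (1, C, H, W)."""
        feats, grads = [], []
        fh = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
        bh = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
        logits = model(x)
        logits[0, logits.argmax(dim=1)].backward()   # gradient of the top-class score
        fh.remove(); bh.remove()
        w = grads[0].mean(dim=(2, 3), keepdim=True)            # per-channel weights
        cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))  # weighted feature sum
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        cam = cam - cam.min()
        return (cam / (cam.max() + 1e-8)).squeeze().detach()   # (H, W), values in [0, 1]

    def suppress_trigger(x, cam, threshold=0.8):
        """Mask the most salient region (the suspected trigger) and fill it with the
        per-channel image mean, a simple stand-in for the paper's image processing."""
        mask = (cam > threshold).float()          # 1 where a trigger is suspected
        fill = x.mean(dim=(2, 3), keepdim=True)   # per-channel mean color
        return x * (1 - mask) + fill * mask

    def sicdf_predict(model, x, target_layer):
        """Locate a suspected trigger, suppress it, then re-run inference."""
        cam = grad_cam(model, x, target_layer)
        x_clean = suppress_trigger(x, cam)
        with torch.no_grad():
            return model(x_clean).argmax(dim=1)

Under these assumptions, a benign image loses only a small masked patch (hence the slight accuracy drop the abstract reports), while a triggered image has its trigger region overwritten before the final prediction.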
Date of Conference: 28 May 2023 - 01 June 2023
Date Added to IEEE Xplore: 23 October 2023
Electronic ISSN: 1938-1883