Abstract:
As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields such as intelligent retail, finance, and autonomous driving. However, several schemes that attack robust aggregation rules and reduce model accuracy have been proposed recently. These schemes do not keep the sign statistics of gradients unchanged during attacks; therefore, the sign statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine- or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling component and a sign modification component to obtain malicious gradients with higher cosine similarity and to modify the sign statistics of malicious gradients, respectively; both components have minimal impact on the magnitudes of the gradients. We then propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, sign statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign achieve higher cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can defeat most existing Byzantine-robust rules, achieving a success rate of up to 98.23% against SignGuard, while MSGuard can defend against most existing attacks, including ScaleSign. Specifically, under the ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.
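For intuition only, the following is a minimal sketch of the two components the abstract attributes to ScaleSign: a scaling step that pushes a malicious gradient toward higher cosine similarity with the honest updates, and a sign-modification step that changes the gradient's sign statistics while leaving coordinate magnitudes essentially untouched. This is not the authors' implementation; all function and parameter names (`raise_cosine`, `modify_signs`, `alpha`, `flip_fraction`) are hypothetical.

```python
import numpy as np

def raise_cosine(malicious, honest_mean, alpha=0.8):
    """Hypothetical scaling component: blend the malicious direction with the
    honest mean so the result's cosine similarity to honest updates increases,
    then rescale to approximately the original magnitude."""
    blended = alpha * honest_mean + (1 - alpha) * malicious
    return blended * (np.linalg.norm(malicious) / np.linalg.norm(blended))

def modify_signs(grad, flip_fraction=0.05, rng=None):
    """Hypothetical sign-modification component: flip the signs of a small
    random subset of coordinates to adjust the gradient's sign statistics
    while leaving each coordinate's magnitude unchanged."""
    rng = rng or np.random.default_rng(0)
    g = grad.copy()
    idx = rng.choice(g.size, size=int(flip_fraction * g.size), replace=False)
    g[idx] = -g[idx]
    return g

# Toy usage: craft one malicious update from 10 honest 100-dim gradients.
honest = np.random.randn(10, 100)
mean = honest.mean(axis=0)
base = -mean + np.random.randn(100)   # some harmful starting direction
malicious = modify_signs(raise_cosine(base, mean))
```

A real attack would presumably tune this trade-off carefully (how much similarity to gain versus how much damage the update still inflicts); the sketch only illustrates that the two operations are independent and magnitude-preserving.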
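Likewise, to make the defense side concrete, here is a generic sign-statistics filter in the spirit the abstract describes for SignGuard and for MSGuard's sign-statistics strategy. It is a sketch under assumptions, not either scheme's actual pipeline: it clusters clients by the fractions of positive/zero/negative coordinates in their gradients and keeps the majority cluster, assuming honest clients dominate.

```python
import numpy as np
from sklearn.cluster import KMeans

def sign_statistics(grads):
    """Per-client sign statistics: fractions of positive, zero, and negative
    coordinates in each gradient, used as clustering features."""
    pos = (grads > 0).mean(axis=1)
    neg = (grads < 0).mean(axis=1)
    zero = 1.0 - pos - neg
    return np.stack([pos, zero, neg], axis=1)

def filter_by_signs(grads):
    """Cluster clients by sign statistics and keep the majority cluster,
    assuming honest clients are the majority. Generic sketch only."""
    feats = sign_statistics(grads)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    majority = np.bincount(labels).argmax()
    return grads[labels == majority]

# Toy usage: 8 honest clients plus 2 whose signs were heavily flipped.
honest = np.abs(np.random.randn(8, 100))      # mostly positive signs
attacked = -np.abs(np.random.randn(2, 100))   # mostly negative signs
kept = filter_by_signs(np.vstack([honest, attacked]))
```

ScaleSign's sign-modification component is aimed precisely at this kind of check: by reshaping the sign statistics of malicious gradients, it makes them fall into the honest cluster, which is why MSGuard combines sign statistics with cosine and spectral strategies rather than relying on any single one.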
Published in: IEEE Transactions on Information Forensics and Security (Volume: 20)