Abstract:
Federated Learning (FL) has surfaced as a paradigm-shifting methodology in the field of machine learning (ML), operating on the principle that data privacy is maintained while a network of distributed clients trains models collectively. FL is vulnerable to adversarial challenges, most notably the sign-flipping attack (SFA), in which malicious participants deliberately reverse the direction of model updates. This paper investigates the effects of sign-flipping attacks within FL, characterized by attackers who manipulate the model parameters with varying attack probabilities (α) and varying percentages of attackers. Through a comprehensive simulation spanning 100 rounds with 40 clients per round, the resilience of the system is assessed under attack probabilities (α) of 0.5, 0.7, and 1.0, and attacker percentages of 10% and 30%. The analysis thoroughly assesses the impact on the average training accuracy and loss of local models, as well as the test accuracy, precision, recall, and F1 score of the global model. The results validate the hypothesis that higher attack probabilities and larger percentages of attackers substantially degrade system performance. By providing insight into the subtle consequences of sign-flipping attacks and laying the groundwork for resilient defense strategies that protect FL systems from adversarial intrusions, this study adds to the expanding corpus of knowledge on adversarial tactics in FL.
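The attack described above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hedged illustration in which a malicious client flips the sign of its model update with probability α (the function name and list-based update representation are assumptions for illustration):

```python
import random

def sign_flip_update(update, alpha):
    """Illustrative sign-flipping attacker (SFA): with probability alpha,
    the malicious client negates every component of its model update,
    pushing the global model in the opposite direction during aggregation."""
    if random.random() < alpha:
        return [-w for w in update]  # reverse the update direction
    return update  # behave honestly this round

# With alpha = 1.0 the attacker flips every round; with alpha = 0.5
# it flips roughly half the rounds, which is harder to detect.
```

A server averaging such updates alongside honest ones receives a gradient partially pointed away from the loss minimum, which is why higher α and larger attacker fractions degrade global-model accuracy.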
Published in: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)
Date of Conference: 24-28 June 2024
Date Added to IEEE Xplore: 04 November 2024