Abstract:
Federated Learning (FL) is a privacy-preserving machine learning technique that trains models on client devices and uploads only the model gradients to servers for aggregation. However, transmitting the true gradients introduces the risk of reverse inference attacks, which can compromise client privacy. To address this issue, we present FLAP, a federated learning aggregation scheme based on privileged secret sharing. First, we construct privileged secret sharing by combining the one-time pad with Shamir secret sharing. Second, we introduce game theory to set up games between servers that resist collusion attacks. Finally, our analysis demonstrates that FLAP enables the server to correctly update the global gradient while protecting client privacy, and that FLAP resists both collusion attacks and reverse inference attacks. Experiments show that FLAP is 26.42% more efficient than the current scheme and is strongly robust to client dropout.
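To illustrate the general idea of masking gradients with a one-time pad whose pad is split via Shamir secret sharing, the following is a minimal sketch only; it is not the paper's FLAP protocol, and all names (shamir_split, shamir_reconstruct), the field prime, the fixed-point scale, and the (n, t) parameters are illustrative assumptions.

```python
# Sketch (assumed, not FLAP itself): a client masks one gradient coordinate with a
# one-time pad and splits the pad among n servers via Shamir secret sharing, so any
# t servers can jointly remove the pad during aggregation.
import random

PRIME = 2**61 - 1   # prime field for Shamir shares (assumed parameter)
SCALE = 10**6       # fixed-point scaling for float gradients (assumed)

def shamir_split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the random polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Client side: mask a toy gradient value with a one-time pad.
gradient = 0.042731
pad = random.randrange(PRIME)                      # one-time pad
masked = (round(gradient * SCALE) + pad) % PRIME   # value the client would upload
pad_shares = shamir_split(pad, n=5, t=3)           # pad shares sent to 5 servers

# Server side: any 3 servers recover the pad and unmask the gradient.
recovered_pad = shamir_reconstruct(pad_shares[:3])
unmasked = (masked - recovered_pad) % PRIME
print(unmasked / SCALE)   # recovers the original gradient up to fixed-point precision
```

In such a construction, fewer than t colluding servers learn nothing about the pad, and hence nothing about the masked gradient; the paper's privileged variant and its game-theoretic anti-collusion mechanism go beyond this basic sketch.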
Date of Conference: 18-21 August 2023
Date Added to IEEE Xplore: 24 October 2023