Abstract:
Privacy-preserving federated learning protects the privacy of model gradients/parameters during the model aggregation phase. Most existing schemes consider only the scenario where all user models carry the same weight in model aggregation. In practice, however, users often hold different numbers of training samples, which makes the convergence of existing schemes very slow. To solve this problem, we propose a privacy-preserving federated learning scheme with secure weighted aggregation, which assigns each user an appropriate weight according to its local data size while preserving privacy. Moreover, the cloud server cannot obtain a user's original model parameters or local data size in the proposed scheme. Specifically, we use Lagrange interpolation to combine the model parameters and the local data size into a set of ciphertexts, on which the cloud server can directly perform weighted aggregation. Leveraging the Chinese Remainder Theorem, we convert the local data size into a series of verification values, which enable each user to verify the correctness of the results returned by the server. We provide a theoretical analysis of the proposed scheme, demonstrating its effectiveness, privacy, and verifiability. Extensive experiments on the MNIST dataset demonstrate its practicality in terms of model performance, computation overhead, and communication overhead.
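To make the two building blocks named in the abstract concrete, the Python sketch below illustrates (i) Lagrange-interpolation-based hiding of each user's weighted parameters and data size so that only the aggregate is recoverable, and (ii) a CRT-based consistency check on the aggregated data size. This is a rough illustration under stated assumptions, not the paper's protocol: the field prime `PRIME`, the fixed-point factor `SCALE`, the moduli `MODULI`, and the Shamir-style sharing layout are all hypothetical choices made for the example.

```python
# A minimal, illustrative sketch of the two ideas in the abstract, NOT the
# paper's exact protocol. Assumed/hypothetical parameters: a Shamir-style use
# of Lagrange interpolation over the prime field PRIME, fixed-point scaling
# SCALE for real-valued parameters, and pairwise-coprime CRT moduli MODULI.
import random
from functools import reduce

PRIME = 2**61 - 1            # prime field for the Lagrange shares (assumed)
SCALE = 10**6                # fixed-point scaling for real-valued parameters
MODULI = [1009, 1013, 1019]  # pairwise-coprime verification moduli (assumed)

def share(secret, t, xs):
    """Split `secret` into Lagrange shares of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return {x: sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
            for x in xs}

def reconstruct(shares):
    """Recover the constant term f(0) by Lagrange interpolation."""
    total = 0
    for xi, yi in shares.items():
        num = den = 1
        for xj in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Toy run: 3 users, each holding a scalar parameter w_i and data size n_i.
users = [(0.80, 100), (0.50, 300), (0.20, 600)]
xs, t = [1, 2, 3], 2

# Each user shares n_i * w_i and n_i; the aggregator only ever adds shares,
# so it never sees an individual parameter or data size in the clear.
agg_wn = dict.fromkeys(xs, 0)
agg_n = dict.fromkeys(xs, 0)
for w, n in users:
    for x, s in share(int(w * SCALE) * n, t, xs).items():
        agg_wn[x] = (agg_wn[x] + s) % PRIME
    for x, s in share(n, t, xs).items():
        agg_n[x] = (agg_n[x] + s) % PRIME

sum_wn = reconstruct(agg_wn)  # = sum_i n_i * w_i  (fixed-point)
sum_n = reconstruct(agg_n)    # = sum_i n_i
print("weighted average:", sum_wn / SCALE / sum_n)  # expect 0.35

# CRT-based check of the aggregated data size: users contribute the residues
# n_i mod m_j, and the summed residues determine sum_i n_i uniquely modulo
# the product of the moduli, so a wrong server total is detectable.
residues = [sum(n % m for _, n in users) % m for m in MODULI]

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = reduce(lambda a, b: a * b, moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

assert crt(residues, MODULI) == sum_n  # verification passes
```

The design point the sketch captures is that recovering the two aggregates sum_i(n_i * w_i) and sum_i(n_i) suffices for weighted aggregation, since their quotient is exactly the data-size-weighted average of the user models; here (0.8·100 + 0.5·300 + 0.2·600)/1000 = 0.35.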
Published in: IEEE Transactions on Information Forensics and Security (Volume: 20)