Abstract:
Federated learning has emerged as a promising paradigm for large-scale collaborative training, harnessing diverse local datasets from different clients to jointly train global models. In real-world deployments, client data may contain label noise, which degrades the quality of the global model. Existing label-correction solutions assume that all clients are trustworthy and make no attempt to detect malicious clients, and are therefore neither practical nor privacy-preserving. In this paper, we present zkCor, an efficient and reliable label noise correction scheme with zero-knowledge confidentiality. Our method builds on FedCorr [1] but under more relaxed security assumptions. zkCor arises from the synergy of a label noise correction protocol and zero-knowledge proofs (ZKPs), requiring each client to provide a computation integrity proof to the aggregator in each iteration; clients are thus compelled to jointly guarantee the reliability of label correction. We further devise a batch ZKP that is efficient and better suited to federated learning settings. We rigorously describe the building blocks of zkCor and implement a complete prototype. Extensive experiments demonstrate that zkCor achieves 2 to 30 times better performance than the baseline approach on verification workloads, with nearly no extra proving cost on the client side.
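For illustration only, the sketch below outlines one round of a zkCor-style workflow as described in the abstract: each client corrects its noisy labels, trains locally, and attaches a computation-integrity proof; the aggregator verifies the proofs (the step zkCor batches) before aggregating. All names and interfaces are assumptions, and the hash-based "proof" is merely a stand-in for a real zero-knowledge proof system, not the authors' implementation.

# Hypothetical sketch of one zkCor-style round. The toy hash "proof" below
# only stands in for a real ZKP of computation integrity.
import hashlib
import json
from dataclasses import dataclass
from typing import Dict, List

Model = Dict[str, float]

@dataclass
class ClientUpdate:
    client_id: int
    model_delta: Model
    proof: str  # placeholder for a zero-knowledge computation-integrity proof

def correct_labels(local_data: List[dict], global_model: Model) -> List[dict]:
    # Placeholder for FedCorr-style label correction (identity here, purely illustrative).
    return local_data

def local_train(global_model: Model, data: List[dict]) -> Model:
    # Placeholder local training step producing a small model delta.
    return {k: 0.01 * (i + 1) for i, k in enumerate(global_model)}

def prove(global_model: Model, delta: Model) -> str:
    # Toy "proof": a hash binding the update to the current global model.
    # A real deployment would generate a ZKP over the correction/training computation.
    blob = json.dumps([global_model, delta], sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify(global_model: Model, delta: Model, proof: str) -> bool:
    # Aggregator-side check that the claimed update matches its proof.
    return prove(global_model, delta) == proof

def client_round(cid: int, global_model: Model, local_data: List[dict]) -> ClientUpdate:
    # Client: correct labels, train locally, and prove the computation was done honestly.
    corrected = correct_labels(local_data, global_model)
    delta = local_train(global_model, corrected)
    return ClientUpdate(cid, delta, prove(global_model, delta))

def aggregator_round(global_model: Model, updates: List[ClientUpdate]) -> Model:
    # Aggregator: verify every client's proof (zkCor batches this verification),
    # discard unverifiable updates, then aggregate the rest FedAvg-style.
    valid = [u for u in updates if verify(global_model, u.model_delta, u.proof)]
    if not valid:
        return global_model
    return {k: global_model[k] + sum(u.model_delta[k] for u in valid) / len(valid)
            for k in global_model}

if __name__ == "__main__":
    model = {"w0": 0.0, "w1": 0.0}
    updates = [client_round(cid, model, local_data=[]) for cid in range(3)]
    print(aggregator_round(model, updates))

The design point mirrored here is that aggregation only consumes updates whose proofs verify, so unreliable or malicious label correction is rejected before it can influence the global model.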
Date of Conference: 13-16 May 2024
Date Added to IEEE Xplore: 23 July 2024