Abstract:
Privacy-Preserving Federated Learning (PPFL) has gained significant attention for training machine learning models on distributed data without compromising user privacy. While recent frameworks have employed chained secure multi-party computation (SMPC) to enhance privacy, the impact of multi-chain aggregation in a Client-Edge-Cloud Hierarchical Federated Learning (HFL) architecture on model accuracy and scalability remains under-explored. In this paper, we propose a Privacy-Preserving Hierarchical Federated Learning approach (MC-PPHFL) based on a secure multi-chain aggregation mechanism. Our study investigates the impact of multi-level aggregation on model accuracy and scalability, providing deeper insights into privacy-preserving techniques. We report on simulation-based experiments using the publicly available MNIST dataset, comparing our model with state-of-the-art schemes. Our results demonstrate that MC-PPHFL configurations with fewer edge servers and chains achieve a balance between scalability and accuracy, comparable to Chain-PPFL and FedAVG, while offering better utilisation of computational resources. Configurations with more chains per edge server and fewer training rounds enhance scalability with some trade-offs in accuracy. Overall, our MC-PPHFL framework presents a promising solution for scalable and privacy-preserving federated learning in resource-constrained environments.
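To make the Client-Edge-Cloud aggregation idea concrete, the following is a minimal sketch of two-level FedAvg-style aggregation, where each chain of clients at an edge server contributes one model update, edges aggregate their chains, and the cloud aggregates the edge models. All names here (`fedavg`, `hierarchical_aggregate`, the sample weighting) are illustrative assumptions, not code from the paper, and the secure-chaining (SMPC masking) step is omitted.

```python
# Illustrative sketch only: plain two-level weighted averaging
# (client chains -> edge servers -> cloud), without the SMPC masking
# that the MC-PPHFL scheme adds for privacy.

def fedavg(updates, weights):
    """Weighted average of model updates (each a list of floats)."""
    total = sum(weights)
    return [
        sum(w * u[i] for u, w in zip(updates, weights)) / total
        for i in range(len(updates[0]))
    ]

def hierarchical_aggregate(edges):
    """edges: list of edge servers; each is a list of
    (chain_update, num_samples) pairs, one per client chain."""
    edge_models, edge_sizes = [], []
    for chains in edges:
        updates = [u for u, _ in chains]
        sizes = [n for _, n in chains]
        edge_models.append(fedavg(updates, sizes))  # edge-level aggregation
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)          # cloud-level aggregation

# Example: two edge servers, each receiving updates from two chains.
edges = [
    [([1.0, 2.0], 10), ([3.0, 4.0], 10)],
    [([5.0, 6.0], 20), ([7.0, 8.0], 20)],
]
global_model = hierarchical_aggregate(edges)
```

The weighting by sample count mirrors standard FedAVG; varying the number of edge servers and chains per edge in this structure is what the paper's scalability/accuracy trade-off study explores.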
Published in: 2024 2nd International Conference on Federated Learning Technologies and Applications (FLTA)
Date of Conference: 17-20 September 2024
Date Added to IEEE Xplore: 21 January 2025