Abstract:
Federated Learning (FL) is a method for training Machine Learning (ML) models across multiple clients while preserving data privacy. To address the challenge of client resource heterogeneity, this paper presents a novel approach that combines the Forward-Forward (FF) algorithm with Back Propagation (BP). The integration yields a blockwise network structure that converges robustly without relying on the chain rule, dividing the model into subnetworks that can be trained efficiently. This strategy allows network segments to be allocated dynamically to clients according to their computational resources, so each subnetwork is optimized independently, preventing delays and memory issues. Experiments in IID and non-IID settings across several datasets assess the viability of the methodology, focusing on how data and label distributions affect convergence. The study also adapts weight-aggregation and regularization techniques such as FedAvg and FedProx to examine their effect on this FL approach. Source code available at: https://github.com/MODALUNINA/SHELOB_FFL
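
As a minimal illustration of the blockwise idea described in the abstract (a sketch, not the authors' implementation), the following PyTorch snippet trains each block with a local Forward-Forward "goodness" objective, so no gradient crosses block boundaries and the chain rule is never applied across blocks. The block sizes, goodness threshold, optimizer settings, and the random positive/negative inputs are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FFBlock(nn.Module):
        """One independently trainable block with a local FF objective."""
        def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
            super().__init__()
            self.layer = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
            self.threshold = threshold  # assumed goodness threshold
            self.opt = torch.optim.Adam(self.parameters(), lr=lr)

        def goodness(self, h):
            # Mean squared activation per sample, as in Forward-Forward.
            return h.pow(2).mean(dim=1)

        def train_step(self, x_pos, x_neg):
            g_pos = self.goodness(self.layer(x_pos))
            g_neg = self.goodness(self.layer(x_neg))
            # Push positive goodness above the threshold and negative below it
            # (softplus form of the FF loss).
            margins = torch.cat([self.threshold - g_pos, g_neg - self.threshold])
            loss = torch.log1p(torch.exp(margins)).mean()
            self.opt.zero_grad()
            loss.backward()  # gradients stay inside this block
            self.opt.step()
            # Detach outputs so the next block never backpropagates into this one.
            with torch.no_grad():
                return self.layer(x_pos), self.layer(x_neg)

    # A client trains only the blocks assigned to it by the server:
    blocks = [FFBlock(784, 512), FFBlock(512, 512)]
    x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)  # placeholder data
    for b in blocks:
        x_pos, x_neg = b.train_step(x_pos, x_neg)

In the federated setting, a server would then aggregate each block's weights across the clients that hold it, for example with FedAvg-style weighted averaging of the blocks' state dictionaries.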
Published in: 2024 IEEE 44th International Conference on Distributed Computing Systems Workshops (ICDCSW)
Date of Conference: 23 July 2024
Date Added to IEEE Xplore: 04 September 2024