
Towards an Efficient Federated Learning Framework with Selective Aggregation



Abstract:

Federated Learning shows promise for collaborative, decentralized machine learning but faces efficiency challenges, primarily latency bottlenecks induced by network stragglers and the need for complex aggregation techniques. To address these issues, ongoing research explores asynchronous FL models, including an Asynchronous Parallel Federated Learning [5] framework. This study investigates the impact of varying the number of worker nodes on key metrics. More nodes offer faster convergence but may increase communication overhead and straggler vulnerability. We aim to quantify how varying the number of worker node updates used for one global aggregation affects convergence speed, communication efficiency, model accuracy, and system robustness, in order to optimize asynchronous FL system configurations. This work is crucial for practical and scalable FL applications, mitigating challenges of network stragglers, data distribution, and security. It analyses Asynchronous Parallel Federated Learning and showcases a paradigm shift in the approach by selectively aggregating early-arriving worker node updates with a novel parameter ‘x’, improving efficiency and reshaping FL.
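As a minimal illustration of the selective-aggregation idea described in the abstract, the sketch below simulates a server that, in each global round, aggregates only the first x worker updates to arrive rather than waiting for all workers. All names (simulate_round, local_update, the exponential delay model) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, noise_scale=0.1):
    # Stand-in for one worker's local training step on its own data.
    return global_weights + rng.normal(0.0, noise_scale, global_weights.shape)

def simulate_round(global_weights, num_workers, x):
    # Each worker finishes after a random delay; stragglers arrive last.
    delays = rng.exponential(1.0, num_workers)
    updates = [(d, local_update(global_weights)) for d in delays]
    updates.sort(key=lambda t: t[0])          # order updates by arrival time
    earliest = [w for _, w in updates[:x]]    # keep only the first x arrivals
    return np.mean(earliest, axis=0)          # aggregate the selected updates

weights = np.zeros(10)
for _ in range(5):                            # five global aggregation rounds
    weights = simulate_round(weights, num_workers=20, x=5)

In this toy setting, varying x trades off the number of contributions per round against the time spent waiting on slow workers, which is the configuration question the study quantifies.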
Date of Conference: 03-07 January 2024
Date Added to IEEE Xplore: 16 February 2024
Conference Location: Bengaluru, India
