
Towards an Efficient Federated Learning Framework with Selective Aggregation


Abstract:

Federated Learning shows promise for collaborative, decentralized machine learning but faces efficiency challenges, primarily network straggler-induced latency bottlenecks and the need for complex aggregation techniques. To address these issues, ongoing research explores asynchronous federated learning (FL) models, including the Asynchronous Parallel Federated Learning [5] framework. This study investigates the impact of varying the number of worker nodes on key metrics: more nodes can offer faster convergence but may also increase communication overhead and vulnerability to stragglers. We aim to quantify how varying the number of worker-node updates used for one global aggregation affects convergence speed, communication efficiency, model accuracy, and system robustness, in order to optimize asynchronous FL system configurations. This work is crucial for practical and scalable FL applications, mitigating challenges around network stragglers, data distribution, and security. It analyses Asynchronous Parallel Federated Learning and showcases a paradigm shift in the approach by selectively aggregating early-arriving worker-node updates, governed by a novel parameter ‘x’, improving efficiency and reshaping FL.
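
The selective-aggregation idea described in the abstract, aggregating only the first x worker updates to reach the server rather than waiting for all of them, can be illustrated with a minimal Python sketch. The class name, the queue-based arrival model, and the plain averaging rule below are all assumptions made for illustration, not the paper's actual implementation, which builds on the Asynchronous Parallel Federated Learning [5] framework.

```python
import queue
import numpy as np

class SelectiveAggregator:
    """Hypothetical sketch: aggregate only the first x worker updates per round.

    The queue models asynchronous arrival of updates at the server; the
    plain averaging rule is an assumption for illustration.
    """

    def __init__(self, num_workers: int, x: int):
        assert 1 <= x <= num_workers, "x must select a subset of workers"
        self.x = x                    # the paper's parameter 'x'
        self.updates = queue.Queue()  # worker updates land here as they arrive

    def submit(self, worker_id: int, weights: np.ndarray) -> None:
        # Called from a worker's thread/RPC handler when local training finishes.
        self.updates.put((worker_id, weights))

    def aggregate_round(self) -> np.ndarray:
        # Block only until the first x updates have arrived; slower
        # (straggler) updates for this round are never waited on.
        early = [self.updates.get()[1] for _ in range(self.x)]
        return np.mean(early, axis=0)
```

In this sketch, a smaller x shortens each round because the server never waits for stragglers, at the cost of averaging fewer updates per round; that trade-off against convergence speed, communication efficiency, accuracy, and robustness is what the study quantifies.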
Date of Conference: 03-07 January 2024
Date Added to IEEE Xplore: 16 February 2024
Conference Location: Bengaluru, India

I. Introduction

Federated Learning (FL) is an innovative approach to machine learning that allows multiple devices to collaboratively train a shared model while preserving the privacy of their data. This decoupling of model training from direct data access is a game-changer, particularly in sensitive fields like biomedicine and finance, where data privacy and security are paramount. Most existing FL methods, including the pioneering FedAvg [3], operate synchronously (SyncFL), involving a central server broadcasting the global model, edge devices updating their local models with private data [1], [2], [4], and the central server aggregating updates to create the next global model. However, in real-world scenarios marked by device heterogeneity, two challenges hinder SyncFL's efficiency: (i) straggler latency, because the server must wait for the slowest participating device before it can aggregate, so a single slow or unreachable node stalls the entire round; and (ii) aggregation overhead, because combining many heterogeneous updates per round demands increasingly complex aggregation techniques.
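
As context for these challenges, here is a minimal sketch of the synchronous round just described. The local_train callable and the client objects are placeholders, while the sample-count-weighted average is the standard FedAvg aggregation rule [3].

```python
import numpy as np

def fedavg_round(global_weights: np.ndarray, clients, local_train) -> np.ndarray:
    """One synchronous FedAvg round (sketch; local_train is a placeholder).

    The server broadcasts global_weights, every client trains locally, and
    the round cannot finish until the slowest client has returned.
    """
    updates, sizes = [], []
    for client in clients:
        # local_train returns the client's updated weights w_k and its
        # local sample count n_k; this loop blocks on every client.
        w_k, n_k = local_train(global_weights, client)
        updates.append(w_k)
        sizes.append(n_k)
    total = sum(sizes)
    # Standard FedAvg aggregation: w <- sum_k (n_k / total) * w_k
    return sum((n_k / total) * w_k for w_k, n_k in zip(updates, sizes))
```

The blocking loop is the crux: the round completes only when the slowest client returns, which is precisely the straggler bottleneck that asynchronous and selective-aggregation schemes aim to remove.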

Cites in Papers - IEEE (1)
1. Satvik Rajesh, Ishaan Gakhar, Rajeev Shorey, Rohit Verma, "Unveiling the Trade-offs: A Parameter-Centric Comparison of Synchronous and Asynchronous Federated Learning", 2025 17th International Conference on COMmunication Systems and NETworks (COMSNETS), pp. 1013-1017, 2025.

