I. Introduction
Federated Learning (FL) is a machine learning paradigm that allows multiple devices to collaboratively train a shared model while keeping their raw data private. This decoupling of model training from direct data access is especially valuable in sensitive domains such as biomedicine and finance, where data privacy and security are paramount. Most existing FL methods, including the pioneering FedAvg [3], operate synchronously (SyncFL): a central server broadcasts the global model, edge devices update their local models on private data [1], [2], [4], and the server aggregates the updates into the next global model. However, in real-world deployments marked by device heterogeneity, two challenges hinder SyncFL's efficiency:
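The synchronous round structure described above (broadcast, local update, weighted aggregation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quadratic local objective, learning rate, and data sizes are illustrative assumptions; only the size-weighted averaging rule is the standard FedAvg aggregation.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's local training: a few gradient steps on its private data
    # (least-squares loss as a hypothetical stand-in for the local objective).
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    # One synchronous round: broadcast w_global, collect local updates,
    # then average them weighted by each client's data size (FedAvg rule).
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for X, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Simulated clients with heterogeneous data sizes (illustrative data).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # converges toward w_true
```

Note that every round blocks until all clients have returned their updates; this barrier is precisely what makes SyncFL sensitive to slow (straggler) devices under heterogeneity.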