Abstract:
We propose FLASH-RL, a framework utilizing Double Deep Q-Learning (DDQL) to address system and static heterogeneity in Federated Learning (FL). FLASH-RL introduces a new reputation-based utility function to evaluate client contributions based on their current and past performance. Additionally, an adapted DDQL algorithm is proposed to expedite the learning process. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that FLASH-RL strikes a balance between model performance and end-to-end latency, reducing latency by up to 24.83% compared to FedAVG and 24.67% compared to FAVOR. It also reduces training rounds by up to 60.44% compared to FedAVG and 76% compared to FAVOR. Similar improvements are observed on the MobiAct dataset for fall detection, underscoring the real-world applicability of our approach.
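The abstract names two mechanisms: a reputation-based utility that weighs a client's current and past contributions, and client selection trained with Double Deep Q-Learning. The sketch below is not the authors' implementation; it only illustrates assumed minimal forms of both ideas. The blending coefficient beta, the discount gamma, and the toy Q-values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch, assuming an exponential blend for the reputation utility and the
# standard Double DQN target; not FLASH-RL's actual formulation.
import numpy as np


def reputation_utility(current_perf: float, past_reputation: float, beta: float = 0.5) -> float:
    """Blend a client's current-round contribution with its accumulated reputation.

    beta is an assumed weighting hyperparameter (not from the paper).
    """
    return beta * current_perf + (1.0 - beta) * past_reputation


def double_dqn_target(reward: float, next_q_online: np.ndarray,
                      next_q_target: np.ndarray, gamma: float = 0.99) -> float:
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, which reduces overestimation bias."""
    best_action = int(np.argmax(next_q_online))
    return reward + gamma * float(next_q_target[best_action])


if __name__ == "__main__":
    # Toy example: a client performed well this round (0.8) with a weaker history (0.4).
    print(reputation_utility(0.8, 0.4))  # -> 0.6 with beta = 0.5

    # Toy Q-values over three candidate client-selection actions.
    q_online = np.array([0.2, 0.9, 0.1])
    q_target = np.array([0.3, 0.7, 0.5])
    print(double_dqn_target(reward=1.0, next_q_online=q_online, next_q_target=q_target))
```

In an FL round, such a utility could feed the reward signal for the selection agent, while the Double DQN target would be used when updating the agent's online network; how FLASH-RL concretely defines state, action, and reward is detailed in the paper itself.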
Date of Conference: 06-08 November 2023
Date Added to IEEE Xplore: 22 December 2023
ISBN Information: