Abstract:
Federated Learning (FL) has emerged as a privacy-preserving paradigm enabling collaborative model training among distributed clients. However, current FL methods operate under the closed-world assumption, i.e., that all local training data originates from a global labeled dataset balanced across classes, which is often invalid in practical scenarios. In contrast, in many open-world settings, data often exhibit heavy-tailed class distributions, particularly in mobile computing and the Internet of Things (IoT). Heavy-tailed data can significantly degrade the performance of learning algorithms because they amplify the heterogeneity of the FL environment. To this end, we introduce a novel framework that counters biased training caused by diverse and imbalanced classes. The framework includes a balance-aware reward aggregation mechanism that addresses the disparity between locally majority and globally minority classes: rewards are assigned according to each client's class prevalence to yield a fair aggregation. A calibration module supplements global aggregation to resolve conflicts arising from inconsistent data distributions across clients. Together, reward aggregation and calibration effectively mitigate the effects of heavy-tailed distributions and improve FL model performance. The framework integrates seamlessly with leading FL methods, as demonstrated through extensive experiments on benchmark and real-world datasets.
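The abstract's balance-aware reward aggregation can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes one plausible reading in which each client's aggregation weight grows with the inverse global frequency of the classes it holds, so clients covering globally rare classes contribute more to the FedAvg-style weighted average. All function names and the inverse-frequency scoring rule below are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's method): balance-aware weighted
# aggregation where clients holding globally rare classes earn larger
# rewards, counteracting the heavy-tailed class distribution.

def class_frequencies(client_counts):
    """Global per-class sample counts, summed over all clients."""
    totals = {}
    for counts in client_counts:
        for c, n in counts.items():
            totals[c] = totals.get(c, 0) + n
    return totals

def balance_aware_weights(client_counts):
    """Reward each client by inverse global frequency of its classes
    (assumed scoring rule), then normalize to aggregation weights."""
    totals = class_frequencies(client_counts)
    rewards = []
    for counts in client_counts:
        # Rare classes (small totals[c]) contribute more per sample.
        rewards.append(sum(n / totals[c] for c, n in counts.items()))
    s = sum(rewards)
    return [r / s for r in rewards]

def aggregate(models, weights):
    """FedAvg-style weighted average of per-client parameter vectors."""
    dim = len(models[0])
    return [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(dim)]

# Two clients: client 0 holds only the majority class 'a';
# client 1 also covers the globally rare class 'b'.
counts = [{"a": 90}, {"a": 10, "b": 10}]
w = balance_aware_weights(counts)
print(w)  # client 1 receives the larger weight
global_model = aggregate([[1.0, 0.0], [0.0, 1.0]], w)
print(global_model)
```

Under this assumed rule, client 1's coverage of the rare class 'b' (10 of 10 global samples) outweighs client 0's much larger share of the common class 'a', so the global model is pulled toward the minority-class client, which is the behavior the abstract attributes to reward-based aggregation.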
Published in: IEEE Transactions on Mobile Computing (Volume: 23, Issue: 12, December 2024)