Abstract:
Federated Learning (FL) is a privacy-preserving, massively distributed Machine Learning (ML) paradigm in which many clients cooperate to train a shared machine learning model. FL, however, is susceptible to data heterogeneity problems because FL clients draw on diverse data sources. Prior works employ auto-weighted model aggregation to mitigate the heterogeneity issue and minimize the impact of unfavorable model updates. However, existing approaches require extensive computation for statistical analysis of clients' model updates. To circumvent this, we propose FedASL (Federated Learning with Auto-weighted Aggregation based on Standard Deviation of Training Loss), which uses only the local training loss of FL clients to auto-weight the model aggregation. Our evaluation on three datasets and under various data corruption scenarios reveals that FedASL effectively thwarts data corruption from bad clients while incurring as little as one-tenth of the computation cost of existing approaches.
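The abstract describes weighting each client's contribution using only its reported local training loss, with deviation measured in standard deviations. A minimal sketch of that idea is shown below; the function name `fedasl_aggregate`, the exponential down-weighting of loss outliers, and the data layout are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def fedasl_aggregate(client_updates, client_losses):
    """Sketch of loss-based auto-weighted aggregation (assumed weighting rule).

    client_updates: list of dicts mapping parameter name -> np.ndarray
    client_losses:  list of local training losses reported by the clients
    """
    losses = np.asarray(client_losses, dtype=float)
    mu, sigma = losses.mean(), losses.std() + 1e-12  # avoid division by zero

    # Assumption: clients whose loss deviates strongly from the population
    # mean (in units of standard deviation) are down-weighted, so corrupted
    # or outlier clients contribute less to the global model.
    z = np.abs(losses - mu) / sigma
    weights = np.exp(-z)
    weights /= weights.sum()

    # Weighted average of the parameter updates, using only the losses above;
    # no per-parameter statistical analysis of the updates is needed.
    aggregated = {}
    for name in client_updates[0]:
        aggregated[name] = sum(w * upd[name]
                               for w, upd in zip(weights, client_updates))
    return aggregated
```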
Date of Conference: 10-16 July 2022
Date Added to IEEE Xplore: 24 August 2022