A Channel Aggregation Based Dynamic Pruning Method in Federated Learning


Abstract:

Federated Learning (FL) provides a new solution to the conflict between model training based on data aggregation and data privacy protection. However, in FL, end nodes (or clients) must constantly exchange model parameters with the aggregation server, resulting in significant communication overhead. This paper proposes a structured compression-based method. During local training, each client iteratively prunes the original network, using the L1 norm of each channel as an importance measure, so that it dynamically converges to a sub-network whose structure fits its local data. When the server aggregates the heterogeneous networks from all clients, it averages the corresponding channels, significantly reducing communication and computation overhead. The proposed method was evaluated on several public image classification datasets; compared with the original network, the model parameter volume was compressed by 61% to 91%. Moreover, while maintaining model accuracy, the proposed method reduced training time by 39% to 54% and decreased the number of floating-point operations by 21% to 44% compared with existing methods.
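To make the channel-level pruning and aggregation ideas concrete, the sketch below is a minimal NumPy illustration, not the authors' implementation: it scores each convolutional output channel by the L1 norm of its filter, keeps the highest-scoring channels on each client, and averages each channel over the clients that retained it. The function names, the keep_ratio parameter, the layer shapes, and the per-channel averaging rule are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): L1-norm channel importance on a
# client and a simple channel-wise averaging step on the server.
import numpy as np

def channel_l1_scores(conv_weight: np.ndarray) -> np.ndarray:
    """L1 norm of each output channel's filter; conv_weight has shape
    (out_channels, in_channels, kH, kW)."""
    return np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)

def prune_channels(conv_weight: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the channels with the largest L1 norms; return a boolean mask."""
    scores = channel_l1_scores(conv_weight)
    n_keep = max(1, int(round(keep_ratio * scores.size)))
    keep_idx = np.argsort(scores)[-n_keep:]
    mask = np.zeros(scores.size, dtype=bool)
    mask[keep_idx] = True
    return mask

def aggregate_channelwise(client_weights, client_masks):
    """Average each output channel over the clients that kept it
    (a hypothetical reading of the channel aggregation step)."""
    agg = np.zeros_like(client_weights[0])
    for c in range(client_masks[0].size):
        kept = [w[c] for w, m in zip(client_weights, client_masks) if m[c]]
        if kept:
            agg[c] = np.mean(kept, axis=0)
    return agg

# Example: three clients, each pruning a 16-channel conv layer to 50%.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 8, 3, 3)) for _ in range(3)]
masks = [prune_channels(w, keep_ratio=0.5) for w in weights]
aggregated = aggregate_channelwise(weights, masks)
print(aggregated.shape)  # (16, 8, 3, 3)
```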
Date of Conference: 04-08 December 2023
Date Added to IEEE Xplore: 26 February 2024
Conference Location: Kuala Lumpur, Malaysia
