
Quantization and Knowledge Distillation for Efficient Federated Learning on Edge Devices


Abstract:

Federated learning enables distributed machine learning over decentralized data on edge devices. Because communication is a critical bottleneck in federated learning, we apply model compression techniques to make it more efficient. First, we propose an adaptive quantized federated averaging algorithm that reduces communication cost by dynamically quantizing neural network weights. Then, we design a federated knowledge distillation method that yields high-quality small models with limited labeled data. Adaptive quantized federated learning can significantly speed up model training while retaining model accuracy. With only a small fraction of the data labeled, our federated knowledge distillation can reach a target accuracy otherwise achieved by supervised learning on the entire labeled data set.
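
The abstract only summarizes the two techniques, so the sketches below are illustrative rather than the authors' implementation. The first is a minimal example of communication-efficient federated averaging with uniform weight quantization; the bit-width schedule in pick_bits is a hypothetical stand-in for the paper's adaptive policy.

```python
import numpy as np

def quantize(weights, num_bits):
    # Uniformly quantize a float weight tensor into integer codes,
    # returning the offset and scale needed to reconstruct it.
    w_min, w_max = float(weights.min()), float(weights.max())
    levels = 2 ** num_bits - 1
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((weights - w_min) / scale).astype(np.uint16)
    return codes, w_min, scale

def dequantize(codes, w_min, scale):
    # Recover approximate float weights from the transmitted codes.
    return codes.astype(np.float32) * scale + w_min

def federated_average(client_payloads):
    # Server step: dequantize each client's quantized weights and average.
    restored = [dequantize(*payload) for payload in client_payloads]
    return np.mean(restored, axis=0)

def pick_bits(round_idx, total_rounds, min_bits=4, max_bits=8):
    # Hypothetical adaptive schedule: coarse quantization early,
    # finer quantization as training converges.
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(min_bits + frac * (max_bits - min_bits)))
```

The second sketch shows a standard knowledge-distillation loss of the kind a federated distillation scheme could use: soft teacher targets on unlabeled data, optionally mixed with cross-entropy on the small labeled fraction. The temperature and mixing weight are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels=None,
                      temperature=2.0, alpha=0.5):
    # KL divergence between temperature-softened teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    if labels is None:
        return soft  # unlabeled batch: distillation term only
    # Labeled batch: blend the hard-label loss with the soft-label term.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * hard + (1.0 - alpha) * soft
```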
Date of Conference: 14-16 December 2020
Date Added to IEEE Xplore: 26 April 2021
Conference Location: Yanuca Island, Cuvu, Fiji
