
Two-Stream Federated Learning: Reduce the Communication Costs



Abstract:

The federated learning algorithm solves the problem of training machine learning models over distributed networks consisting of a massive number of modern smart devices. It addresses the challenges of privacy preservation and unbalanced, Non-IID data distributions, and strives to reduce the required communication rounds. However, communication costs remain the principal constraint compared to other factors, such as computation costs. In this paper, we adopt a two-stream model with an MMD (Maximum Mean Discrepancy) constraint, instead of the single model trained on devices in standard federated learning settings. Experiments show that the proposed model outperforms the baseline methods, especially on Non-IID data distributions, and reduces the required communication rounds by more than 20%.
Date of Conference: 09-12 December 2018
Date Added to IEEE Xplore: 25 April 2019
Print on Demand (PoD) ISSN: 1018-8770
Conference Location: Taichung, Taiwan
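
To make the MMD constraint mentioned in the abstract more concrete, below is a minimal sketch, not the authors' code: it only illustrates a standard Gaussian-kernel estimate of squared MMD, the kind of discrepancy term that could be applied between features produced by the frozen global-model stream and the locally trained stream during on-device training. The feature shapes, batch size, and kernel bandwidth are assumptions chosen for illustration.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)) for all pairs of rows.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(features_a, features_b, bandwidth=1.0):
    # Biased estimate of squared MMD between two batches of features.
    k_aa = gaussian_kernel(features_a, features_a, bandwidth)
    k_bb = gaussian_kernel(features_b, features_b, bandwidth)
    k_ab = gaussian_kernel(features_a, features_b, bandwidth)
    return k_aa.mean() + k_bb.mean() - 2.0 * k_ab.mean()

# Example: features of one local batch from the two streams (random stand-ins).
rng = np.random.default_rng(0)
f_global = rng.normal(size=(32, 64))  # features from the frozen global stream
f_local = rng.normal(size=(32, 64))   # features from the trainable local stream
print("MMD^2 constraint term:", mmd2(f_global, f_local))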

I. Introduction

Modern phones and tablets, wearable devices, and smart IoT (Internet of Things) devices generate massive amounts of data every day, which is well suited for training machine learning models. However, this rich data is often privacy sensitive and large in quantity, so uploading it to a server and training there with traditional methods is affordable neither in terms of privacy nor cost. Owing to the growing computational power of these devices, federated learning [1][2][3] offers a way to train a shared model directly on devices by aggregating locally-computed updates. Potential applications include learning to recognize the activities of phone users, predicting health events, and learning personalized typing recommendation systems. Although the federated learning algorithm reduces the required communication rounds compared to synchronous stochastic gradient descent (SGD) methods, communication costs remain the principal constraint.
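
As a rough illustration of the "aggregating locally-computed updates" step described above, the following is a minimal sketch of a weighted federated averaging round in the spirit of FedAvg. The flat parameter vectors, the three simulated clients, and the sample-count weighting are simplifying assumptions for illustration, not the paper's implementation.

import numpy as np

def federated_average(client_weights, client_num_samples):
    # Weighted average of client parameters (one flat parameter vector per client),
    # with each client weighted by the size of its local dataset.
    total = float(sum(client_num_samples))
    aggregate = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_num_samples):
        aggregate += (n / total) * w
    return aggregate

# Example round: three clients return locally updated parameters of a tiny model.
rng = np.random.default_rng(1)
global_model = rng.normal(size=10)
client_models = [global_model + 0.01 * rng.normal(size=10) for _ in range(3)]
num_samples = [120, 80, 200]  # local dataset sizes used as aggregation weights
global_model = federated_average(client_models, num_samples)
print("new global model:", global_model)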

