
Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach



Abstract:

To exploit massive amounts of data generated at mobile edge networks, federated learning (FL) has been proposed as an attractive substitute for centralized machine learning (ML). By collaboratively training a shared learning model at edge devices, FL avoids direct data transmission and thus overcomes the high communication latency and privacy issues of centralized ML. To improve the communication efficiency of FL model aggregation, over-the-air computation has been introduced to support the simultaneous uploading of local models from a large number of devices by exploiting the inherent superposition property of wireless channels. However, due to the heterogeneity of communication capacities among edge devices, over-the-air FL suffers from the straggler issue, in which the device with the weakest channel becomes the bottleneck of model aggregation performance. This issue can be alleviated to some extent by device selection, but device selection itself faces a tradeoff between data exploitation and model communication. In this paper, we leverage reconfigurable intelligent surface (RIS) technology to relieve the straggler issue in over-the-air FL. Specifically, we develop a learning analysis framework to quantitatively characterize the impact of device selection and model aggregation error on the convergence of over-the-air FL. Then, we formulate a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration. Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with state-of-the-art approaches, especially when channel conditions vary dramatically across edge devices.
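The straggler issue described above can be illustrated with a small sketch. Under a common uniform-forcing transceiver model (an assumption for illustration, not the paper's specific design), each device pre-scales its update by `eta / h_k` so the superposed signal equals the sum of local updates; the per-device power constraint then forces `eta` to be capped by the weakest channel gain, which amplifies the receiver noise when one channel is poor:

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 5, 4

# Hypothetical local model updates and channel gains (illustrative values).
updates = rng.normal(size=(num_devices, dim))
channels = np.array([1.0, 0.8, 0.6, 0.4, 0.05])  # last device is a straggler
power_limit = 1.0

# Uniform forcing: device k transmits (eta / h_k) * g_k, so the channel
# superposition yields eta * sum_k g_k. The power constraint
# |eta / h_k|^2 <= P caps eta at sqrt(P) * min_k h_k.
eta = np.sqrt(power_limit) * channels.min()
tx = (eta / channels)[:, None] * updates

# Received signal: superposition over the multiple-access channel plus noise.
noise = rng.normal(scale=0.1, size=dim)
received = (channels[:, None] * tx).sum(axis=0) + noise

# PS estimate of the average update; noise is scaled by 1/eta, so a weak
# straggler channel (small eta) inflates the aggregation error.
estimate = received / (eta * num_devices)
true_avg = updates.mean(axis=0)
mse = np.mean((estimate - true_avg) ** 2)
```

Dropping the straggler (device selection) raises `eta` and lowers the noise amplification, but at the cost of excluding that device's data, which is exactly the data-exploitation versus model-communication tradeoff the paper targets.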
Published in: IEEE Transactions on Wireless Communications ( Volume: 20, Issue: 11, November 2021)
Page(s): 7595 - 7609
Date of Publication: 10 June 2021


I. Introduction

The availability of massive amounts of data at mobile edge devices has led to a surge of interest in developing artificial intelligence (AI) services, such as image recognition [2] and natural language processing [3], at the edge of wireless networks. Conventional machine learning (ML) requires a data center to collect all data for centralized model training. In a wireless system, collecting data from distributed mobile devices incurs huge energy/bandwidth cost, high time delay, and potential privacy issues [4]. To address these challenges, a new paradigm called federated learning (FL) has emerged [5]. In a typical FL framework, each edge device computes its local model updates based on its own dataset and uploads the model updates to a parameter server (PS). The global model is computed at the PS and shared with the devices. By doing so, direct data transmission is replaced by model parameter uploading. This significantly relieves the communication burden and prevents revealing local data to the other devices and the PS.
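The FL workflow described above can be sketched as a minimal federated averaging loop (the aggregation rule here is plain model averaging, an illustrative assumption; the datasets and least-squares objective are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device's local training: a few gradient steps on least-squares loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Hypothetical per-device datasets; raw data never leaves the device.
dim, n = 3, 20
w_true = rng.normal(size=dim)
datasets = []
for _ in range(4):
    X = rng.normal(size=(n, dim))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    datasets.append((X, y))

w_global = np.zeros(dim)
for _ in range(20):
    # Each device trains locally, then uploads only its model parameters.
    local_models = [local_update(w_global.copy(), X, y) for X, y in datasets]
    # The PS aggregates the models and broadcasts the new global model.
    w_global = np.mean(local_models, axis=0)
```

Only `local_models` cross the network each round, which is what replaces direct data transmission with model parameter uploading.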

