The GA presents the research objectives, methodologies, and findings of our work, which involves the implementation of Federated Learning without a central server using s...
Abstract:
Federated Learning (FL) provides a mechanism for decentralised training of machine learning (ML) models while inherently preserving privacy. Classical FL is implemented as a client-server system, known as Centralised Federated Learning (CFL). CFL has inherent challenges: all participants must interact with a central server, creating a potential communication bottleneck and a single point of failure. In addition, a central server is difficult to deploy in some scenarios due to implementation cost and complexity. This study uses Decentralised Federated Learning (DFL), without a central server, in which participants collaborate through their one-hop neighbours. Such collaboration depends on the dynamics of the communication network, e.g., its topology, the MAC protocol, and both large-scale and small-scale fading on links. In this paper, we employ stochastic geometry to model these dynamics explicitly, allowing us to quantify the performance of DFL. The core objective is to achieve better classification without sacrificing privacy while accommodating networking dynamics. We are interested in how such topologies impact the performance of ML when deployed in practice. The proposed system is trained on the well-known MNIST benchmark dataset, which contains 60K labelled images of size 28×28 pixels; 1000 random samples of MNIST are assigned to each participant's device. Each participant's device implements a CNN as its classifier model. To evaluate the performance of the model, a number of participants are randomly selected from the network. Due to randomness in the communication process, these participants interact with a random number of nodes in their neighbourhood to exchange model parameters, which are subsequently used to update the participants' individual models. These participants connected successfully with a varying number of neighbours to exchange parameters...
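The serverless exchange described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: parameter vectors stand in for CNN weights, a random symmetric adjacency matrix stands in for the stochastic-geometry link model, and all names and sizes (`NUM_NODES`, `PARAM_DIM`, `LINK_PROB`) are assumptions chosen for readability. In each round, every node averages its own parameters with those of the one-hop neighbours it successfully reached.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 20 participant devices, each holding a small
# parameter vector (a stand-in for the CNN model weights).
NUM_NODES, PARAM_DIM = 20, 8
params = rng.normal(size=(NUM_NODES, PARAM_DIM))

# Random symmetric adjacency with no self-links: nodes i and j are
# one-hop neighbours with probability LINK_PROB, mimicking the
# randomness of the wireless links in a given round.
LINK_PROB = 0.3
upper = np.triu(rng.random((NUM_NODES, NUM_NODES)) < LINK_PROB, k=1)
adj = upper | upper.T

def dfl_round(params, adj):
    """One serverless DFL round: each node replaces its parameters with
    the mean of its own and its one-hop neighbours' parameters."""
    new_params = params.copy()
    for i in range(len(params)):
        neighbours = np.flatnonzero(adj[i])
        if neighbours.size:  # isolated nodes keep their local model
            group = np.vstack([params[i], params[neighbours]])
            new_params[i] = group.mean(axis=0)
    return new_params

updated = dfl_round(params, adj)
```

Because each update is a convex combination of local models, repeated rounds typically drive the participants' models toward consensus, with the rate depending on the (random) topology, which is exactly the dependence the stochastic-geometry analysis quantifies.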
Published in: IEEE Access (Volume 11)