Abstract:
Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output (MIMO) networks; federated learning (FL), in turn, outperforms conventional centralized learning by avoiding privacy leakage and reducing training communication overhead. However, realizing an FL-based CSI feedback network consumes more computational resources and time, and the continuous reporting of local models to the base station incurs additional overhead. To overcome these issues, in this letter we propose FBCNet, which combines the advantages of the novel fusion basis (FB) technique and a fully connected complex-valued neural network (CNet) trained with gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of both CNet and FB individually over existing techniques. FBCNet, the combination of FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improved reconstruction performance, lower complexity, reduced training time, and lower transmission overhead. For the distributed array-line of sight topology at a compression ratio (CR) of 20:1, the NMSE and cosine similarity are −8.2837 dB and 0.9262 for FedCNet-G, −3.5291 dB and 0.8452 for FedCNet-NG, and −26.8621 dB and 0.9653 for the proposed FB. At a high CR of 64:1, the NMSE and cosine similarity are −19.7521 dB and 0.9307 for the proposed FBCNet-G, and −24.0442 dB and 0.9539 for FBCNet-NG.
Published in: IEEE Networking Letters ( Volume: 6, Issue: 4, December 2024)
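The abstract reports reconstruction quality as NMSE (in dB) and cosine similarity. A minimal sketch of these two metrics, assuming the standard definitions used in the CSI-feedback literature (the function names and the toy channel vector below are illustrative, not from the letter):

```python
import numpy as np

def nmse_db(h, h_hat):
    # Normalized mean square error between the true and reconstructed
    # channel vectors, expressed in dB.
    return 10 * np.log10(np.mean(np.abs(h - h_hat) ** 2)
                         / np.mean(np.abs(h) ** 2))

def cosine_similarity(h, h_hat):
    # Magnitude of the normalized inner product of the complex channel
    # vectors; np.vdot conjugates its first argument.
    return np.abs(np.vdot(h, h_hat)) / (np.linalg.norm(h)
                                        * np.linalg.norm(h_hat))

# Toy example: a complex channel vector and a slightly perturbed reconstruction.
rng = np.random.default_rng(0)
h = rng.standard_normal(32) + 1j * rng.standard_normal(32)
h_hat = h + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

print(f"NMSE: {nmse_db(h, h_hat):.2f} dB, "
      f"cosine similarity: {cosine_similarity(h, h_hat):.4f}")
```

Lower (more negative) NMSE in dB and cosine similarity closer to 1 both indicate a more faithful CSI reconstruction, which is how the FedCNet and FBCNet variants are compared above.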
- IEEE Keywords
- Index Terms
- Neural Network
- Massive MIMO
- Channel State Information Compression
- Massive MIMO Networks
- Training Time
- Computational Resources
- Base Station
- Compression Ratio
- Communication Overhead
- Reconstruction Performance
- Federated Learning
- Normalized Mean Square Error
- Reduce Training Time
- Gradient Approach
- High Compression Ratio
- Mean Square Error
- Complex Formation
- Convolutional Neural Network
- Hidden Layer
- User Equipment
- Orthogonal Frequency Division Multiplexing
- Normalized Mean Square
- Input Layer
- Channel Vector
- Complex Input
- Reduction In Time
- Eigenvectors
- Partial Differential
- Uniform Linear Array
- Author Keywords