Abstract:
The emergence of 6th generation (6G) mobile networks brings new challenges in supporting high-mobility communications, particularly in addressing the issue of channel aging. Existing channel prediction methods improve accuracy at the expense of increased computational complexity, which limits their practical application in mobile networks. To address these challenges, we present LinFormer, an innovative channel prediction framework based on a scalable, all-linear, encoder-only Transformer model. Our approach, inspired by natural language processing (NLP) models such as BERT, adapts an encoder-only architecture specifically for channel prediction tasks. We propose replacing the computationally intensive attention mechanism commonly used in Transformers with a time-aware multi-layer perceptron (TMLP), significantly reducing computational demands. The inherent time awareness of the TMLP module makes it particularly suitable for channel prediction tasks. We enhance LinFormer's training process by employing a weighted mean squared error loss (WMSELoss) function and data augmentation techniques, leveraging larger, readily available communication datasets. Our approach achieves a substantial reduction in computational complexity while maintaining high prediction accuracy, making it more suitable for deployment in cost-effective base stations (BS). Comprehensive experiments using both simulated and measured data demonstrate that LinFormer outperforms existing methods across various mobility scenarios, offering a promising solution for future wireless communication systems.
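The abstract mentions a weighted mean squared error loss (WMSELoss) but does not give its exact form here. A minimal sketch of one plausible variant, assuming per-time-step weighting of the squared prediction error (the weight vector and its normalization are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def wmse_loss(pred, target, weights=None):
    """Weighted mean squared error over prediction time steps.

    pred, target : arrays of shape (T, ...) where T is the number of
                   predicted time steps.
    weights      : optional per-step weights of shape (T,); defaults to
                   uniform weighting, which recovers plain MSE.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    T = pred.shape[0]
    if weights is None:
        weights = np.ones(T)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    # Mean squared error computed separately for each time step.
    per_step = ((pred - target) ** 2).reshape(T, -1).mean(axis=1)
    return float(np.dot(weights, per_step))
```

With uniform weights this reduces to the ordinary MSE; a weight vector that grows with the step index would emphasize accuracy at longer prediction horizons, one common motivation for such a loss in channel prediction.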
Published in: IEEE Transactions on Wireless Communications ( Early Access )
Index Terms:
- Multiple-input Multiple-output
- Transformer Architecture
- Channel Prediction
- Loss Function
- Prediction Accuracy
- Data Augmentation
- Attention Mechanism
- Base Station
- Wireless Systems
- Mobile Network
- Transformer Model
- Wireless Communication Systems
- Data Augmentation Techniques
- Computational Complexity Reduction
- Model Performance
- Time Step
- Training Dataset
- Simulated Data
- Temporal Information
- Input Sequence
- LSTM Model
- Encoder Layer
- Minimum Mean Square Error Estimator
- Inference Speed
- Input Sequence Length
- Delay Spread
- Self-attention Module
- Channel Matrix
- Minimum Mean Square Error
- Time Division Duplex