Abstract:
Applying self-supervised deep learning improves the processing speed and accuracy of automatic modulation recognition (AMR) and reduces the dependence of previous deep networks on large numbers of labeled samples. However, constrained by an incomplete set of signal representation modes, previous models do not fully exploit the multiview property of signals in self-supervised learning. To deal with this issue, this article proposes a hybrid-view contrastive model for AMR based on a self-supervised learning framework. First, the star video is proposed to complete the set of signal representation modes. Next, a self-supervised learning framework based on hybrid-view contrastive learning, the hybrid-view self-supervised framework (HVSF), is established to fully extract signal features, where signals are augmented across views, including the discrete-sequence, image, and video formats. Finally, considering view-exclusive information loss and model complexity, a weakly contrastive strategy and a Transformer-based view-shared feature extractor are constructed. Evaluation on four standard datasets demonstrates that the proposed model, HVSF, outperforms both self-supervised and supervised models, confirming its superior performance and stability.
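To make the cross-view contrastive idea described above concrete, the following is a minimal, illustrative sketch (not the authors' HVSF implementation) of contrastive learning between two hypothetical views of the same I/Q signal: the raw discrete sequence and a constellation-style image histogram. The toy signal generator, the linear "encoders" standing in for the Transformer-based view-shared extractor, and the NT-Xent-style loss are all assumptions made for illustration only.

```python
# Illustrative sketch of cross-view contrastive learning on modulation signals.
# All component choices here (toy QPSK signal, linear encoders, NT-Xent loss)
# are assumptions for demonstration, not the HVSF design from the paper.
import numpy as np

rng = np.random.default_rng(0)

def iq_sequence(n=128):
    """Toy QPSK-like I/Q sequence with additive noise (stand-in for a dataset sample)."""
    symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n)
    return symbols + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def sequence_view(x):
    """View 1: the discrete sequence, flattened into real and imaginary parts."""
    return np.concatenate([x.real, x.imag])

def image_view(x, bins=8):
    """View 2: a constellation-style 2-D histogram, flattened."""
    hist, _, _ = np.histogram2d(x.real, x.imag, bins=bins, range=[[-2, 2], [-2, 2]])
    return hist.ravel() / hist.sum()

def encode(v, w):
    """Linear 'encoder' plus L2 normalization (placeholder for a learned,
    view-shared feature extractor)."""
    z = np.tanh(v @ w)
    return z / np.linalg.norm(z)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss over a batch of paired view embeddings."""
    z = np.vstack([z1, z2])                      # 2N x d, rows are unit norm
    sim = z @ z.T / temperature                  # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # positive pairs
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

# Batch of unlabeled signals, each represented in two views.
batch = [iq_sequence() for _ in range(16)]
w_seq = rng.standard_normal((256, 32)) * 0.1     # per-view projection weights
w_img = rng.standard_normal((64, 32)) * 0.1
z_seq = np.stack([encode(sequence_view(x), w_seq) for x in batch])
z_img = np.stack([encode(image_view(x), w_img) for x in batch])
print("cross-view contrastive loss:", nt_xent(z_seq, z_img))
```

In a full model, the projection weights would be trained so that the two views of the same signal are pulled together while different signals are pushed apart; the paper additionally introduces a video view and a weakly contrastive strategy, which this sketch does not attempt to reproduce.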
Published in: IEEE Internet of Things Journal (Volume: 12, Issue: 6, 15 March 2025)