Abstract:
Existing deep learning (DL)-based semantic communication generally employs deep neural networks (DNNs) for semantic extraction at a fixed dimension and incorporates channel state information (CSI) to facilitate semantic transmission. However, to reduce channel uses while maintaining robust task performance, the optimal dimension of the semantic information to transmit varies with the wireless channel condition; that is, channel uses must be traded off against performance depending on the channel condition. To strike this balance, this letter proposes a channel-aware deep joint source-channel coding (CA-DJSCC) scheme for multi-task oriented semantic communication. Specifically, we formulate an optimization problem to characterize the intricate and hard-to-quantify relationship between the channel condition and the optimal dimension of the semantic information to transmit. To achieve semantic transmission with a channel-adaptive dimension, we develop a joint source and channel encoder that prunes elements based on semantic importance. Correspondingly, we design a joint source and channel decoder that introduces a reference signal to obtain an implicit version of the CSI and utilizes a generative adversarial network to enhance multi-task performance. Simulation results demonstrate that CA-DJSCC achieves superior performance on both image reconstruction and classification tasks, with over a 44.1% reduction in average channel uses compared to typical baselines.
Published in: IEEE Wireless Communications Letters (Early Access)
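As a rough illustration of the channel-adaptive dimension idea described in the abstract, the sketch below prunes an encoded semantic vector to a size chosen from the channel SNR, keeping the elements with the highest importance scores. This is not the authors' implementation: the mapping snr_to_dim, the importance proxy, and all dimensions are hypothetical assumptions.

# Minimal sketch (not the paper's code): channel-aware pruning of an encoded
# semantic vector, transmitting more elements when the SNR is low and fewer
# when it is high. All names and parameters here are illustrative.
import torch

def snr_to_dim(snr_db: float, d_min: int = 16, d_max: int = 64) -> int:
    # Hypothetical mapping: poorer channels -> more channel uses for robustness.
    frac = min(max(snr_db / 20.0, 0.0), 1.0)  # clamp SNR to [0, 20] dB
    return int(round(d_max - frac * (d_max - d_min)))

def prune_by_importance(z: torch.Tensor, importance: torch.Tensor, k: int) -> torch.Tensor:
    # Keep the k elements of the latent z with the largest importance scores.
    idx = torch.topk(importance, k, dim=-1).indices
    return torch.gather(z, dim=-1, index=idx)

# Toy usage: a batch of 8 latent vectors of dimension 64.
z = torch.randn(8, 64)
importance = z.abs()        # stand-in for a learned semantic-importance score
k = snr_to_dim(snr_db=5.0)  # lower SNR -> larger transmitted dimension
x = prune_by_importance(z, importance, k)
print(x.shape)              # torch.Size([8, k])

In the letter itself, the importance scores are learned jointly with the encoder and the dimension is tied to the optimization over channel conditions; the snippet only conveys the shape of the trade-off.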