Abstract:
The latest versatile video coding (VVC) standard, developed by the Joint Video Experts Team (JVET), has significantly improved coding efficiency over its predecessor, but at the cost of 6 to 26 times higher computational complexity. The quad-tree plus multi-type tree (QTMT)-based coding unit (CU) partition accounts for most of the encoding time in VVC. This paper proposes a data-driven fast CU partition approach based on an efficient Transformer model to accelerate VVC inter-coding. First, we establish a large-scale database for inter-mode VVC, comprising diverse CU partition patterns from more than 800 raw video sequences across various resolutions and contents. Next, we propose a deep neural network model with a Transformer-based temporal topology for predicting the CU partition, named TCP-Net, which is adaptive to the group of pictures (GOP) hierarchy in VVC. Then, we design a two-stage structured output for TCP-Net, reflecting both the locations of CU edges and the split modes of all possible CUs. Accordingly, we develop a dual-supervised optimization mechanism to train the TCP-Net model with improved accuracy. Experimental results verify that our approach reduces the encoding time by 46.89% to 55.91% with negligible rate-distortion (RD) degradation, outperforming other state-of-the-art approaches.
Published in: IEEE Transactions on Image Processing (Volume: 34)
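
To make the dual-supervised idea concrete, below is a minimal PyTorch sketch of a loss that supervises both outputs the abstract describes: a CU-edge map and a split-mode classification for each candidate CU. The class name, the six-way split-mode set, the tensor shapes, and the equal weighting are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

# Hypothetical split-mode set for QTMT partitioning (an assumption):
# no-split, quad, binary-H, binary-V, ternary-H, ternary-V.
NUM_SPLIT_MODES = 6

class DualSupervisedLoss(nn.Module):
    """Sketch of a dual-supervised objective: edge-map term + split-mode term."""

    def __init__(self, edge_weight: float = 1.0, mode_weight: float = 1.0):
        super().__init__()
        self.edge_weight = edge_weight
        self.mode_weight = mode_weight
        # Per-pixel binary supervision: CU edge vs. no edge.
        self.edge_loss = nn.BCEWithLogitsLoss()
        # Per-candidate-CU supervision: which split mode the encoder chose.
        self.mode_loss = nn.CrossEntropyLoss()

    def forward(self, edge_logits, edge_labels, mode_logits, mode_labels):
        # edge_logits, edge_labels: (N, 1, H, W); labels are 0/1 floats.
        # mode_logits: (M, NUM_SPLIT_MODES); mode_labels: (M,) class indices.
        return (self.edge_weight * self.edge_loss(edge_logits, edge_labels)
                + self.mode_weight * self.mode_loss(mode_logits, mode_labels))

if __name__ == "__main__":
    criterion = DualSupervisedLoss()
    edge_logits = torch.randn(2, 1, 64, 64)
    edge_labels = torch.randint(0, 2, (2, 1, 64, 64)).float()
    mode_logits = torch.randn(10, NUM_SPLIT_MODES)
    mode_labels = torch.randint(0, NUM_SPLIT_MODES, (10,))
    print(criterion(edge_logits, edge_labels, mode_logits, mode_labels))

Jointly weighting the two terms lets gradients from the coarse edge-location stage and the fine split-mode stage shape the same backbone, which is the plausible mechanism behind the accuracy gain the abstract attributes to dual supervision.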