Abstract:
Video super-resolution (SR) has important real-world applications, such as enhancing the viewing experience of legacy low-resolution videos on high-resolution display devices. However, no visual quality assessment (VQA) models are specifically designed for evaluating SR videos, even though such models are crucially important both for advancing video SR algorithms and for viewing quality assurance. This paper addresses this gap. We start by contributing the first video super-resolution quality assessment database (VSR-QAD), which contains 2,260 SR videos annotated with mean opinion score (MOS) labels collected through an approximately 400 man-hour psychovisual experiment involving a total of 190 subjects. We then build on the new VSR-QAD to develop the first VQA model specifically designed for evaluating SR videos. The model features a two-stream convolutional neural network architecture and a two-stage training algorithm designed to extract spatial and temporal features characterizing the quality of SR videos. We present experimental results and data analysis demonstrating the high data quality of VSR-QAD and the effectiveness of the new VQA model for measuring the visual quality of SR videos. The database and the code of the proposed model will be available online at https://github.com/key1cdc/VSRQAD.
Published in: IEEE Transactions on Broadcasting ( Volume: 70, Issue: 2, June 2024)
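The abstract describes a two-stream network that extracts spatial (per-frame) and temporal features to regress a quality score. The paper's exact architecture is not given here, so the following is only a minimal illustrative sketch in PyTorch of what such a two-stream quality regressor could look like: one stream consumes raw frames, the other consumes frame differences, and their pooled features are fused to predict a MOS-like score. All layer sizes, the `TwoStreamSRQA` name, and the frame-difference temporal input are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn


class TwoStreamSRQA(nn.Module):
    """Hypothetical two-stream VQA sketch (not the paper's actual model):
    a spatial stream on raw frames and a temporal stream on frame
    differences, fused into a single quality-score regressor."""

    def __init__(self):
        super().__init__()

        def make_stream():
            # Small CNN backbone shared in shape by both streams.
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.spatial = make_stream()   # operates on individual frames
        self.temporal = make_stream()  # operates on frame differences
        self.head = nn.Linear(64, 1)   # fuse 32+32 features -> quality score

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        spat = self.spatial(frames.reshape(b * t, c, h, w))
        spat = spat.reshape(b, t, -1).mean(dim=1)        # average over time
        diffs = frames[:, 1:] - frames[:, :-1]           # temporal signal
        temp = self.temporal(diffs.reshape(b * (t - 1), c, h, w))
        temp = temp.reshape(b, t - 1, -1).mean(dim=1)
        return self.head(torch.cat([spat, temp], dim=1)).squeeze(-1)
```

In a real pipeline such a network would be trained against the MOS labels (e.g. with an L1 or rank-based loss), possibly in the two-stage fashion the abstract mentions (pretraining each stream, then joint fine-tuning), but those details are assumptions here.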