RDEN: Residual Distillation Enhanced Network-Guided Lightweight Synthesized View Quality Enhancement for 3D-HEVC


Abstract:

In three-dimensional video systems, depth image-based rendering is a key technique for generating synthesized views, providing audiences with depth perception and interactivity. However, inaccurate depth information causes geometric rendering position errors, and the compression distortion of texture and depth videos degrades the quality of the synthesized views. Although existing quality enhancement methods can remove distortions from the synthesized views, their huge computational complexity hinders their application in real-time multimedia systems. To this end, a residual distillation enhanced network (RDEN)-guided lightweight synthesized view quality enhancement (SVQE) method is proposed to minimize holes and compression distortions in the synthesized views while reducing model complexity. First, deep-learning-based SVQE methods are reexamined. Second, a feature distillation attention block is proposed: a lightweight and flexible feature extraction block that combines an information distillation mechanism with a lightweight multi-scale spatial attention mechanism, effectively reducing distortions in the synthesized views and making the model suitable for real-time tasks. Third, a residual feature fusion block is proposed to improve enhancement performance through feature fusion, strengthening the feature extraction capability without introducing any additional parameters. Experimental results show that the proposed RDEN improves SVQE performance at a much lower computational complexity than state-of-the-art SVQE methods.
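
For readers who want a concrete picture of the building blocks named above, the following is a minimal PyTorch sketch of a feature distillation attention block of the kind described: a slice of channels is distilled at each step while the remainder is refined further, and the fused result passes through a lightweight two-scale spatial attention with a residual connection. The channel counts, split ratio, kernel sizes, and the specific attention design are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's implementation) of a feature
# distillation attention block. All hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleSpatialAttention(nn.Module):
    """Hypothetical lightweight spatial attention computed at two scales."""

    def __init__(self, channels):
        super().__init__()
        self.conv_full = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.conv_half = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):
        # Attention logits from the full-resolution features.
        att_full = self.conv_full(x)
        # Attention logits from 2x-downsampled features, upsampled back.
        att_half = F.interpolate(
            self.conv_half(F.avg_pool2d(x, kernel_size=2)),
            size=x.shape[-2:], mode="bilinear", align_corners=False)
        # Combine the two scales and gate the input feature map.
        return x * torch.sigmoid(att_full + att_half)


class FeatureDistillationAttentionBlock(nn.Module):
    """Hypothetical block: at each step part of the channels is 'distilled'
    (kept), the rest is refined further; the distilled slices are fused,
    attended, and added back to the input (residual connection)."""

    def __init__(self, channels=48, distill_ratio=0.5):
        super().__init__()
        self.d = int(channels * distill_ratio)   # distilled channels per step
        r = channels - self.d                    # channels passed on for refinement
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(r, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(r, self.d, 3, padding=1)
        self.fuse = nn.Conv2d(3 * self.d, channels, 1)  # 1x1 fusion of distilled parts
        self.attention = MultiScaleSpatialAttention(channels)
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x):
        out1 = self.act(self.conv1(x))
        dist1, rem1 = torch.split(out1, [self.d, out1.size(1) - self.d], dim=1)
        out2 = self.act(self.conv2(rem1))
        dist2, rem2 = torch.split(out2, [self.d, out2.size(1) - self.d], dim=1)
        dist3 = self.act(self.conv3(rem2))
        fused = self.fuse(torch.cat([dist1, dist2, dist3], dim=1))
        return self.attention(fused) + x


if __name__ == "__main__":
    # Example: one 64x64 feature map with 48 channels keeps its shape.
    block = FeatureDistillationAttentionBlock(channels=48)
    y = block(torch.randn(1, 48, 64, 64))
    print(y.shape)  # torch.Size([1, 48, 64, 64])
```

The lightweight character of such a block comes from processing only a fraction of the channels with each successive convolution, which is what makes information-distillation designs attractive for real-time enhancement.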
Published in: IEEE Transactions on Circuits and Systems for Video Technology (Volume: 32, Issue: 9, September 2022)
Page(s): 6347 - 6359
Date of Publication: 21 March 2022
