
An End-to-End Learning Framework for Video Compression


Abstract:

Traditional video compression approaches build upon the hybrid coding framework with motion-compensated prediction and residual transform coding. In this paper, we propose the first end-to-end deep video compression framework, which takes advantage of both the classical compression architecture and the powerful non-linear representation ability of neural networks. Our framework employs pixel-wise motion information, which is learned from an optical flow network and further compressed by an auto-encoder network to save bits. The other compression components are also implemented by well-designed networks for high efficiency. All the modules are jointly optimized using the rate-distortion trade-off and can collaborate with each other. More importantly, the proposed deep video compression framework is very flexible and can be easily extended with lightweight or advanced networks for higher speed or better efficiency. We also introduce an adaptive quantization layer to reduce the number of parameters required for variable bitrate coding. Comprehensive experimental results demonstrate the effectiveness of the proposed framework on benchmark datasets.
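The joint optimization described in the abstract minimizes a rate-distortion objective of the general form L = lambda * D + R, where D is the reconstruction distortion and R is the number of bits spent on motion and residual information. The sketch below illustrates that trade-off with toy values; the function name, the bit accounting, and the choice of lambda are illustrative assumptions, not the paper's exact training configuration.

```python
def rate_distortion_loss(original, reconstructed, total_bits, lam=1024.0):
    """Illustrative rate-distortion objective L = lam * D + R.

    D is the mean-squared error between original and reconstructed pixels;
    R is the bitrate in bits per pixel (motion bits + residual bits combined).
    """
    n = len(original)
    distortion = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    rate = total_bits / n
    return lam * distortion + rate

# Toy example: a 64-pixel "frame" reconstructed with a small constant error,
# coded with 256 bits in total (i.e., 4 bits per pixel).
frame = [i / 63 for i in range(64)]
recon = [p + 0.01 for p in frame]
loss = rate_distortion_loss(frame, recon, total_bits=256.0)
```

In a learned codec, lambda is the knob that trades bitrate against quality: a larger lambda penalizes distortion more heavily, pushing the networks toward higher-quality, higher-bitrate reconstructions.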
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence ( Volume: 43, Issue: 10, 01 October 2021)
Page(s): 3292 - 3308
Date of Publication: 20 April 2020

PubMed ID: 32324541
