
Deep Video Prior for Video Consistency and Propagation


Abstract:

Applying an image processing algorithm independently to each video frame often leads to temporal inconsistency in the resulting video. To address this issue, we present a novel and general approach for blind video temporal consistency. Our method is trained directly on a single pair of original and processed videos rather than on a large dataset. Unlike most previous methods, which enforce temporal consistency with optical flow, we show that temporal consistency can be achieved by training a convolutional network on a video with Deep Video Prior (DVP). Moreover, a carefully designed iteratively reweighted training strategy is proposed to address the challenging multimodal inconsistency problem. We demonstrate the effectiveness of our approach on 7 computer vision tasks on videos. Extensive quantitative and perceptual experiments show that our approach achieves superior performance to state-of-the-art methods on blind video temporal consistency. We further extend DVP to video propagation and demonstrate its effectiveness in propagating three different types of information (color, artistic style, and object segmentation). A progressive propagation strategy with pseudo labels is also proposed to enhance DVP's performance on video propagation. Our source code is publicly available at https://github.com/ChenyangLEI/deep-video-prior.
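The core idea in the abstract — fitting a single network to one (original, processed) video pair so that the shared weights cannot reproduce per-frame flicker — can be illustrated with a toy sketch. The snippet below is not the authors' implementation (which trains a CNN; see the linked repository); it substitutes a hypothetical shared scale-and-bias model and 1-D "frames" purely to show why a model shared across all frames yields temporally consistent output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": T frames of a slowly varying 1-D signal (stand-in for images).
T, N = 30, 64
base = np.sin(np.linspace(0, 2 * np.pi, N))
video = np.stack([base + 0.01 * t for t in range(T)])  # original frames I_t

# Per-frame "image operator": a fixed transform plus frame-wise noise,
# mimicking an algorithm applied independently to each frame (flicker).
true_scale, true_bias = 1.5, 0.2
flicker = rng.normal(0.0, 0.3, size=(T, 1))
processed = true_scale * video + true_bias + flicker   # processed frames P_t

# DVP-style fitting: ONE shared model g_theta for ALL frames, trained only on
# this single video pair with a reconstruction loss. Here g_theta is a toy
# scale/bias model standing in for the paper's CNN; because theta is shared
# across time, it cannot fit the per-frame flicker, so its outputs are
# temporally consistent.
scale, bias, lr = 1.0, 0.0, 0.05
for _ in range(500):
    err = (scale * video + bias) - processed
    scale -= lr * np.mean(err * video)  # MSE gradient w.r.t. scale
    bias -= lr * np.mean(err)           # MSE gradient w.r.t. bias

output = scale * video + bias

def temporal_flicker(frames):
    """Mean magnitude of frame-to-frame change (lower = more consistent)."""
    return np.abs(np.diff(frames, axis=0)).mean()

print("flicker in processed:", temporal_flicker(processed))
print("flicker in output:   ", temporal_flicker(output))
```

In this toy setting the fitted output tracks the underlying transform while its frame-to-frame variation is far smaller than that of the flickery processed video; in the paper the same effect comes from a CNN's inductive bias (the "deep video prior") combined with early stopping, not from the model being low-capacity.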
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence ( Volume: 45, Issue: 1, 01 January 2023)
Page(s): 356 - 371
Date of Publication: 11 January 2022

PubMed ID: 35015633

