Abstract:
Reference-based video colorization hallucinates a plausible color version of a gray-scale video by transferring distributions of possible colors from an input color frame that has semantic correspondences with the gray-scale frames. The plausibility of colors and temporal consistency are the two significant challenges in this task. In this paper, we propose a novel Generative Adversarial Network (GAN) with a Siamese training framework to tackle these challenges. Specifically, the Siamese training framework allows us to implement temporal feature augmentation, enhancing temporal consistency. Furthermore, to improve the plausibility of the colorization results, we propose a multi-scale fusion module that accurately correlates features of reference frames with those of source frames. Experiments on various datasets demonstrate that our proposed method performs favorably against state-of-the-art approaches.
Date of Conference: 19-22 September 2021
Date Added to IEEE Xplore: 23 August 2021