Abstract:
Generating videos from a few initial frames is an appealing field of research in deep learning. There exists an ever-expanding array of approaches for generating long-range, realistic video frame sequences. Video generation can help predict trajectories and even model object movements, enhancing autonomous robots. However, only a few comprehensive studies review these approaches on the basis of their relative advantages, disadvantages, and evolution. Hence, this paper presents a detailed overview of deep-learning-based approaches employed to tackle the complex problem of video generation. The approaches include Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and the Transformer model. Finally, the performance of all the approaches is examined and compared on the BAIR Robot Pushing dataset.
Date of Conference: 17-19 December 2020
Date Added to IEEE Xplore: 01 March 2021
References are not available for this document.