
Compositional Scene Representation Learning via Reconstruction: A Survey



Abstract:

Visual scenes are composed of visual concepts and exhibit combinatorial explosion. An important reason that humans can learn efficiently from diverse visual scenes is their ability of compositional perception, and it is desirable for artificial intelligence to possess similar abilities. Compositional scene representation learning is a task that enables such abilities. In recent years, various methods have been proposed to apply deep neural networks, which have proven advantageous in representation learning, to learn compositional scene representations via reconstruction, advancing this research direction into the deep learning era. Learning via reconstruction is advantageous because it can exploit massive unlabeled data and avoids costly and laborious data annotation. In this survey, we first outline the current progress on reconstruction-based compositional scene representation learning with deep neural networks, including the development history and categorizations of existing methods from the perspectives of modeling visual scenes and inferring scene representations; we then provide benchmarks of representative methods that consider the most extensively studied problem setting and form the foundation for other methods, including an open-source toolbox for reproducing the benchmark experiments; and we finally discuss the limitations of existing methods and future directions of this research topic.
Page(s): 11540 - 11560
Date of Publication: 14 June 2023

PubMed ID: 37314900

