
PQ-Transformer: Jointly Parsing 3D Objects and Layouts From Point Clouds


Abstract:

3D scene understanding from point clouds plays a vital role in various robotic applications. Unfortunately, current state-of-the-art methods use separate neural networks for different tasks such as object detection or room layout estimation. Such a scheme has two limitations: 1) storing and running several networks for different tasks is expensive for typical robotic platforms, and 2) the intrinsic structure shared by the separate outputs is ignored and potentially violated. To this end, we propose the first transformer architecture that predicts 3D objects and layouts simultaneously from point cloud inputs. Unlike existing methods that estimate either layout keypoints or edges, we directly parameterize the room layout as a set of quads; the proposed architecture is therefore termed P(oint)Q(uad)-Transformer. Along with the novel quad representation, we propose a tailored physical constraint loss function that discourages object-layout interference. Quantitative and qualitative evaluations on the public benchmark ScanNet show that the proposed PQ-Transformer succeeds in jointly parsing 3D objects and layouts, running at a quasi-real-time rate (8.91 FPS) without efficiency-oriented optimization. Moreover, the new physical constraint loss improves strong baselines, and the F1-score of room layout estimation is significantly promoted from 37.9% to 57.9%.

Code and models can be accessed at https://github.com/OPEN-AIR-SUN/PQ-Transformer.
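To make the physical constraint idea concrete, the following Python snippet is a minimal, hypothetical sketch rather than the authors' actual loss. It assumes each predicted wall quad can be summarized by a center point and an inward-facing unit normal (one plausible reading of the quad parameterization), and it penalizes predicted object box centers that fall behind any wall plane as a crude proxy for object-layout interference. The names quad_plane_signed_distance and interference_penalty are illustrative only.

import numpy as np

def quad_plane_signed_distance(points, quad_center, quad_normal):
    """Signed distance of points (N, 3) to the plane of one layout quad,
    measured along the quad's inward-facing normal."""
    normal = quad_normal / np.linalg.norm(quad_normal)
    return (points - quad_center) @ normal

def interference_penalty(object_centers, quads, margin=0.0):
    """Hypothetical object-layout interference term (not the paper's loss).

    object_centers: (N, 3) array of predicted object box centers.
    quads: iterable of (center, inward_normal) pairs for predicted wall quads.
    Returns the summed hinge penalty for centers lying behind any wall.
    """
    penalty = 0.0
    for center, normal in quads:
        d = quad_plane_signed_distance(object_centers, center, normal)
        # Hinge: penalize only object centers on the far side of the wall
        # (negative signed distance along the inward normal), optionally
        # with a safety margin.
        penalty += np.maximum(margin - d, 0.0).sum()
    return penalty

In the paper's setting such a term would be added to the detection and layout losses, discouraging predictions in which objects protrude through walls; the sketch above only captures that qualitative behavior under the stated assumptions.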

Published in: IEEE Robotics and Automation Letters ( Volume: 7, Issue: 2, April 2022)
Page(s): 2519 - 2526
Date of Publication: 14 January 2022


I. Introduction

Recent years have witnessed the emergence of 3D scene understanding technologies, which enable robots to understand the geometric, semantic and cognitive properties of real-world scenes so as to assist robot decision making. However, 3D scene understanding remains challenging for the following reasons: 1) Holistic understanding requires many sub-problems to be addressed, such as semantic label assignment [2], object bounding box localization [3] and room structure boundary extraction [1]. Current methods solve these tasks with separate models, which is expensive in terms of storage and computation. 2) Physical commonsense [5], such as gravity [6] or interference [7] between the outputs of different tasks, is ignored and potentially violated, producing geometrically implausible results.
