3D Moving Object Reconstruction by Temporal Accumulation


Abstract:

Much progress has been made recently in the development of 3D acquisition technologies, which has increased the availability of low-cost 3D sensors such as the Microsoft Kinect. This promotes a wide variety of computer vision applications that need object recognition and 3D reconstruction. We present a novel algorithm for the full 3D reconstruction of unknown rotating objects from 2.5D point cloud sequences, such as those generated by 3D sensors. Our algorithm incorporates structural and temporal motion information to build 3D models of moving objects and is based on motion-compensated temporal accumulation. The proposed algorithm requires only the fixed centre or axis of rotation; unlike other 3D reconstruction methods, it does not require key point detection, feature description, correspondence matching, provided object models or any geometric information about the object. Moreover, our algorithm integrally estimates the best rigid transformation parameters for registration, applies surface resampling, reduces noise and estimates the optimum angular velocity of the rotating object.
Date of Conference: 24-28 August 2014
Date Added to IEEE Xplore: 06 December 2014
Electronic ISBN: 978-1-4799-5209-0
Print ISSN: 1051-4651
Conference Location: Stockholm, Sweden

I. Introduction

The increasing availability of low-cost 3D sensors such as the Microsoft Kinect has allowed many 3D reconstruction methods to be developed. The reconstruction of 3D models of rigid objects is generally achieved in the following steps. First, in the data acquisition step, point clouds or range images (depth maps) are generated by the 3D sensor. These data are 2.5D, as only the surfaces facing the sensor are captured. Secondly, an optional segmentation and filtering step is applied to separate the observed object from its background. Thirdly, scans from different viewpoints are aligned into one coordinate frame (registration). Finally, the aligned scans are typically resampled and merged (integrated) by surface reconstruction techniques into a seamless 3D surface and rendered for display.
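The pipeline above, specialised to the paper's setting of an object rotating about a known fixed axis, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names (`segment_foreground`, `accumulate`), the depth-threshold segmentation, the z axis as the rotation axis, and the synthetic data are all assumptions made for the example. It shows the core idea of motion-compensated accumulation: each scan at step k is rotated back by k times the angular step so that all scans land in one coordinate frame.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis (the assumed fixed axis of rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def segment_foreground(points, max_depth=2.0):
    """Optional segmentation/filtering step: keep points nearer than max_depth."""
    return points[points[:, 2] < max_depth]

def accumulate(scans, angular_step):
    """Registration by motion compensation: rotate scan k back by k * angular_step
    about the fixed axis, then stack all aligned scans into one point cloud."""
    aligned = [scan @ rot_z(-k * angular_step).T for k, scan in enumerate(scans)]
    return np.vstack(aligned)

# Synthetic example: one 2.5D surface patch observed while the object rotates.
rng = np.random.default_rng(0)
patch = rng.uniform(-0.1, 0.1, size=(100, 3)) + np.array([0.5, 0.0, 1.0])
step = np.deg2rad(30.0)

# The sensor sees a rotated copy of the patch at each time step.
scans = [segment_foreground(patch @ rot_z(k * step).T) for k in range(4)]
model = accumulate(scans, step)

# After compensation, every scan maps back onto the original patch.
assert np.allclose(model[:100], model[300:400], atol=1e-9)
```

With real sensor data the angular step would itself be unknown; the paper's algorithm additionally estimates this optimum angular velocity, which this sketch takes as given.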
