Augmenting monocular motion estimation using intermittent 3D models from depth sensors

Authors:
M V Rohith; Chandra Kambhamettu — Video/Image Modeling and Synthesis (VIMS) Lab, Dept. of Computer and Information Sciences, University of Delaware, Delaware, USA

Estimation of human motion has been improved by recent advances in depth sensors such as the Microsoft Kinect. However, such sensors often have a limited depth range, and a large number of them are necessary to estimate motion over large areas. In this paper, we explore the possibility of estimating motion from monocular data using initial and intermittent 3D models provided by a depth sensor. We use motion segmentation to divide the scene into several rigidly moving components. The orientation of each component is estimated, and these reconstructions are synthesized into a coherent estimate of the scene. We demonstrate our algorithm on three real video sequences. Quantitative comparison with depth sensor reconstructions shows that the proposed method can accurately estimate motion even with a single 3D initialization.
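The per-component orientation step described above amounts to rigidly aligning a known 3D model of a component with its observed position later in the sequence. As an illustrative stand-in (the paper's actual monocular procedure is not specified here), a minimal sketch of such a rigid alignment using the standard Kabsch algorithm on two corresponding 3D point sets:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q.

    P, Q: (n, 3) arrays of corresponding 3D points (hypothetical inputs,
    e.g. a depth-sensor model of one rigid component and its tracked points).
    Returns (R, t) such that Q_i ~= R @ P_i + t.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given correspondences between the intermittent 3D model and tracked features, each rigid component's pose update could be recovered this way; the paper additionally has to handle the monocular (scale- and depth-ambiguous) case, which this sketch does not address.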

Published in:

2012 21st International Conference on Pattern Recognition (ICPR)

Date of Conference:

11-15 Nov. 2012