Optimal Large View Visual Servoing with Sets of SIFT Features

Authors: Thomas Nierobisch; Johannes Krettek; Umar Khan; Frank Hoffmann
Chair of Control and Systems Engineering, Faculty of Electrical Engineering and Information Technology, Universität Dortmund, 44221 Dortmund, Germany. thomas.nierobisch@uni-dortmund.de

This paper presents a novel approach to large view visual servoing in the context of object manipulation. In many scenarios the features extracted in the reference pose are only perceivable across a limited region of the workspace. The limited visibility of features necessitates the introduction of additional intermediate reference views of the object and requires path planning in view space. In our scheme the visual control is based on decoupled moments of SIFT features, which are generic in the sense that the control operates with a dynamic set of feature correspondences rather than a static set of individual features. The additional freedom of dynamic feature sets enables flexible path planning in the image space and online selection of optimal reference views during servoing to the goal view. The time to convergence to the goal view is estimated by a neural network based on the residual feature error and the quality of the SIFT feature distribution. The transition among reference views occurs on the basis of this estimated cost, which is evaluated online from the current set of visible features. The dynamic switching scheme achieves robust and nearly time-optimal convergence of the visual control across the entire task space. The effectiveness and robustness of the scheme are confirmed by an evaluation in a virtual reality simulation and on a real robot arm with an eye-in-hand configuration.
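The switching rule described in the abstract can be illustrated with a minimal sketch: each candidate reference view receives a scalar cost (in the paper, a neural-network estimate of time to convergence computed from the residual feature error and the quality of the SIFT feature distribution), and the controller transitions to the view with the lowest estimated cost. The class names and the simple weighted cost model below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class ReferenceView:
    name: str
    residual_feature_error: float   # e.g. mean error over matched SIFT features
    distribution_quality: float     # in [0, 1]; higher = better-spread feature set

def estimated_cost(view: ReferenceView,
                   w_error: float = 1.0,
                   w_quality: float = 0.5) -> float:
    """Stand-in for the paper's learned cost estimator (a neural network):
    higher residual error and poorer feature distribution raise the cost."""
    return w_error * view.residual_feature_error + w_quality * (1.0 - view.distribution_quality)

def select_reference_view(candidates: list[ReferenceView]) -> ReferenceView:
    """Switch to the candidate with the lowest estimated time to convergence,
    re-evaluated online as the set of visible features changes."""
    return min(candidates, key=estimated_cost)

views = [
    ReferenceView("goal", residual_feature_error=2.5, distribution_quality=0.4),
    ReferenceView("intermediate_a", residual_feature_error=0.8, distribution_quality=0.9),
]
print(select_reference_view(views).name)  # intermediate_a
```

In this toy instance the intermediate view wins because its cost (0.8 + 0.5 · 0.1 = 0.85) is lower than the goal view's (2.5 + 0.5 · 0.6 = 2.8); as servoing reduces the residual error toward the goal, the selection would switch accordingly.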

Published in:

Proceedings 2007 IEEE International Conference on Robotics and Automation

Date of Conference:

10-14 April 2007