
What can be learned from human reach-to-grasp movements for the design of robotic hand-eye systems?

4 Author(s)
A. Hauck, M. Sorg, G. Färber, T. Schenk (Lab. for Process Control and Real-Time Systems, Technische Universität München, Germany)

In the field of robot motion control, visual servoing has been proposed as a suitable strategy for coping with imprecise models and calibration errors. Remaining problems, such as the need for high-rate visual feedback, are expected to be solved by the development of real-time vision modules. Human grasping, however, which still outperforms its robotic counterparts, particularly in robustness and flexibility, demonstrably requires only sparse, asynchronous visual feedback. We therefore examined current neuroscientific models of the control of human reach-to-grasp movements, with emphasis on the visual control strategy employed. From this, we developed a control model that unifies the two robotic strategies, look-then-move and visual servoing, thereby compensating for the problems each strategy exhibits when used alone.
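The paper itself does not give an implementation, but the unified strategy it describes can be sketched as follows: execute the bulk of the reach open loop toward an initial target estimate (look-then-move), and blend in sparse, asynchronous visual corrections when they arrive, rather than closing a high-rate servo loop. All names, the correction gain, and the straight-line interpolation below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def hybrid_reach(target_est, get_visual_fix, n_steps=50, feedback_every=10, gain=0.5):
    """Hypothetical sketch of a hybrid look-then-move / visual-servoing reach.

    Most of the motion runs open loop toward an imprecise initial estimate;
    occasional visual fixes correct the goal en route instead of driving a
    continuous high-rate feedback loop.
    """
    pos = np.zeros(3)                    # current end-effector position (assumed origin)
    goal = np.array(target_est, float)   # initial, possibly imprecise, target estimate
    for step in range(n_steps):
        # Sparse, asynchronous feedback: a visual fix may arrive only rarely.
        if step > 0 and step % feedback_every == 0:
            fix = get_visual_fix()       # latest visual measurement, or None if unavailable
            if fix is not None:
                goal = goal + gain * (np.array(fix, float) - goal)  # blend fix into goal
        # Open-loop segment: interpolate toward the current goal estimate.
        pos = pos + (goal - pos) / (n_steps - step)
    return pos
```

With a perfect initial estimate the corrections are no-ops and the motion is pure look-then-move; with an imprecise estimate, the few visual fixes pull the goal toward the true target without requiring feedback at every step.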

Published in:

Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Volume 4)

Date of Conference: