
Learning grasp strategies composed of contact relative motions

Author:

Platt, R.; Dextrous Robotics Lab., NASA, Greenbelt, MD

Of central importance to grasp synthesis algorithms are the assumptions made about the object to be grasped and the sensory information that is available. Many approaches avoid the issue of sensing entirely by assuming that complete information is available. In contrast, this paper focuses on the case where force feedback is the only source of new information and only limited prior information is available. Although visual information is generally also available, the emphasis on force feedback allows this paper to focus on the partially observable nature of the grasp synthesis problem. In order to investigate this question, this paper introduces a parameterizable space of atomic units of control known as contact relative motions (CRMs). CRMs simultaneously displace contacts on the object surface and gather force feedback information relevant to the object shape and the relative manipulator-object pose. This allows the grasp synthesis problem to be recast as an optimal control problem where the goal is to find a strategy for executing CRMs that leads to a grasp in the fewest steps. Since local force feedback information usually does not completely determine system state, the control problem is partially observable. This paper expresses the partially observable problem as a k-order Markov Decision Process (MDP) and solves it using reinforcement learning. Although this approach can be expected to extend to the grasping of spatial objects, this paper focuses on the case of grasping planar objects in order to explore the ideas. The approach is tested in planar simulation and is demonstrated to work in practice using Robonaut, the NASA-JSC space humanoid.
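The control formulation described in the abstract (treating the last k force-feedback observations as the state of a k-order MDP and learning a CRM-selection policy with reinforcement learning) can be sketched in miniature. The toy environment, action set, reward values, and all names below are invented for illustration and are not taken from the paper; the point is only the k-order state construction: the Q-function is indexed by a sliding window of the last k observations rather than by the (unobservable) true state.

```python
import random
from collections import defaultdict

# Hypothetical sketch of tabular Q-learning over a k-order observation history.
# A 1-D toy "surface" stands in for the paper's setup: the agent observes a
# discrete force-feedback signal, chooses among three invented contact relative
# motions, and is rewarded on reaching a toy grasp configuration at position 4.
K = 2                     # history order k
ACTIONS = (0, 1, 2)       # hypothetical CRM indices: move left, stay, move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(pos, action):
    """Toy dynamics: shift the contact along the surface; goal at position 4."""
    pos = max(0, min(4, pos + (action - 1)))
    obs = pos                             # force feedback ~ local surface reading
    reward = 1.0 if pos == 4 else -0.1    # small step cost, reward at the grasp
    return pos, obs, reward, pos == 4

Q = defaultdict(float)                    # Q[(history, action)] -> value

def policy(hist):
    """Epsilon-greedy over the k-order history state."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(hist, a)])

random.seed(0)
for episode in range(300):
    pos, hist = 0, (0,) * K               # k-order state = last k observations
    for t in range(30):
        a = policy(hist)
        pos, obs, r, done = step(pos, a)
        nxt = hist[1:] + (obs,)           # slide the observation window
        best = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(hist, a)] += ALPHA * (r + GAMMA * best - Q[(hist, a)])
        hist = nxt
        if done:
            break

# Greedy rollout after learning: count the steps taken to reach the goal.
pos, hist, steps = 0, (0,) * K, 0
while pos != 4 and steps < 10:
    a = max(ACTIONS, key=lambda b: Q[(hist, b)])
    pos, obs, _, _ = step(pos, a)
    hist = hist[1:] + (obs,)
    steps += 1
print(pos, steps)
```

In this toy the last observation happens to determine the position, so k = 1 would suffice; in the paper's setting the force-feedback signal is genuinely ambiguous about object shape and pose, which is what motivates conditioning the policy on a longer window of past CRM outcomes.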

Published in:

2007 7th IEEE-RAS International Conference on Humanoid Robots

Date of Conference:

Nov. 29 - Dec. 1, 2007