This paper presents a hierarchical, statistical topic model for representing the grasp preshapes of a set of objects. Grasp observations, provided by teleoperation, are clustered into latent affordances shared among all objects. Each affordance defines a joint distribution over the position and orientation of the hand relative to the object, conditioned on the object's visual appearance. The parameters of the model are learned using a Gibbs sampling method. After training, the model can be used to compute grasp preshapes for a novel object based on its visual appearance. The model is evaluated experimentally on a set of objects for its ability to generate grasp preshapes that lead to successful grasps, and it is compared against a baseline approach.
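The abstract does not detail the model's structure, so the following is only a minimal, much-simplified sketch of the kind of Gibbs sampling the paper alludes to: clustering one-dimensional "grasp position" observations into a fixed number of latent components under a conjugate Normal prior. The function name, the choice of a plain Gaussian mixture, and all parameter values are illustrative assumptions, not the paper's actual hierarchical model (which couples hand pose with visual appearance).

```python
import math
import random

def gibbs_mixture(data, K=2, iters=200, sigma=0.5,
                  prior_mu=0.0, prior_sigma=5.0, seed=0):
    """Toy Gibbs sampler for a K-component Gaussian mixture.

    A stand-in for clustering grasp observations into latent
    "affordances": each observation is repeatedly reassigned to a
    component in proportion to (component size) x (posterior
    predictive likelihood). All hyperparameters are illustrative.
    """
    rng = random.Random(seed)
    z = [rng.randrange(K) for _ in data]  # random initial assignments
    for _ in range(iters):
        for i, x in enumerate(data):
            weights = []
            for k in range(K):
                # Members of component k, excluding observation i
                members = [data[j] for j in range(len(data))
                           if j != i and z[j] == k]
                n = len(members)
                # Posterior predictive under a Normal prior on the mean
                # (observation noise sigma held fixed for simplicity)
                prec = 1.0 / prior_sigma**2 + n / sigma**2
                mu_hat = (prior_mu / prior_sigma**2
                          + sum(members) / sigma**2) / prec
                var_hat = 1.0 / prec + sigma**2
                ll = (math.exp(-0.5 * (x - mu_hat)**2 / var_hat)
                      / math.sqrt(2 * math.pi * var_hat))
                weights.append((n + 1.0) * ll)  # size-biased weight
            # Sample a new assignment for observation i
            r = rng.random() * sum(weights)
            for k in range(K):
                r -= weights[k]
                if r <= 0:
                    z[i] = k
                    break
    return z
```

On well-separated data the sampler recovers the two groups regardless of the random initialization; the real model extends this idea to joint distributions over hand position and orientation shared across objects.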
Date of Conference: Nov. 29 – Dec. 1, 2007