Incorporation of the Intended Task into a Vision-based Grasp Type Predictor for Multi-fingered Robotic Grasping


Abstract:

Robots that make use of multi-fingered or fully anthropomorphic end-effectors can engage in highly complex manipulation tasks. However, the choice of a suitable grasp for manipulating an object is strongly influenced by factors such as the physical properties of the object and the intended task. This makes predicting an appropriate grasping pose for carrying out a concrete task notably challenging. At the same time, current grasp type predictors rarely consider the task as part of the prediction process. This work proposes a learning model that considers the task in addition to an object’s visual features for predicting a suitable grasp type. Furthermore, we generate a synthetic dataset by simulating robotic grasps with the BarrettHand end-effector on 3D object models. Achieving an angular similarity of 0.9 and above, our model produces prediction results competitive with those of grasp type predictors that do not consider the intended task when learning grasps. Finally, to foster research in the field, we make our synthesized dataset available to the research community.
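
The abstract does not define the angular similarity metric; as a minimal, hypothetical sketch (not the authors' implementation), angular similarity between a predicted and a ground-truth grasp configuration vector is commonly derived from their cosine similarity as follows:

```python
import numpy as np

def angular_similarity(predicted, target):
    """Angular similarity in [0, 1] between two grasp configuration vectors.

    1.0 means the vectors point in the same direction; 0.0 means they point
    in opposite directions.
    """
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    cos_sim = np.dot(predicted, target) / (
        np.linalg.norm(predicted) * np.linalg.norm(target)
    )
    # Clamp to the valid range to guard against floating-point drift.
    cos_sim = np.clip(cos_sim, -1.0, 1.0)
    return 1.0 - np.arccos(cos_sim) / np.pi

# Example: a prediction close to the target yields a similarity near 1.
print(angular_similarity([0.9, 0.1, 0.2], [1.0, 0.0, 0.2]))  # ~0.96
```

Under this common convention, the reported threshold of 0.9 would correspond to predicted grasp configurations that are closely aligned with the ground truth.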
Date of Conference: 26-30 August 2024
Date Added to IEEE Xplore: 30 October 2024
Conference Location: Pasadena, CA, USA

