Fixture-Aware DDQN for Generalized Environment-Enabled Grasping


Abstract:

This paper expands on the problem of grasping an object that can only be grasped by a single parallel gripper when a fixture (e.g., wall, heavy object) is harnessed. Preceding works that tackle this problem are limited in that the employed networks implicitly learn specific targets and fixtures to leverage. However, the notion of a usable fixture can vary across environments, at times without any outwardly noticeable differences. In this paper, we propose a method to relax this limitation and further handle environments where the fixture location is unknown. The problem is formulated as visual affordance learning in a partially observable setting. We present a self-supervised reinforcement learning algorithm, Fixture-Aware Double Deep Q-Network (FA-DDQN), that processes the scene observation to 1) identify the target object based on a reference image, 2) distinguish possible fixtures based on interaction with the environment, and finally 3) fuse the information to generate a visual affordance map to guide the robot to successful Slide-to-Wall grasps. We demonstrate our proposed solution in simulation and in real robot experiments to show that in addition to achieving higher success than baselines, it also performs zero-shot generalization to novel scenes with unseen object configurations.
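The abstract describes a three-stage pipeline: target identification from a reference image, fixture discovery through interaction, and fusion into a pixel-wise affordance map trained with a Double DQN objective. The sketch below is not the authors' code; it is a minimal illustration, under assumed conventions, of how a target mask and a fixture-confidence map might be fused with a grasp Q-map, together with the standard Double DQN bootstrap target. All function names (fuse_affordance, ddqn_target) and the weighting scheme are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's implementation).
import numpy as np

def fuse_affordance(q_map, target_mask, fixture_map, w_fixture=0.5):
    """Combine pixel-wise grasp Q-values with target and fixture cues.

    q_map:       (H, W) Q-values for a Slide-to-Wall grasp at each pixel.
    target_mask: (H, W) in {0, 1}, 1 where the reference object is matched.
    fixture_map: (H, W) in [0, 1], confidence that a usable fixture is nearby,
                 estimated from prior interaction with the environment.
    """
    return q_map * target_mask * (1.0 + w_fixture * fixture_map)

def ddqn_target(reward, q_next_online, q_next_target, gamma=0.99, done=False):
    """Double DQN bootstrap target: the online network selects the next action,
    the target network evaluates it (reduces overestimation vs. vanilla DQN)."""
    if done:
        return reward
    best_action = np.argmax(q_next_online)               # selection: online net
    return reward + gamma * q_next_target[best_action]   # evaluation: target net

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = W = 8
    q_map = rng.random((H, W))
    target_mask = np.zeros((H, W)); target_mask[2:5, 2:5] = 1.0
    fixture_map = rng.random((H, W))
    affordance = fuse_affordance(q_map, target_mask, fixture_map)
    best_pixel = np.unravel_index(np.argmax(affordance), affordance.shape)
    print("best grasp pixel:", best_pixel)
    print("ddqn target:", ddqn_target(1.0, rng.random(4), rng.random(4)))
```

The multiplicative fusion here simply restricts high affordance to pixels on the target object and boosts regions near likely fixtures; the paper's actual fusion and training details are given in the full text.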
Date of Conference: 23-27 October 2022
Date Added to IEEE Xplore: 26 December 2022
Conference Location: Kyoto, Japan
