
Efficient grasping from RGBD images: Learning using a new rectangle representation


Abstract:

Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration—its 3D location, 3D orientation and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects—ones not seen by the robot before. While these approaches use low-dimensional representations such as a ‘grasping point’ or a ‘pair of points’ that are perhaps easier to learn, they only partly represent the gripper configuration and hence are sub-optimal. We propose to learn a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane. It takes into account the location, the orientation as well as the gripper opening width. However, inference with such a representation is computationally expensive. In this work, we present a two step process in which the first step prunes the search space efficiently using certain features that are fast to compute. For the remaining few cases, the second step uses advanced features to accurately select a good grasp. In our extensive experiments, we show that our robot successfully uses our algorithm to pick up a variety of novel objects.
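The 'grasping rectangle' described above can be sketched as a small data structure: an oriented rectangle in the image plane whose orientation and one side length encode the gripper angle and opening width. The following is a minimal illustration, not code from the paper; all names and the exact parameterization are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRectangle:
    """Oriented rectangle in the image plane (illustrative parameterization).

    x, y   : center, in pixel coordinates
    theta  : in-plane orientation, radians
    width  : gripper opening width (distance between the gripper plates)
    height : extent along the gripper plates
    """
    x: float
    y: float
    theta: float
    width: float
    height: float

    def corners(self):
        """Return the four corner points, counter-clockwise,
        by rotating the axis-aligned half-extents by theta."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        hw, hh = self.width / 2.0, self.height / 2.0
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in [(-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)]]
```

Given an aligned depth map, such a rectangle (plus the depth values inside it) determines the remaining degrees of freedom of the 7-dimensional gripper configuration.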
Date of Conference: 09-13 May 2011
Date Added to IEEE Xplore: 18 August 2011
Conference Location: Shanghai, China

I. Introduction

In this paper, we consider the task of grasping novel objects, given an image and an aligned depth map of the object. Our goal is to estimate the gripper configuration (i.e., the 3D location, 3D orientation and gripper opening width) at the final pose, when the robot is about to close the gripper. Recently, several learning algorithms [1]–[3] have shown promise in handling incomplete and noisy data, variations in the environment, and grasping novel objects. It is not clear, however, what the output of such learning algorithms should be. In this paper we discuss this issue, propose a new representation for grasping, and present a fast and efficient learning algorithm for this representation.
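The two-step inference described in the abstract, pruning with cheap features and rescoring the survivors with expensive ones, can be outlined as follows. This is a generic sketch under assumed names; the actual features and scoring functions are those learned in the paper.

```python
def two_step_grasp_search(candidates, fast_score, slow_score, k=100):
    """Select a grasp from candidate rectangles in two steps.

    Step 1: rank every candidate with a cheap-to-compute score and
            keep only the top k (pruning the search space).
    Step 2: rescore just those k survivors with the expensive,
            more accurate score and return the best one.

    candidates : iterable of candidate grasp rectangles
    fast_score : cheap scoring function (step 1)
    slow_score : accurate but expensive scoring function (step 2)
    """
    survivors = sorted(candidates, key=fast_score, reverse=True)[:k]
    return max(survivors, key=slow_score)
```

Because the expensive features are evaluated on only k candidates instead of the full set of oriented rectangles, the overall cost is dominated by the cheap first pass.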

