Abstract:
In this work, we present a real-time, deep convolutional encoder-decoder neural network for open-loop robotic grasping using only depth image information. Our proposed U-Grasping fully convolutional neural network (UG-Net) predicts grasp quality and grasp pose pixel-wise. Predicting a grasp policy for every pixel from depth information alone overcomes the limitation of sampling discrete grasp candidates, which is computationally expensive. UG-Net improves grasp quality compared with other pixel-wise grasp learning methods, producing more robust grasping decisions within approximately 27 ms using roughly 370 MB of parameters (a lightweight, competitive version is also given). In physical experiments, we achieve 93.08% and 93.23% grasp success rates on a 3D-printed adversarial object benchmark set and a household object benchmark set, respectively.
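The pixel-wise formulation described above can be illustrated with a minimal sketch: once a network outputs per-pixel quality, angle, and width maps, the open-loop grasp is simply read off the highest-quality pixel. The function and map names below are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def select_grasp(quality, angle, width):
    """Pick the best grasp from pixel-wise prediction maps.

    quality, angle, width: HxW arrays, as produced by a pixel-wise
    grasp network (hypothetical names, for illustration only).
    Returns (row, col, grasp_angle, grasp_width) at the pixel with
    the highest predicted grasp quality.
    """
    # Flat argmax over the quality map, converted back to 2-D indices.
    idx = np.unravel_index(np.argmax(quality), quality.shape)
    return idx[0], idx[1], float(angle[idx]), float(width[idx])

# Toy example: a 4x4 scene with one confident grasp pixel.
quality = np.zeros((4, 4))
quality[2, 3] = 0.9
angle = np.full((4, 4), 0.5)   # radians, per pixel
width = np.full((4, 4), 0.04)  # metres, per pixel
print(select_grasp(quality, angle, width))  # → (2, 3, 0.5, 0.04)
```

Because every pixel already carries a full grasp hypothesis, this selection step is a single argmax rather than an evaluation loop over sampled candidates, which is what makes the approach fast enough for real-time use.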
Date of Conference: 04-09 August 2019
Date Added to IEEE Xplore: 23 March 2020