Learning Nonprehensile Dynamic Manipulation: Sim2real Vision-Based Policy With a Surgical Robot


Abstract:

Surgical tasks such as tissue retraction, tissue exposure, and needle suturing remain challenging in autonomous surgical robotics. One challenge in these tasks is nonprehensile manipulation, such as pushing tissue, pressing cloth, and needle threading. In this work, we isolate the problem of nonprehensile manipulation by implementing a vision-based reinforcement learning agent for rolling a block, a task that involves complex dynamic interactions, small-scale objects, and a narrow field of view. We train agents in simulation with a reward formulation that encourages efficient and safe learning, domain randomization that allows for robust sim2real transfer, and a recurrent memory layer that enables reasoning about randomized dynamics parameters. We successfully transfer our agents from simulation to the real robot and show robust execution of our vision-based policy with a 96.3% success rate. We analyze and discuss the success rate, trajectories, and recovery behaviours for various models that either use the recurrent memory layer or are trained in a difficult physics environment.
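The abstract refers to a recurrent memory layer that lets the vision-based policy reason about randomized dynamics parameters from its observation history. As an illustrative sketch only (the exact architecture is not reproduced on this page), such a policy might pair a small CNN encoder with an LSTM before the action head; every layer size, input resolution, and name below is an assumption, not the authors' specification.

```python
# Illustrative sketch (not the authors' published architecture): a vision-based
# recurrent policy in which an LSTM aggregates a history of frames, one way an
# agent can implicitly infer unobserved, randomized dynamics parameters.
# All sizes (3x84x84 input, 7-DoF action, 256 hidden units) are assumptions.
import torch
import torch.nn as nn


class RecurrentVisuomotorPolicy(nn.Module):
    def __init__(self, action_dim: int = 7, hidden_size: int = 256):
        super().__init__()
        # Small CNN encoder for the narrow-field-of-view camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 84, 84)).shape[1]
        # Recurrent memory layer: integrates past observations so behaviour can
        # adapt to the (hidden) dynamics of the current episode.
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.action_head = nn.Linear(hidden_size, action_dim)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        memory, hidden = self.lstm(feats, hidden)
        return self.action_head(memory), hidden
```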
Published in: IEEE Robotics and Automation Letters (Volume: 8, Issue: 10, October 2023)
Page(s): 6763 - 6770
Date of Publication: 05 September 2023


I. Introduction

Fine manipulation skills are a critical aspect of surgical robotics tasks such as needle insertion, knot tying, and tissue retraction [1]. In recent years, there has been great success in applying deep learning methods such as reinforcement learning (RL) to learn these complex autonomous behaviors. These advances can be attributed to the development of simulations for surgical robots such as the da Vinci Research Kit (dVRK) [2]. One common challenge in RL for robotics is that agents trained in simulation must be transferable to real robot scenarios. In our previous work [3], we created a novel simulation for the dVRK inside Unity3D and showed that a robust visuomotor policy could be trained efficiently in simulation and transferred to the real robot through our sim2real training pipeline using Domain Randomization.
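The sim2real pipeline referenced above trains a visuomotor policy in simulation with Domain Randomization before deploying it on the real dVRK. As a minimal sketch of the general technique only, assuming a hypothetical simulator interface (set_dynamics, set_visuals, reset, step) and a policy with an act method, per-episode randomization of physics and appearance might look like the following; none of the parameter names or ranges come from the paper, and the real pipeline would interface with the Unity3D simulation instead.

```python
# Minimal sketch of per-episode Domain Randomization. The env and policy
# interfaces are hypothetical; parameter names and ranges are illustrative,
# not taken from the paper.
import random

DYNAMICS_RANGES = {
    "block_mass_kg": (0.005, 0.020),
    "friction_coeff": (0.2, 1.0),
    "joint_damping": (0.01, 0.1),
}
VISUAL_RANGES = {
    "light_intensity": (0.5, 1.5),
    "camera_jitter_deg": (-2.0, 2.0),
}


def sample(ranges):
    """Draw one uniform sample per randomized parameter."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}


def randomized_episode(env, policy, max_steps=200):
    """Run one training episode with freshly randomized dynamics and visuals.

    `env` is assumed to expose set_dynamics/set_visuals/reset/step, and
    `policy.act` is assumed to return (action, new_recurrent_state).
    """
    env.set_dynamics(sample(DYNAMICS_RANGES))
    env.set_visuals(sample(VISUAL_RANGES))
    obs = env.reset()
    hidden = None  # recurrent state carried across the episode
    for _ in range(max_steps):
        action, hidden = policy.act(obs, hidden)
        obs, reward, done, _ = env.step(action)
        if done:
            break
```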
