Abstract:
Developing robust walking controllers for bipedal robots is a challenging endeavor. Traditional model-based locomotion controllers require simplifying assumptions and careful modelling; any small errors can result in unstable control. To address these challenges for bipedal locomotion, we present a model-free reinforcement learning framework for training robust locomotion policies in simulation, which can then be transferred to a real bipedal Cassie robot. To facilitate sim-to-real transfer, domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics. The learned policies enable Cassie to perform a set of diverse and dynamic behaviors, while also being more robust than traditional controllers and prior learning-based methods that use residual control. We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw. (Video 1)
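The domain randomization described in the abstract resamples simulator dynamics so a single policy must succeed across a distribution of models rather than one nominal model. A minimal sketch of that episode-level resampling loop is shown below; the parameter names, ranges, and the commented-out simulator hook are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical dynamics parameters for a simulated biped; the names and
# ranges here are illustrative assumptions, not the paper's actual setup.
def sample_dynamics():
    """Randomize simulator dynamics at the start of each training episode."""
    return {
        "ground_friction": random.uniform(0.5, 1.5),
        "joint_damping_scale": random.uniform(0.8, 1.2),
        "link_mass_scale": random.uniform(0.9, 1.1),
        "motor_torque_delay_steps": random.randint(0, 3),
    }

def train(num_episodes=3):
    """Skeleton training loop: resample dynamics each episode so the
    policy is optimized across the whole distribution of models."""
    histories = []
    for episode in range(num_episodes):
        dynamics = sample_dynamics()
        # env = make_sim(**dynamics)  # hypothetical simulator constructor
        # ...rollout and policy-gradient update would go here...
        histories.append(dynamics)
    return histories

if __name__ == "__main__":
    for d in train():
        print(d)
```

Because the randomized parameters change every episode, the learned policy cannot overfit to any single simulator configuration, which is what makes zero-shot transfer to the physical robot plausible.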
Date of Conference: 30 May 2021 - 05 June 2021
Date Added to IEEE Xplore: 18 October 2021
University of California, Berkeley, CA, USA