Abstract:
We present SLoMo: a first-of-its-kind framework for transferring skilled motions from casually captured “in-the-wild” video footage of humans and animals to legged robots. SLoMo works in three stages: 1) synthesize a physically plausible reconstructed key-point trajectory from monocular videos; 2) optimize a dynamically feasible reference trajectory for the robot offline that includes body and foot motion, as well as a contact sequence that closely tracks the key points; and 3) track the reference trajectory online using a general-purpose model-predictive controller on robot hardware. Traditional motion imitation for legged motor skills often requires expert animators, collaborative demonstrations, and/or expensive motion-capture equipment, all of which limit scalability. Instead, SLoMo only relies on easy-to-obtain videos, readily available in online repositories like YouTube. It converts videos into motion primitives that can be executed reliably by real-world robots. We demonstrate our approach by transferring the motions of cats, dogs, and humans to example robots including a quadruped (on hardware) and a humanoid (in simulation).
Published in: IEEE Robotics and Automation Letters (Volume: 8, Issue: 11, November 2023)
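To give a feel for how the three stages described in the abstract fit together, the sketch below lays the pipeline out as placeholder Python functions. Everything here is an illustrative assumption, not the authors' implementation: the "reconstruction" is a smooth random walk, the "optimizer" is a moving-average filter with a thresholded contact guess, and the "MPC" is a single proportional tracking step. All function names, data shapes, and parameters are hypothetical.

```python
import numpy as np

np.random.seed(0)  # reproducible toy data


def reconstruct_keypoints(num_frames: int, num_keypoints: int = 17) -> np.ndarray:
    """Stage 1 stand-in: a smooth random walk in place of the physically
    plausible key-point trajectory reconstructed from monocular video."""
    steps = 0.01 * np.random.randn(num_frames, num_keypoints, 3)
    return np.cumsum(steps, axis=0)


def optimize_reference(keypoints: np.ndarray) -> dict:
    """Stage 2 stand-in: low-pass filter the key points and threshold the
    height of (assumed) foot points to fake a contact sequence. The paper
    instead solves an offline trajectory optimization that yields a
    dynamically feasible body/foot motion and contact schedule."""
    num_frames = keypoints.shape[0]
    flat = keypoints.reshape(num_frames, -1)
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, flat
    ).reshape(keypoints.shape)
    contacts = smoothed[:, :4, 2] < 0.0  # assume the first 4 key points are feet
    return {"reference": smoothed, "contacts": contacts}


def mpc_step(state: np.ndarray, target: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Stage 3 stand-in: one proportional tracking step, in place of the
    general-purpose model-predictive controller run on robot hardware."""
    return state + gain * (target - state)


if __name__ == "__main__":
    traj = optimize_reference(reconstruct_keypoints(num_frames=100))
    state = traj["reference"][0].copy()
    for target in traj["reference"][1:]:
        state = mpc_step(state, target)
    err = np.linalg.norm(state - traj["reference"][-1])
    print(f"final tracking error: {err:.4f}")
```

The point of the sketch is the data flow, not the math: stage 1 produces key points from video, stage 2 converts them offline into a robot-feasible reference plus contact sequence, and stage 3 tracks that reference online, one control step at a time.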