Abstract:
Humans quickly solve tasks in novel systems with complex dynamics, without requiring much interaction. While deep reinforcement learning algorithms have achieved tremendous success in many complex tasks, these algorithms need a large number of samples to learn meaningful policies. In this letter, we present a task for navigating a marble to the center of a circular maze. While this system is very intuitive and easy for humans to solve, it can be very difficult and sample-inefficient for standard reinforcement learning algorithms to learn meaningful policies for. We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system. Learning consists of initializing a physics engine with parameters estimated using data from the real system. The error in the physics engine is then corrected using Gaussian process regression, which models the residual between real observations and physics engine simulations. The physics engine augmented with the residual model is then used to control the marble in the maze environment using model-predictive feedback over a receding horizon. To the best of our knowledge, this is the first time that a hybrid model consisting of a full physics engine along with a statistical function approximator has been used to control a complex physical system in real time using nonlinear model-predictive control (NMPC).
Published in: IEEE Robotics and Automation Letters (Volume 6, Issue 2, April 2021)
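
The abstract describes a three-part pipeline: a physics engine identified from real data, a Gaussian process fit to the residual between real observations and simulator predictions, and receding-horizon control on top of the combined model. The sketch below illustrates that structure under stated assumptions: `physics_engine_step` is a toy linear stand-in for the identified engine, and a simple random-shooting optimizer stands in for the paper's NMPC solver. All names, signatures, and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def physics_engine_step(state, action):
    # Toy stand-in for the identified physics engine (the paper uses a full
    # engine with estimated parameters); linear dynamics keep the sketch runnable.
    A = 0.95 * np.eye(len(state))
    B = 0.1 * np.ones((len(state), len(action)))
    return A @ state + B @ action

def fit_residual_gp(states, actions, next_states):
    # Fit a GP to the residual between observed transitions and the simulator,
    # i.e. x_{t+1} - f_sim(x_t, u_t), with (x_t, u_t) as the GP input.
    X = np.hstack([states, actions])
    sim_next = np.array([physics_engine_step(s, a)
                         for s, a in zip(states, actions)])
    Y = next_states - sim_next
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X, Y)
    return gp

def hybrid_step(gp, state, action):
    # Hybrid model: simulator prediction corrected by the GP residual.
    x = np.hstack([state, action]).reshape(1, -1)
    return physics_engine_step(state, action) + gp.predict(x)[0]

def mpc_action(gp, state, goal, horizon=10, n_samples=256, action_dim=2):
    # Receding-horizon control: roll candidate action sequences through the
    # hybrid model, score them with a quadratic cost, apply the first action
    # of the best sequence. (Random shooting, not the paper's NMPC solver.)
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        seq = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, cost = state, 0.0
        for a in seq:
            s = hybrid_step(gp, s, a)
            cost += np.sum((s - goal) ** 2)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]
```

In the letter itself the residual-corrected model feeds a true nonlinear MPC solver; the random-shooting loop above is only a self-contained approximation of that receding-horizon feedback.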
IEEE Keywords / Index Terms:
- Physical Problem Solving
- Complex Systems
- Complex Environment
- Physical System
- Gaussian Process
- Kriging
- Model Predictive Control
- Nonlinear Control
- Deep Reinforcement Learning
- Physics Engine
- Deep Reinforcement Learning Algorithm
- Nonlinear Model Predictive Control
- Dynamic Model
- Process Model
- Cost Function
- Weight Matrix
- Control Problem
- Simulation Environment
- Inverse Model
- Angular Position
- Gaussian Process Model
- Outermost Ring
- Wide Range Of Systems
- Model-based Reinforcement Learning
- Trajectory Optimization
- Physical Use
- Static Friction
- Model-based Control
- Human Learning
- Servo Motor