This paper presents a nonlinear controller based on a trajectory library. To generate the library, we combine two trajectory optimization methods: a parametric trajectory optimization method that finds coarse initial trajectories, and Differential Dynamic Programming (DDP), which refines these trajectories and produces linear local models of the optimal control laws. To construct a controller from these local models, we maintain consistency between adjacent trajectories. To keep the resulting library a reasonable size while still satisfying performance requirements, trajectories are added to the library based on the controller's predicted performance. The controller is applied to standing balance control of humanoid robots, explicitly handling pushes. Most previous work assumes that pushes are impulsive; the proposed controller also handles continuous pushes that vary over time. We compare our approach with a gain-scheduled Linear Quadratic Regulator (LQR) controller that uses the same optimization criterion. The effectiveness of the proposed method is demonstrated through simulations and experiments.
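As a point of reference for the LQR baseline mentioned above, the sketch below computes a steady-state LQR gain for a hypothetical scalar discrete-time system by iterating the Riccati recursion to a fixed point. This is purely illustrative: the system (a, b) and weights (q, r) are invented for the example, and the paper's humanoid balance model is far higher-dimensional.

```python
# Minimal scalar discrete-time LQR sketch (hypothetical 1-D system,
# not the paper's humanoid model).
# Dynamics: x[k+1] = a*x[k] + b*u[k]
# Cost:     sum_k q*x[k]^2 + r*u[k]^2

def lqr_gain_1d(a, b, q, r, iters=500):
    """Return the steady-state feedback gain k (u = -k*x) by iterating
    the discrete-time Riccati recursion until it converges."""
    p = q
    for _ in range(iters):
        # Scalar discrete algebraic Riccati update
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    # Optimal gain at the fixed point
    return a * b * p / (r + b * b * p)

if __name__ == "__main__":
    k = lqr_gain_1d(a=1.1, b=0.5, q=1.0, r=0.1)
    closed_loop = 1.1 - 0.5 * k   # closed-loop pole a - b*k
    print(abs(closed_loop) < 1.0)  # stabilized: pole inside the unit circle
```

A gain-scheduled version, as in the baseline, would compute such gains at many operating points and interpolate between them; the trajectory-library approach instead stores DDP's local linear control laws along optimized trajectories.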