In this paper we discuss two online algorithms based on policy iteration for learning the continuous-time (CT) optimal control solution for nonlinear systems with infinite-horizon quadratic cost. For the first time we present an online adaptive algorithm, implemented on an actor/critic structure, that involves synchronous continuous-time adaptation of both actor and critic neural networks. This is a version of generalized policy iteration for CT systems. Convergence of the new algorithm to the optimal controller is proven, and stability of the system is guaranteed. The characteristics and requirements of the new online learning algorithm are discussed in relation to the standard online policy iteration algorithm for CT systems which we have previously developed. The latter solves the optimal control problem by performing sequential updates on the actor and critic networks, i.e. while one is learning the other is held constant. In contrast, the new algorithm relies on simultaneous adaptation of both networks. To support the new theoretical result, a simulation example is presented.
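To make the contrast with sequential policy iteration concrete, the synchronous idea can be sketched on a scalar linear-quadratic problem, where the optimal value function is known in closed form from the Riccati equation. Everything below is an illustrative assumption rather than the paper's construction: the system ẋ = ax + bu, the quadratic critic V(x) = w_c x², the linear actor u = −k_a x, the learning rates, and the state-reset schedule (used here as a simple stand-in for a persistence-of-excitation condition). The key point it illustrates is that the critic weight and the actor gain are both tuned at every integration step, rather than in alternating phases.

```python
import math

# Illustrative scalar plant: xdot = a*x + b*u, cost = integral of q*x^2 + r*u^2.
# Critic V(x) = wc*x^2 and actor u = -ka*x; for a = -1, b = q = r = 1 the
# Riccati equation q + 2*a*p - (b**2/r)*p**2 = 0 gives p = sqrt(2) - 1,
# so the optimal gain is k* = b*p/r = sqrt(2) - 1.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
alpha, beta = 5.0, 10.0            # critic / actor learning rates (assumed tuning)
dt, steps_per_episode = 1e-3, 1000

wc, ka = 0.0, 0.0                  # critic weight, actor gain
resets = [2.0, -1.5, 1.0, -2.0]    # periodic state resets keep the state exciting

for episode in range(40):
    x = resets[episode % len(resets)]
    for _ in range(steps_per_episode):
        u = -ka * x                              # current actor policy
        xdot = a * x + b * u                     # state derivative under that policy
        sigma = 2.0 * x * xdot                   # d(Bellman error)/d(wc)
        delta = q * x**2 + r * u**2 + wc * sigma # CT Bellman (Hamiltonian) error
        # Synchronous step: both networks adapt in the same integration step.
        wc -= dt * alpha * delta * sigma / (1.0 + sigma**2)  # normalized gradient
        ka += dt * beta * (b * wc / r - ka)      # actor tracks the critic's implied gain
        x += dt * xdot                           # Euler integration of the plant

p_star = math.sqrt(2.0) - 1.0
print(f"learned gain {ka:.4f} vs optimal {p_star:.4f}")
```

At the zero of the Bellman error the critic weight satisfies the same quadratic as the Riccati solution, so the learned gain approaches √2 − 1 ≈ 0.4142. A sequential (standard PI) variant would instead freeze `ka` while `wc` converges, then update `ka` in a discrete policy-improvement step.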