In this paper, a novel online reinforcement-learning neural network (NN)-based optimal output feedback controller, referred to as an adaptive critic controller, is proposed for affine nonlinear discrete-time systems to deliver a desired tracking performance. The adaptive critic design consists of three entities: an observer that estimates the system states, an action network that produces the optimal control input, and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, which is based on the standard Bellman equation. Using the Lyapunov approach, the uniform ultimate boundedness (UUB) of the estimation errors, tracking errors, and weight estimates is demonstrated. The effectiveness of the controller is evaluated for the task of nanomanipulation in a simulation environment.
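To illustrate the actor-critic structure the abstract describes, the following is a minimal sketch of one adaptive-critic training loop for a scalar discrete-time affine system x_{k+1} = f(x_k) + g(x_k) u_k. The critic reduces the Bellman (temporal-difference) error of its cost-to-go estimate, and the action network descends that estimated cost with respect to its control input. The specific dynamics `f` and `g`, the linear-in-parameters basis functions, the quadratic stage cost, and all gains are illustrative assumptions, not the paper's design (which also includes a state observer and full-state NNs):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Assumed nonlinear drift term (illustrative only)."""
    return 0.8 * np.tanh(x)

def g(x):
    """Assumed input-gain term (illustrative only)."""
    return 1.0

Q, R, gamma = 1.0, 0.1, 0.95          # assumed stage-cost weights and discount

Wc = rng.normal(scale=0.1, size=3)    # critic weights (cost-to-go approximator)
Wa = rng.normal(scale=0.1, size=2)    # action-network weights

def phi_c(x):                          # critic basis: quadratic-in-state
    return np.array([x * x, x, 1.0])

def phi_a(x):                          # actor basis
    return np.array([x, np.tanh(x)])

def dphi_c_dx(x):                      # gradient of the critic basis
    return np.array([2.0 * x, 1.0, 0.0])

def train_step(x, alpha_c=0.02, alpha_a=0.01):
    """One combined critic/actor update along the system trajectory."""
    global Wc, Wa
    u = float(Wa @ phi_a(x))                       # action-network output
    x_next = f(x) + g(x) * u                       # affine system rollout
    r = Q * x * x + R * u * u                      # stage cost (regulate to 0)
    # Bellman error of the cost-to-go estimate J(x) ~ Wc @ phi_c(x)
    delta = r + gamma * float(Wc @ phi_c(x_next)) - float(Wc @ phi_c(x))
    Wc += alpha_c * delta * phi_c(x)               # critic: shrink Bellman error
    # actor: gradient of estimated cost-to-go w.r.t. the control input
    dJ_du = 2.0 * R * u + gamma * float(Wc @ dphi_c_dx(x_next)) * g(x)
    Wa -= alpha_a * dJ_du * phi_a(x)
    return x_next

x = 1.0
for _ in range(300):
    x = train_step(x)
    x = float(np.clip(x, -5.0, 5.0))  # keep the toy rollout bounded
```

In this sketch the critic and actor adapt online from a single trajectory, mirroring the paper's online (rather than offline/iterative) training setting; the observer that supplies state estimates for output feedback is omitted for brevity.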