Motion controllers capable of incremental learning and optimization can automatically tune their parameters to pursue optimal control. By combining reinforcement learning with approximate dynamic programming, an adaptive critic motion controller is shown to achieve this objective. Both the control policy and the adaptive critic are implemented as sparse radial basis function networks, and the update rules for the policy and the critic are derived. The ability and performance of the adaptive critic motion controller are demonstrated through control of a rotary inverted pendulum system.
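As a rough illustration of the ingredients the abstract names (a critic and a policy, each linear in radial basis function features, updated from a temporal-difference signal), the following minimal Python sketch regulates a toy first-order plant. The plant dynamics, RBF centers, learning rates, and the specific actor update are hypothetical placeholders, not the paper's actual controller or pendulum model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RBF feature map over a scalar state (centers/width are illustrative).
centers = np.linspace(-1.0, 1.0, 9)
width = 0.25

def phi(x):
    """Gaussian radial basis features for state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Critic V(x) = w @ phi(x) and policy u(x) = v @ phi(x), both linear in features.
w = np.zeros_like(centers)   # critic weights
v = np.zeros_like(centers)   # policy weights

gamma, alpha_c, alpha_a = 0.95, 0.1, 0.02  # discount and learning rates (illustrative)

# Toy stable plant: x_{k+1} = 0.9 x_k + 0.1 u_k; reward penalizes state and effort.
x = 1.0
for step in range(2000):
    f = phi(x)
    u = float(v @ f) + 0.1 * rng.standard_normal()      # exploratory action
    x_next = 0.9 * x + 0.1 * u
    r = -(x_next ** 2 + 0.01 * u ** 2)                  # reward = negative quadratic cost
    # Temporal-difference error drives both updates.
    delta = r + gamma * float(w @ phi(x_next)) - float(w @ f)
    w += alpha_c * delta * f                             # critic: TD(0) on the value weights
    v += alpha_a * delta * (u - float(v @ f)) * f        # actor: move toward better-than-expected actions
    x = x_next
```

Here the critic's TD error serves as the learning signal for the policy as well, which is the basic adaptive critic structure; the paper's derived update rules for the sparse RBF networks and the rotary inverted pendulum dynamics would replace these placeholders.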