An adaptive actor-critic algorithm is proposed under the assumption that a predictive model is available and that only the measurement at time k is used to update the learning algorithms. The two value functions are realized as purely static mappings, because they reduce to nonlinear current estimators; these can easily be constructed from any artificial neural network (NN) with sigmoidal or radial basis functions (RBFs), provided that all inputs to the value functions are based on simulated experiences generated by the predictive model. Furthermore, when the predictive model is used to construct a model-based actor (MBA) within the adaptive actor-critic framework, the MBA can be viewed as a network whose connection weights are the elements of the feedback gain matrix, so that temporal-difference (TD) learning can also be applied naturally to update the actor's weights. Because the present method updates the learning using only the single measurement at time k, relatively fast learning is expected compared with the previous approach, which needs two measurements, at times k and k + 1, to update the actor-critic networks. The effectiveness of the proposed approach is illustrated by simulating a trajectory-tracking control problem for a nonholonomic mobile robot.
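The one-measurement update scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear predictive model, quadratic stage cost, RBF critic, exploration noise, and all learning rates are assumptions made for the sketch. The key points it mirrors are that the next-state value is obtained from a model prediction rather than a second measurement, and that the actor's weights are the entries of a feedback gain matrix K updated by the same TD error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear predictive model x_{k+1} = A x_k + B u_k (illustrative only).
A = np.array([[0.9, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])

gamma   = 0.95   # discount factor (assumed)
alpha_c = 0.05   # critic learning rate (assumed)
alpha_a = 0.01   # actor learning rate (assumed)

# RBF critic: V(x) = w^T phi(x), Gaussian features with fixed random centers.
centers = rng.uniform(-1.0, 1.0, size=(16, 2))
width = 0.5

def phi(x):
    d = centers - x.ravel()
    return np.exp(-np.sum(d * d, axis=1) / (2.0 * width**2))

def cost(x, u):
    # Quadratic stage cost; the critic learns the discounted cost-to-go.
    return (x.T @ x).item() + 0.1 * (u.T @ u).item()

w = np.zeros(16)        # critic weights
K = np.zeros((1, 2))    # actor weights = feedback gain matrix, u = -K x

x = np.array([[1.0], [0.0]])
for k in range(2000):
    noise = 0.1 * rng.standard_normal((1, 1))
    u = -K @ x + noise                 # model-based actor with exploration
    x_pred = A @ x + B @ u             # simulated experience from the model:
                                       # no measurement at time k+1 is needed.
    # TD error built from the single measurement x_k and the model prediction.
    delta = cost(x, u) + gamma * (w @ phi(x_pred)) - (w @ phi(x))
    w += alpha_c * delta * phi(x)      # critic TD update
    # Actor TD update: exploratory noise that raised the predicted cost
    # shifts the mean action away from the sampled direction.
    K += alpha_a * delta * (noise @ x.T)
    x = x_pred                         # for this demo the model is also the plant
```

In the paper's setting the critic is a "current estimator" because every input it needs at time k, including the successor state, is available from the measurement and the predictive model; here that corresponds to `phi(x_pred)` being computed without waiting for the measurement at k + 1.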