In this paper, we examine the computational aspects of a certain class of discrete-time optimal control problems. We propose and analyze two partial conjugate gradient algorithms which operate in cycles of n + 1 conjugate gradient steps (n = state space dimension). The algorithms are motivated by the special form of the Hessian matrix of the cost functional. The first algorithm exhibits a linear convergence rate and offers some advantages over steepest descent in certain cases, such as when the system is unstable. The second algorithm requires second-order information with respect to the control variables at the beginning of each cycle and exhibits an (n + 1)-step superlinear convergence rate. Furthermore, it solves a linear-quadratic problem in n + 1 steps, as compared with the mN steps (m = control space dimension, N = number of stages) required by the ordinary conjugate gradient method.
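The cycle structure described above can be illustrated by a conjugate gradient method that restarts (resets to the steepest descent direction) after a fixed number of steps, applied to a quadratic cost. The sketch below is a generic restarted CG with a Fletcher-Reeves update, given only as an illustration of the cycling idea; the function name `partial_cg`, the `cycle_len` parameter, and the specific update rule are assumptions for this example, not the paper's exact algorithms:

```python
import numpy as np

def partial_cg(A, b, x0, cycle_len, n_cycles, tol=1e-10):
    """Minimize 0.5 * x^T A x - b^T x (A symmetric positive definite)
    by conjugate gradient steps, restarting every `cycle_len` steps.
    A hypothetical sketch of CG run in fixed-length cycles."""
    x = x0.astype(float)
    for _ in range(n_cycles):
        r = b - A @ x      # residual = negative gradient of the quadratic
        d = r.copy()       # each cycle starts from the steepest descent direction
        for _ in range(cycle_len):
            rr = r @ r
            if rr < tol:   # gradient small enough: done
                return x
            Ad = A @ d
            alpha = rr / (d @ Ad)    # exact line search on the quadratic
            x = x + alpha * d
            r = r - alpha * Ad
            beta = (r @ r) / rr      # Fletcher-Reeves coefficient
            d = r + beta * d         # new conjugate direction
    return x
```

On an exact quadratic, one full cycle of steps equal to the problem dimension reproduces standard CG's finite termination; shorter cycles trade that exactness for the cheaper per-cycle structure the paper exploits.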