There are two procedures for applying the method of conjugate gradients to the problem of minimizing a convex nonlinear function: the “continued” method, in which the search directions are carried forward from one iteration to the next, and the “restarted” method, in which all the data except the best previous point are discarded and the procedure is begun anew from that point. It is demonstrated by example that, in the absence of the standard starting condition (that the initial search direction be the direction of steepest descent), the continued conjugate gradient method applied to a quadratic function converges to the solution no better than linearly. Furthermore, it is shown that for a general nonlinear function, the continued (nonrestarted) conjugate gradient method converges no worse than linearly.
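To make the distinction concrete, the following minimal sketch (not from the paper) shows nonlinear conjugate gradients in Python, assuming a Fletcher–Reeves update and a simple backtracking line search; the hypothetical `restart_every` parameter selects the continued method (`None`) or the restarted method.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, restart_every=None, tol=1e-8, max_iter=1000):
    """Nonlinear conjugate gradients (Fletcher-Reeves sketch).

    restart_every=None -> "continued" method: directions carried forward.
    restart_every=n    -> "restarted" method: all history except the current
                          (best) point is discarded every n steps.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # standard starting condition: initial direction is steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g  # safeguard: fall back to steepest descent if d is not a descent direction
        # Inexact backtracking (Armijo) line search; analyses of this kind
        # typically assume exact line searches instead.
        alpha, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        if restart_every is not None and (k + 1) % restart_every == 0:
            d = -g_new  # restart: discard all conjugacy information
        else:
            beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
            d = -g_new + beta * d               # continued direction
        g = g_new
    return x
```

With exact line searches and the standard start d₀ = −g₀, the continued method terminates on an n-variable convex quadratic in at most n steps; the example referred to in the abstract concerns what happens when that starting condition is violated.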