Finite difference methods are used to compute the optimal control of distributed parameter systems. The control is assumed to be intrinsic to the partial differential equation (PDE) and continuous, so that the calculus of variations can be used to obtain the control law. Several important principles are developed for formulating the difference approximations to the partial differential equations that describe the system and the control law. An iterative method of solution is applied to these two equations, and convergence of the iteration is assured by stability considerations for the finite difference expressions.
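The scheme described above can be sketched on a model problem. The following is a minimal illustration, not the system treated in the paper: distributed control of the 1-D heat equation u_t = u_xx + f on (0,1) with zero boundary values, minimizing J = ∫∫(u² + αf²) dx dt. The grid sizes, the penalty weight α, and the under-relaxation factor are all assumptions chosen for demonstration. The calculus of variations yields an adjoint equation and the control law f = -p/(2α); state and adjoint are discretized with explicit finite differences and solved iteratively, with the time step held below the stability limit dt ≤ dx²/2 so that both sweeps, and hence the iteration, remain stable.

```python
import numpy as np

# Illustrative sketch (assumed model problem, not the paper's system):
#   state:    u_t = u_xx + f,        u(0,x) = sin(pi*x),  u = 0 on the boundary
#   cost:     J   = integral of (u^2 + alpha*f^2)
#   adjoint: -p_t = p_xx + 2u,       p(T,x) = 0  (from the calculus of variations)
#   control law:  f = -p / (2*alpha)

nx, nt = 21, 100                   # grid sizes (assumed, for illustration)
dx, dt = 1.0 / (nx - 1), 0.001     # dt = 0.001 <= dx^2/2 = 0.00125: stable
alpha = 0.1                        # control-penalty weight (assumed)
x = np.linspace(0.0, 1.0, nx)
u0 = np.sin(np.pi * x)             # initial profile (assumed)

def lap(v):
    """Second-difference approximation of v_xx; boundary rows stay zero."""
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

def solve_state(f):
    """March u_t = u_xx + f forward in time with explicit differences."""
    u = np.zeros((nt + 1, nx))
    u[0] = u0
    for n in range(nt):
        u[n + 1] = u[n] + dt * (lap(u[n]) + f[n])
        u[n + 1, 0] = u[n + 1, -1] = 0.0
    return u

def solve_adjoint(u):
    """March -p_t = p_xx + 2u backward in time from p(T) = 0."""
    p = np.zeros((nt + 1, nx))
    for n in range(nt, 0, -1):
        p[n - 1] = p[n] + dt * (lap(p[n]) + 2.0 * u[n])
        p[n - 1, 0] = p[n - 1, -1] = 0.0
    return p

def cost(u, f):
    """Crude rectangle-rule quadrature of the cost functional."""
    return float(np.sum(u**2 + alpha * f**2) * dx * dt)

# Iterate between the state equation and the control law; under-relaxation
# (factor 0.5, assumed) keeps the fixed-point iteration contractive.
f = np.zeros((nt + 1, nx))
for it in range(50):
    u = solve_state(f)
    p = solve_adjoint(u)
    f_new = -p / (2.0 * alpha)
    if np.max(np.abs(f_new - f)) < 1e-8:
        f = f_new
        break
    f = 0.5 * f + 0.5 * f_new

J0 = cost(solve_state(np.zeros_like(f)), np.zeros_like(f))
Jstar = cost(solve_state(f), f)
print(f"cost without control: {J0:.4f}, with computed control: {Jstar:.4f}")
```

The explicit scheme ties the whole construction together: the same stability bound dt ≤ dx²/2 that keeps the forward state sweep and the backward adjoint sweep well-behaved is what lets the alternating iteration settle to a fixed point.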