Abstract:
In the dynamic programming approach to deterministic optimal control, we attempt to characterize the cost-to-go function V(t, x) as a solution to the Hamilton-Jacobi-Bellman equation. It is commonly held that the Pontryagin Maximum Principle and Dynamic Programming are related according to the equation p(t) = V_x(t, x(t)), where p(.) is the costate variable and x(.) is the optimal trajectory under consideration. However, this relationship has previously been established only under very restrictive hypotheses. We present recent results establishing the relationship, now expressed in terms of a generalized gradient of V(., .), for a very large class of nonsmooth problems with endpoint constraints.
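As a hedged illustration of the smooth case that the paper generalizes: on a scalar linear-quadratic problem (chosen here for illustration; it is not taken from the paper), the relation p(t) = V_x(t, x(t)) can be checked numerically. For the problem of minimizing the integral of x^2 + u^2 over [t, T] subject to x' = u, the HJB equation yields V(t, x) = k(t) x^2 with k' = k^2 - 1, k(T) = 0, i.e. k(t) = tanh(T - t), while the Maximum Principle gives the Hamiltonian system x' = -p/2, p' = -2x with p(T) = 0.

```python
import math

# Illustrative sketch (assumed example, not the paper's): verify
# p(t) = V_x(t, x(t)) along the optimal trajectory of a scalar LQR problem.
# Minimize ∫ (x^2 + u^2) ds subject to x' = u on [0, T].
# HJB solution: V(t, x) = k(t) x^2 with k(t) = tanh(T - t), so V_x = 2 k(t) x.
# PMP system:   x' = -p/2,  p' = -2x,  p(T) = 0.

T = 1.0
x0 = 1.0

def k(t):
    """Riccati solution: V(t, x) = k(t) * x**2, hence V_x = 2 * k(t) * x."""
    return math.tanh(T - t)

def rhs(t, x, p):
    # Hamiltonian system from the Maximum Principle (optimal u* = -p/2).
    return -p / 2.0, -2.0 * x

def integrate(n_steps=20000):
    """RK4 integration of the PMP system, seeded with p(0) = V_x(0, x0)."""
    h = T / n_steps
    t, x, p = 0.0, x0, 2.0 * k(0.0) * x0
    max_gap = 0.0
    for _ in range(n_steps):
        k1x, k1p = rhs(t, x, p)
        k2x, k2p = rhs(t + h/2, x + h/2*k1x, p + h/2*k1p)
        k3x, k3p = rhs(t + h/2, x + h/2*k2x, p + h/2*k2p)
        k4x, k4p = rhs(t + h, x + h*k3x, p + h*k3p)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        p += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        t += h
        # Along the optimal trajectory, p(t) should track V_x = 2 k(t) x(t).
        max_gap = max(max_gap, abs(p - 2.0 * k(t) * x))
    return p, max_gap

p_T, gap = integrate()
print(p_T, gap)  # p(T) ≈ 0 and the gap stays near machine precision
```

This check only works because V is smooth here; the paper's contribution is precisely that for nonsmooth problems with endpoint constraints the relation must be restated with a generalized gradient of V in place of V_x.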
Published in: 1986 25th IEEE Conference on Decision and Control
Date of Conference: 10-12 December 1986
Date Added to IEEE Xplore: 02 April 2007