Solving Finite-Horizon HJB for Optimal Control of Continuous-Time Systems | IEEE Conference Publication | IEEE Xplore

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is a necessary and sufficient condition for the continuous-time optimal control problem (OCP). Unlike the infinite-horizon HJB equation, the finite-horizon HJB equation contains a time-dependent value function, whose partial derivative with respect to time is an intractable unknown term. Our study finds that this partial derivative exactly equals the terminal-time utility function, by analyzing the initial-time equivalence between the fixed-time-horizon OCP and the fixed-terminal-time OCP. We also provide an alternative proof based on the definition of the partial derivative. This finding allows the traditional approximate dynamic programming (ADP) algorithm to be reused to approximate the optimal policy with a parameterized function such as a neural network, thereby solving the continuous-time finite-horizon OCP. The correctness of our finding is verified on a linear quadratic problem.
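As a minimal illustration of the time-dependent value function the abstract refers to (this sketch is not the paper's algorithm, and the function name `riccati_backward` is our own): in the scalar finite-horizon linear quadratic case, the value function takes the form V(x, t) = P(t) x², where P(t) solves a differential Riccati equation integrated backward from the terminal time.

```python
# Hypothetical sketch, not from the paper: scalar finite-horizon LQ problem.
# Dynamics:  dx/dt = a*x + b*u
# Cost:      J = integral_t^T (q*x^2 + r*u^2) ds + p_T * x(T)^2
# The value function V(x, t) = P(t) * x^2 is time-dependent; P(t) solves the
# differential Riccati equation
#     -dP/dt = 2*a*P + q - (b^2 / r) * P^2,   with P(T) = p_T.

def riccati_backward(a, b, q, r, p_T, T, n_steps=10000):
    """Integrate the Riccati ODE backward from t = T using explicit Euler.

    Returns a list where entry k is P evaluated at t = T - k*dt,
    so the last entry approximates P(0).
    """
    dt = T / n_steps
    P = p_T
    trajectory = [P]
    for _ in range(n_steps):
        dP_dt = -(2 * a * P + q - (b * b / r) * P * P)  # forward-time derivative
        P = P - dt * dP_dt  # step backward in time
        trajectory.append(P)
    return trajectory

# Example: a = 0, b = 1, q = 1, r = 1, p_T = 0 gives the closed form
# P(t) = tanh(T - t), so P(0) is approximately tanh(T).
P_hist = riccati_backward(a=0.0, b=1.0, q=1.0, r=1.0, p_T=0.0, T=1.0)
print(P_hist[-1])  # close to tanh(1) ~ 0.7616
```

This backward integration makes concrete why the finite-horizon value function, unlike the infinite-horizon one, carries an explicit time argument: P varies along [0, T] instead of settling to a constant solution of an algebraic Riccati equation.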
Date of Conference: 08-10 January 2021
Date Added to IEEE Xplore: 10 February 2021
Conference Location: Shanghai, China

