This paper presents a speed-gradient-based inverse optimal control approach for the asymptotic stabilization of discrete-time nonlinear systems. With the proposed solution, solving the associated Hamilton-Jacobi-Bellman equation is avoided, and a meaningful cost function is minimized. The proposed stabilizing optimal controller uses the speed-gradient algorithm and is based on what is called a discrete-time control Lyapunov function. This combined approach is referred to as speed-gradient inverse optimal control. An example illustrates the methodology, and several simulations are provided.
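To make the idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of inverse optimal control for a discrete-time nonlinear system in affine form x_{k+1} = f(x_k) + g(x_k) u_k. The system dynamics, the quadratic candidate control Lyapunov function V(x) = 0.5 xᵀPx, and the control-weighting matrix R are all hypothetical choices for illustration; in a full speed-gradient scheme, the CLF parameters would be adapted online rather than fixed.

```python
import numpy as np

# Hypothetical discrete-time nonlinear system x_{k+1} = f(x_k) + g(x_k) u_k
def f(x):
    return np.array([0.9 * x[0] + 0.1 * x[1] ** 2,
                     0.8 * x[1]])

def g(x):
    return np.array([[0.0], [1.0]])

# Quadratic candidate control Lyapunov function V(x) = 0.5 x^T P x
# (P and R are illustrative fixed choices, not taken from the paper)
P = np.diag([1.0, 1.0])
R = np.array([[1.0]])  # control-weighting matrix of the inverse cost

def control(x):
    # State feedback derived from the candidate CLF:
    #   u = -0.5 (R + 0.5 g^T P g)^{-1} g^T P f(x)
    gx = g(x)
    M = R + 0.5 * gx.T @ P @ gx
    return -0.5 * np.linalg.solve(M, gx.T @ P @ f(x))

# Closed-loop simulation from a nonzero initial state
x = np.array([1.0, -0.5])
for k in range(50):
    u = control(x)
    x = f(x) + g(x) @ u

print(np.linalg.norm(x))  # state norm after 50 steps; decays toward zero
```

The controller is "inverse optimal" in the sense that, rather than solving the Hamilton-Jacobi-Bellman equation for a prescribed cost, a stabilizing CLF is posited first and the cost that the resulting feedback minimizes is recovered afterwards.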