The authors present an approach to input-to-state stabilizing control of dynamic neural networks that extends existing results in the literature to a wider class of systems. The methodology is developed using the Lyapunov technique, inverse optimality, and the Hamilton-Jacobi-Bellman equation. Depending on the dimensions of the state and input, two inverse optimal feedback laws are constructed to achieve input-to-state stabilization of this wider class of dynamic neural network systems. With the help of Sontag's formula, one of the two control laws is developed from the construction of a scalar function, which eliminates the restriction that the numbers of states and inputs be equal. In addition, the proposed designs achieve global asymptotic stability and global inverse optimality with respect to a meaningful cost functional. Numerical examples demonstrate the performance of the approach.
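To illustrate the role of Sontag's formula mentioned in the abstract, the following is a minimal sketch, not the paper's actual neural-network design: Sontag's universal formula applied to a hypothetical scalar control-affine system $\dot{x} = f(x) + g(x)u$ with a quadratic control Lyapunov function. The toy dynamics, the CLF, and all numerical values are illustrative assumptions.

```python
import math

def sontag_control(a, b):
    """Sontag's universal formula (scalar-input case).

    For xdot = f(x) + g(x) u with control Lyapunov function V:
        a = L_f V(x)  (Lie derivative of V along f)
        b = L_g V(x)  (Lie derivative of V along g)
    Returns the stabilizing feedback u = k(x).
    """
    if abs(b) < 1e-12:
        return 0.0
    return -(a + math.sqrt(a**2 + b**4)) / b

# Toy system (an illustrative assumption, not the paper's model):
# xdot = x + u, with CLF V(x) = x^2 / 2, so a = x^2 and b = x.
x = 2.0
dt = 1e-3
for _ in range(10000):          # simulate 10 s with forward Euler
    a = x * x                   # L_f V = x * f(x) = x^2
    b = x                       # L_g V = x * g(x) = x
    u = sontag_control(a, b)    # here u = -(1 + sqrt(2)) * x
    x += (x + u) * dt           # closed loop: xdot = -sqrt(2) * x

# The state is driven to a small neighborhood of the origin.
```

For this toy system the closed loop is $\dot{x} = -\sqrt{2}\,x$, so the state decays exponentially; the formula guarantees continuity of the feedback away from the origin whenever $V$ satisfies the small control property.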