We propose a framework of robust approximate dynamic programming (robust-ADP), which is aimed at computing globally asymptotically stabilizing, suboptimal control laws with robustness to dynamic uncertainties, via online/offline learning. The system studied in this paper is an interconnection of a linear model with fully measurable state but unknown dynamics, and a nonlinear system with unmeasured state and unknown system order and dynamics. Unlike previous ADP schemes in the literature, the robust-ADP framework allows for learning from an unknown environment in the presence of dynamic uncertainties. The main contribution of the paper is to show that robust optimal control problems can be solved by integrating ADP with small-gain techniques.
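To make the ADP ingredient concrete, the following is a minimal sketch of model-based policy iteration (Kleinman's algorithm) for a linear-quadratic regulator, the idealized iteration that data-driven ADP schemes approximate. The robust-ADP framework of the paper learns such gains online without knowing the system matrices; here the matrices `A`, `B`, the weights `Q`, `R`, the initial gain `K0`, and the iteration count are assumed values chosen purely for illustration.

```python
import numpy as np

def lyap(M, C):
    """Solve the Lyapunov equation M^T P + P M + C = 0 for symmetric P,
    via vectorization with Kronecker products."""
    n = M.shape[0]
    L = np.kron(np.eye(n), M.T) + np.kron(M.T, np.eye(n))
    P = np.linalg.solve(L, -C.reshape(-1)).reshape(n, n)
    return (P + P.T) / 2  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=10):
    """Kleinman iteration: alternate policy evaluation (Lyapunov solve)
    and policy improvement, starting from a stabilizing gain K0."""
    K = K0
    for _ in range(iters):
        Ak = A - B @ K
        P = lyap(Ak, Q + K.T @ R @ K)      # policy evaluation
        K = np.linalg.solve(R, B.T @ P)    # policy improvement
    return K, P

# Illustrative (assumed) second-order example with an unstable open loop
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K0 = np.array([[0.0, 4.0]])  # initial stabilizing gain (assumed known)

K, P = policy_iteration(A, B, Q, R, K0)
# At convergence K is the optimal LQR gain and A - B K is Hurwitz
print(np.max(np.linalg.eigvals(A - B @ K).real) < 0)  # → True
```

Online ADP variants replace the Lyapunov solve, which requires knowledge of `(A, B)`, with a least-squares fit to measured state and input trajectories; the robustness analysis in the paper then bounds the effect of the unmeasured nonlinear dynamics via small-gain arguments.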