A major difficulty in using an empirical nonlinear model for model-based control is that the model can be unduly extrapolated into regions of the state space where identification data were scarce or even nonexistent. Optimal control solutions obtained under such over-extrapolation can yield performance far worse than the model predicts. In the multi-step predictive control setting, it is not straightforward to prevent such overuse of the model by forcing the optimizer to find a solution within the "trusted" regions of the state space. Given this difficulty, we propose an approximate dynamic programming based approach for designing a model-based controller that avoids such misuse of an empirical model with respect to the distribution of the identification data. The approach starts with closed-loop test data obtained with suboptimal controllers, e.g., PI controllers, and derives a new control policy that improves upon their performance. Iterative improvement through successive closed-loop testing is also possible. A diabatic CSTR example illustrates the proposed approach.
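To make the idea concrete, the following is a minimal sketch, not the paper's actual algorithm: closed-loop data are collected under a suboptimal proportional law, a cost-to-go is evaluated by fitted iteration on the visited states only (here with a nearest-neighbour approximator), and the improved policy adds a distance-to-data penalty so that the optimizer stays within the "trusted" region. The scalar dynamics, gains, discount factor, and penalty weight `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(x, u):
    # Illustrative scalar nonlinear dynamics (stands in for the true process).
    return 0.9 * x + 0.2 * np.tanh(u)

# 1) Closed-loop identification data from a suboptimal proportional controller.
X, U, Xn = [], [], []
for _ in range(50):
    x = rng.uniform(-2.0, 2.0)
    for _ in range(20):
        u = -1.5 * x + 0.1 * rng.standard_normal()  # suboptimal feedback law
        xn = plant(x, u)
        X.append(x); U.append(u); Xn.append(xn)
        x = xn
X, U, Xn = map(np.array, (X, U, Xn))

def stage_cost(x, u):
    return x**2 + 0.1 * u**2

def nn_value(V, xq):
    # Nearest-neighbour cost-to-go approximator defined only on visited states.
    idx = np.abs(X[:, None] - np.atleast_1d(xq)[None, :]).argmin(axis=0)
    return V[idx]

# 2) Fitted policy evaluation of the data-generating controller.
gamma = 0.95
V = np.zeros(len(X))
for _ in range(100):
    V = stage_cost(X, U) + gamma * nn_value(V, Xn)

# 3) One-step greedy improvement with an extrapolation penalty:
#    actions whose predicted successor states lie far from the
#    identification data are discouraged by the beta * dist term.
def policy(x, beta=5.0):
    cands = np.linspace(-3.0, 3.0, 61)
    xns = plant(x, cands)
    dist = np.abs(X[:, None] - xns[None, :]).min(axis=0)  # distance to data
    q = stage_cost(x, cands) + gamma * nn_value(V, xns) + beta * dist
    return cands[q.argmin()]
```

Repeating the loop, i.e., running the improved policy in closed loop, adding the new data, and re-evaluating the cost-to-go, corresponds to the iterative improvement described above; the penalty keeps each iteration's policy search inside the region the current data support.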