
Input-output data-driven control through dissipativity learning


Abstract:

Data-driven control offers an alternative to traditional model-based control. Most existing data-driven control strategies either involve model identification or assume the availability of state information. In this work, we develop an input-output data-driven control method through dissipativity learning. Specifically, the learning of the subsystems' dissipativity property using a one-class support vector machine (OC-SVM) is combined with the controller design to minimize an upper bound on the L2-gain. The data-driven controller synthesis problem is then formulated as a quadratic-semidefinite program with linear and multilinear constraints, solved via the alternating direction method of multipliers (ADMM). The proposed method is illustrated with a polymerization reactor.
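To make the L2-gain notion concrete: a system with zero initial state has L2-gain at most γ if every input-output trajectory satisfies Σ y² ≤ γ² Σ u². The sketch below is a simplified, hypothetical illustration of estimating such a bound directly from sampled input-output data; it is not the paper's OC-SVM dissipativity-learning method, and the function and data names are invented for illustration.

```python
import math

def empirical_l2_gain(trajectories):
    """Smallest gamma consistent with all observed (u, y) trajectories,
    i.e. the smallest gamma with sum(y^2) <= gamma^2 * sum(u^2) per trajectory."""
    gamma_sq = 0.0
    for traj in trajectories:
        u_energy = sum(u * u for u, _ in traj)  # input energy along trajectory
        y_energy = sum(y * y for _, y in traj)  # output energy along trajectory
        if u_energy > 0:
            gamma_sq = max(gamma_sq, y_energy / u_energy)
    return math.sqrt(gamma_sq)

# Hypothetical sampled data from a static gain-2 system, y = 2u:
data = [[(1.0, 2.0), (0.5, 1.0)], [(2.0, 4.0)]]
print(empirical_l2_gain(data))  # 2.0
```

Such an empirical bound only lower-bounds the true gain from finite data; the paper's approach instead learns a dissipativity set with an OC-SVM and minimizes a certified upper bound during controller synthesis.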
Date of Conference: 10-12 July 2019
Date Added to IEEE Xplore: 29 August 2019
Conference Location: Philadelphia, PA, USA

I. Introduction

Big data is believed to play a key role in the future transformation of industries [1], [2]. Extensive research has been devoted to developing data-driven counterparts of established industrial process technologies, such as process monitoring [3], [4]. When it is difficult to obtain an accurate model of the system of interest through first principles or system identification, or to design a controller based on such a model, e.g., in the presence of highly complex dynamics, data-driven control is attractive because it avoids the modeling procedure [5]. However, most state-of-the-art data-driven control strategies still involve a model identification scheme and are hence not truly model-free (see, e.g., the review in [6]). Another line of research extends approximate dynamic programming (ADP), widely applied in Markov decision processes, to control systems with continuous states and control actions [7], [8]. Although ADP is model-free, it usually requires the availability of all the state variables, which may be unrealistic. These considerations motivate us to develop a model-free, input-output data-driven control strategy that is applicable to systems with complex dynamics.

