Abstract:
Deep sequence models are receiving significant interest in current machine learning research. By representing probability distributions that are fit to data using maximum likelihood estimation, such models can handle data on general observation spaces (both continuous- and discrete-valued). Furthermore, they can be applied to a wide range of modelling problems, including the modelling of dynamical systems subject to control. The problem of learning data-driven models of systems subject to control is well studied in the field of system identification. In particular, there exist theoretical convergence and consistency results which can be used to analyse model behaviour and guide model development. However, these results typically concern models that provide point predictions of continuous-valued variables. Motivated by this, we derive convergence and consistency results for a class of nonlinear probabilistic models defined on a general observation space. The results rely on stability and regularity assumptions, and can be used to derive consistency conditions and bias expressions for nonlinear probabilistic models of systems under control. We illustrate the results with examples from linear system identification and Markov chains on finite state spaces.
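The abstract refers to maximum likelihood estimation of probabilistic models and to consistency results illustrated on Markov chains over finite state spaces. As a minimal sketch of that setting (not the paper's actual construction), the snippet below assumes a two-state Markov chain with a made-up transition matrix, simulates a trajectory, and forms the MLE of the transition probabilities from normalized transition counts; as the trajectory length grows, the estimate converges to the true matrix, which is the kind of consistency behaviour the paper analyses in far greater generality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true transition matrix of a 2-state Markov chain
# (chosen for illustration; not taken from the paper).
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])

def simulate(P, n, rng):
    """Simulate n steps of the chain, starting from state 0."""
    states = np.zeros(n, dtype=int)
    for t in range(1, n):
        states[t] = rng.choice(2, p=P[states[t - 1]])
    return states

def mle_transition_matrix(states, n_states=2):
    """MLE of the transition matrix: row-normalized transition counts."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

states = simulate(P_true, 50_000, rng)
P_hat = mle_transition_matrix(states)

# With 50k observed transitions the estimate is close to P_true;
# the error shrinks further as the trajectory length grows.
print(np.max(np.abs(P_hat - P_true)))
```

In this finite-state case the MLE has the closed form above (counts normalized per row), so consistency reduces to a law-of-large-numbers argument; the paper's contribution is extending such guarantees to nonlinear probabilistic models on general observation spaces under stability and regularity assumptions.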
Published in: 2024 IEEE 63rd Conference on Decision and Control (CDC)
Date of Conference: 16-19 December 2024
Date Added to IEEE Xplore: 26 February 2025