The principal techniques used up to now for the analysis of stochastic adaptive control systems have been (i) super-martingale (often called stochastic Lyapunov) methods and (ii) methods relying upon the strong consistency of some parameter estimation scheme. Optimal stochastic control and filtering methods have also been employed. Although there have been some successes, the extension of these techniques to a broad class of adaptive control problems, including the case of time-varying parameters, has been difficult. In this paper a new approach is adopted: if an underlying Markovian state space system for the controlled process is available, and if this process possesses stationary transition probabilities, then the powerful ergodic theory of Markov processes may be applied. Subject to technical conditions one may deduce (amongst other facts) (i) the existence of an invariant measure for the process and (ii) the almost sure convergence of the sample averages of a function of the state process (and of its expectation) to its conditional expectation with respect to a sub-σ-field of invariant sets. The technique is illustrated by an application to a previously unsolved problem involving a linear system with unbounded random time-varying parameters.

Work supported by Canada NSERC Grant No. 1329 and a UK SERC Visiting Research Fellowship.
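The ergodic convergence invoked in (ii) can be sketched as follows; the notation here (state process $x_k$, bounded measurable test function $f$, invariant measure $\mu$, and invariant sub-σ-field $\mathcal{F}_I$) is supplied for illustration and is not drawn from the paper itself. Under the stated technical conditions, the Birkhoff ergodic theorem for a Markov process with stationary transition probabilities gives

```latex
\lim_{N \to \infty} \frac{1}{N} \sum_{k=1}^{N} f(x_k)
  \;=\; \mathbb{E}\!\left[\, f(x_0) \,\middle|\, \mathcal{F}_I \,\right]
  \qquad \text{a.s.},
```

and when the invariant σ-field $\mathcal{F}_I$ is trivial (the ergodic case) the limit reduces to the constant $\int f \, d\mu$, the expectation of $f$ under the invariant measure.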