This paper employs an action functional approach to formulate partially observable nonlinear discrete-time stochastic minimax games. The maximizing players of the games are stochastic square-summable disturbances, while the minimizing players are the control inputs. Associated with the action functional are an information state and its adjoint, which are shown to satisfy certain recursions derived through dynamic programming. These recursions are then employed to reformulate the partially observable games as fully observable games, and to introduce the so-called separated control laws, which are non-anticipative functionals of the information state. Subsequently, dynamic programming and verification theorems are derived for the separated control laws, and their relations to the design of controllers that render feedback systems dissipative are investigated.
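To fix ideas, the separated structure described above can be sketched schematically; the symbols below ($\rho_k$, $F$, $\bar u_k$) are illustrative placeholders and not the paper's exact notation or equations. In such formulations the information state is typically a function on the state space, propagated forward by a max-plus (sup-pairing) recursion driven by the applied control and the incoming observation, and a separated control law selects the control as a functional of the current information state:

```latex
% Schematic information-state recursion (illustrative notation only):
% rho_k : information state after k observations,
% u_k   : control input, y_{k+1} : next observation,
% F     : a kernel built from the dynamics, noise, and disturbance costs.
\rho_{k+1}(x') \;=\; \sup_{x}\Bigl\{\,\rho_k(x) \;+\; F\bigl(x, u_k, y_{k+1}, x'\bigr)\Bigr\}

% A separated control law is non-anticipative in the information state:
u_k \;=\; \bar{u}_k(\rho_k)
```

Under such a reformulation, the partially observable game becomes a fully observable game whose state is $\rho_k$, to which dynamic programming and verification arguments can be applied.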