The partially observed control problem is considered for backward doubly stochastic systems in which the control enters both the diffusion coefficient and the observation. A maximum principle is proved for this partially observed optimal control problem. The approach is purely probabilistic, and the adjoint processes are characterized as solutions of related forward doubly stochastic differential equations in finite-dimensional spaces. Much of the derivation parallels that of the completely observable case. The theoretical result is then applied to a partially observed linear-quadratic optimal control problem for a backward doubly stochastic system.
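To fix ideas, a backward doubly stochastic control system of the type described above can be sketched as follows; the notation here is illustrative (the precise coefficients, spaces, and assumptions are those of the paper itself), following the standard Pardoux-Peng form of backward doubly stochastic differential equations:

```latex
% State equation: a controlled BDSDE driven by two independent
% Brownian motions W and B, where the d\overleftarrow{B} integral
% is a backward Ito integral and v denotes the control process.
y_t = \xi + \int_t^T f(s, y_s, z_s, v_s)\,ds
          + \int_t^T g(s, y_s, z_s, v_s)\,d\overleftarrow{B}_s
          - \int_t^T z_s\,dW_s,
\qquad t \in [0, T].

% Observation process: the controller observes Y rather than the
% state; h is the (control-dependent) observation drift.
dY_t = h(t, y_t, v_t)\,dt + dW'_t, \qquad Y_0 = 0.

% Partially observed problem: minimize a cost functional over
% controls v adapted to the observation filtration
% \mathcal{F}^Y_t = \sigma\{Y_s : 0 \le s \le t\}.
J(v) = \mathbb{E}\!\left[\int_0^T l(t, y_t, z_t, v_t)\,dt
        + \Phi(y_0)\right].
```

Because the control appears in both `g` and `h`, the admissible controls must be adapted to the observation filtration generated by `Y`, which is what distinguishes this setting from the completely observable case.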