This paper proposes a novel approach for coupling perception and action through minimax dynamic programming. We tackle domains where the agent has some control over the observation process (e.g., via the manipulation of sensors), and show how to transform the system so that an optimal control solution can be sought with standard algorithms. We demonstrate our method in a toy domain, where an agent guides two point masses ("hands") to a target in a 2D scene with obstacles. The agent can direct the gaze of a virtual "eye" to different parts of the scene, thereby reducing the observation noise for elements of the scene in that vicinity and improving the quality of feedback control. In this manner, motor control of the eye allocates attentional resources. We propose a unified framework that treats both perception and action as interdependent components of the same optimal control task. The implications of uncertainty for task performance are uncovered by deploying an adversary whose capacity to do harm is proportional to the instantaneous level of state uncertainty. We transform the partially observable system into a fully observable one by coupling the state dynamics with a state-estimation filter, thereby augmenting the state space to include an explicit representation of the instantaneous state uncertainty. The augmented system is high-dimensional, but through minimax differential dynamic programming, a local method less susceptible to the curse of dimensionality, we are able to solve for the optimal control of the hands and the eye simultaneously, allowing for the emergence of interesting phenomena such as hand-eye coordination, saccades and smooth pursuit.
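The core transformation — coupling the plant with a state-estimation filter so that the estimate and its covariance together form a fully observable augmented state, with observation noise that depends on where the "eye" looks — can be illustrated with a minimal sketch. This is not the paper's model: the 1D point-mass dynamics, the particular gaze-dependent noise function `obs_noise`, and all parameter values below are illustrative assumptions; the abstract only states that observation noise falls for scene elements near the gaze.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics of one "hand"
B = np.array([[0.0], [dt]])             # force input on the mass
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-4 * np.eye(2)                    # process noise covariance

def obs_noise(gaze, target_pos, base=1.0, sharp=5.0):
    """Illustrative assumption: observation variance grows with the
    squared distance between the gaze point and the observed element."""
    return base * (1.0 + sharp * (gaze - target_pos) ** 2)

def augmented_step(x_hat, P, u, gaze, true_x, rng):
    """One step of the coupled system: plant + Kalman filter.
    The augmented state is (x_hat, P); the covariance P evolves as an
    explicit, gaze-controllable representation of state uncertainty."""
    R = np.array([[obs_noise(gaze, true_x[0])]])
    # plant
    true_x = A @ true_x + B.flatten() * u + rng.multivariate_normal(np.zeros(2), Q)
    # filter predict
    x_hat = A @ x_hat + B.flatten() * u
    P = A @ P @ A.T + Q
    # measure and update
    y = H @ true_x + rng.normal(0.0, np.sqrt(R[0, 0]))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
    return x_hat, P, true_x

rng = np.random.default_rng(0)
x_hat, P, true_x = np.zeros(2), np.eye(2), np.array([1.0, 0.0])
# Attentive gaze (directed at the mass) keeps R small, shrinking P
for _ in range(50):
    x_hat, P, true_x = augmented_step(x_hat, P, 0.0, gaze=true_x[0],
                                      true_x=true_x, rng=rng)
print(np.trace(P))  # residual uncertainty after 50 attentive observations
```

Because `(x_hat, P)` evolves deterministically given the controls, a standard optimal-control method can act on this augmented state — and an adversary whose strength scales with `P` (or a cost on `trace(P)`) makes directing the gaze itself part of the optimal policy.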