The visual acts theory aims to provide intelligent assistance for camera viewpoint selection during teleoperation. It combines top-down partitioning of a task with bottom-up monitoring of the operator to select task-relevant camera viewpoints. Previous experimental studies have shown that the visual acts approach provides camera views of sufficient quality to allow an operator to complete a task; in cases where the camera system is complex and difficult to master, it selects better viewpoints than the operator does. In this paper we present an alternative architecture incorporating a viewpoint selection algorithm that emphasizes what the operator should do next rather than what they are currently doing. Experimental results show that this simpler algorithm performs as well as the more elaborate visual acts algorithm and makes the operator more aware of 3D information. The results contribute to a better understanding of human-robot interaction in telerobotic scenarios.