The ability of tele-operators to determine a remote robot's location is crucial: it allows them to develop strategies for high-level interactions and, above all, navigation. Under normal conditions, remote robots can perform this task autonomously by comparing a priori knowledge with locally sensed data. In some typical situations, however, this approach fails. When the remote environment is changing, or when inherent sensor uncertainty and errors are too high for correct localization, the robot is unable to determine its location and context. The intervention of a tele-operator is then needed to compensate for automatic localization failures and misestimations: the operator uses a map and the available remote sensed data to estimate the robot's position and orientation. The quality of this intervention is closely linked to four factors: the pertinence of the remote data, the way these data are displayed to the operator, the nature of the map the operator uses, and the operator's ability to derive the robot's position. This paper describes our investigations into quantifying the last two factors and their effects on localization accuracy. We present preliminary results comparing two localization methods. A tele-operator wears a helmet that displays a video stream from the robot; moving the head freely steers the remote camera, allowing the operator to explore the remote environment. Using either a 2D map (a top view) or an interactive 3D partial model of the remote world (one the user can move within), the tele-operator must specify the exact position and orientation of the robot.
Date of Conference: 19-23 Dec. 2009