Abstract:
Augmented reality (AR) on mobile devices (such as smartphones and tablets) is becoming more popular each day, and as a result many newer devices are starting to ship with embedded depth sensors. This presents a great opportunity for the field of extended object tracking, whose algorithms are well suited to dealing with varying measurement quality while requiring little CPU usage. In this paper, we present an application in the field of robotics, based on the idea of reconstructing the dynamic state of a robot (joint positions and velocities) simply by observing it with an AR device, using only the robot specification (its URDF file) as prior knowledge and without requiring a connection to the robot's control system. This allows the mobile device to identify where a robot is, or vice versa, without requiring markers such as QR codes. Additionally, it can serve as a stepping stone for more sophisticated assistance systems that interact with the robot without requiring any access to its internals, which could otherwise make it difficult to deploy the AR app in sensitive systems. Using the iPad Pro 2020 as an example device, we examine the challenges involved in processing mobile depth images, how to develop a robust shape model and the corresponding estimator, and how the app can ask the user to help with its initialization using AR. We also provide an evaluation with real data showing that the proposed system can track a moving robot robustly even if measurement quality is reduced significantly.
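To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of the two ingredients the abstract names: the URDF file as the only prior knowledge, and a dynamic state consisting of joint positions and velocities that is propagated over time. The embedded URDF string, the constant-velocity process model, the frame rate, and the noise values are all illustrative assumptions; the paper's actual shape model and estimator are not reproduced here.

```python
# Sketch: read movable joints from a URDF (the only prior), then run one
# prediction step of a constant-velocity state [q, q_dot] for those joints,
# as an extended-object tracker might between depth frames.
import numpy as np
import xml.etree.ElementTree as ET

# Hypothetical two-joint robot description standing in for a real URDF file.
URDF = """
<robot name="demo">
  <link name="base"/><link name="upper_arm"/><link name="forearm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/><child link="upper_arm"/>
  </joint>
  <joint name="elbow" type="revolute">
    <parent link="upper_arm"/><child link="forearm"/>
  </joint>
</robot>
"""

# Prior knowledge: only the movable joints listed in the URDF.
joints = [j.get("name") for j in ET.fromstring(URDF).iter("joint")
          if j.get("type") in ("revolute", "prismatic", "continuous")]
n = len(joints)

# Dynamic state x = [joint positions; joint velocities].
x = np.zeros(2 * n)
P = np.eye(2 * n)                 # state covariance
dt = 1.0 / 30.0                   # assumed depth-frame rate
F = np.block([[np.eye(n), dt * np.eye(n)],
              [np.zeros((n, n)), np.eye(n)]])   # constant-velocity model
Q = 1e-3 * np.eye(2 * n)          # illustrative process noise

# Prediction step; a measurement update against the depth points and the
# robot's shape model would follow here in a full tracker.
x = F @ x
P = F @ P @ F.T + Q
print(joints, x[:n])
```

The point of the sketch is only that nothing beyond the URDF and a generic state-space model is needed to start tracking; no connection to the robot's controller is assumed.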
Date of Conference: 04-07 July 2022
Date Added to IEEE Xplore: 09 August 2022