We present an interaction technique that integrates spatial sensing into an interactive 3D city model to support efficient localization of objects (especially buildings) known or remembered from the real world. The technique offers a unified 3D interaction scheme for both visible and remote objects. In the egocentric view, sensor data from the mobile device (accelerometer, gyroscope, GPS) is used to couple the viewport to the user's movement, similarly to mobile AR. The technique allows easy shifting from the viewport-coupled mode to a top-down view in which movement is POI-based. A field experiment compared it to an exocentric technique resembling traditional pan-and-zoom in 2D mobile maps. The two techniques showed differential benefits for target acquisition performance.
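To make the two modes concrete, the sketch below shows one way the camera pose in the 3D city model could be derived: the egocentric mode couples the viewport to the user's GPS position and device orientation, while the top-down mode places the camera above a selected POI. All names (Pose, egocentric_pose, top_down_pose) and parameter choices are hypothetical illustrations, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Camera pose in the 3D city model (hypothetical representation)."""
    x: float      # east (m), GPS projected to local model coordinates
    y: float      # north (m)
    z: float      # height above ground (m)
    yaw: float    # heading (rad), from accelerometer/gyroscope fusion
    pitch: float  # tilt (rad)

def egocentric_pose(gps_east: float, gps_north: float,
                    heading_rad: float, tilt_rad: float,
                    eye_height: float = 1.7) -> Pose:
    """Viewport-coupled (mobile-AR-like) mode: the camera follows the
    user's position and the device's orientation."""
    return Pose(gps_east, gps_north, eye_height, heading_rad, tilt_rad)

def top_down_pose(poi_east: float, poi_north: float,
                  altitude: float = 300.0) -> Pose:
    """Top-down mode: the camera hovers above a POI and looks straight
    down; navigation then proceeds from POI to POI."""
    return Pose(poi_east, poi_north, altitude, yaw=0.0, pitch=-math.pi / 2)

# Example: start viewport-coupled, then shift to a remote building (POI).
camera = egocentric_pose(gps_east=120.0, gps_north=45.0,
                         heading_rad=math.radians(30), tilt_rad=0.0)
camera = top_down_pose(poi_east=850.0, poi_north=-210.0)
```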