Several human navigation services that use embedded GPS receivers and 2-D maps are currently available on cellular phones. However, 2-D map based navigation is not always easy for users to understand because it is not intuitive. To realize more intuitive human navigation, AR (augmented reality) based navigation, in which guidance information is overlaid on the real image, is expected to become the next-generation navigation system. For AR navigation, the key problem is how to acquire the accurate position and posture of the camera embedded in the cellular phone. Many researchers have intensively tackled the camera parameter estimation problem for AR in recent years. However, most of these methods cannot be applied to current mobile devices because they are designed for video sequences, where temporal information such as the camera parameters of the previous frame is available. In this research, we propose a novel method that estimates the camera parameters of a single input image using SIFT features and a voting scheme.
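As a rough illustration of the feature matching and voting idea (a hypothetical sketch, not the authors' exact algorithm, and using toy descriptors in place of real 128-dimensional SIFT vectors): each match between a query-image feature and a feature in a database of pre-registered keyframes casts one vote, and the keyframe collecting the most votes supplies the camera-pose hypothesis for the single input image.

```python
# Hypothetical voting-based localization sketch. Assumptions not in the
# abstract: descriptors are plain tuples of floats, keyframes are a dict
# mapping a keyframe name to its descriptor list, and a Lowe-style ratio
# test filters ambiguous matches.
from collections import Counter
import math


def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def match_and_vote(query_descs, keyframes, ratio=0.8):
    """For each query descriptor, find its nearest and second-nearest
    database descriptors; if the match passes the ratio test, cast one
    vote for the keyframe owning the nearest descriptor."""
    votes = Counter()
    for q in query_descs:
        candidates = sorted(
            (euclidean(q, d), name)
            for name, descs in keyframes.items()
            for d in descs
        )
        if len(candidates) >= 2 and candidates[0][0] < ratio * candidates[1][0]:
            votes[candidates[0][1]] += 1  # unambiguous match: vote
    return votes


keyframes = {
    "A": [(0.0, 0.0), (1.0, 1.0)],
    "B": [(5.0, 5.0)],
}
query = [(0.1, 0.0), (1.0, 0.9), (5.1, 5.0)]
votes = match_and_vote(query, keyframes)
best = votes.most_common(1)[0][0]  # keyframe with the most votes → "A"
```

In a full pipeline, the winning keyframe's registered 2-D/3-D correspondences would then be used to solve for the camera position and posture.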