Online virtual globe applications such as Google Earth, Google Maps, Microsoft Virtual Earth, and Yahoo! Maps allow users to explore realistic models of the Earth. To provide the ground-level detail of interest to users, these applications must serve and render high-resolution imagery. Planetary coverage at high resolution requires acquiring, storing, and transmitting a very large number of images, with consequent high cost and difficulty for the application provider, often resulting in lower-than-expected performance. In this work we propose a supplementary approach to rendering appropriate visual information in these applications. Using super-resolution techniques based on the combination and extension of known texture transfer and synthesis algorithms, we develop a system that efficiently synthesizes fine detail consistent with the textures served. This approach dramatically reduces the operational cost of virtual globe displays, which are among the most image-intensive applications on the Internet, while at the same time improving their appearance. The proposed framework is fast and preserves coherence between corresponding images at different resolutions, allowing consistent and responsive interactive zooming and panning. The framework can adapt a library of multiscale textures to pre-segmented regions in the highest-resolution texture maps available. We also describe a simple interface for obtaining class-label information from contributing users. The presentation of the constituent techniques is complemented with examples simulating our framework embedded in Google Earth.
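To give a flavor of the patch-based texture synthesis family the abstract builds on, the following is a minimal, illustrative sketch in the spirit of image quilting: the output is tiled with patches copied from an exemplar texture, each patch chosen to minimize squared error against the already-synthesized overlap region. All function and parameter names here are our own illustrative choices, not the paper's; the actual framework additionally handles texture transfer to segmented regions and multiscale coherence across zoom levels.

```python
import numpy as np

def synthesize(exemplar, out_size, patch=8, overlap=2, seed=None):
    """Toy patch-based texture synthesis (image-quilting style).

    Tiles a square output with `patch` x `patch` blocks from `exemplar`,
    picking each block by sum-of-squared-differences against the strips
    already placed above and to the left. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    H, W = exemplar.shape
    step = patch - overlap
    out = np.zeros((out_size, out_size))
    # Candidate top-left corners of patches inside the exemplar.
    cands = [(y, x)
             for y in range(H - patch + 1)
             for x in range(W - patch + 1)]
    for i in range(0, out_size - patch + 1, step):
        for j in range(0, out_size - patch + 1, step):
            if i == 0 and j == 0:
                # First block: any exemplar patch will do.
                y, x = cands[rng.integers(len(cands))]
            else:
                # Score each candidate by SSD over the overlap strips.
                best = np.inf
                y = x = 0
                for py, px in cands:
                    cand = exemplar[py:py + patch, px:px + patch]
                    err = 0.0
                    if i > 0:  # top strip
                        err += np.sum((cand[:overlap]
                                       - out[i:i + overlap, j:j + patch]) ** 2)
                    if j > 0:  # left strip
                        err += np.sum((cand[:, :overlap]
                                       - out[i:i + patch, j:j + overlap]) ** 2)
                    if err < best:
                        best, y, x = err, py, px
            out[i:i + patch, j:j + patch] = exemplar[y:y + patch, x:x + patch]
    return out
```

A production system would replace the exhaustive candidate search with an approximate nearest-neighbor structure and blend the overlap seams (e.g., with a minimum-error boundary cut) rather than overwriting them, but the core idea of propagating coherence through overlapping patches is the same.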