Super-resolution texturing for online virtual globes

Authors: D. Rother (Univ. of Minnesota, Minneapolis, MN), L. Williams, G. Sapiro

Online virtual globe applications such as Google Earth and Maps, Microsoft Virtual Earth, and Yahoo! Maps allow users to explore realistic models of the Earth. To provide the ground-level detail of interest to users, it is necessary to serve and render high-resolution images. For planetary coverage at high resolution, a very large number of images need to be acquired, stored, and transmitted, with consequent high costs and difficulty for the application provider, often resulting in lower-than-expected performance. In this work we propose a supplementary approach to render appropriate visual information in these applications. Using super-resolution techniques based on the combination and extension of known texture transfer and synthesis algorithms, we develop a system to efficiently synthesize fine detail consistent with the textures served. This approach dramatically reduces the operational cost of virtual globe displays, which are among the most image-intensive applications on the Internet, while at the same time improving their appearance. The proposed framework is fast and preserves the coherence between corresponding images at different resolutions, allowing consistent and responsive interactive zooming and panning operations. The framework is capable of adapting a library of multiscale textures to pre-segmented regions in the highest-resolution texture maps available. We also describe a simple interface to obtain class label information from contributing users. The presentation of the constituent techniques is complemented with examples simulating our framework embedded in Google Earth.
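The abstract does not spell out the synthesis algorithm, but example-based super-resolution in this family typically works by matching each low-resolution patch of the input against a low-resolution exemplar and pasting the exemplar's corresponding high-resolution patch. The sketch below is an illustrative, heavily simplified version of that idea (the function name, patch size, and nearest-neighbor search are assumptions, not the authors' method; a real system would add overlap blending and a fast patch index):

```python
import numpy as np

def synthesize_detail(lowres, exemplar_lo, exemplar_hi, patch=4, scale=2):
    """Sketch of example-based super-resolution: for each non-overlapping
    low-res patch, find the closest patch in a low-res exemplar texture
    and paste its corresponding high-res patch into the output."""
    H, W = lowres.shape
    out = np.zeros((H * scale, W * scale))
    # Collect every candidate patch from the low-res exemplar.
    eh, ew = exemplar_lo.shape
    coords = [(y, x) for y in range(eh - patch + 1)
                     for x in range(ew - patch + 1)]
    bank = np.stack([exemplar_lo[y:y + patch, x:x + patch].ravel()
                     for y, x in coords])
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            # Nearest-neighbor match on raw pixel values (SSD).
            q = lowres[y:y + patch, x:x + patch].ravel()
            best = np.argmin(((bank - q) ** 2).sum(axis=1))
            by, bx = coords[best]
            # Paste the matching high-res patch at the scaled location.
            out[y * scale:(y + patch) * scale,
                x * scale:(x + patch) * scale] = \
                exemplar_hi[by * scale:(by + patch) * scale,
                            bx * scale:(bx + patch) * scale]
    return out
```

Because patches are copied from a fixed exemplar library, the same input tile always yields the same detail, which is the kind of cross-resolution coherence the abstract requires for consistent zooming and panning.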

Published in:

2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08)

Date of Conference:

23-28 June 2008