Blending multiple views

2 Author(s)

Abstract:

Current blending methods in image-based rendering use local information, such as "deviations from the closest views", to compute blending weights; examples include view-dependent texture mapping and the blending fields used in unstructured lumigraph rendering. However, in the presence of depth discontinuities, these techniques do not provide smooth transitions in the target image if the intensities of corresponding pixels in the source images differ significantly (e.g., due to specular highlights). In this paper, we present an image blending technique that allows the use of global visibility and occlusion constraints. Each blending weight now has a global component and a local component, corresponding to the view-independent and view-dependent contributions of the source images, respectively. Being view-independent, the global components can be computed in a preprocessing stage. Traditional graphics hardware is exploited to accelerate the computation of the global blending weights.
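
As a rough illustration of the weighting scheme described in the abstract, the sketch below combines a precomputed, view-independent (global) weight map per source view with a per-frame, view-dependent (local) weight based on angular deviation from the target view. This is a minimal NumPy sketch under assumed inputs; the function names, the cosine-based local term, and the array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def local_weights(target_dir, source_dirs, sharpness=8.0):
    # View-dependent component: favour source views whose viewing direction
    # deviates least from the target view (in the spirit of unstructured
    # lumigraph blending). target_dir: (3,), source_dirs: (k, 3), unit vectors.
    cos_dev = np.clip(source_dirs @ target_dir, 0.0, 1.0)
    return cos_dev ** sharpness                               # (k,)

def blend_views(target_dir, source_images, source_dirs, global_weights):
    # source_images:  (k, h, w, 3) source views warped into the target frame.
    # global_weights: (k, h, w) precomputed, view-independent weights encoding
    #                 visibility/occlusion constraints (assumed given here).
    w_local = local_weights(target_dir, source_dirs)          # (k,)
    w_total = global_weights * w_local[:, None, None]         # (k, h, w)
    w_total /= w_total.sum(axis=0, keepdims=True) + 1e-8      # normalise per pixel
    return (w_total[..., None] * source_images).sum(axis=0)   # (h, w, 3)

# Toy usage with random data, just to show the expected shapes.
k, h, w = 4, 64, 64
rng = np.random.default_rng(0)
dirs = rng.normal(size=(k, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
target = np.array([0.0, 0.0, 1.0])
out = blend_views(target, rng.random((k, h, w, 3)), dirs, rng.random((k, h, w)))
print(out.shape)  # (64, 64, 3)
```

Because the global weights depend only on scene geometry, they can be computed once per source view in a preprocessing pass (the step the paper accelerates on graphics hardware), leaving only the cheap local term to be re-evaluated for each target view.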

Published in:

Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, 2002

Date of Conference:

2002