Current blending methods in image-based rendering use local information, such as deviations from the closest views, to find blending weights. They include approaches such as view-dependent texture mapping and the blending fields used in unstructured lumigraph rendering. However, in the presence of depth discontinuities, these techniques do not provide smooth transitions in the target image if the intensities of corresponding pixels in the source images differ significantly (e.g., due to specular highlights). In this paper, we present an image blending technique that allows the use of global visibility and occlusion constraints. Each blending weight now has a global component and a local component, arising from the view-independent and view-dependent contributions of the source images, respectively. Being view-independent, the global components can be computed in a preprocessing stage. Traditional graphics hardware is exploited to accelerate the computation of the global blending weights.
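The decomposition above can be illustrated with a minimal NumPy sketch. The abstract does not specify how the two components are combined, so the per-pixel product of a precomputed global weight map and a per-frame local weight map, followed by normalization across source views, is an assumption made here for illustration; the function name `blend_views` is hypothetical.

```python
import numpy as np

def blend_views(source_images, global_w, local_w, eps=1e-8):
    """Blend N source views into one target image.

    source_images: (N, H, W, 3) float array of source views.
    global_w:      (N, H, W) view-independent weights (precomputable offline,
                   e.g. from visibility/occlusion constraints).
    local_w:       (N, H, W) view-dependent weights (recomputed per target view,
                   e.g. from deviation to the closest views).

    Combining the components as a product is an illustrative assumption,
    not the paper's stated formula.
    """
    w = global_w * local_w                          # combined per-pixel weight
    w = w / (w.sum(axis=0, keepdims=True) + eps)    # normalize over source views
    return (w[..., None] * source_images).sum(axis=0)
```

With equal weights for two constant source views, the output is simply their per-pixel average, which is a quick sanity check on the normalization step.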