
On the number of samples needed in light field rendering with constant-depth assumption

Authors: Zhouchen Lin (Peking Univ., China); Heung-Yeung Shum

While several image-based rendering techniques have successfully rendered scenes or objects from large collections (e.g., thousands) of images without explicitly recovering 3D structure, the minimum number of images needed to achieve a satisfactory rendering result remains an open problem. This paper is the first attempt to investigate the lower bound on the number of samples needed in Lumigraph/light field rendering. To simplify the analysis, we consider an ideal scene consisting of a single point lying between a minimum and a maximum depth. Furthermore, the constant-depth assumption and bilinear interpolation are used for rendering; the constant-depth assumption serves to choose “nearby” rays for interpolation. Our criterion for determining the lower bound is to avoid horizontal and vertical double images, which are caused by interpolating multiple nearby rays. This criterion is based on the causality requirement in scale-space theory, i.e., no “spurious details” should be generated while smoothing. Using this criterion, closed-form lower bounds are obtained for both the 3D plenoptic function (Concentric Mosaics) and the 4D plenoptic function (light field). The bounds are derived purely from geometric considerations and are closely related to the resolution of the camera and the depth range of the scene. These lower bounds are further verified by our experimental results.
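The interpolation step the abstract refers to, blending the four nearest sampled rays with bilinear weights, can be sketched as follows. This is a minimal illustration, not the paper's derivation: the regular grid of ray samples and the fractional query coordinates are assumptions made for the example, and the constant-depth plane's role (selecting which stored rays count as "nearby") is only noted in comments.

```python
import numpy as np

def bilinear_interpolate(samples, s, t):
    """Blend the four nearest sampled rays at fractional grid
    coordinates (s, t) with bilinear weights.

    In light field rendering, (s, t) would come from reprojecting
    the desired ray through the constant-depth plane onto the
    camera/image planes; here they are given directly.
    """
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    ds, dt = s - s0, t - t0
    return ((1 - ds) * (1 - dt) * samples[s0, t0] +
            ds       * (1 - dt) * samples[s0 + 1, t0] +
            (1 - ds) * dt       * samples[s0, t0 + 1] +
            ds       * dt       * samples[s0 + 1, t0 + 1])

# Illustrative grid of "ray" values: samples[i, j] = 2*i + 3*j.
# Bilinear interpolation is exact on linear fields, so the result
# at (0.5, 0.5) averages the four corners 0, 2, 3, 5 to 2.5.
grid = np.add.outer(2 * np.arange(4), 3 * np.arange(4)).astype(float)
print(bilinear_interpolate(grid, 0.5, 0.5))  # → 2.5
```

When too few rays are sampled, the four blended rays can come from visibly different viewpoints, producing the horizontal/vertical double images whose avoidance drives the paper's lower bound.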

Published in:

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Volume 1

Date of Conference:

2000