
Improving Neural Volume Rendering via Learning View-Dependent Integral Approximation


Abstract:

Neural radiance fields (NeRFs) have achieved impressive view synthesis results by learning an implicit volumetric representation from multi-view images. To project this implicit representation into an image, NeRF employs volume rendering, which approximates the continuous integral along each ray as an accumulation of the colors and densities of sampled points. Although this approximation enables efficient rendering, it ignores the direction information within point intervals, resulting in ambiguous features and limited reconstruction quality. In this paper, we propose a learning method that uses learnable view-dependent features to improve scene representation and reconstruction. We model the volume rendering integral with a piecewise constant volume density and spherical harmonic-guided view-dependent features, eliminating this ambiguity while preserving rendering efficiency. In addition, we introduce a regularization term that keeps the anisotropic representation effect local, so that it has negligible impact on the geometry representation and encourages recovery of the correct geometry. Our method is flexible and can be plugged into NeRF-based frameworks. Extensive experiments show that the proposed representation boosts the rendering quality of various NeRFs and achieves state-of-the-art rendering performance on both synthetic and real-world scenes.
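The quadrature the abstract refers to is the standard NeRF one: under a piecewise constant density, the ray color is approximated as C ≈ Σ_i T_i (1 − exp(−σ_i δ_i)) c_i with transmittance T_i = exp(−Σ_{j<i} σ_j δ_j). Below is a minimal NumPy sketch of this accumulation combined with a degree-1 spherical-harmonic color head, in the spirit of the SH-guided view-dependent features the paper describes. The function names (sh_basis_deg1, render_ray) and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sh_basis_deg1(dirs):
    """Real spherical-harmonic basis up to degree 1 (4 components)
    for unit view directions of shape (N, 3)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.28209479 * np.ones_like(x),   # Y_0^0 (constant term)
        -0.48860251 * y,                # Y_1^{-1}
         0.48860251 * z,                # Y_1^{0}
        -0.48860251 * x,                # Y_1^{1}
    ], axis=-1)

def render_ray(sigmas, sh_coeffs, deltas, view_dir):
    """Piecewise-constant quadrature of the volume rendering integral.

    sigmas    : (N,)      densities at the N samples along the ray
    sh_coeffs : (N, 3, 4) per-sample SH coefficients for RGB
    deltas    : (N,)      distances between consecutive samples
    view_dir  : (3,)      unit viewing direction of the ray
    """
    # View-dependent color: contract the SH basis with the coefficients.
    basis = sh_basis_deg1(view_dir[None, :])               # (1, 4)
    raw = (sh_coeffs * basis[:, None, :]).sum(axis=-1)     # (N, 3)
    colors = 1.0 / (1.0 + np.exp(-raw))                    # sigmoid -> [0, 1]

    # Opacity of each interval under the piecewise constant density.
    alphas = 1.0 - np.exp(-sigmas * deltas)                # (N,)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                               # (N,)
    return weights @ colors                                # (3,) pixel RGB

# Toy usage with random densities and SH coefficients.
rng = np.random.default_rng(0)
n = 64
rgb = render_ray(rng.uniform(0.0, 5.0, n),
                 rng.normal(0.0, 0.5, (n, 3, 4)),
                 np.full(n, 0.02),
                 np.array([0.0, 0.0, 1.0]))
print(rgb)
```

Because the SH coefficients are contracted with a basis evaluated at the viewing direction, the per-sample color varies with the ray direction; this is the mechanism by which direction information enters the interval-level features rather than only the final color, as in the plain NeRF quadrature.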
Page(s): 1 - 12
Date of Publication: 24 March 2025

PubMed ID: 40126957
