Abstract:
Real-time rendering offers instantaneous visual feedback, making it crucial for mixed-reality applications. The light field captures both light intensity and direction in a 3D environment, serving as a data-rich medium to enhance mixed-reality experiences. However, two major challenges remain: 1) current light field rendering techniques are unsuitable for real-time computation, and 2) existing real-time methods cannot efficiently process high-dimensional light field data on GPU platforms. To overcome these challenges, we propose a framework built on a compact neural representation of light field data, implemented on a GPU platform for real-time rendering. This framework provides both compact storage and high-fidelity real-time computation. Specifically, we introduce a ray global alignment strategy to simplify the framework and improve practicality. This strategy learns an optimal embedding for all local rays in a globally consistent way, removing the need for camera pose calculations. To achieve effective compression, the neural light field maps each embedded ray to its corresponding color. To enable real-time rendering, we design a novel super-resolution network that accelerates rendering. Extensive experiments demonstrate that our framework significantly improves compression efficiency and real-time rendering performance, achieving a nearly 50× compression ratio and 100 FPS rendering.
Published in: IEEE Transactions on Computers (Volume 74, Issue 4, April 2025)
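
As a rough illustration of the pipeline the abstract describes, the following minimal PyTorch sketch pairs a learned per-ray embedding with a small MLP that maps each embedded ray to a color, then upsamples the resulting low-resolution render with a sub-pixel-convolution super-resolution head. All names (NeuralLightField, SuperResolutionHead), layer sizes, and the index-based embedding are illustrative assumptions; the paper's actual ray global alignment strategy and super-resolution network are not specified in the abstract.

# Hypothetical sketch, assuming a per-ray learned embedding stands in for the
# paper's ray global alignment; not the authors' implementation.
import torch
import torch.nn as nn

class NeuralLightField(nn.Module):
    def __init__(self, num_rays: int, embed_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Learned embedding per ray index; the paper instead aligns local
        # rays into a globally consistent embedding without camera poses.
        self.ray_embed = nn.Embedding(num_rays, embed_dim)
        # Small MLP mapping each embedded ray to its RGB color.
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, ray_ids: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.ray_embed(ray_ids))

class SuperResolutionHead(nn.Module):
    # Upsamples a low-resolution render by `scale` via sub-pixel convolution,
    # a common choice for fast super-resolution (an assumption here).
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # (B, 3*s*s, H, W) -> (B, 3, H*s, W*s)
        )

    def forward(self, lowres: torch.Tensor) -> torch.Tensor:
        return self.net(lowres)

# Render a low-resolution frame ray-by-ray, then super-resolve it.
H, W, scale = 64, 64, 4
nlf = NeuralLightField(num_rays=H * W)
sr = SuperResolutionHead(scale=scale)
ray_ids = torch.arange(H * W)
lowres = nlf(ray_ids).T.reshape(1, 3, H, W)  # (1, 3, 64, 64)
frame = sr(lowres)                           # (1, 3, 256, 256)
print(frame.shape)

Under this reading, the compression comes from storing only the network weights and embeddings rather than raw light field samples, and the real-time speedup comes from evaluating the MLP at low resolution and letting the cheap super-resolution head recover full-resolution frames.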