NVIDIA Research
Inverse Global Illumination using a Neural Radiometric Prior

SIGGRAPH 2023 Conference Proceedings

We propose an inverse rendering method that uses a radiometric prior to account for global illumination, as opposed to building and differentiating path integrals. Our method uses standard automatic differentiation (AD) to compute gradients with respect to the scene parameters, while satisfying the rendering equation through our radiometric prior, which is represented by a neural network. Here we compare a traditional auto-differentiable path tracer (AD-PT), a more advanced technique (Path Replay Backpropagation, or PRB), and our method (AD-Ours) at recovering non-diffuse, spatially varying BRDF properties (also represented as neural networks) under known illumination and geometry from 26 views of the Staircase scene. Despite its simplicity, our approach accounts for global illumination and recovers albedo and roughness more faithfully than differentiable path tracing and PRB. Each method used a total of 16384 × 16 × 18000 (batch size × samples per pixel × steps) = 4.7B samples, i.e., about 690 training samples per pixel (26 views × 512 × 512 pixels). We ran all experiments on a single RTX 3090 GPU; the total runtimes for AD-PT, PRB, and our method were 760, 970, and 260 minutes, respectively.
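As a quick sanity check on the sample budget quoted above, the figures follow from plain arithmetic on the numbers in the caption:

```python
batch_size, spp, steps = 16384, 16, 18000
total_samples = batch_size * spp * steps   # 4,718,592,000, i.e., ~4.7B samples
pixels = 26 * 512 * 512                    # 26 views at 512 × 512 resolution
print(total_samples / pixels)              # ~692, the ~690 samples/pixel quoted above
```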

Abstract

Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination. Inspired by the Neural Radiosity technique, we use a neural network as a radiance function, and we introduce a prior consisting of the norm of the residual of the rendering equation in the inverse rendering loss. We train our radiance network and optimize scene parameters simultaneously using a loss that combines a photometric term, comparing renderings against the multi-view input images, with our radiometric prior (the residual term). The residual term enforces a physical constraint on the optimization, ensuring that the radiance field accounts for global illumination. We compare our method to a vanilla differentiable path tracer and to more advanced techniques such as Path Replay Backpropagation. Despite the simplicity of our approach, we can recover scene parameters of comparable, and in some cases better, quality at considerably lower computation times.
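To make the structure of the loss concrete, here is a minimal PyTorch-style sketch of the joint optimization the abstract describes. Everything scene-specific is stubbed out with random tensors, and all names (the MLP stand-ins, mc_rhs, lambda_res, the batch shapes) are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

def mlp(in_dim: int) -> nn.Sequential:
    """Small MLP mapping a (position, direction) pair to an RGB value."""
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 3),
    )

radiance_net = mlp(6)  # L_theta(x, omega): the neural radiance function
brdf_net = mlp(6)      # phi: spatially varying BRDF parameters (proxy)

optimizer = torch.optim.Adam(
    list(radiance_net.parameters()) + list(brdf_net.parameters()), lr=1e-3)
lambda_res = 1.0  # weight of the radiometric prior (assumed value)

def mc_rhs(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Stub Monte Carlo estimate of the right-hand side of the rendering
    equation: BRDF-weighted incident radiance, queried from the same
    radiance network (Neural Radiosity style). A real implementation
    would importance-sample directions and trace rays against the scene
    geometry; emission is omitted here for brevity."""
    w_in = nn.functional.normalize(torch.randn_like(w), dim=-1)
    f = brdf_net(torch.cat([x, w_in], dim=-1))        # BRDF value (proxy)
    L_in = radiance_net(torch.cat([x, w_in], dim=-1))  # incident radiance
    return f * L_in

for step in range(1000):
    # Photometric term: radiance at camera-ray hit points vs. the
    # multi-view input pixels (random stand-ins for real scene data).
    x = torch.rand(4096, 3)          # surface hit points
    w = nn.functional.normalize(torch.randn(4096, 3), dim=-1)
    target = torch.rand(4096, 3)     # corresponding input-image pixels
    pred = radiance_net(torch.cat([x, w], dim=-1))
    loss_photo = (pred - target).abs().mean()

    # Radiometric prior: norm of the rendering-equation residual,
    # pushing the radiance field to account for global illumination.
    loss_res = (pred - mc_rhs(x, w)).abs().mean()

    loss = loss_photo + lambda_res * loss_res
    optimizer.zero_grad()
    loss.backward()  # standard AD reaches both theta and phi
    optimizer.step()
```

The key point this sketch tries to convey is that a single backward pass propagates both terms through ordinary automatic differentiation; no differentiable path-tracing machinery over multi-bounce paths is required.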

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. IIS-2126407. We would also like to thank Aaron Lefohn for his support, and NVIDIA for funding this work through an NVIDIA academic partnership.