We present a complete system for real-time rendering of scenes with complex
appearance previously reserved for offline rendering. This is achieved through
a combination of algorithmic and system-level innovations.
Our appearance model utilizes learned hierarchical textures that are
interpreted using neural decoders, which produce reflectance values and
importance-sampled directions. To best utilize the modeling capacity of
the decoders, we equip them with two graphics priors. The first
prior—transformation of directions into learned shading frames—facilitates
accurate reconstruction of mesoscale effects. The second prior—a microfacet
sampling distribution—allows the neural decoder to perform importance
sampling efficiently. The resulting appearance model supports anisotropic
sampling and level-of-detail rendering, and allows baking deeply layered
material graphs into a compact unified neural representation.
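The two priors can be illustrated with a minimal sketch. This is not the paper's implementation: the decoder below is a toy two-layer MLP with random weights, `rotate_to_frame`, `sample_ggx_half_vector`, and `ToyDecoder` are hypothetical names of ours, and the shading frame is passed in directly rather than predicted per texel. The GGX sampling formula, however, is the standard microfacet normal-distribution sampling the second prior builds on.

```python
# Hypothetical sketch of the two graphics priors, assuming NumPy and a toy
# randomly initialized MLP standing in for the trained neural decoder.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def rotate_to_frame(d, frame):
    """Prior 1: express a world-space direction in a shading frame.
    In the real system, `frame` would be a learned, per-texel rotation."""
    return frame.T @ d

def sample_ggx_half_vector(alpha, u1, u2):
    """Prior 2: sample a half-vector from a GGX microfacet distribution,
    giving the decoder an analytic importance-sampling proposal."""
    phi = 2.0 * np.pi * u1
    cos_theta = np.sqrt((1.0 - u2) / (1.0 + (alpha**2 - 1.0) * u2))
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta**2))
    return np.array([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta])

class ToyDecoder:
    """Stand-in for the neural decoder: maps a latent texture code plus
    frame-local directions to a nonnegative RGB reflectance value."""
    def __init__(self, latent_dim=8, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        n_in = latent_dim + 6  # latent code + incoming + outgoing direction
        self.w1 = rng.normal(0.0, 0.5, (hidden, n_in))
        self.w2 = rng.normal(0.0, 0.5, (3, hidden))

    def __call__(self, latent, wi_local, wo_local):
        x = np.concatenate([latent, wi_local, wo_local])
        h = np.maximum(0.0, self.w1 @ x)      # ReLU hidden layer
        return np.log1p(np.exp(self.w2 @ h))  # softplus keeps output >= 0

# Usage: rotate directions into the frame, then evaluate the decoder.
frame = np.eye(3)                             # identity frame for the sketch
wo = normalize(np.array([0.3, 0.1, 0.9]))
h = sample_ggx_half_vector(alpha=0.2, u1=0.4, u2=0.5)
wi = normalize(2.0 * np.dot(wo, h) * h - wo)  # reflect wo about h
rgb = ToyDecoder()(np.zeros(8),
                   rotate_to_frame(wi, frame),
                   rotate_to_frame(wo, frame))
```

The point of the sketch is the division of labor: the analytic GGX prior handles the sharp specular lobe so the network only has to learn a residual correction, which is what makes importance sampling with a small decoder practical.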
By exposing hardware accelerated tensor operations to ray tracing shaders,
we show that it is possible to inline and execute the neural decoders
efficiently inside a real-time path tracer. We analyze scalability with an
increasing number of neural materials and propose improving performance
using code optimized for coherent and divergent execution. Our neural
material shaders can be over an order of magnitude faster than non-neural
layered materials. This opens the door to film-quality visuals
in real-time applications such as games and live previews.
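One way to picture the coherent-execution idea is batching shading work by material, so each neural decoder runs once on a coherent batch (a single matrix multiply) rather than as divergent per-ray calls. The sketch below is our own illustration of that general scheme, not the paper's shader code; `batch_hits_by_material` is a hypothetical name.

```python
# Hypothetical illustration of coherent execution: group ray hits by
# material ID so each decoder is invoked on one contiguous batch.
import numpy as np

def batch_hits_by_material(material_ids):
    """Return {material_id: array of hit indices}, grouping hits so each
    neural material shader processes a coherent batch in one pass."""
    order = np.argsort(material_ids, kind="stable")
    sorted_ids = material_ids[order]
    # boundaries where the material ID changes in the sorted order
    starts = np.flatnonzero(np.r_[True, sorted_ids[1:] != sorted_ids[:-1]])
    ends = np.r_[starts[1:], len(sorted_ids)]
    return {int(sorted_ids[s]): order[s:e] for s, e in zip(starts, ends)}

# Usage: five hits touching three materials become three coherent batches.
hits = np.array([2, 0, 2, 1, 0])
batches = batch_hits_by_material(hits)
```

On a GPU the same regrouping lets the tensor-core matrix multiplies of a decoder run over many rays at once instead of idling in divergent warps.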
Video illustrating an overview of our system and presenting results on high-quality
layered materials with a range of network configurations.
Feel free to download the video (native resolution: 1920x1080 pixels).
Comparison: target layered materials vs. our neural BRDF model.
We want to thank Toni Bratincevic, Davide Di Giannantonio Potente, and Kevin Margo for their
help creating the reference objects, Yong He for evolving the Slang language to support this
project, Craig Kolb for his help with the 3D asset importer, Justin Holewinski and Patrick
Neill for low-level compiler and GPU driver support, and Karthik Vaidyanathan for providing
the TensorCore support in Slang. We also thank Eugene d'Eon, Thomas Müller, Marco Salvi,
and Bart Wronski for their valuable input. The material test blob was created by Robin Marin
and released under a Creative Commons license.