Neural rendering

Taming optimization variance in compact neural shading networks

We present a training algorithm that mitigates optimization instabilities in small neural networks, such as those used in real-time neural shading applications. While large, overparameterized models exhibit predictable convergence, smaller architectures …

VideoNeuMat: Neural Material Extraction from Generative Video Models

Creating photorealistic materials for 3D rendering requires exceptional artistic skill. Generative models for materials could help, but are currently limited by the lack of high-quality training data. While recent video generative models effortlessly …

UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting

We address the challenge of relighting a single image or video, a task that demands a precise understanding of scene intrinsics and high-quality light transport synthesis. Existing end-to-end relighting models are often limited by the scarcity of …

An Introduction to Neural Shading

Neural shading offers a new paradigm for real-time graphics, replacing hand-crafted algorithms with compact neural networks that can be trained to reproduce complex appearance. In this three-hour course, we introduce the core principles behind neural …

Radiance Surfaces: Optimizing Surface Representations with a 5D Radiance Field Loss

We present a fast and simple technique to convert images into a radiance surface-based scene representation. Building on existing radiance volume reconstruction algorithms, we introduce a subtle yet impactful modification of the loss function …

DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models

Understanding and modeling lighting effects are fundamental tasks in computer vision and graphics. Classic physically based rendering (PBR) accurately simulates light transport, but relies on precise scene representations: explicit 3D geometry, …

Real-Time Neural Appearance Models

We present a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system-level innovations. Our appearance model utilizes learned …

MesoGAN: Generative Neural Reflectance Shells

We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive …

Inverse Global Illumination Using a Neural Radiometric Prior

Inverse rendering methods that account for global illumination are becoming more popular, but current methods must evaluate and automatically differentiate millions of path integrals by tracing multiple light bounces, which remains expensive …

Lightweight Neural Basis Functions for All-Frequency Shading

Basis functions enable both compact representation and efficient computation, and are therefore used pervasively in rendering to perform all-frequency shading. However, common basis functions, including …