Synthesizing Physical Character-Scene Interactions

In this work, we present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters that perform scene interaction tasks in a natural and life-like manner. Our method learns scene interaction behaviors from large unstructured motion datasets, without manual annotation of the motion data. These scene interactions are learned using an adversarial discriminator that evaluates the realism of a motion within the context of a scene.
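The discriminator-driven training signal can be illustrated with a minimal GAIL-style reward sketch. This is a generic adversarial-imitation formulation, not the paper's exact objective; the sigmoid/log form and the function name are illustrative assumptions.

```python
import math

def style_reward(d_logit):
    """GAIL-style imitation reward from a discriminator logit: larger when
    the discriminator believes the simulated motion came from the dataset."""
    d = 1.0 / (1.0 + math.exp(-d_logit))   # sigmoid -> estimated P(motion is real)
    return -math.log(max(1.0 - d, 1e-8))   # reward grows as D -> 1

# An RL policy would combine this style reward with its task reward,
# pushing the character toward motions the discriminator finds realistic.
print(style_reward(0.0))   # logit 0 -> D = 0.5 -> reward = ln 2
```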

Interactive Hair Simulation on the GPU Using ADMM

We devise a local–global solver dedicated to the simulation of Discrete Elastic Rods (DER) with Coulomb friction that can fully leverage the massively parallel compute capabilities of modern GPUs. We verify that our simulator can reproduce analytical results on recently published cantilever, bend–twist, and stick–slip experiments, while drastically decreasing iteration times for high-resolution hair simulations.
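The local–global alternation at the heart of an ADMM solver can be seen in a toy scalar problem. This is a generic ADMM sketch under assumed splitting and parameter names, not the paper's DER solver.

```python
def admm_scalar(a, rho=1.0, iters=50):
    """Minimize (x - a)^2 subject to x >= 0 by ADMM, split into a 'local'
    unconstrained quadratic solve and a 'global' projection, coupled by
    the consensus constraint x = z with scaled dual variable u."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)  # local: quadratic solve
        z = max(0.0, x + u)                          # global: project onto x >= 0
        u += x - z                                   # scaled dual update
    return z

print(admm_scalar(2.0))    # unconstrained minimum is feasible -> converges to 2
print(admm_scalar(-3.0))   # infeasible minimum -> projected to the boundary, 0
```

In a hair simulator, the local step would solve many small per-rod problems in parallel on the GPU while the global step enforces coupling and contact; the scalar toy only mirrors that alternation.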

Generalizing Shallow Water Simulations with Dispersive Surface Waves

This paper introduces a novel method for simulating large bodies of water as a height field. At the start of each time step, we partition the waves into a bulk flow (which approximately satisfies the assumptions of the shallow water equations) and surface waves (which approximately satisfy the assumptions of Airy wave theory). We then solve the two wave regimes separately using appropriate state-of-the-art techniques, and recombine the resulting wave velocities at the end of each step.
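The gap between the two regimes is the dispersion relation: Airy waves travel at a wavelength-dependent phase speed that reduces to the single shallow-water speed only for long waves. A small stdlib sketch, with g, depth, and wavenumbers chosen purely for illustration:

```python
import math

def airy_phase_speed(k, h, g=9.81):
    """Phase speed of a linear (Airy) surface wave with wavenumber k in depth h."""
    return math.sqrt((g / k) * math.tanh(k * h))

def shallow_water_speed(h, g=9.81):
    """Non-dispersive shallow-water speed: the long-wavelength limit of Airy."""
    return math.sqrt(g * h)

h = 10.0
print(airy_phase_speed(1e-3, h))  # long wave: matches shallow_water_speed(h)
print(airy_phase_speed(1.0, h))   # short wave: noticeably slower (dispersion
print(shallow_water_speed(h))     # that the shallow water equations miss)
```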

Boundary Value Caching for Walk on Spheres

Grid-free Monte Carlo methods such as walk on spheres can be used to solve elliptic partial differential equations without mesh generation or global solves. However, such methods independently estimate the solution at every point, and hence do not take advantage of the high spatial regularity of solutions to elliptic problems. We propose a fast caching strategy which first estimates solution values and derivatives at randomly sampled points along the boundary of the domain (or a local region of interest).
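For context, a vanilla walk-on-spheres estimator (without the caching this paper proposes) for the Laplace equation looks like the sketch below; the 2D domain, boundary data, and tolerances are illustrative assumptions.

```python
import math, random

def walk_on_spheres(x, y, g, dist, eps=1e-3, max_steps=1000):
    """One Monte Carlo sample of the harmonic function with boundary data g.
    dist(x, y) returns the distance from (x, y) to the domain boundary."""
    for _ in range(max_steps):
        r = dist(x, y)
        if r < eps:                  # close enough: read off the boundary value
            return g(x, y)
        t = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(t)         # jump to a uniform point on the largest
        y += r * math.sin(t)         # empty sphere centered at the walker
    return g(x, y)

# Unit disk with boundary data g(x, y) = x; its harmonic extension is u = x,
# so estimates at (0.3, 0) should concentrate around 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x
random.seed(0)
u_hat = sum(walk_on_spheres(0.3, 0.0, g, dist) for _ in range(5000)) / 5000
```

Note that every query point repeats its walks from scratch; the paper's caching strategy instead reuses boundary estimates across nearby evaluation points.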

Micro-Mesh Construction

Micro-meshes (𝜇-meshes) are a new structured graphics primitive, consisting of a base mesh enriched by a displacement map, that supports a large increase in geometric fidelity without commensurate memory and run-time processing costs. A new generation of GPUs supports this structure with native hardware 𝜇-mesh ray tracing, which leverages a self-bounding, compressed displacement-mapping scheme to achieve these efficiencies.
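The primitive can be sketched as follows: each micro-vertex interpolates the base triangle barycentrically and is offset along an interpolated displacement direction by a scalar from the displacement map. The function names and per-vertex layout here are illustrative assumptions.

```python
def micro_vertex(base, dirs, bary, d):
    """Position of one micro-vertex of a displaced base triangle.
    base: three base-vertex positions; dirs: three displacement directions;
    bary: barycentric coordinates; d: scalar displacement-map value."""
    p = [sum(bary[i] * base[i][c] for i in range(3)) for c in range(3)]  # surface point
    n = [sum(bary[i] * dirs[i][c] for i in range(3)) for c in range(3)]  # offset direction
    return [p[c] + d * n[c] for c in range(3)]

# Flat base triangle displaced straight up by 0.5 at its centroid.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
up = [(0.0, 0.0, 1.0)] * 3
v = micro_vertex(tri, up, (1/3, 1/3, 1/3), 0.5)
```

A real 𝜇-mesh evaluates this at every vertex of a power-of-two subdivision of each base triangle, with the scalars stored in a compressed displacement map.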

Inverse Global Illumination using a Neural Radiometric Prior

Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination.

Recursive Control Variates for Inverse Rendering

We present a method for reducing errors (variance and bias) in physically based differentiable rendering (PBDR). Typical applications of PBDR repeatedly render a scene as part of an optimization loop involving gradient descent. The actual change introduced by each gradient descent step is often relatively small, causing a significant degree of redundancy in this computation. We exploit this redundancy by formulating a gradient estimator that employs a "recursive control variate", which leverages information from previous optimization steps.
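The core idea can be demonstrated on a scalar stand-in for the renderer, where common random numbers play the role of the correlation the paper obtains by reusing information across optimization steps. The toy objective and noise model are assumptions.

```python
import random

def noisy_render(theta, seed):
    """Stand-in for a Monte Carlo renderer: a noisy but (here) unbiased
    estimate of the true quantity f(theta) = theta**2, seeded for reuse."""
    return theta**2 + random.Random(seed).gauss(0.0, 0.5)

def cv_estimate(theta, theta_prev, f_prev, seed):
    """Recursive control variate: keep the running estimate f_prev and add
    only the estimated *difference* between consecutive parameters. Sharing
    the seed correlates the two renders, so the difference is low variance
    even though each render alone is noisy."""
    return f_prev + noisy_render(theta, seed) - noisy_render(theta_prev, seed)

thetas = [1.0, 1.1, 1.25, 1.4]        # a short gradient-descent trajectory
f = noisy_render(thetas[0], seed=0)   # initial (noisy) estimate
for k in range(1, len(thetas)):
    f = cv_estimate(thetas[k], thetas[k - 1], f, seed=k)
# f now tracks thetas[-1]**2 carrying only the first sample's noise,
# rather than accumulating fresh noise at every step.
```

In this toy the additive noise cancels exactly in each difference; in rendering the cancellation is only partial, which is where the paper's analysis of residual variance and bias comes in.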

SSIF: Single-shot Implicit Morphable Faces With Consistent Texture Parameterization

There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Methods based on neural implicit representations, such as signed distance functions (SDF) or neural radiance fields, approach photo-realism, but are difficult to animate and do not generalize well to unseen data.

Live 3D Portrait: Real-Time Radiance Fields for Single-Image Portrait View Synthesis

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., face portrait) in real time. Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering. Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
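A triplane stores features on three axis-aligned 2D grids; a 3D point is featurized by projecting it onto each plane, sampling bilinearly, and combining the results. A minimal scalar sketch, where the grid layout, summation as the combine step, and names are assumptions (real triplanes store feature vectors that a small MLP decodes into density and color):

```python
def bilerp(grid, u, v):
    """Bilinear sample of a 2D scalar grid at normalized coords u, v in [0, 1]."""
    h, w = len(grid), len(grid[0])
    fu, fv = u * (w - 1), v * (h - 1)
    j0, i0 = int(fu), int(fv)
    j1, i1 = min(j0 + 1, w - 1), min(i0 + 1, h - 1)
    b, a = fu - j0, fv - i0
    return ((1 - a) * ((1 - b) * grid[i0][j0] + b * grid[i0][j1])
            + a * ((1 - b) * grid[i1][j0] + b * grid[i1][j1]))

def triplane_feature(plane_xy, plane_xz, plane_yz, x, y, z):
    """Feature for a 3D point: sum of samples at its three planar projections."""
    return bilerp(plane_xy, x, y) + bilerp(plane_xz, x, z) + bilerp(plane_yz, y, z)

# Three constant 4x4 planes: every query returns 1 + 2 + 3 = 6.
p1 = [[1.0] * 4 for _ in range(4)]
p2 = [[2.0] * 4 for _ in range(4)]
p3 = [[3.0] * 4 for _ in range(4)]
feat = triplane_feature(p1, p2, p3, 0.4, 0.7, 0.2)
```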

Real-Time Neural Appearance Models

We present a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system-level innovations.