A unistable polyhedron with 14 faces

Unistable polyhedra are in equilibrium on only one of their faces. The smallest previously known homogeneous unistable polyhedron has 18 faces. Using a new optimization algorithm, we have found a unistable polyhedron with only 14 faces, which we believe to be the minimum possible number. Despite the simplicity of the formulation, computers had not previously been applied successfully to this problem because of the seemingly insurmountable dimensionality of the underlying mathematical apparatus.
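The defining property lends itself to a direct computational check: a convex, homogeneous polyhedron rests stably on a face exactly when the projection of its center of mass onto that face's plane falls strictly inside the face. The Python sketch below counts the stable faces of a given polyhedron; it is illustration only, not the paper's optimization algorithm, and the vertex/face conventions are assumptions.

```python
import numpy as np

def solid_centroid(vertices, faces):
    """Center of mass of a closed, uniform-density polyhedron.

    `vertices` is an (n, 3) array; each face lists vertex indices ordered
    counter-clockwise as seen from outside.  Every face is fan-triangulated
    into tetrahedra against the origin; signed volumes make the choice of
    origin irrelevant.
    """
    total_volume, weighted_sum = 0.0, np.zeros(3)
    for face in faces:
        a = vertices[face[0]]
        for i in range(1, len(face) - 1):
            b, c = vertices[face[i]], vertices[face[i + 1]]
            volume = np.dot(a, np.cross(b, c)) / 6.0       # signed tetrahedron volume
            weighted_sum += volume * (a + b + c) / 4.0     # tetrahedron centroid (4th vertex is the origin)
            total_volume += volume
    return weighted_sum / total_volume

def stable_faces(vertices, faces, eps=1e-9):
    """Indices of the faces on which the polyhedron rests in stable equilibrium."""
    com = solid_centroid(vertices, faces)
    stable = []
    for k, face in enumerate(faces):
        pts = [vertices[i] for i in face]
        normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
        normal /= np.linalg.norm(normal)                     # outward unit normal
        foot = com - np.dot(com - pts[0], normal) * normal   # project COM onto the face plane
        inside = all(
            # The projected point must lie on the inner side of every edge.
            np.dot(np.cross(pts[(i + 1) % len(pts)] - pts[i], foot - pts[i]), normal) > eps
            for i in range(len(pts))
        )
        if inside:
            stable.append(k)
    return stable
```

For a unit cube this reports all six faces as stable; a unistable polyhedron returns exactly one face index.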

Infinite Resolution Textures

We propose a new texture sampling approach that preserves crisp silhouette edges when magnifying during close-up viewing, and benefits from image pre-filtering when minifying for viewing at farther distances. During a pre-processing step, we extract curved silhouette edges from the underlying images. These edges are used to adjust the texture coordinates of the requested samples during magnification. The original image is then sampled -- only once!
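The Python sketch below illustrates the kind of coordinate adjustment the abstract describes, under assumptions of our own: silhouettes are stored as quadratic Bezier curves in texture space, and a requested sample that lands within half a texel of a curve is pushed to the curve's far side so that a single bilinear lookup does not blend across the edge. The curve representation, the closest-point search, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def bezier_point(p0, p1, p2, t):
    """Point on a quadratic Bezier silhouette curve in (u, v) texture space."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def closest_point_on_curve(p0, p1, p2, uv, samples=64):
    """Brute-force nearest curve point to uv (adequate for a sketch)."""
    ts = np.linspace(0.0, 1.0, samples)
    points = np.array([bezier_point(p0, p1, p2, t) for t in ts])
    best = np.argmin(np.sum((points - uv) ** 2, axis=1))
    return points[best]

def adjusted_uv(uv, curve, texel_size):
    """Nudge uv off the silhouette so the bilinear footprint stays on one side."""
    uv = np.asarray(uv, dtype=float)
    nearest = closest_point_on_curve(*curve, uv)
    offset = uv - nearest
    distance = np.linalg.norm(offset)
    margin = 0.5 * texel_size                        # half the bilinear filter footprint
    if 0.0 < distance < margin:
        return nearest + offset / distance * margin  # move away from the curve
    return uv
```

The renderer then performs its single filtered lookup at the adjusted coordinate.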

Phenomenological Transparency

Translucent objects such as fog, clouds, smoke, glass, ice, and liquids are pervasive in cinematic environments because they frame scenes in depth and create visually compelling shots.

Hashed Alpha Testing

Renderers apply alpha testing to mask out complex silhouettes using alpha textures on simple proxy geometry. While widely used, alpha testing has a long-standing problem that is underreported in the literature but observable in commercial games: geometry can disappear entirely as alpha-mapped polygons recede with distance.
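The sketch below illustrates the basic idea suggested by the title, with our own hash and scale choices rather than the paper's exact construction: replace the fixed alpha threshold with a pseudo-random threshold that is stable in object space, so that in expectation the correct fraction of fragments survives even when pre-filtered alpha falls below 0.5 almost everywhere.

```python
import numpy as np

def hash3(cell):
    """Cheap, stable hash of a 3D lattice cell to a value in [0, 1)."""
    h = np.sin(np.dot(cell, np.array([12.9898, 78.233, 37.719]))) * 43758.5453
    return h - np.floor(h)

def alpha_test(alpha, object_pos, pixel_footprint):
    """Return True if the fragment should be kept.

    Classic test: alpha >= 0.5.  As an alpha-mapped polygon recedes, the
    pre-filtered alpha drops below 0.5 and the geometry vanishes.  Using a
    hashed threshold keeps, in expectation, an `alpha` fraction of
    fragments regardless of viewing distance.
    """
    # Quantize object-space position so the threshold is stable under camera
    # motion; tying the cell size to the pixel footprint keeps the resulting
    # noise roughly pixel-sized on screen.
    cell = np.floor(np.asarray(object_pos, dtype=float) / max(pixel_footprint, 1e-6))
    threshold = hash3(cell)
    return alpha >= threshold
```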

Real-Time Global Illumination using Precomputed Light Field Probes

We introduce a new data structure and algorithms that employ it to compute real-time global illumination from static environments. Light field probes encode a scene’s full light field and internal visibility. They extend current radiance and irradiance probe structures with per-texel visibility information similar to a G-buffer and variance shadow map. We apply ideas from screen-space and voxel cone tracing techniques to this data structure to efficiently sample radiance on world space rays, with correct visibility information, directly within pixel and compute shaders.
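As a concrete illustration of the variance-shadow-map-style visibility the abstract mentions, the sketch below shows the Chebyshev test that such per-texel moments support; the surrounding probe texture layout and the names are assumptions on our part.

```python
def probe_visibility(mean_dist, mean_dist_sq, query_dist, min_variance=1e-4):
    """Upper bound on the probability that a point at query_dist is unoccluded.

    Each probe texel stores the mean and second moment of distance to the
    nearest surface along its direction; the Chebyshev inequality turns the
    two moments into a soft visibility factor, as in variance shadow maps.
    """
    if query_dist <= mean_dist:
        return 1.0                                   # closer than the stored surface
    variance = max(mean_dist_sq - mean_dist * mean_dist, min_variance)
    d = query_dist - mean_dist
    return variance / (variance + d * d)
```

A shader marching a world-space ray might use this factor to decide whether a sample point is occluded before trusting radiance fetched from that probe.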

Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification

This paper presents a novel framework to combine multiple layers and modalities of deep neural networks for video classification. We first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by the proposed feature aggregation methods. We further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales.
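The sketch below is an illustrative reduction of the two ideas, not the paper's exact pipeline: features from a convolutional layer and a fully connected layer are aggregated into one multilayer representation, and per-modality class scores are fused by a weighted average; the shapes, pooling choice, and fusion weights are assumptions.

```python
import numpy as np

def aggregate_layers(conv_maps, fc_features):
    """Combine a convolutional layer and a fully connected layer.

    conv_maps:   (channels, H, W) activation maps from a conv layer.
    fc_features: (D,) activations from a fully connected layer.
    The conv maps are reduced by global average pooling so both layers
    yield fixed-length vectors that can be concatenated.
    """
    pooled = conv_maps.mean(axis=(1, 2))              # (channels,)
    return np.concatenate([pooled, fc_features])      # multilayer representation

def fuse_modalities(scores_per_modality, weights=None):
    """Late fusion: weighted average of per-modality class scores."""
    scores = np.stack(scores_per_modality)            # (modalities, classes)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return weights @ scores                           # fused class scores
```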

Deep G-Buffers for Stable Global Illumination Approximation

We introduce a new hardware-accelerated method for constructing Deep G-buffers that is 2x-8x faster than the previous depth peeling method and produces more stable results. We then build several high-performance shading algorithms atop our representation, including dynamic diffuse interreflection, ambient occlusion (AO), and mirror reflection effects.

Our construction method is order-independent, guarantees a minimum separation between layers, operates in a (small) bounded memory footprint, and does not require per-pixel sorting.
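A minimal sketch of the per-pixel rule the two-layer structure enforces, assuming access to each pixel's fragment stream: the first layer keeps the nearest fragment, and the second layer keeps the nearest fragment that lies at least a minimum separation behind it. The actual hardware-accelerated construction is more involved; treat this as illustration only.

```python
import math

def two_layer_depths(fragment_depths, min_separation):
    """Return (layer1, layer2) depths for one pixel's fragments, in any order."""
    layer1 = min(fragment_depths)                     # nearest surface
    layer2 = math.inf
    for z in fragment_depths:
        # Nearest fragment at least min_separation behind the first layer;
        # no sorting of the fragment list is required.
        if z >= layer1 + min_separation and z < layer2:
            layer2 = z
    return layer1, layer2
```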

A Phenomenological Scattering Model for Order-Independent Transparency

Translucent objects such as fog, smoke, glass, ice, and liquids are pervasive in cinematic environments because they frame scenes in depth and create visually compelling shots. Unfortunately, they are hard to simulate in real time and have thus previously been rendered poorly compared to opaque surfaces in games.

This paper introduces the first model for a real-time rasterization algorithm that can simultaneously approximate the following transparency phenomena: wavelength-varying ("colored") transmission, translucent colored shadows, caustics, partial coverage, diffusion, and refraction.
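As background for the compositing side, the following sketch shows order-independent weighted compositing with per-channel ("colored") transmission in the style of weighted blended order-independent transparency; the depth weight and fragment format are our assumptions, not the paper's scattering model.

```python
import numpy as np

def composite_oit(fragments, background):
    """fragments: list of (rgb, coverage, transmission_rgb, depth), any order."""
    accum_rgb = np.zeros(3)
    accum_weight = 0.0
    transmittance = np.ones(3)                        # per-channel ("colored")
    for rgb, coverage, transmission, depth in fragments:
        w = 1.0 / (1e-5 + depth ** 2)                 # assumed depth-based weight
        accum_rgb += w * coverage * np.asarray(rgb)
        accum_weight += w * coverage
        # Light continuing toward the eye is tinted and attenuated per channel.
        transmittance *= 1.0 - coverage * (1.0 - np.asarray(transmission))
    average_rgb = accum_rgb / accum_weight if accum_weight > 0.0 else np.zeros(3)
    return average_rgb * (1.0 - transmittance) + np.asarray(background) * transmittance
```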

Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

We present a real-time deep learning framework for video-based facial performance capture—the dense 3D tracking of an actor’s face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5–10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject.
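The sketch below shows the shape of such a setup under our own assumptions (mesh resolution, backbone, and loss are illustrative, not the production network): a small convolutional network regresses per-vertex 3D positions of a fixed-topology face mesh from a single frame, supervised by the dense tracking produced offline for that actor.

```python
import torch
import torch.nn as nn

NUM_VERTICES = 5000                                   # assumed mesh resolution

class FrameToMesh(nn.Module):
    """Regress (NUM_VERTICES, 3) vertex positions from one video frame."""
    def __init__(self, num_vertices=NUM_VERTICES):
        super().__init__()
        self.features = nn.Sequential(                # simple conv backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 3 * num_vertices)

    def forward(self, frame):                         # frame: (batch, 3, H, W)
        x = self.features(frame).flatten(1)
        return self.head(x).view(-1, NUM_VERTICES, 3)

# One training step against the offline capture (per-vertex L2 loss).
model = FrameToMesh()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(4, 3, 240, 320)                  # stand-in for video frames
target_vertices = torch.randn(4, NUM_VERTICES, 3)     # stand-in for tracked mesh
loss = nn.functional.mse_loss(model(frames), target_vertices)
loss.backward()
optimizer.step()
```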