Path space similarity determined by Fourier histogram descriptors

We propose a simple technique for the efficient estimation of the similarity of light transport paths. Considering descriptors of the incident radiance, we improve both filtering-based [Keller et al. 2014] and caching-based [Ward et al. 1988] variance reduction techniques for image synthesis that so far could not measure variations of material and lighting, as they relied only on geometric measures of similarity such as the divergence of normals, irradiance gradients, and the distance between the vertices storing information.
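
A minimal sketch of how such a similarity test might look, assuming the incident radiance at a path vertex is binned into a small azimuthal histogram and described by the magnitudes of its discrete Fourier coefficients; the bin count, the Euclidean metric, and all identifiers are illustrative choices, not taken from the paper:

    // Hypothetical descriptor: magnitudes of the DFT of an incident-radiance histogram,
    // insensitive to a rotation of the azimuthal binning.
    #include <array>
    #include <cmath>
    #include <cstddef>

    constexpr std::size_t kBins = 8;                       // illustrative bin count
    using Histogram  = std::array<float, kBins>;
    using Descriptor = std::array<float, kBins / 2 + 1>;   // |F_0| ... |F_{kBins/2}|

    Descriptor fourierDescriptor(const Histogram& h)
    {
        Descriptor d{};
        for (std::size_t k = 0; k < d.size(); ++k)
        {
            float re = 0.0f, im = 0.0f;
            for (std::size_t n = 0; n < kBins; ++n)
            {
                const float phi = -2.0f * 3.14159265f * float(k * n) / float(kBins);
                re += h[n] * std::cos(phi);
                im += h[n] * std::sin(phi);
            }
            d[k] = std::sqrt(re * re + im * im);
        }
        return d;
    }

    // Smaller distance means the two path vertices receive similar incident radiance.
    float descriptorDistance(const Descriptor& a, const Descriptor& b)
    {
        float s = 0.0f;
        for (std::size_t k = 0; k < a.size(); ++k)
            s += (a[k] - b[k]) * (a[k] - b[k]);
        return std::sqrt(s);
    }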

GI next: global illumination for production rendering on GPUs

The sheer size of texture data and the complexity of custom shaders in production rendering were the two major hurdles in the way of GPU acceleration. Requiring only tiny modifications of an existing production renderer, we are able to accelerate the computation of global illumination by more than an order of magnitude.

Path space filtering

Light transport simulation consists of summing up the contributions of light transport paths that connect sensors and light sources. Such light transport paths may be sampled by following photon trajectories from the light sources, by tracing paths from the camera, and by connecting such path segments by proximity (photon mapping) or by shadow rays. Smoothing the contributions of light transport paths before reconstructing the image can efficiently reduce the noise inherent in sampling.
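
A minimal sketch of the underlying idea, assuming contributions stored at path vertices are averaged over neighbors that are close in position and have similar shading normals; the brute-force neighbor search, the thresholds, and all identifiers are illustrative, not the paper's implementation:

    #include <vector>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    struct PathVertex
    {
        Vec3 position;
        Vec3 normal;        // unit length
        Vec3 contribution;  // sampled radiance contribution stored at this vertex
    };

    // Average the contributions of vertices within `radius` whose normals agree;
    // a spatial hash or range search would replace the linear scan in practice.
    Vec3 filteredContribution(const PathVertex& query,
                              const std::vector<PathVertex>& vertices,
                              float radius, float minCosNormal = 0.9f)
    {
        Vec3 sum{ 0.0f, 0.0f, 0.0f };
        int  count = 0;
        for (const PathVertex& v : vertices)
        {
            const Vec3 d = sub(v.position, query.position);
            if (dot(d, d) > radius * radius) continue;                 // too far away
            if (dot(v.normal, query.normal) < minCosNormal) continue;  // normals diverge
            sum.x += v.contribution.x; sum.y += v.contribution.y; sum.z += v.contribution.z;
            ++count;
        }
        if (count == 0) return query.contribution;
        return { sum.x / count, sum.y / count, sum.z / count };
    }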

Efficient stackless hierarchy traversal on GPUs with backtracking in constant time

The fastest acceleration schemes for ray tracing rely on traversing a bounding volume hierarchy (BVH) for efficient culling and use backtracking, which in the worst case may incur cost proportional to the depth of the hierarchy in either time or state memory. We show that the next node in such a traversal can actually be determined in constant time and state memory. In fact, our newly proposed parallel software implementation requires only a few modifications to existing traversal methods and outperforms the fastest stack-based algorithms on GPUs.
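
For contrast, a conventional stack-based BVH traversal is sketched below: the explicit stack is exactly the per-ray state that, in the worst case, grows with the depth of the hierarchy and that constant-time backtracking avoids. The node layout and callback interface are illustrative; this is not the paper's algorithm:

    #include <functional>
    #include <vector>

    struct BVHNode
    {
        int left = -1, right = -1;  // child node indices; both -1 marks a leaf
    };

    void traverseWithStack(const std::vector<BVHNode>& nodes,
                           const std::function<bool(int)>& rayHitsBounds,   // bounding box test
                           const std::function<void(int)>& intersectLeaf)   // primitive tests
    {
        std::vector<int> stack;   // per-ray traversal state, worst case O(tree depth)
        stack.push_back(0);       // start at the root
        while (!stack.empty())
        {
            const int index = stack.back();
            stack.pop_back();
            if (!rayHitsBounds(index)) continue;                // cull the whole subtree
            const BVHNode& node = nodes[index];
            if (node.left < 0 && node.right < 0) { intersectLeaf(index); continue; }
            if (node.right >= 0) stack.push_back(node.right);   // bookkeeping for backtracking
            if (node.left  >= 0) stack.push_back(node.left);
        }
    }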

Stackless ray tracing of patches from feature-adaptive subdivision on GPUs

OpenSubdiv [Pixar 2012] is the de facto industry standard for the representation of subdivision surfaces. Its feature-adaptive subdivision [Nießner 2013] allows for efficient display using rasterization hardware. Based on this feature-adaptive refinement of creases, semi-sharp edges, and irregular patches, we introduce an efficient algorithm for ray tracing the resulting patches to almost floating-point precision.

Towards Foveated Rendering for Gaze-Tracked Virtual Reality

Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head-mounted displays, where target frame rate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve these potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays.

Improved Half Vector Space Light Transport

In this paper, we present improvements to half vector space light transport (HSLT) [KHD14], which make this approach more practical, robust for difficult input geometry, and faster. Our first contribution is the computation of half vector space ray differentials in a different domain than that used in the original work. This enables a more uniform stratification over the image plane during Markov chain exploration.
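
For reference, the half vector at an interior path vertex with incident direction ω_i and outgoing direction ω_o is the standard microfacet half vector (a textbook definition, not a contribution of this paper),

    \[ \mathbf{h} \;=\; \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}, \]

and HSLT, roughly speaking, parametrizes and mutates paths in the coordinates of these half vectors rather than in vertex positions.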

Reflectance Modeling by Neural Texture Synthesis

We extend parametric texture synthesis to capture rich, spatially varying parametric reflectance models from a single image. Our input is a single head-lit flash image of a mostly flat, mostly stationary (textured) surface, and the output is a tile of SVBRDF parameters that reproduce the appearance of the material. No user intervention is required.
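
A minimal sketch of the kind of summary statistic that parametric neural texture synthesis matches, namely a Gram matrix of feature-map channels; how such statistics are combined with the SVBRDF fit in the paper is not reproduced here, and the channels-by-pixels layout is an assumption:

    #include <cstddef>
    #include <vector>

    // features: C rows (channels), each with N entries (spatial locations).
    // G[i][j] is the (normalized) inner product of channels i and j over all pixels.
    std::vector<std::vector<float>> gramMatrix(const std::vector<std::vector<float>>& features)
    {
        const std::size_t C = features.size();
        const std::size_t N = C ? features[0].size() : 0;
        std::vector<std::vector<float>> G(C, std::vector<float>(C, 0.0f));
        for (std::size_t i = 0; i < C; ++i)
            for (std::size_t j = i; j < C; ++j)
            {
                float s = 0.0f;
                for (std::size_t n = 0; n < N; ++n)
                    s += features[i][n] * features[j][n];
                G[i][j] = G[j][i] = s / float(N ? N : 1);  // symmetric, normalized by pixel count
            }
        return G;
    }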

Perceptually-Based Foveated Virtual Reality

Humans have two distinct vision systems: foveal and peripheral vision. Foveal vision is sharp and detailed, while peripheral vision lacks fidelity. The difference in characteristics of the two systems enables recently popular foveated rendering systems, which seek to increase rendering performance by lowering image quality in the periphery. We present a set of perceptually-based methods for improving foveated rendering running on a prototype virtual reality headset with an integrated eye tracker.

A Real-time Energy-Efficient Superpixel Hardware Accelerator for Mobile Computer Vision Applications

Superpixel generation is a common preprocessing step in vision processing aimed at dividing an image into non-overlapping regions. Simple Linear Iterative Clustering (SLIC) is a commonly used superpixel algorithm that offers a good balance between performance and accuracy. However, the algorithm’s high computational and memory bandwidth requirements result in performance and energy efficiency that do not meet the demands of real-time embedded applications. In this work, we explore the design of an energy-efficient superpixel accelerator for real-time computer vision applications.
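
A minimal sketch of the SLIC assignment step, assuming the image is already in CIELAB and cluster centers are initialized on a grid with spacing S; the center update and the outer iteration are omitted, and all identifiers are illustrative rather than the accelerator's implementation:

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Pixel  { float l, a, b; };               // CIELAB color
    struct Center { float l, a, b; float x, y; };   // cluster center in color and position

    // Each pixel is assigned to the nearest center within a 2S x 2S window,
    // using the SLIC distance that combines color and spatial terms (compactness m).
    void assignPixels(const std::vector<Pixel>& image, int width, int height,
                      const std::vector<Center>& centers, float S, float m,
                      std::vector<int>& label)
    {
        label.assign(image.size(), -1);
        std::vector<float> best(image.size(), std::numeric_limits<float>::max());
        for (std::size_t c = 0; c < centers.size(); ++c)
        {
            const int x0 = std::max(0, int(centers[c].x - 2.0f * S));
            const int x1 = std::min(width  - 1, int(centers[c].x + 2.0f * S));
            const int y0 = std::max(0, int(centers[c].y - 2.0f * S));
            const int y1 = std::min(height - 1, int(centers[c].y + 2.0f * S));
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                {
                    const std::size_t i = std::size_t(y) * width + x;
                    const Pixel& p = image[i];
                    const float dc2 = (p.l - centers[c].l) * (p.l - centers[c].l)
                                    + (p.a - centers[c].a) * (p.a - centers[c].a)
                                    + (p.b - centers[c].b) * (p.b - centers[c].b);
                    const float ds2 = (x - centers[c].x) * (x - centers[c].x)
                                    + (y - centers[c].y) * (y - centers[c].y);
                    const float d   = dc2 + ds2 * (m * m) / (S * S);  // squared SLIC distance
                    if (d < best[i]) { best[i] = d; label[i] = int(c); }
                }
        }
    }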