Towards Foveated Rendering for Gaze-Tracked Virtual Reality

Anjul Patney (NVIDIA), Marco Salvi (NVIDIA), Joohwan Kim, Anton Kaplanyan (NVIDIA), Chris Wyman (NVIDIA), Nir Benty (NVIDIA), David Luebke (NVIDIA), Aaron Lefohn (NVIDIA), in ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH Asia 2016, December 2016
Research Area: 3D Graphics, Stereoscopic 3D
Date: December 2016
Abstract: Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head-mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When applying a postprocess contrast enhancement, subjects tolerated up to 2x larger blur radius before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head-mounted displays augmented with high-speed gaze tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer. Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shades by up to 70% and allows coarsened shading up to 30 degrees closer to the fovea than Guenter et al. [2012] without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering. We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings.
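The core idea in the abstract, eccentricity-dependent filtering followed by a contrast-enhancement pass to counteract the "tunnel vision" effect, can be illustrated with a small sketch. This is not the paper's renderer: it is a minimal 1-D analogy, assuming a Gaussian blur whose radius grows linearly with distance from the fixation point and a simple unsharp-mask step standing in for the contrast enhancement; the function name and parameters are hypothetical.

```python
import numpy as np

def foveated_blur_1d(signal, fovea_center, fovea_radius, max_sigma):
    """Illustrative sketch: blur a 1-D signal with a kernel that widens
    with eccentricity (distance from the fixation point), then apply an
    unsharp-mask style boost to restore some of the contrast that the
    filtering removed. Not the paper's algorithm."""
    n = len(signal)
    x = np.arange(n)
    # Eccentricity: 0 inside the fovea, growing linearly outside it.
    ecc = np.maximum(np.abs(x - fovea_center) - fovea_radius, 0) / n
    sigma = ecc * max_sigma  # blur radius grows toward the periphery

    blurred = np.empty(n)
    for i in range(n):
        s = sigma[i]
        if s < 1e-3:
            blurred[i] = signal[i]  # foveal samples pass through unfiltered
            continue
        w = np.exp(-0.5 * ((x - i) / s) ** 2)  # per-sample Gaussian kernel
        blurred[i] = np.dot(w, signal) / w.sum()

    # Unsharp mask as a stand-in for the postprocess contrast enhancement:
    # re-amplify detail relative to a local (5-tap box) average.
    base = np.convolve(blurred, np.ones(5) / 5, mode="same")
    enhanced = blurred + 0.5 * (blurred - base)
    return blurred, enhanced
```

In a real foveated renderer the filtering happens in 2-D image space (and, per the abstract, both pre- and post-shading), but the same shape holds: detail falls off with eccentricity, and a contrast step compensates for the perceived loss.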