We introduce a new method for computing two-level Layered Depth Images (LDIs) [Shade et al. 1998] that is designed for modern GPUs. The method is order-independent, can guarantee a minimum depth separation between the layers, operates within small, bounded memory, and requires no explicit sorting. Critically, it also operates in a single pass over scene geometry. This is important because the cost of streaming geometry through a modern game engine pipeline can be high due to work expansion (from patches to triangles to pixels), matrix-skinning for animation, and the relative scarcity of main memory bandwidth compared to caches and registers.
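The abstract does not spell out the selection mechanism, but the stated properties (order independence, bounded memory, no sorting, guaranteed separation) can be illustrated with a minimal per-pixel sketch. The sketch below is hypothetical: it assumes a per-pixel estimate `predicted_first` of the first layer's depth is available in advance (for instance, reprojected from a previous frame), so that each fragment can be classified independently of all others.

```python
import math

def select_two_layers(fragment_depths, predicted_first, min_separation):
    """Order-independent two-layer depth selection for one pixel (a sketch).

    Each fragment is classified independently, so the result does not depend
    on submission order; only two depth values are retained (bounded memory,
    no sorting). `predicted_first` and `min_separation` are assumptions for
    illustration, not names from the paper.
    """
    layer1 = math.inf  # nearest surface at this pixel
    layer2 = math.inf  # nearest surface at least min_separation behind layer 1
    for z in fragment_depths:
        # Layer 1: an ordinary closest-fragment depth test.
        layer1 = min(layer1, z)
        # Layer 2: closest fragment sufficiently far behind the predicted
        # first layer, which guarantees the minimum depth separation.
        if z >= predicted_first + min_separation:
            layer2 = min(layer2, z)
    return layer1, layer2
```

Because both updates are commutative `min` operations, any permutation of the fragment stream yields the same pair of layers, which is the order-independence property claimed above.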
We apply the new LDI method to create Deep Geometry Buffers for deferred shading and show that two layers with a minimum depth separation make a variety of screen-space illumination effects surprisingly robust. We specifically demonstrate improved robustness for Scalable Ambient Occlusion [McGuire et al. 2012b], an extended multibounce screen-space radiosity [Soler et al. 2009], and screen-space reflection ray tracing. All of these effects remain necessarily view-dependent, but they are plausible given the visible geometry and more temporally coherent than their single-layer counterparts.
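To illustrate why a second layer helps these effects, consider the occlusion query at the core of screen-space ray marching and ambient occlusion: a sample is blocked if its depth falls within some surface at the sampled pixel. The sketch below is a hypothetical two-layer version of that test; `thickness` is an assumed finite surface-thickness heuristic, common in screen-space tracing but not specified in the text above.

```python
def occludes(sample_z, layer1_z, layer2_z, thickness):
    """Return True if a ray-march sample at depth sample_z is blocked by
    either depth layer at this pixel (a sketch).

    With a single layer, a sample that passes behind the front surface is
    ambiguous: the tracer cannot tell whether hidden geometry is there.
    The second layer lets the same cheap interval test also detect surfaces
    hidden just behind the first, which is what makes the screen-space
    effects above more robust and temporally coherent.
    """
    in_layer1 = layer1_z <= sample_z <= layer1_z + thickness
    in_layer2 = layer2_z <= sample_z <= layer2_z + thickness
    return in_layer1 or in_layer2
```

A sample at depth 7.0 between a front surface at 5.0 and a hidden surface at 9.0 correctly reports no hit, while samples inside either layer's thickness interval report occlusion.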