Spatiotemporal Blue Noise Masks

Blue noise error patterns are well suited to human perception, and when applied to stochastic rendering techniques, blue noise masks can minimize unwanted low-frequency noise in the final image. Current methods of applying different blue noise masks to each rendered frame result in either white noise frequency spectra temporally, and thus poor convergence and stability, or lower quality spatially. We propose novel blue noise masks that retain high quality blue noise spatially, yet when animated produce values at each pixel that are well distributed over time. To do so, we create scalar valued masks by modifying the energy function of the void and cluster algorithm. To create uniform and nonuniform vector valued masks, we make the same modifications to the blue-noise dithered sampling algorithm. These masks exhibit blue noise frequency spectra in both the spatial and temporal domains, resulting in visually pleasing error patterns, rapid convergence speeds, and increased stability when filtered temporally. Since masks can be initialized with arbitrary sample sets, these improvements can be used on a large variety of problems, both uniformly and importance sampled. We demonstrate these improvements in volumetric rendering, ambient occlusion, and stochastic convolution. By extending spatial blue noise to spatiotemporal blue noise, we overcome the convergence limitations of prior blue noise works, enabling new applications for blue noise distributions. Usable masks and source code can be found at
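The key modification described above, splitting the void-and-cluster energy into a spatial term over each 2D slice and a temporal term over each pixel's value sequence, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name `stbn_energy`, the Gaussian form, and the sigma values are assumptions for the sketch.

```python
import numpy as np

def stbn_energy(mask, sigma_spatial=1.9, sigma_temporal=1.9):
    """Energy field for a binary 3D mask of shape (T, H, W), indexed (t, y, x).

    Sketch of the modified void-and-cluster energy: each 'on' sample splats
    a toroidal 2D Gaussian onto its own time slice (pushing every slice
    toward 2D blue noise) and a toroidal 1D Gaussian onto its own (y, x)
    location in the other slices (pushing each pixel's sequence toward
    1D blue noise over time).
    """
    T, H, W = mask.shape
    energy = np.zeros((T, H, W), dtype=float)
    for t, y, x in zip(*np.nonzero(mask)):
        # Spatial term: pixels in the same slice, toroidal distance.
        dy = np.minimum(np.abs(np.arange(H) - y), H - np.abs(np.arange(H) - y))
        dx = np.minimum(np.abs(np.arange(W) - x), W - np.abs(np.arange(W) - x))
        d2 = dy[:, None] ** 2 + dx[None, :] ** 2
        energy[t] += np.exp(-d2 / (2.0 * sigma_spatial ** 2))
        # Temporal term: the same pixel in the other slices only.
        dt = np.minimum(np.abs(np.arange(T) - t), T - np.abs(np.arange(T) - t))
        temporal = np.exp(-dt ** 2 / (2.0 * sigma_temporal ** 2))
        temporal[t] = 0.0  # slice t is already covered by the spatial term
        energy[:, y, x] += temporal
    return energy
```

In void and cluster, the next sample would be placed at the minimum of this field (the largest void) and the tightest cluster removed at its maximum; because the two terms never couple a pixel to a *different* pixel in a *different* slice, each slice and each per-pixel sequence independently acquire blue noise spectra.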


Alan Wolfe (NVIDIA and EA SEED)
Nathan Morrical (NVIDIA and University of Utah)
Ravi Ramamoorthi (NVIDIA and University of California, San Diego)
