Compact neural graphics primitives (Ours) have an inherently small size across a variety of use cases with automatically chosen hyperparameters. In contrast to similarly compressed representations such as JPEG for images (top) and masked wavelet representations [Rho et al. 2023] for NeRFs [Mildenhall et al. 2020] (bottom), our representation uses neither quantization nor entropy coding, and can therefore be queried without a dedicated decompression step. This is essential for level-of-detail streaming and for working-memory-constrained environments such as video game texture compression. The compression artifacts of our method are easy on the eye: there is less ringing than in JPEG and less blur than in Rho et al. (though more noise). Compact neural graphics primitives are also fast: training is only 1.2-2.6x slower (depending on compression settings), and inference is faster than Instant NGP because our significantly reduced file size fits better into caches.
Abstract
Up to 4x smaller than Instant NGP at 1.2-2.6x training cost and no inference speed penalty
Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid. However, existing feature grids either come with a large memory footprint (dense or factorized grids, trees, and hash tables) or slow performance (index learning and vector quantization). In this paper, we show that a hash table with learned probes has neither disadvantage, resulting in a favorable combination of size and speed. Inference is faster than unprobed hash tables at equal quality while training is only 1.2-2.6x slower, significantly outperforming prior index learning approaches. We arrive at this formulation by casting all feature grids into a common framework: they each correspond to a lookup function that indexes into a table of feature vectors. In this framework, the lookup functions of existing data structures can be combined by simple arithmetic combinations of their indices, resulting in Pareto optimal compression and speed.
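The core idea of the lookup function described above can be illustrated with a short sketch: a spatial hash maps each integer grid coordinate to a base slot in the feature table, and a learned per-bucket probe offset shifts that index to a nearby slot. The names, shapes, and the hard-offset selection below are illustrative assumptions for exposition, not the paper's exact implementation (which trains the probe choice with a soft relaxation before converging to discrete offsets).

```python
import numpy as np

# Primes for the XOR spatial hash popularized by Instant NGP.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(coords, table_size):
    """Map integer grid coordinates (N, D) to base slots in [0, table_size)."""
    coords = coords.astype(np.uint64)
    h = np.zeros(coords.shape[0], dtype=np.uint64)
    for d in range(coords.shape[1]):
        h ^= coords[:, d] * PRIMES[d]  # wraps mod 2^64, as intended
    return (h % np.uint64(table_size)).astype(np.int64)

def lookup(coords, features, probe_offsets, n_probes):
    """Final index = (hash + learned probe offset) mod table size.

    `probe_offsets` holds one learned offset in [0, n_probes) per bucket;
    at inference a hard (argmax) offset is used, as sketched here.
    """
    base = spatial_hash(coords, features.shape[0])
    offset = probe_offsets[base] % n_probes   # learned probe choice
    slot = (base + offset) % features.shape[0]
    return features[slot]

# Toy usage with hypothetical sizes: table of T feature vectors of width F.
rng = np.random.default_rng(0)
T, F, n_probes = 16, 4, 4
features = rng.standard_normal((T, F)).astype(np.float32)
probe_offsets = rng.integers(0, n_probes, size=T)  # stands in for learned values
coords = np.array([[3, 1, 2], [7, 0, 5]])
feats = lookup(coords, features, probe_offsets, n_probes)
```

Because the probe range `n_probes` is small, the probed slot stays close to the base hash slot, which is what keeps inference cache-friendly relative to an unprobed hash table of larger size.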
Paper
Compact Neural Graphics Primitives
with Learned Hash Probing
Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
@inproceedings{takikawa2023compact,
title = {Compact Neural Graphics Primitives with Learned Hash Probing},
author = {Takikawa, Towaki and
M\"{u}ller, Thomas and
Nimier-David, Merlin and
Evans, Alex and
Fidler, Sanja and
Jacobson, Alec and
Keller, Alexander},
booktitle = {SIGGRAPH Asia 2023 Conference Papers},
year = {2023},
}
Results
Acknowledgements
We would like to thank David Luebke, Karthik Vaidyanathan, and Marco Salvi for useful discussions throughout the project.
The Lego Bulldozer scene of Figure 6 was created by Blendswap user Heinzelnisse. The Pluto image of Figure 8 was created by NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Alex Parker.