Random-Access Neural Compression of Material Textures

NVIDIA
Accepted to SIGGRAPH 2023

A rendered image of an inkwell. The cutouts compare quality using, from left to right, GPU-based texture formats (BC high) at 1024×1024 resolution, our neural texture compression (NTC), and high-quality reference textures. Note that NTC provides 4× higher resolution (16× more texels) than BC high, despite using 30% less memory. PSNR and ꟻLIP quality metrics computed for the cutouts are shown above the respective images. The ꟻLIP error images are shown in the lower-right corners, where brightness is proportional to error. Bottom row: two of the textures used for the renderings.
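For context, the PSNR values in such comparisons follow the standard definition; a hypothetical helper (our own, not the paper's evaluation code) might look like this in Python:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of the same
    shape, with pixel values in [0, max_value]. Higher is better."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # images are identical
    return 10.0 * np.log10((max_value ** 2) / mse)
```

ꟻLIP, NVIDIA's perceptual error metric, has its own public reference implementation and is not reproduced here.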

Abstract


The continuous advancement of photorealism in rendering is accompanied by a growth in texture data and, consequently, increasing storage and memory demands. To address this issue, we propose a novel neural compression technique specifically designed for material textures. We unlock two more levels of detail, i.e., 16× more texels, using low-bitrate compression, with image quality that is better than advanced image compression techniques, such as AVIF and JPEG XL. At the same time, our method allows on-demand, real-time decompression with random access, similar to block texture compression on GPUs, enabling compression both on disk and in memory. The key idea behind our approach is to compress multiple material textures and their mipmap chains together, and to use a small neural network, optimized for each material, to decompress them. Finally, we use a custom training implementation to achieve practical compression speeds, whose performance surpasses that of general frameworks, such as PyTorch, by an order of magnitude.
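To make the key idea more concrete, below is a minimal PyTorch sketch of this kind of setup: a learnable latent grid optimized jointly for all of a material's textures, decoded per texel by a small MLP. All names, sizes, the single-level latent grid, and the naive training loop are our own simplifying assumptions (the paper additionally quantizes the latents, handles the full mipmap chain, and uses a custom training implementation rather than PyTorch); this is not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NTCMaterialDecoder(nn.Module):
    """Illustrative per-material decoder: a learnable latent grid shared by
    all of the material's textures, plus a small MLP that maps sampled
    latent features to every texture channel at once. Each texel is decoded
    independently, which is what makes random access possible."""

    def __init__(self, grid_res=256, latent_dim=16, out_channels=9, hidden=64):
        super().__init__()
        # Jointly compressed representation of all textures and mip levels
        # (simplified here to a single-level grid, values assumed).
        self.latents = nn.Parameter(torch.zeros(1, latent_dim, grid_res, grid_res))
        # Small MLP decoder, optimized per material.
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_channels),
        )

    def forward(self, uv):
        # uv: (N, 2) texel coordinates in [-1, 1].
        grid = uv.view(1, -1, 1, 2)                                    # (1, N, 1, 2)
        feat = F.grid_sample(self.latents, grid,
                             mode="bilinear", align_corners=True)      # (1, C, N, 1)
        feat = feat.squeeze(0).squeeze(-1).t()                         # (N, C)
        return self.mlp(feat)                                          # (N, out_channels)

# Naive joint optimization of latents and MLP against reference texels,
# standing in for the paper's custom (order-of-magnitude faster) trainer.
decoder = NTCMaterialDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)
uv = torch.rand(4096, 2) * 2 - 1   # random texel positions
ref = torch.rand(4096, 9)          # placeholder stacked texture channels
for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(decoder(uv), ref)
    loss.backward()
    opt.step()
```

Because each call to decoder(uv) depends only on that texel's own latent samples, any subset of texels can be decompressed on demand, which is the random-access property the abstract refers to.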




Video with audio describing our paper and results (native resolution: 1920×1080 pixels).

Results


We include an interactive image viewer for comparing textures compressed with the different methods described in the paper.