NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models

Automatically generating high-quality real-world 3D scenes is of enormous interest for applications such as virtual reality and robotics simulation. Towards this goal, we introduce NeuralField-LDM, a generative model capable of synthesizing complex 3D environments. We leverage Latent Diffusion Models that have been successfully utilized for efficient, high-quality 2D content creation. We first train a scene auto-encoder to express a set of image and pose pairs as a neural field, represented as density and feature voxel grids that can be projected to produce novel views of the scene. To further compress this representation, we train a latent auto-encoder that maps the voxel grids to a set of latent representations. A hierarchical diffusion model is then fit to the latents to complete the scene generation pipeline. We achieve a substantial improvement over existing state-of-the-art scene generation models. Additionally, we show how NeuralField-LDM can be used for a variety of 3D content creation applications, including conditional scene generation, scene inpainting, and scene style manipulation.
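To make the three-stage pipeline described above concrete, the sketch below mirrors its structure: a scene auto-encoder that maps posed images to density and feature voxel grids, a latent auto-encoder that compresses those grids, and a diffusion loss fit on the resulting latents. This is not the authors' implementation; every module name, tensor shape, and the toy image encoder and single-level DDPM-style noise-prediction loss are illustrative assumptions (the paper uses a hierarchical diffusion model and a volumetric renderer for novel-view supervision, neither of which is reproduced here).

```python
# Minimal sketch of the NeuralField-LDM pipeline stages (illustrative only).
import torch
import torch.nn as nn

class SceneAutoEncoder(nn.Module):
    """Stage 1 (sketch): encode image/pose pairs into a neural field given as
    density + feature voxel grids; a volumetric renderer would project these
    grids to produce novel views (omitted here)."""
    def __init__(self, feat_dim=16, grid=32):
        super().__init__()
        self.grid, self.feat_dim = grid, feat_dim
        # Toy image encoder standing in for the paper's lifting/fusion encoder.
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, grid ** 3 * (feat_dim + 1)),
        )

    def forward(self, images, poses):
        # images: (B, N, 3, H, W); poses: (B, N, 4, 4) camera-to-world matrices.
        B, N = images.shape[:2]
        x = self.img_enc(images.flatten(0, 1)).view(B, N, -1).mean(dim=1)
        x = x.view(B, self.feat_dim + 1, self.grid, self.grid, self.grid)
        density, features = x[:, :1], x[:, 1:]
        return density, features  # voxel grids defining the neural field

class LatentAutoEncoder(nn.Module):
    """Stage 2 (sketch): compress the voxel grids into compact latents."""
    def __init__(self, feat_dim=16, latent_dim=8):
        super().__init__()
        self.enc = nn.Conv3d(feat_dim + 1, latent_dim, 4, stride=4)
        self.dec = nn.ConvTranspose3d(latent_dim, feat_dim + 1, 4, stride=4)

    def encode(self, density, features):
        return self.enc(torch.cat([density, features], dim=1))

    def decode(self, z):
        x = self.dec(z)
        return x[:, :1], x[:, 1:]  # reconstructed density, features

def diffusion_training_step(z, denoiser, num_steps=1000):
    """Stage 3 (sketch): a standard noise-prediction loss on the latents;
    the paper instead fits a hierarchical diffusion model over the latent set."""
    t = torch.randint(0, num_steps, (z.shape[0],), device=z.device)
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / num_steps) ** 2
    alpha_bar = alpha_bar.view(-1, 1, 1, 1, 1)
    noise = torch.randn_like(z)
    z_t = alpha_bar.sqrt() * z + (1 - alpha_bar).sqrt() * noise
    return nn.functional.mse_loss(denoiser(z_t, t), noise)

if __name__ == "__main__":
    scene_ae, latent_ae = SceneAutoEncoder(), LatentAutoEncoder()
    denoiser = lambda z_t, t: torch.zeros_like(z_t)  # placeholder denoiser network
    imgs = torch.randn(2, 4, 3, 64, 64)
    poses = torch.eye(4).repeat(2, 4, 1, 1)
    density, features = scene_ae(imgs, poses)
    z = latent_ae.encode(density, features)
    print(z.shape, diffusion_training_step(z, denoiser).item())
```

At sampling time the order reverses: the diffusion model generates latents, the latent auto-encoder decodes them back to density and feature voxel grids, and the renderer projects those grids to images of the generated scene.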

Authors

Seung Wook Kim (NVIDIA, Vector Institute, University of Toronto)
Bradley Brown (NVIDIA, University of Waterloo)
Kangxue Yin (NVIDIA)
Katja Schwarz (University of Tübingen, Tübingen AI Center)
Daiqing Li (NVIDIA)
Robin Rombach (LMU Munich)
Antonio Torralba (CSAIL, MIT)
Sanja Fidler (NVIDIA, Vector Institute, University of Toronto)

Publication Date