LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis


Recent text-to-3D generation approaches produce impressive 3D results but require time-consuming optimization that can take up to an hour per prompt. Amortized methods like ATT3D optimize multiple prompts simultaneously to improve efficiency, enabling fast text-to-3D synthesis. However, ATT3D cannot capture high-frequency geometry and texture details and struggles to scale to large prompt sets, so it generalizes poorly. We introduce LATTE3D, addressing these limitations to achieve fast, high-quality generation on a significantly larger prompt set. Key to our method is 1) building a scalable architecture for amortized learning and 2) leveraging 3D data during optimization through 3D-aware diffusion priors, shape regularization, and model initialization to achieve robustness to diverse and complex training prompts. LATTE3D amortizes both neural field generation and textured surface generation to produce highly detailed textured meshes in a single forward pass. LATTE3D generates 3D objects in 400 ms and can be further enhanced with fast test-time optimization.
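To make the amortization idea concrete, here is a minimal, hypothetical PyTorch sketch of training one generator over many prompts at once. The module, dimensions, and placeholder losses (`AmortizedTextTo3D`, `diffusion_prior_loss`, `shape_regularization`) are illustrative assumptions, not the paper's actual components.

```python
# A minimal conceptual sketch (PyTorch) of amortized text-to-3D training:
# a single generator is optimized across batches of many prompts, so
# inference later needs only one forward pass per prompt. All names,
# dimensions, and placeholder losses below are illustrative assumptions,
# not LATTE3D's actual architecture or objectives.
import torch
import torch.nn as nn

class AmortizedTextTo3D(nn.Module):
    """Maps a text embedding to the parameters of a 3D representation."""
    def __init__(self, text_dim: int = 512, shape_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, shape_dim),  # e.g. neural-field or surface features
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)

def diffusion_prior_loss(shape_params: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real score-distillation loss would render the shape
    # and query a pretrained (3D-aware) diffusion model for gradients.
    return shape_params.pow(2).mean()

def shape_regularization(shape_params: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Placeholder for regularizing predictions toward 3D training data.
    return nn.functional.mse_loss(shape_params, ref)

model = AmortizedTextTo3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    # Amortization: each step samples a batch of *different* prompts, and
    # the same weights are updated for all of them, instead of running a
    # separate per-prompt optimization.
    text_emb = torch.randn(8, 512)     # stand-in for encoded text prompts
    ref_shapes = torch.randn(8, 4096)  # stand-in for 3D shape targets
    shape_params = model(text_emb)     # one forward pass per prompt
    loss = diffusion_prior_loss(shape_params) \
         + 0.1 * shape_regularization(shape_params, ref_shapes)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Amortization shifts the heavy optimization cost to training time, so a new prompt costs a single forward pass at inference; the fast test-time optimization mentioned above can be read as briefly continuing optimization for one prompt to refine its result.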

Authors

Kevin Xie (NVIDIA)
Jonathan Lorraine (NVIDIA)
Tianshi Cao (NVIDIA)
Jun Gao (NVIDIA)
James Lucas (NVIDIA)
Antonio Torralba (NVIDIA)
Sanja Fidler (NVIDIA)
Xiaohui Zeng (NVIDIA)

Publication Date

Uploaded Files

teaser.pdf (1.95 MB)