Toronto AI Lab
Diffusion Texture Painting


1 NVIDIA
2 University of Toronto
3 Vector Institute

* Work done during internship at NVIDIA

SIGGRAPH 2024

Abstract


We present a technique that leverages 2D generative diffusion models (DMs) for interactive texture painting on the surface of 3D meshes. Unlike existing texture painting systems, our method allows artists to paint with any complex image texture, and in contrast with traditional texture synthesis, our brush not only generates seamless strokes in real-time, but can inpaint realistic transitions between different textures. To enable this application, we present a stamp-based method that applies an adapted pre-trained DM to inpaint patches in local render space, which are then projected into the texture image, giving artists control over brush stroke shape and texture orientation. We further present a way to adapt the inference of a pre-trained DM to ensure stable texture brush identity, while allowing the DM to hallucinate infinite variations of the source texture. Our method is the first to use DMs for interactive texture painting, and we hope it will inspire work on applying generative models to highly interactive artist-driven workflows.

Demo Video


Method Overview


Our system employs an image generative model \(G\) for patch-based interactive painting on a 2D canvas or on any UV-mapped 3D mesh. Unlike existing systems, our method allows painting with any complex open-domain texture using a reference image as conditioning \(b\). The inpainting generator produces successive overlapping image stamps that seamlessly tile with the content already painted. Rather than applying the generator directly on the texture image \(T\), which can introduce large distortions when projecting stamps back to the surface, the generator operates in a local camera space unique to each stamp \(C_t\).
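The painting loop can be pictured as follows. The sketch below is a minimal, illustrative NumPy version of the 2D-canvas case only: inpaint_patch is a stand-in for the adapted inpainting diffusion model \(G\), and every name in it is hypothetical rather than taken from the released implementation.

    import numpy as np

    def inpaint_patch(known, known_mask, reference, rng):
        # Stand-in for the adapted inpainting DM G: take a random crop of the
        # reference texture and keep already-painted pixels, purely to show the data flow.
        h, w, _ = known.shape
        y = rng.integers(0, reference.shape[0] - h + 1)
        x = rng.integers(0, reference.shape[1] - w + 1)
        patch = reference[y:y + h, x:x + w].copy()
        patch[known_mask] = known[known_mask]
        return patch

    def paint_stroke(canvas, painted, reference, stamp_centers, stamp_size=64, seed=0):
        # Paint a stroke as a sequence of overlapping square stamps along the brush path.
        # In the full system each stamp is rendered in its own local camera space C_t and
        # the inpainted result is projected back into the UV texture image T of the mesh.
        rng = np.random.default_rng(seed)
        r = stamp_size // 2
        for cy, cx in stamp_centers:
            ys, xs = slice(cy - r, cy + r), slice(cx - r, cx + r)
            known = canvas[ys, xs].copy()
            mask = painted[ys, xs].copy()
            canvas[ys, xs] = inpaint_patch(known, mask, reference, rng)
            painted[ys, xs] = True
        return canvas

    # Example: eight overlapping stamps along a horizontal stroke on an empty canvas.
    canvas = np.zeros((256, 256, 3), dtype=np.float32)
    painted = np.zeros((256, 256), dtype=bool)
    reference = np.random.rand(128, 128, 3).astype(np.float32)
    centers = [(128, 40 + 20 * i) for i in range(8)]
    paint_stroke(canvas, painted, reference, centers)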

Progressively painting a continuous brush stroke (a) is challenging. Vanilla conditional inpainting diffusion models (b) drift from the starting image after the first few patches, even when using prompt inversion of the reference texture (c). Our method introduces an image encoder for more precise conditioning (d) and an additional guidance method (e) to ensure consistent generation.
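One simple way to picture image conditioning is classifier-free guidance applied to an image-encoder embedding of the reference texture instead of a text prompt. The sketch below shows only that generic formulation; the paper's actual encoder and guidance term are described in the full text, and denoiser, image_emb, and null_emb are hypothetical placeholders.

    import torch

    @torch.no_grad()
    def guided_noise_prediction(denoiser, x_t, t, image_emb, null_emb, scale=3.0):
        # Generic classifier-free guidance with an image embedding as the condition:
        #   eps = eps_uncond + scale * (eps_cond - eps_uncond)
        # `denoiser(x_t, t, cond)` is any noise-prediction network; this is not the
        # paper's exact guidance method, only an illustration of image conditioning.
        eps_cond = denoiser(x_t, t, image_emb)
        eps_uncond = denoiser(x_t, t, null_emb)
        return eps_uncond + scale * (eps_cond - eps_uncond)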

Applications


Developing bold garment prints by sourcing brushes from Gaudí mosaics and paintings.

Detailing a photogrammetry asset using forest textures sampled from photos captured in the wild.
Tree Trunk 3D Model from Sketchfab by Andrei Alexandrescu

Sourcing textures from a physical toy to develop a similar look on a 3D toy mesh.
Stuffed Dino Toy 3D Model from Sketchfab by Andrey.Chegodaev

Prototyping a fantasy garden and exploring gingerbread house looks.
Fantasy House 3D Model from Sketchfab by LowlyPoly

Citation



    @inproceedings{texturepainting2024,
      author    = {Hu, Anita and Desai, Nishkrit and Abu Alhaija, Hassan and Kim, Seung Wook and Shugrina, Maria},
      title     = {Diffusion Texture Painting},
      booktitle = {ACM SIGGRAPH 2024 Conference Proceedings},
      year      = {2024},
    }