StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

Publication
SIGGRAPH 2022

Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained blindly? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that shifts a generative model to new domains without collecting even a single image from those domains. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
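
As a rough illustration of how such CLIP guidance can be wired up, the sketch below computes a directional CLIP loss: images from a frozen copy of the generator and from a trainable copy are embedded with CLIP, and the direction between them is encouraged to match the direction between a source prompt and a target prompt in CLIP's text space. This is a minimal sketch, not the released implementation; the generator is replaced by placeholder tensors, and the prompts ("photo", "sketch") and helper names are illustrative only.

import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def text_direction(source_prompt, target_prompt):
    # Normalized CLIP-space direction from the source domain to the target domain.
    tokens = clip.tokenize([source_prompt, target_prompt]).to(device)
    with torch.no_grad():
        emb = clip_model.encode_text(tokens)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    direction = emb[1] - emb[0]
    return direction / direction.norm()

def directional_loss(frozen_images, trained_images, text_dir):
    # 1 - cosine similarity between the cross-generator image direction and the
    # text direction. Images are assumed already resized/normalized for CLIP (224x224).
    e_frozen = clip_model.encode_image(frozen_images)
    e_trained = clip_model.encode_image(trained_images)
    e_frozen = e_frozen / e_frozen.norm(dim=-1, keepdim=True)
    e_trained = e_trained / e_trained.norm(dim=-1, keepdim=True)
    img_dir = e_trained - e_frozen
    img_dir = img_dir / img_dir.norm(dim=-1, keepdim=True)
    return (1.0 - (img_dir * text_dir).sum(dim=-1)).mean()

# Placeholder tensors stand in for images produced by the frozen and trainable
# generator copies from the same latent codes.
imgs_frozen = torch.randn(4, 3, 224, 224, device=device)
imgs_trained = torch.randn(4, 3, 224, 224, device=device)
loss = directional_loss(imgs_frozen, imgs_trained, text_direction("photo", "sketch"))
print(loss.item())

Matching a difference of embeddings, rather than pushing generated images directly toward the target text, is what lets a sketch like this move style toward the target prompt while staying anchored to the content of the frozen source generator.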

Cite the paper

If you use the contents of this project, please cite our paper.

@article{gal2021stylegan-nada,
  title   = {StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators},
  author  = {Rinon Gal and Or Patashnik and Haggai Maron and Gal Chechik and Daniel Cohen-Or},
  journal = {arXiv preprint arXiv:2108.00946},
  year    = {2021}
}
