Key-Locked Rank One Editing for Text-to-Image Personalization

Publication
ACM SIGGRAPH 2023 Conference Proceedings

Summary:

We present Perfusion, a new text-to-image personalization method. With a model size of only 100KB and roughly 4 minutes of training, Perfusion can creatively portray personalized objects. It allows significant changes in their appearance, while maintaining their identity, using a novel mechanism we call “Key-Locking”. Perfusion can also combine individually learned concepts into a single generated image. Finally, it enables controlling the trade-off between visual and textual alignment at inference time, covering the entire Pareto front with just a single trained model.

Abstract:

Text-to-image (T2I) models offer a new level of flexibility by allowing users to guide the creative process through natural language. However, personalizing these models to align with user-provided visual concepts remains a challenging problem. The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size. We present Perfusion, a T2I personalization method that addresses these challenges using dynamic rank-1 updates to the underlying T2I model. Perfusion avoids overfitting by introducing a new mechanism that “locks” new concepts’ cross-attention Keys to their superordinate category. Additionally, we develop a gated rank-1 approach that enables us to control the influence of a learned concept at inference time and to combine multiple concepts. This allows runtime-efficient balancing of visual fidelity and textual alignment with a single 100KB trained model, which is five orders of magnitude smaller than the current state of the art. Moreover, it can span different operating points across the Pareto front without additional training. Finally, we show that Perfusion outperforms strong baselines in both qualitative and quantitative terms. Importantly, key-locking leads to novel results compared to traditional approaches, making it possible to portray personalized object interactions in unprecedented ways, even in one-shot settings.
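To give an intuition for why rank-1 updates keep the learned concept so small, here is a minimal, illustrative sketch of editing a weight matrix with a single outer product. The function name, dimensions, and vectors are hypothetical; the paper's actual update is gated and applied to the cross-attention projections of the diffusion model, not to an arbitrary matrix as here.

```python
import numpy as np

def rank_one_update(W, u, v):
    """Apply a rank-1 edit: W' = W + u v^T.

    The edit adds only d_out + d_in parameters per concept,
    which is why the stored personalization stays tiny.
    Illustrative sketch only, not the paper's exact update rule.
    """
    return W + np.outer(u, v)

# Hypothetical dimensions standing in for a cross-attention projection.
d_out, d_in = 4, 3
W = np.zeros((d_out, d_in))
u = np.array([1.0, 0.0, 0.0, 0.0])  # output direction for the new concept
v = np.array([0.0, 1.0, 0.0])       # input direction (concept embedding)

W_edited = rank_one_update(W, u, v)
print(np.linalg.matrix_rank(W_edited - W))  # the difference has rank 1
```

Because the difference `W_edited - W` is a single outer product, storing the concept requires only the two vectors rather than a full copy of the edited matrix.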

Cite the paper:

If you use the contents of this project, please cite our paper.

@inproceedings{tewel2023keylocked,
  author    = {Tewel, Yoad and Gal, Rinon and Chechik, Gal and Atzmon, Yuval},
  title     = {Key-Locked Rank One Editing for Text-to-Image Personalization},
  year      = {2023},
  booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
  location  = {Los Angeles, CA, USA},
  series    = {SIGGRAPH '23}
}
