Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models

NVIDIA, Tel Aviv University, Bar-Ilan University

TLDR: We present Add-it, a training-free method for adding objects to images based on text prompts. Add-it works well on real and generated images. Our approach leverages an existing text-to-image model (FLUX.1-dev) without requiring additional training.


We present Add-it, a training-free approach for adding objects to images (both real and generated) from a simple text prompt. Add-it leverages a pretrained text-to-image diffusion model and extends its attention to draw on three sources: the original image, the text prompt, and the generated image itself. This approach requires no fine-tuning and ensures structural consistency and realistic object placement. To tackle the challenges of object insertion, we introduce a weighted extended-attention mechanism, a subject-guided latent blending technique, and a noise structure transfer step. Add-it outperforms previous supervised methods, achieving state-of-the-art results across multiple benchmarks, including our newly introduced "Additing Affordance Benchmark" for assessing object placement plausibility.
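To make the noise structure transfer step concrete, here is a minimal sketch of one plausible formulation for a rectified-flow model such as FLUX.1-dev: the clean source latent is mixed back with the target noise at an intermediate timestep, so generation starts from a latent that already carries the source image's coarse structure. The function name, the linear mixing rule, and the timestep convention are illustrative assumptions, not the exact procedure used by Add-it.

import torch

def structure_transfer(z0_source: torch.Tensor, noise: torch.Tensor, t: float) -> torch.Tensor:
    # z0_source: (B, C, H, W) clean (fully denoised) latent of the source image.
    # noise:     (B, C, H, W) Gaussian noise used to initialize the target generation.
    # t:         scalar in [0, 1]; larger t keeps more noise, smaller t keeps more
    #            of the source image's structure.
    # Rectified-flow style interpolation between pure noise and the source latent.
    return t * noise + (1.0 - t) * z0_source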

Teaser.

Generations with Add-it

Abstract

Adding objects to images based on text instructions is a challenging task in semantic image editing, requiring a balance between preserving the original scene and seamlessly integrating the new object in a fitting location. Despite extensive efforts, existing models often struggle with this balance, particularly with finding a natural location for the new object in complex scenes. We introduce Add-it, a training-free approach that extends diffusion models' attention mechanisms to incorporate information from three key sources: the scene image, the text prompt, and the generated image itself. Our weighted extended-attention mechanism maintains structural consistency and fine details while ensuring natural object placement. Without task-specific fine-tuning, Add-it achieves state-of-the-art results on both real and generated image insertion benchmarks, including our newly constructed "Additing Affordance Benchmark" for evaluating object placement plausibility, outperforming supervised methods. Human evaluations show that Add-it is preferred in over 80% of cases, and it also demonstrates improvements across various automated metrics.

How does it work?

Architecture outline: Given a tuple of source noise X_T^source, target noise X_T^target, and a text prompt P_target, we first apply Structure Transfer to inject the source image's structure into the target image. We then extend the self-attention blocks so that X_T^target pulls keys and values from both P_target and X_T^source, with each source weighted separately. Finally, we use Subject-Guided Latent Blending to retain fine details from the source image.
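For illustration, the weighted extended attention can be sketched as follows: queries from the target image attend jointly to keys and values gathered from the text prompt, the source image, and the target image itself, with a scalar weight per source. Applying the weights as additive log-terms on the attention logits before a single joint softmax is one plausible realization; the tensor shapes, argument names, and default weights below are assumptions rather than the released implementation.

import math
import torch

def weighted_extended_attention(q_tgt, k_prompt, v_prompt, k_src, v_src, k_tgt, v_tgt,
                                w_prompt=1.0, w_src=1.0, w_tgt=1.0):
    # q_tgt:      (B, H, N_tgt, D) queries from the target (generated) image tokens.
    # k_*, v_*:   (B, H, N_*, D) keys/values from the prompt, source image, and target image.
    # w_*:        positive scalars controlling how strongly each stream is attended to.
    d = q_tgt.shape[-1]
    k = torch.cat([k_prompt, k_src, k_tgt], dim=2)        # (B, H, N_all, D)
    v = torch.cat([v_prompt, v_src, v_tgt], dim=2)        # (B, H, N_all, D)
    logits = q_tgt @ k.transpose(-1, -2) / math.sqrt(d)   # (B, H, N_tgt, N_all)

    # Per-source weights enter as additive log-terms, so the joint softmax
    # reweights how much probability mass each source receives.
    sizes = (k_prompt.shape[2], k_src.shape[2], k_tgt.shape[2])
    log_w = torch.cat([torch.full((n,), math.log(w), dtype=q_tgt.dtype, device=q_tgt.device)
                       for n, w in zip(sizes, (w_prompt, w_src, w_tgt))])
    attn = torch.softmax(logits + log_w, dim=-1)          # broadcasts over (B, H, N_tgt)
    return attn @ v                                       # (B, H, N_tgt, D)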

Comparison to Current Methods

Qualitative comparison of Add-it with other baselines on real images (left) and generated images (right).

Quantitative Evaluation

Quantitative comparison of Add-it with other baselines on human preference (left) and automatic metrics (right).


Step-by-Step Generation

Add-it can generate images step by step, allowing the final image to better adapt to user preferences at each step.




Refined Affordance Maps

Images generated by Add-it with and without the latent blending step, along with the resulting affordance map. The latent blending step helps align fine details with the source image, for example removing the girl's glasses or adjusting the shadows of the bicycles.
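As a rough illustration, subject-guided latent blending can be thought of as masked compositing in latent space: a soft map highlighting the inserted subject (for example, derived from its attention-based affordance map) decides where to keep the newly generated latent, while everything outside that region is copied back from the source latent so the original scene's fine details survive. The threshold, mask smoothing, and function signature below are assumptions, not the exact procedure.

import torch
import torch.nn.functional as F

def subject_guided_latent_blending(z_source, z_target, subject_map, threshold=0.5, blur_kernel=5):
    # z_source, z_target: (B, C, H, W) latents of the source image and the edited image.
    # subject_map:        (B, 1, H, W) soft map of the inserted subject, values in [0, 1].
    # Binarize the map, then soften its edges so the blend leaves no visible seams.
    mask = (subject_map > threshold).float()
    pad = blur_kernel // 2
    mask = F.avg_pool2d(F.pad(mask, (pad, pad, pad, pad), mode="replicate"),
                        blur_kernel, stride=1)
    # Inside the mask: keep the generated content; outside: restore the source latent.
    return mask * z_target + (1.0 - mask) * z_source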


Non-Photorealistic Images

Add-it can operate on non-photorealistic source images.


Limitations

Add-it may fail to add another instance of a subject that already appears in the source image. When prompted to add another dog to the image, Add-it regenerates the existing dog instead, though it successfully adds a person behind the dog.

BibTeX

If you find our work useful, please cite our paper:

@misc{tewel2024addit,
    title={Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models},
    author={Yoad Tewel and Rinon Gal and Dvir Samuel and Yuval Atzmon and Lior Wolf and Gal Chechik},
    year={2024},
    eprint={2411.07232},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}