The correct insertion of virtual objects into images of real-world scenes requires a deep understanding of the scene's lighting, geometry, and materials, as well as the image formation process. While recent large-scale diffusion models have shown strong generative and inpainting capabilities, we find that current models do not sufficiently “understand” the scene shown in a single picture to generate consistent lighting effects (shadows, bright reflections, etc.) while preserving the identity and details of the composited object. We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process. Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic compositing of arbitrary virtual objects in single frames or videos of indoor and outdoor scenes. Our physically based pipeline further enables automatic refinement of materials and tone-mapping.
Method overview. Given an input image, we first construct a virtual 3D scene containing the virtual object and a proxy plane. Our physically based renderer then differentiably simulates the interaction of the optimizable environment map with the inserted virtual object, as well as its effects on the background scene such as shadowing (left). At each iteration, the rendered image is diffused (noised) and passed through a personalized diffusion model (middle). The gradient of our adapted Score Distillation formulation is propagated back through the differentiable renderer to the environment map and the tone-mapping curve. Upon convergence, we recover lighting and tone-mapping parameters that enable photorealistic compositing of virtual objects from a single image (right).
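For intuition, the sketch below illustrates a single Score-Distillation-style update of the optimization loop described above. It is a minimal PyTorch sketch, not our released implementation: `render_scene`, `encode_latents`, `unet`, and `alphas_cumprod` are hypothetical stand-ins for the differentiable renderer, the latent encoder, the personalized diffusion model, and its noise schedule.

```python
import torch

def sds_step(envmap, tonemap_params, render_scene, encode_latents, unet,
             alphas_cumprod, t_range=(0.02, 0.98), guidance_weight=1.0):
    """One Score-Distillation-style update of the environment map and tone-mapping curve.
    All callables are placeholders; only the overall gradient flow is illustrated."""
    # 1. Differentiably render the composited image under the current lighting estimate.
    rendered = render_scene(envmap, tonemap_params)           # (1, 3, H, W), values in [0, 1]

    # 2. Encode into the diffusion model's latent space and add noise at a random timestep.
    latents = encode_latents(rendered)                         # (1, C, h, w)
    T = alphas_cumprod.shape[0]
    t = torch.randint(int(t_range[0] * T), int(t_range[1] * T), (1,), device=latents.device)
    noise = torch.randn_like(latents)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_t.sqrt() * latents + (1.0 - a_t).sqrt() * noise

    # 3. Predict the noise with the personalized diffusion model; no gradients through the UNet.
    with torch.no_grad():
        noise_pred = unet(noisy, t)

    # 4. Score-distillation gradient (noise residual), pushed onto the latents via a surrogate loss
    #    whose derivative w.r.t. the latents equals the desired gradient.
    w = guidance_weight * (1.0 - a_t)
    grad = w * (noise_pred - noise)
    loss = (grad.detach() * latents).sum()

    # 5. Backpropagate through the encoder and the differentiable renderer
    #    into the environment map and the tone-mapping parameters.
    loss.backward()
    return loss.item()
```

In this sketch, `envmap` and `tonemap_params` are assumed to be leaf tensors with `requires_grad=True`, so that an outer optimizer (e.g., Adam) can apply the accumulated gradients after each call.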
We demonstrate the effectiveness of our method on a variety of indoor and outdoor scenes. For evaluation, we use Waymo outdoor driving scenes and unwrapped indoor HDRI panoramas as target background images. Our method more accurately estimates the lighting conditions for the virtual 3D objects inserted into these background images.
Our diffusion-guided lighting optimization process for a virtual object inserted into a Waymo scene.
We either animate the background image or move the object's position to create dynamic scenes.
We extend the insertion to multiple camera views from Waymo scenes.
We use our method to optimize differentiable material properties of inserted objects.
We use our method to optimize differentiable tone-mapping curves to improve realism.
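As an illustration of what such a differentiable tone-mapping curve can look like, the sketch below shows a simple learnable exposure-plus-gamma parameterization; this is an assumed example and the curve parameterization used in our pipeline may differ.

```python
import torch
import torch.nn as nn

class ToneMapCurve(nn.Module):
    """Illustrative differentiable tone-mapping: learnable exposure and gamma."""
    def __init__(self):
        super().__init__()
        # Parameters stored in log-space so that exposure and gamma stay positive.
        self.log_exposure = nn.Parameter(torch.zeros(1))
        self.log_gamma = nn.Parameter(torch.zeros(1))

    def forward(self, hdr: torch.Tensor) -> torch.Tensor:
        exposure = self.log_exposure.exp()
        gamma = self.log_gamma.exp()
        # Scale by exposure, then apply a gamma curve; gradients flow to both parameters.
        ldr = (hdr * exposure).clamp(min=1e-6) ** (1.0 / gamma)
        return ldr.clamp(0.0, 1.0)
```

Because the curve is differentiable, its parameters can be optimized jointly with the environment map by the same diffusion-guided gradient signal.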
Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering
Ruofan Liang, Zan Gojcic, Merlin Nimier-David, David Acuna, Nandita Vijaykumar, Sanja Fidler, Zian Wang
@article{liang2024photorealistic,
author = {Ruofan Liang and Zan Gojcic and Merlin Nimier-David and David Acuna
and Nandita Vijaykumar and Sanja Fidler and Zian Wang},
title = {Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering},
journal = {arXiv preprint},
year = {2024}
}