Point-Cloud Completion with Pretrained Text-to-image Diffusion Models

Publication
2023 Conference on Neural Information Processing Systems

Abstract

Point-cloud data collected in real-world applications are often incomplete: objects are observed from specific viewpoints that capture only one perspective, and data can also be missing due to occlusion and low-resolution sampling. Existing completion approaches rely on models trained on datasets of predefined objects to guide the completion of point clouds. Unfortunately, these approaches fail to generalize to objects or real-world setups that are poorly represented in their training set. Here, we leverage recent advances in text-guided 3D shape generation, showing how image priors can be used for generating 3D objects. We describe an approach called SDS-Complete that uses a pre-trained text-to-image diffusion model and leverages the text semantics of a given incomplete point cloud of an object to obtain a complete surface representation. SDS-Complete can complete a variety of objects using test-time optimization, without expensive collection of 3D data. We evaluate SDS-Complete on a collection of incomplete scanned objects captured by real-world depth sensors and LiDAR scanners. We find that it effectively reconstructs objects that are absent from common datasets, reducing Chamfer loss by about 50% on average compared with current methods.
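The Chamfer loss reported above measures, for each point in one cloud, the distance to its nearest neighbor in the other cloud, averaged symmetrically in both directions. A minimal NumPy sketch of this metric (the function name and squared-distance variant are illustrative; the paper's exact evaluation code may differ):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

This brute-force version is O(N·M) in memory; for large scans a KD-tree nearest-neighbor query would be used instead.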

Cite the paper

If you use the contents of this project, please cite our paper.

@article{kasten2023point,
  title={Point-Cloud Completion with Pretrained Text-to-image Diffusion Models},
  author={Kasten, Yoni and Rahamim, Ohad and Chechik, Gal},
  journal={arXiv preprint arXiv:2306.10533},
  year={2023}
}

Related