StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects

Robots operating in human environments must be able to rearrange objects into semantically meaningful configurations, even when those objects have not been seen before. We focus on the problem of building physically-valid structures without step-by-step instructions.

We propose StructDiffusion, which combines a diffusion model and an object-centric transformer to construct structures given partial-view point clouds and high-level language goals, such as "set the table" and "make a line".
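As a rough illustration of this conditioning scheme, the sketch below shows how a pose-diffusion denoiser might combine per-object point-cloud features, noisy object poses, and language-goal tokens in a single transformer. This is a minimal, hypothetical PyTorch sketch under stated assumptions, not the paper's actual architecture: the PoseDenoiser name, the 1024-D point-cloud features, the token vocabulary, the 9-D pose parameterization, and the simplified sampling loop are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PoseDenoiser(nn.Module):
    """Hypothetical object-centric transformer denoiser: one token per
    object (point-cloud feature + noisy pose) plus language-goal tokens;
    the transformer predicts the per-object pose noise."""

    def __init__(self, d_model=256, n_heads=8, n_layers=4, pose_dim=9):
        super().__init__()
        self.pose_embed = nn.Linear(pose_dim, d_model)   # noisy pose -> token
        self.pcd_embed = nn.Linear(1024, d_model)        # assumed per-object point-cloud feature size
        self.lang_embed = nn.Embedding(1000, d_model)    # assumed goal-token vocabulary
        self.time_embed = nn.Embedding(1000, d_model)    # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.noise_head = nn.Linear(d_model, pose_dim)   # predicted noise per object

    def forward(self, noisy_poses, pcd_feats, lang_tokens, t):
        # noisy_poses: (B, N, pose_dim), pcd_feats: (B, N, 1024)
        # lang_tokens: (B, L), t: (B,)
        obj = self.pose_embed(noisy_poses) + self.pcd_embed(pcd_feats)
        obj = obj + self.time_embed(t)[:, None, :]       # broadcast timestep over object tokens
        lang = self.lang_embed(lang_tokens)
        h = self.encoder(torch.cat([lang, obj], dim=1))  # joint attention over language + objects
        return self.noise_head(h[:, lang.shape[1]:])     # noise only for the object tokens

# Sampling sketch: start from random poses and iteratively denoise them.
model = PoseDenoiser()
feats = torch.randn(2, 5, 1024)                  # features for 5 segmented objects
goal = torch.randint(0, 1000, (2, 8))            # e.g. tokens for "set the table"
poses = torch.randn(2, 5, 9)                     # Gaussian-noise initial poses
for step in reversed(range(50)):
    t = torch.full((2,), step, dtype=torch.long)
    eps = model(poses, feats, goal, t)
    poses = poses - 0.02 * eps                   # placeholder update; a real sampler uses DDPM/DDIM coefficients
```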

StructDiffusion improves the success rate of assembling physically-valid structures out of unseen objects by an average of 16% over an existing multi-modal transformer model, while allowing a single multi-task model to produce a wider range of structures. We demonstrate results on held-out objects in both simulation and real-world rearrangement tasks.

Authors

Weiyu Liu (Georgia Tech)
Yilun Du (MIT)
Sonia Chernova (Georgia Tech)
Chris Paxton (Meta AI)
