Toronto AI Lab NVIDIA Research
PartField: Learning 3D Feature Fields for Part Segmentation and Beyond


Minghua Liu1,4 * ^       Mikaela Angelina Uy1 *       Donglai Xiang1       Hao Su4     
Sanja Fidler1,2,3       Nicholas Sharp1       Jun Gao1,2,3      
1NVIDIA       2University of Toronto       3Vector Institute       4UCSD      
* Equal contribution       ^ Work done during an internship at NVIDIA

PartField is a feedforward model that predicts part-based feature fields for 3D shapes. Our learned features can be clustered to yield a high-quality part decomposition that outperforms the latest open-world 3D part segmentation approaches in both quality and speed. PartField can be applied to a wide variety of inputs in terms of modality, semantic class, and style. The learned feature field exhibits consistency across shapes, enabling applications such as co-segmentation, interactive selection, and correspondence.

Abstract


We propose PartField, a feedforward approach for learning part-based 3D features, which captures the general concept of parts and their hierarchy without relying on predefined templates or text-based names, and can be applied to open-world 3D shapes across various modalities. PartField requires only a 3D feedforward pass at inference time, significantly improving runtime and robustness compared to prior approaches. Our model is trained by distilling 2D and 3D part proposals from a mix of labeled datasets and image segmentations on large unsupervised datasets, via a contrastive learning formulation. It produces a continuous feature field which can be clustered to yield a hierarchical part decomposition. Comparisons show that PartField is up to 20% more accurate and often orders of magnitude faster than other recent class-agnostic part-segmentation methods. Beyond single-shape part decomposition, consistency in the learned field emerges across shapes, enabling tasks such as co-segmentation and correspondence, which we demonstrate in several applications of these general-purpose, hierarchical, and consistent 3D feature fields.

We train a feedforward model that takes a point-sampled 3D shape as input (which could come from a mesh, Gaussian splats, or other representations) and predicts a feature field represented by a triplane. These features can then be clustered to generate parts at various scales. Our model is trained with a contrastive loss on both open-world data distilled from image-space masks, which need not be consistent across views, and on 3D part supervision when available.
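To make the training signal concrete, the sketch below shows a toy mask-driven contrastive objective in numpy: point features that fall inside the same (2D or 3D) part mask are treated as positives and pulled together, all other points as negatives. This is an illustrative simplification, not the paper's actual loss or implementation; the function name and the single-mask-id labeling are assumptions for exposition.

```python
import numpy as np

def mask_contrastive_loss(features, mask_ids, temperature=0.1):
    """Toy contrastive objective over per-point features.

    Points sharing a mask id are positives, all others negatives
    (a hypothetical simplification of PartField's training signal).
    """
    # L2-normalize so dot products are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                       # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # Row-wise log-softmax over the remaining candidates.
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = mask_ids[:, None] == mask_ids[None, :]
    np.fill_diagonal(pos, False)
    # Average negative log-likelihood of positives per anchor.
    losses = [-logp[i, pos[i]].mean() for i in range(len(f)) if pos[i].any()]
    return float(np.mean(losses))
```

Features that agree with the mask grouping yield a lower loss than features that scatter a mask's points, which is the behavior the real training objective rewards at scale.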

Results


Key Result 1: Open-world Part Segmentation. PartField can be applied to shapes from various categories.

In both quality and speed, PartField significantly outperforms existing baselines, which either require additional text input or per-shape inference-time optimization.


Key Result 2: Hierarchical Segmentation. PartField implicitly learns a hierarchy of multi-scale parts through large-scale contrastive learning on diverse 2D and 3D data.



Key Result 3: Generalization Across Various 3D Input Modalities. We use 3D shapes from various sources, including generated assets, CAD models, and reconstructed Gaussian splats.

i) AI generated meshes from Edify3D
ii) CAD models from the ABC dataset
iii) 3D Gaussian splatting reconstructions

Key Result 4: Emergent Cross-Shape Consistency. While we do not explicitly incorporate any cross-shape supervision, we find that consistency surprisingly emerges in the learned feature space across different shapes. We explore this phenomenon and visualize similarities across the field relative to a selected location. This property enables various applications such as shape co-segmentation and correspondence.
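The similarity visualization described above amounts to comparing one selected point's feature against every other point's, within one shape or across several. A minimal numpy sketch (function name assumed, not from the paper's code):

```python
import numpy as np

def similarity_map(field, query_idx):
    """Cosine similarity of every point's feature to one selected point.

    `field` is an (N, C) array of per-point features; high values mark
    points likely belonging to the same part, within or across shapes.
    """
    f = field / np.linalg.norm(field, axis=1, keepdims=True)
    return f @ f[query_idx]
```

Concatenating the fields of several shapes before the query is what turns this into the cross-shape visualization: if consistency has emerged, the same part lights up on every shape.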

Applications


We evaluate the properties of the learned feature field in various applications.


Application 1: Co-segmentation. We further explore the consistency of our feature field across shapes through a co-segmentation task. Specifically, we co-segment the shapes in the top row with their corresponding shapes in the bottom row.
This application further enables us to annotate a large number of 3D assets with just a few clicks. See our demo UI below.
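A simple way to realize this few-click annotation is nearest-neighbor label transfer in feature space: each point on an unlabeled shape inherits the part label of its closest point on an annotated one. The sketch below is an assumed baseline realization, not the paper's co-segmentation procedure.

```python
import numpy as np

def transfer_labels(src_features, src_labels, tgt_features):
    """Co-segment a target shape by nearest-neighbor lookup in feature space.

    Each target point inherits the part label of its closest source point,
    so a few annotated shapes (or clicks) can label many others.
    """
    # Pairwise squared distances between target and source features.
    d = ((tgt_features[:, None, :] - src_features[None, :, :]) ** 2).sum(-1)
    return src_labels[d.argmin(axis=1)]
```

This only works to the extent that features are consistent across shapes, which is precisely the emergent property the co-segmentation results probe.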

Application 2: Correspondences. The cross-shape consistency of PartField enables it to serve as a prior for fine-grained point-to-point correspondence learning. We demonstrate this with Functional Maps as a promising initial example, fitting correspondences between source and target shapes.
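At its core, the functional-maps step solves a small least-squares problem: find the matrix C that carries descriptor functions expressed in the source shape's spectral basis to the target's. A minimal numpy sketch of that solve, with PartField features assumed as the descriptors (the function name and argument layout are illustrative, not from the paper):

```python
import numpy as np

def fit_functional_map(src_coeffs, tgt_coeffs):
    """Least-squares functional map C minimizing ||C A - B||_F.

    `src_coeffs` A (k, m) and `tgt_coeffs` B (k, m) hold m corresponding
    descriptor functions (e.g. learned part features projected onto each
    shape's k Laplacian eigenfunctions).
    """
    A, B = src_coeffs, tgt_coeffs
    # C A = B  <=>  A^T C^T = B^T, solved column-wise by lstsq.
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T
```

The recovered C can then be converted to point-to-point correspondences by nearest-neighbor matching in the spectral domain; full pipelines add regularizers (e.g. commutativity with the Laplacian) that this sketch omits.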

Citation


@misc{partfield2025,
  author = {Minghua Liu and Mikaela Angelina Uy and Donglai Xiang and Hao Su and Sanja Fidler 
    and Nicholas Sharp and Jun Gao},
  title = {PartField: Learning 3D Feature Fields for Part Segmentation and Beyond},
  year = {2025}
} 

Acknowledgements


We would like to additionally thank Masha Shugrina, Vismay Modi, and their team for the 3D-scanned Gaussian splat assets and helpful discussions, and the Edify3D team for the Edify assets and insightful discussions.