Spatial Intelligence Lab NVIDIA Research
Learning Convex Decomposition via Feature Fields

Yuezhi Yang1,2 ^      Qixing Huang2       Mikaela Angelina Uy1 *       Nicholas Sharp1 *      
1NVIDIA       2University of Texas Austin      
* equal contribution       ^ Work done during an internship at NVIDIA

Our method takes an input shape (left), infers features from an open-world model learned with our new self-supervised geometric loss (middle), and clusters those features to fit the shape with a collection of tight convex bounding proxies (right).

Abstract


This work proposes a new formulation of the long-standing problem of convex decomposition through learning feature fields, enabling the first feed-forward model for open-world convex decomposition. Our method produces high-quality decompositions of 3D shapes into a union of convex bodies, which are essential to accelerate collision detection in physical simulation, amongst many other applications. The key insight is to adopt a feature learning approach and learn a continuous feature field that can later be clustered to yield a good convex decomposition via our self-supervised, purely-geometric objective derived from the classical definition of convexity. Our formulation can be used for single shape optimization, but more importantly, feature prediction unlocks scalable, self-supervised learning on large datasets, resulting in the first learned open-world model for convex decomposition. Experiments show that our decompositions are higher-quality than alternatives and generalize across open-world objects as well as across representations to meshes, CAD models, and even Gaussian splats.

Method Overview: An overview of our convex decomposition pipeline. We train a feed-forward model that takes a point-sampled 3D shape as input and predicts a feature field defined over the object. At training time, these features are fit with a self-supervised geometric objective derived from the definition of convexity. At inference time, the features are clustered to split the shape into components, and the convex hull of each component becomes the decomposition. Note that feature colors are visualized by running PCA on the feature field.
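The inference-time step can be illustrated with a minimal sketch: cluster per-point features, then take the convex hull of each cluster. Everything here is illustrative — the `convex_decomposition` name, the `threshold` parameter, and the use of average-linkage hierarchical clustering are stand-ins; in practice the features come from the learned model, not placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial import ConvexHull

def convex_decomposition(points, features, threshold=1.0):
    """Cluster per-point features, then hull each cluster.

    points:   (N, 3) surface sample positions
    features: (N, D) per-point features (stand-ins for the learned field)
    """
    Z = linkage(features, method="average")           # hierarchical merge tree
    labels = fcluster(Z, t=threshold, criterion="distance")
    hulls = {}
    for lab in np.unique(labels):
        cluster_pts = points[labels == lab]
        if len(cluster_pts) >= 4:                     # a 3D hull needs >= 4 points
            hulls[lab] = ConvexHull(cluster_pts)
    return labels, hulls
```

The union of the returned hulls is the convex bounding proxy set; the clustering threshold controls how aggressively features are merged into parts.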



Self-supervised Geometric Objective: Our method is inspired by a classic geometric definition of convexity. A shape is convex if, for any two points, the line segment connecting them is entirely contained inside the shape; we refer to such points as convex pairs. We use contrastive triplet learning to train or optimize our feature field, where positive and negative pairs are convex and non-convex pairs (left), respectively. Contrastive training triplets are formed by a source point, a positive pair generated by casting a ray in a random inward direction (middle), and a negative pair rejection-sampled from all points on the surface, weighted to prefer nearby points (right).
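A rough sketch of the two ingredients above, under stated assumptions: a standard hinge-style triplet loss on feature distances, and a simplified negative sampler that draws directly from a proximity-weighted distribution over surface points (a stand-in for the rejection sampling described above). The function names, the `margin` and `sigma` parameters, and the Gaussian weighting are all illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def triplet_loss(f_src, f_pos, f_neg, margin=1.0):
    """Hinge triplet loss on feature distances: pull convex-pair features
    together, push non-convex-pair features at least `margin` apart."""
    d_pos = np.linalg.norm(f_src - f_pos, axis=-1)
    d_neg = np.linalg.norm(f_src - f_neg, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def sample_negative(source, surface_pts, rng, sigma=0.1):
    """Draw a negative from the surface points, weighted to prefer
    points near the source (simplified stand-in for rejection sampling)."""
    d = np.linalg.norm(surface_pts - source, axis=-1)
    w = np.exp(-d**2 / (2.0 * sigma**2))
    return surface_pts[rng.choice(len(surface_pts), p=w / w.sum())]
```

When the positive's features already match the source and the negative's are far away, the hinge is inactive and the loss is zero; the gradient only acts on triplets that violate the margin.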


Results


Key Result 1: Open-world Convex Decomposition. Our approach can be applied to diverse shapes, yielding higher-quality decompositions than both existing traditional and learning-based approaches.

Key Result 2: Multi-granularity Decomposition. By adjusting the clustering threshold, our method can generate decompositions at varying granularity, all from the same feature field.
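One way to see why a single feature field supports multiple granularities: with hierarchical clustering, the merge tree is built once, and cutting it at different distance thresholds yields coarser or finer part sets. The snippet below is a hypothetical illustration with synthetic stand-in features (three tight blobs at increasing separation), not the model's actual features.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-in features: three tight blobs at increasing separation.
feats = np.concatenate([rng.normal(c, 0.01, (30, 4)) for c in (0.0, 1.0, 5.0)])

Z = linkage(feats, method="average")               # merge tree built once
fine = fcluster(Z, t=0.5, criterion="distance")    # small threshold -> more parts
coarse = fcluster(Z, t=5.0, criterion="distance")  # large threshold -> fewer parts
```

Here the fine cut separates all three blobs while the coarse cut merges the two nearby ones, all from the same tree — mirroring how one feature field yields decompositions at varying granularity.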


Key Result 3: Generalizes across various 3D input modalities. Our method handles 3D shapes from a variety of input sources, ranging from CAD models to real 3D scans and reconstructed Gaussian splats.

Application to Collision Detection. We showcase our convex decompositions with a rigid body simulation in Newton, as shown below. Here, our convex approximation yields a 5x faster simulation step compared to colliding with the original meshes (8 ms vs. 40 ms). We are currently working to test this more broadly across various simulators and collision schemes.

Citation


@misc{learningconvexdecomp2025,
  author = {Yuezhi Yang and Qixing Huang and Mikaela Angelina Uy and Nicholas Sharp},
  title = {Learning Convex Decomposition via Feature Fields},
  year = {2025}
} 

Acknowledgements


We would like to additionally thank Anka Chen and Jun Gao for helpful discussions.