Neural Kernel Surface Reconstruction

Neural Kernel Surface Reconstruction (**NKSR**) recovers a 3D surface from an input point cloud. Trained directly on dense points, our method reaches state-of-the-art reconstruction quality and scalability. All the meshes in this figure are reconstructed using a single trained model.

We present a novel method for reconstructing a 3D implicit surface from a large-scale, sparse, and noisy point cloud. Our approach builds upon the recently introduced Neural Kernel Fields (NKF) representation. It enjoys similar generalization capabilities to NKF, while simultaneously addressing its main limitations: (a) We can scale to large scenes through compactly supported kernel functions, which enable the use of memory-efficient sparse linear solvers. (b) We are robust to noise, through a gradient fitting solve. (c) We minimize training requirements, enabling us to learn from any dataset of dense oriented points, and even mix training data consisting of objects and scenes at different scales. Our method is capable of reconstructing millions of points in a few seconds, and handling very large scenes in an out-of-core fashion. We achieve state-of-the-art results on reconstruction benchmarks consisting of single objects, indoor scenes, and outdoor scenes.

Method

Our method takes an oriented point cloud as input and predicts a sparse hierarchy of voxel grids containing a feature and a normal in each voxel. We then construct a sparse linear system and solve for a set of per-voxel coefficients $\alpha$. The linear system corresponds to the Gram matrix arising from a kernel that depends on the predicted features, illustrated as $\mathbf{L}$ and $v$ above (described in the paper). To extract the predicted surface, we evaluate the function values at the voxel corners as a linear combination of the learned kernel basis functions, followed by dual marching cubes. Watch the explanatory video for more intuition about our method's design.
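The solve-and-evaluate step above can be sketched in a few lines. This is a minimal illustration, not the actual NKSR implementation: it swaps the learned, feature-dependent kernel for a generic compactly supported Wendland kernel, and all function names here are illustrative. The point of the sketch is the structural property the paper relies on: a compactly supported kernel makes the Gram matrix sparse, so a memory-efficient sparse solver applies.

```python
# Minimal sketch of fitting a kernel field with a compactly supported
# kernel (a generic Wendland kernel stands in for NKSR's learned kernel;
# names are illustrative, not from the released code).
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import spsolve
from scipy.spatial import cKDTree


def wendland_c2(r, h):
    """Wendland C2 kernel: positive definite, exactly zero for r >= h."""
    q = np.clip(r / h, 0.0, 1.0)
    return (1.0 - q) ** 4 * (4.0 * q + 1.0)


def fit_coefficients(points, targets, h, reg=1e-8):
    """Assemble the sparse Gram matrix G and solve G @ alpha = v."""
    n = len(points)
    tree = cKDTree(points)
    # Only point pairs within the support radius interact, so G is sparse.
    pairs = tree.query_pairs(h, output_type="ndarray")
    rows = np.concatenate([pairs[:, 0], pairs[:, 1], np.arange(n)])
    cols = np.concatenate([pairs[:, 1], pairs[:, 0], np.arange(n)])
    dist = np.linalg.norm(points[rows] - points[cols], axis=1)
    gram = csr_matrix((wendland_c2(dist, h), (rows, cols)), shape=(n, n))
    gram = gram + reg * identity(n, format="csr")  # tiny ridge for stability
    return spsolve(gram.tocsc(), targets)


def evaluate(queries, points, alpha, h):
    """Evaluate f(q) = sum_i alpha_i k(q, x_i); each sum touches few points."""
    tree = cKDTree(points)
    values = np.zeros(len(queries))
    for qi, nbrs in enumerate(tree.query_ball_point(queries, h)):
        r = np.linalg.norm(queries[qi] - points[nbrs], axis=1)
        values[qi] = wendland_c2(r, h) @ alpha[nbrs]
    return values
```

In NKSR the targets come from a gradient fit against the input normals and the evaluation happens at voxel corners before dual marching cubes; the sparse solve itself has the same shape as this sketch.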

Qualitative Results

NKSR is accurate, generalizable, and scalable. We demonstrate these three properties on the following three types of datasets.

Single Objects

Comparison: Input / POCO / Neural Kernel Field / Neural Galerkin / Ours (NKSR)

Indoor Scenes

All the methods below are trained only on the ShapeNet dataset and are directly tested on the ScanNet and Matterport3D datasets.

Row 1: Local Implicit Grid / Neural Kernel Field / POCO / Ours (NKSR)
Row 2: Local Implicit Grid / Dual Octree GNN / POCO / Ours (NKSR)

Driving Scenes

We synthesize the first dataset for benchmarking large-scale surface reconstruction using the CARLA simulator. We accumulate the LiDAR points from the sensor and crop the geometry into 51.2m × 51.2m chunks. The videos below show side-by-side comparisons with the baselines.
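The chunk cropping above amounts to binning accumulated points on the ground plane. A simple sketch of that preprocessing step, assuming points are given in a metric world frame (the function name is illustrative, not from the released code):

```python
# Bin accumulated LiDAR points into square ground-plane chunks
# (illustrative sketch of the dataset preprocessing, not released code).
import numpy as np


def crop_into_chunks(points, chunk_size=51.2):
    """Group (N, 3) world-frame points into chunk_size x chunk_size cells.

    Returns a dict mapping (i, j) chunk indices to the points inside.
    """
    keys = np.floor(points[:, :2] / chunk_size).astype(np.int64)
    chunks = {}
    for idx, key in enumerate(map(tuple, keys)):
        chunks.setdefault(key, []).append(idx)
    return {k: points[v] for k, v in chunks.items()}
```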

CARLA origin subset:

CARLA novel subset:

The following shows direct generalization results on the Waymo Open Dataset. At training time, the model never saw real outdoor AV data.

Explanatory Video

Citation

If you find our work interesting, please consider citing us:

@inproceedings{huang2023nksr,
  title={Neural Kernel Surface Reconstruction},
  author={Huang, Jiahui and Gojcic, Zan and Atzmon, Matan and Litany, Or and Fidler, Sanja and Williams, Francis},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4369--4379},
  year={2023}
}