Using numerical gradients with a step size matched to the spatial resolution of the hash grid lets the optimization update hash entries beyond the sample's local cell. Compared to analytical gradients, the numerical gradients therefore act as a smoothing operation on the SDF.
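As a rough sketch of the idea, a central-difference gradient with step size `eps` reads the SDF at points up to `eps` away along each axis, so backpropagation touches the hash cells around each sample. This is illustrative, not Neuralangelo's actual code; it assumes an `sdf` callable mapping `(N, 3)` points to `(N,)` signed distances:

```python
import torch

def numerical_gradient(sdf, x, eps):
    """Central-difference gradient of the SDF at sample points x of shape (N, 3).

    With eps tied to a hash-grid cell size, each gradient query samples the
    SDF outside the local cell, which is what spreads updates across cells.
    """
    grads = []
    for i in range(3):
        offset = torch.zeros(3, device=x.device)
        offset[i] = eps  # perturb one axis at a time
        grads.append((sdf(x + offset) - sdf(x - offset)) / (2.0 * eps))
    return torch.stack(grads, dim=-1)  # (N, 3)
```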
By progressively decreasing the numerical-gradient step size and enabling higher-resolution hash grids, the optimization landscape is shaped to first recover large smooth surfaces and then fine geometric details. This coarse-to-fine curriculum yields progressive levels of detail.
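A minimal sketch of such a curriculum follows. The function name, the per-level growth factor, and the warm-up interval are hypothetical placeholders; the actual schedule and constants come from the paper's training configuration:

```python
def coarse_to_fine(step, base_res=32, num_levels=16, growth=1.38,
                   steps_per_level=5000):
    """Illustrative curriculum: unlock one finer hash level at a time and
    shrink the finite-difference step to match the finest active cell."""
    active_levels = min(1 + step // steps_per_level, num_levels)
    finest_res = base_res * growth ** (active_levels - 1)
    eps = 1.0 / finest_res  # step size tracks the finest enabled resolution
    return eps, active_levels
```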
Neuralangelo uses three optimization objectives:
$$\mathcal{L} = \mathcal{L}_{rgb} + w_\text{eik} \mathcal{L}_{eik} + w_\text{curv} \mathcal{L}_{curv}.$$
- RGB synthesis loss \( \mathcal{L}_{rgb} \): the reconstruction loss between the input images and the synthesized images.
- Eikonal loss \( \mathcal{L}_{eik} \): regularizes the underlying SDF so that surface normals have unit norm.
- Curvature loss \( \mathcal{L}_{curv} \): regularizes the underlying SDF so that the mean curvature does not grow arbitrarily large.
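Putting the three terms together, here is a hedged sketch reusing the `numerical_gradient` helper above. The curvature term approximates mean curvature with a finite-difference Laplacian, and the weights `w_eik` and `w_curv` are illustrative defaults, not necessarily the paper's values:

```python
import torch

def total_loss(sdf, rgb_pred, rgb_gt, x, eps, w_eik=0.1, w_curv=5e-4):
    """Sketch of L = L_rgb + w_eik * L_eik + w_curv * L_curv."""
    l_rgb = (rgb_pred - rgb_gt).abs().mean()          # image reconstruction (L1)
    grad = numerical_gradient(sdf, x, eps)            # (N, 3) numerical SDF gradient
    l_eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()   # push |grad f| toward 1
    # Second-order central differences give a discrete Laplacian,
    # used here as a proxy for the mean curvature of the SDF.
    axes = eps * torch.eye(3, device=x.device)
    lap = sum((sdf(x + d) + sdf(x - d) - 2.0 * sdf(x)) / eps**2 for d in axes)
    l_curv = lap.abs().mean()
    return l_rgb + w_eik * l_eik + w_curv * l_curv
```

Note that both regularizers are computed from the same numerical derivatives, so the smoothing effect of the step size `eps` carries over into the eikonal and curvature terms as well.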