Toronto AI Lab · NVIDIA Research

Stochastic Preconditioning for Neural Field Optimization

SIGGRAPH 2025

Selena Ling¹ ²       Merlin Nimier-David¹       Alec Jacobson² ³       Nicholas Sharp¹
¹NVIDIA       ²University of Toronto       ³Adobe Research

Abstract


Neural fields are a highly effective representation across visual computing. This work observes that fitting these fields is greatly improved by incorporating spatial stochasticity during training, and that this simple technique can replace or even outperform custom-designed hierarchies and frequency-space constructions. The approach is formalized as implicitly operating on a blurred version of the field, evaluated in-expectation by sampling with Gaussian-distributed offsets. Querying the blurred field during optimization greatly improves convergence and robustness, akin to the role of preconditioners in numerical linear algebra. This implicit, sampling-based perspective fits naturally into the neural field paradigm, comes at no additional cost, and is extremely simple to implement. We describe the basic theory of this technique, including details such as handling boundary conditions, and extending to a spatially-varying blur. Experiments demonstrate this approach on representations including coordinate MLPs, neural hashgrids, triplanes, and more, across tasks including surface reconstruction and radiance fields. In settings where custom-designed hierarchies have already been developed, stochastic preconditioning nearly matches or improves their performance with a simple and unified approach; in settings without existing hierarchies it provides an immediate boost to quality and robustness.
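Concretely, the "blurred version of the field" above can be written as a Gaussian convolution, which sampling a Gaussian-distributed offset per query evaluates in expectation (the notation below is ours, for illustration):

    \tilde{f}_{\sigma}(x) \,=\, \bigl(f * \mathcal{N}(0, \sigma^2 I)\bigr)(x) \,=\, \mathbb{E}_{\delta \sim \mathcal{N}(0, \sigma^2 I)}\bigl[\, f(x + \delta) \,\bigr]

In practice, each training query draws a single offset \delta and evaluates f(x + \delta).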

SDF Fitting from Oriented Point Clouds

We apply our technique to the well-studied problem of surface reconstruction from oriented point clouds.


Main Code Change (PyTorch)
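Below is a minimal sketch of the core change in PyTorch, assuming a callable field mapping query points to field values and a blur scale sigma annealed over training (the names and structure are illustrative, not the authors' exact implementation):

    import torch

    def sp_query(field, x, sigma):
        # Stochastic preconditioning: evaluate the field at Gaussian-perturbed
        # locations x + delta, with delta ~ N(0, sigma^2 I). Each perturbed
        # sample is a one-sample Monte Carlo estimate of the blurred field.
        if sigma > 0.0:
            x = x + sigma * torch.randn_like(x)
        return field(x)

    # During training: query with sigma > 0 (annealed toward 0 over training).
    # At inference: query the raw field with sigma = 0.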


Optimization Comparisons

Here we show the reconstructed surface throughout optimization, with and without stochastic preconditioning. With stochastic preconditioning, optimization is biased toward a blurred solution first, avoiding the local minima that otherwise manifest as floaters in the final result.
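One way to realize this coarse-to-fine behavior is to anneal the blur scale from a large initial value down to zero over training; the linear schedule below is an illustrative assumption, not necessarily the paper's exact schedule.

    def sigma_schedule(step, num_steps, sigma_max, decay_fraction=0.5):
        # Linearly anneal sigma from sigma_max to 0 over the first
        # `decay_fraction` of training; afterwards the raw (unblurred)
        # field is optimized directly. The schedule shape is an assumption.
        t = min(step / (decay_fraction * num_steps), 1.0)
        return sigma_max * (1.0 - t)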


Results

Geometric initialization is a commonly used technique to accelerate SDF fitting, yet it often causes severe artifacts on scenes that are not object-centric. Stochastic preconditioning helps avoid floaters both with and without geometric initialization.
Stochastic preconditioning also generalizes across a wide variety of existing neural field encodings.

Hash Grid Encodings

Image comparisons (three scenes): Baseline [Müller et al. 2022] vs. + Stochastic Preconditioning.

Fourier Feature Encodings

Image comparisons (three scenes): Baseline vs. + Stochastic Preconditioning.

Triplane-based Encodings

Image comparisons (three scenes): Baseline vs. + Stochastic Preconditioning.


Neural Radiance Field with Sparse Supervision

We also tackle the challenging scenario of optimizing a neural radiance field under sparse supervision, using the setup from FreeNeRF [Yang et al. 2023] implemented in JAX.

Main Code Change (JAX)
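Analogously to the PyTorch version above, a minimal JAX sketch follows; field_apply, params, and the explicit PRNG key are illustrative assumptions about the surrounding training code:

    import jax

    def sp_query(field_apply, params, x, sigma, key):
        # Perturb query points with delta ~ N(0, sigma^2 I) before evaluating
        # the field; JAX requires threading an explicit PRNG key per step.
        delta = sigma * jax.random.normal(key, x.shape)
        return field_apply(params, x + delta)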


Results

Image comparisons on DTU Scans 30, 55, 21, and 63.


Neural Radiance Field with ReLU Field [Karnewar et al. 2022]

We also experiment with optimizing a ReLU Field, a grid-based radiance field representation, with and without its original hierarchical training scheme. Stochastic preconditioning achieves quality on par with or superior to the original staged hierarchical training.


Optimization Comparisons

Comparisons of the optimization process with and without stochastic preconditioning.

Citation



    @article{ling2025stochastic,
        author = {Ling, Selena and Nimier-David, Merlin and Jacobson, Alec and Sharp, Nicholas},
        title = {Stochastic Preconditioning for Neural Field Optimization},
        journal = {ACM Trans. Graph.},
        volume = {44},
        number = {4},
        year = {2025},
        publisher = {ACM},
        address = {New York, NY, USA},
    }

Acknowledgements


We thank Zan Gojcic, Thomas Müller, Zian Wang, Sanja Fidler and Alex Keller for their help throughout this work, as well as John Hancock and the Dynamic Graphics Project for computing support. We are grateful to the artists of the 3D models used for demonstrations in this paper, including TurboSquid users Marc Mons, A_Akhtar, charbavito, Stasma, sougatadhar16, Poly Forge, Vadim Manoli and Digital Fashionwear BD. Our research is funded in part by NSERC Discovery (RGPIN–2022–04680), the Ontario Early Research Award program, the Canada Research Chairs Program, a Sloan Research Fellowship, the DSI Catalyst Grant program and gifts from Adobe Inc.