
Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation

1 University of Toronto
2 NVIDIA
3 Texas A&M University

Simplicits: Mesh-free volumetric simulation of objects represented by explicit triangle meshes, point clouds, implicit functions, Computed Tomography volumes, NeRFs, and Gaussian Splats, all produced using our data-free, neural-field-based simulation algorithm. Here we show frames from 60 of the 140+ simulations performed.

Abstract


The proliferation of 3D representations, from explicit meshes to implicit neural fields and more, motivates the need for simulators agnostic to representation. We present a data-, mesh-, and grid-free solution for elastic simulation of any object in any geometric representation undergoing large, nonlinear deformations. We note that every standard geometric representation can be reduced to an occupancy function queried at any point in space, and we define a simulator atop this common interface. For each object, we fit a small implicit neural network encoding spatially varying weights that act as a reduced deformation basis. These weights are trained to learn physically significant motions in the object via random perturbations. Our loss ensures we find a weight-space basis that best minimizes deformation energy by stochastically evaluating elastic energies through Monte Carlo sampling of the deformation volume. At runtime, we simulate in the reduced basis and sample the deformations back to the original domain. Our experiments demonstrate the versatility, accuracy, and speed of this approach on data including signed distance functions, point clouds, neural primitives, tomography scans, radiance fields, Gaussian splats, surface meshes, and volume meshes, and show a variety of material energies, contact models, and time integration schemes.
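As a minimal illustration of the common occupancy interface described above, the sketch below (not the authors' code; the wrapper functions and the point-cloud radius are our own assumptions) shows how a signed distance function or a point cloud can be exposed to the simulator as a single occupancy query, which can then be used to draw Monte Carlo samples of the object volume.

    import numpy as np

    def occupancy_from_sdf(sdf):
        """Wrap a signed distance function as an occupancy query: inside where sdf <= 0."""
        return lambda pts: (sdf(pts) <= 0.0).astype(np.float32)

    def occupancy_from_point_cloud(points, radius=0.02):
        """Rough occupancy for a point cloud: inside if within `radius` of any sample."""
        from scipy.spatial import cKDTree
        tree = cKDTree(points)
        def occ(pts):
            dist, _ = tree.query(pts, k=1)
            return (dist <= radius).astype(np.float32)
        return occ

    # Usage: Monte Carlo samples of a unit-sphere SDF's interior.
    sphere_sdf = lambda pts: np.linalg.norm(pts, axis=-1) - 1.0
    occ = occupancy_from_sdf(sphere_sdf)
    candidates = np.random.uniform(-1.5, 1.5, size=(1000, 3))
    interior = candidates[occ(candidates) > 0.5]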

Video


Method


Pipeline overview. First, the skinning weights \(\mathbf{W}_\theta\) are learned by minimizing the potential and orthogonality losses over randomized deformations (\(N\) is the number of sample points and \(n\) is the number of skinning handles). Then, given physical material properties and scene conditions, keyframes of per-handle transformations \(\mathbf{Z}\) are generated and finally combined into an animated object.
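For concreteness, the following PyTorch sketch mirrors the training stage under stated assumptions: the network architecture, the softmax normalization of the weights, the small-strain energy, and the loss weighting are illustrative placeholders rather than the paper's exact choices. A small MLP \(\mathbf{W}_\theta\) predicts per-point skinning weights; random per-handle transformations \(\mathbf{Z}\) define a displacement field, an elastic energy is evaluated by Monte Carlo over sampled points, and an orthogonality term encourages distinct handles.

    import torch

    n_handles, n_samples = 8, 512

    # Small MLP predicting n skinning weights per query point (architecture is illustrative).
    W_theta = torch.nn.Sequential(
        torch.nn.Linear(3, 64), torch.nn.ELU(),
        torch.nn.Linear(64, 64), torch.nn.ELU(),
        torch.nn.Linear(64, n_handles))
    opt = torch.optim.Adam(W_theta.parameters(), lr=1e-3)

    def displacement(x, Z):
        """Blend per-handle affine perturbations Z (n x 3 x 4) with learned weights:
        u(x) = sum_j w_j(x) * (Z_j [x; 1])."""
        w = torch.softmax(W_theta(x), dim=-1)                   # (N, n) weights
        xh = torch.cat([x, torch.ones_like(x[:, :1])], dim=-1)  # (N, 4) homogeneous coords
        per_handle = torch.einsum('hij,nj->nhi', Z, xh)         # (N, n, 3)
        return (w.unsqueeze(-1) * per_handle).sum(dim=1)        # (N, 3) blended displacement

    for step in range(2000):
        # Monte Carlo sample points of the object volume (assumed pre-filtered by occupancy).
        x = torch.rand(n_samples, 3, requires_grad=True)
        Z = 0.1 * torch.randn(n_handles, 3, 4)                  # random handle perturbations
        u = displacement(x, Z)

        # Deformation gradient F = I + du/dx via autograd, one output row at a time.
        rows = [torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0] for i in range(3)]
        F = torch.eye(3) + torch.stack(rows, dim=1)             # (N, 3, 3)

        # Small-strain energy as a simple stand-in for the paper's material models.
        E = 0.5 * (F + F.transpose(1, 2)) - torch.eye(3)
        potential = (E ** 2).sum(dim=(1, 2)).mean()

        # Orthogonality term: weight columns should be roughly orthonormal over the samples.
        w = torch.softmax(W_theta(x), dim=-1)
        gram = w.T @ w / n_samples
        ortho = ((gram - torch.eye(n_handles)) ** 2).sum()

        loss = potential + 0.1 * ortho
        opt.zero_grad(); loss.backward(); opt.step()

At runtime, the same blending is used with optimized keyframe transformations \(\mathbf{Z}\): the learned weights deform sample points of the original representation, so the object is simulated in the reduced basis and mapped back to its native geometric form.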

Results


Gaussian Splat Ficus

Gaussian Splat Bulldozer

Point-Sampled NeRF Frog

CT Scanned Heterogeneous Skull/Brain

Thin Mesh Ribbon

Signed Distance Function Mandelbulb

Citation



        @article{modi2024Simplicits,
            title={Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation},
            author={Vismay Modi and Nicholas Sharp and Or Perel and David I. W. Levin and Shinjiro Sueda},
            journal={arXiv preprint},
            year={2024}
        }
        

Paper



Acknowledgment


This work is funded in part by NSF (1846368, 2313076), NSERC Discovery, the Ontario Early Researchers Award Program, the Canada Research Chairs Program, and gifts from Adobe and Autodesk. We appreciate invaluable feedback from Otman Benchekroun and Abhishek Madan. We thank John Hancock for IT support. Finally, we thank the anonymous reviewers for their helpful comments and suggestions.