NVIDIA Toronto AI Lab
Sanja Fidler
Latest
DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models
Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models
GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
ReMatching Dynamic Reconstruction Flow
SCube: Instant Large-Scale Scene Reconstruction using VoxSplats
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes
SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes
LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis
Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering
fVDB: A Deep-Learning Framework for Sparse, Large-Scale, and High-Performance Spatial Intelligence
SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation
Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies
Outdoor Scene Extrapolation with Hierarchical Generative Cellular Automata
Adaptive Shells for Efficient Neural Radiance Field Rendering
Compact Neural Graphics Primitives with Learned Hash Probing
TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models
ATT3D: Amortized Text-to-3D Object Synthesis
DreamTeacher: Pretraining Image Backbones with Deep Generative Models
Learning Human Dynamics in Autonomous Driving Scenarios
Neural LiDAR Fields for Novel View Synthesis
Towards Viewpoint Robustness in Bird's Eye View Segmentation
Bridging the Sim2Real Gap with CARE: Supervised Detection Adaptation with Conditional Alignment and Reweighting
Flexible Isosurface Extraction for Gradient-Based Mesh Optimization
Learning Physically Simulated Tennis Skills from Broadcast Videos
Synthesizing Physical Character-Scene Interactions
Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
Magic3D: High-Resolution Text-to-3D Content Creation
Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes
Neural Kernel Surface Reconstruction
NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models
Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion
VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion
Neural Brushstroke Engine: Learning a Latent Style Space of Interactive Drawing Tools
GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
LION: Latent Point Diffusion Models for 3D Shape Generation
Optimizing Data Collection for Machine Learning
PADL: Language-Directed Physics-Based Character Control
XDGAN: Multi-Modal 3D Shape Generation in 2D Space
Kaolin Wisp: A PyTorch Library and Engine for Neural Fields Research
MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation
Learning Smooth Neural Functions via Lipschitz Regularization
ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters
Variable Bitrate Neural Fields
Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps
Extracting Triangular 3D Models, Materials, and Lighting From Images
AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis
BigDatasetGAN: Synthesizing ImageNet with Pixel-wise Annotations
Frame Averaging for Equivariant Shape Space Learning
Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior
How Much More Data Do I Need? Estimating Requirements for Downstream Tasks
Neural Fields as Learnable Kernels for 3D Reconstruction
Domain Adversarial Training: A Game Perspective
Low-Budget Active Learning via Wasserstein Distance: An Integer Programming Approach
ATISS: Autoregressive Transformers for Indoor Scene Synthesis
Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer
Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
EditGAN: High-Precision Semantic Image Editing
Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting
3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations
Physics-based Human Motion Estimation and Synthesis from Videos
f-Domain-Adversarial Learning: Theory and Algorithms
Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection
DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort
DriveGAN: Towards a Controllable High-Quality Neural Simulation
Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Surfaces
Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization
Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
gradSim: Differentiable simulation for system identification and visuomotor control
UniCon: Universal Neural Controller For Physics-based Character Motion
Kaolin: A PyTorch Library for Accelerating 3D Deep Learning Research
Variational Amodal Object Completion
Learning Deformable Tetrahedral Meshes for 3D Reconstruction
Federated Simulation for Medical Imaging
Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation
Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D
Learning to Evaluate Perception Models Using Planner-Centric Metrics
Learning to Simulate Dynamic Environments with GameGAN
Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
Neural Turtle Graphics for Modeling City Road Layouts
Meta-Sim: Learning to Generate Synthetic Datasets
Gated-SCNN: Gated Shape CNNs for Semantic Segmentation
Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations