Animesh Garg
NVIDIA
University of Toronto
Vector Institute
Latest
Discovering Robotic Interaction Modes with Discrete Representation Learning
SPIRE: Synergistic Planning, Imitation, and Reinforcement for Long-Horizon Manipulation
Fast Explicit-Input Assistance for Teleoperation in Clutter
SuFIA: Language-Guided Augmented Dexterity for Robotic Surgical Assistants
Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation
HandyPriors: Physically Consistent Perception of Hand-Object Interactions with Differentiable Priors
ORBIT-Surgical: An Open-Simulation Framework for Learning Surgical Augmented Dexterity
DexGrasp-1M: Dexterous Multi-finger Grasp Generation Through Differentiable Simulation
MVTrans: Multi-View Perception of Transparent Objects
nerf2nerf: Pairwise Registration of Neural Radiance Fields
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
Self-Supervised Learning of Action Affordances as Interaction Modes
Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward
SlotFormer: Unsupervised Visual Dynamics Simulation with Object-Centric Models
ORBIT: A Unified Simulation Framework for Interactive Robot Learning Environments
Bayesian Object Models for Robotic Interaction with Differentiable Probabilistic Programming
Breaking Bad: A Dataset for Geometric Fracture and Reassembly
MoCoDA: Model-based Counterfactual Data Augmentation
RoboTube: Learning Household Manipulation from Human Videos with Simulated Twin Environments
SMPL: Simulated Industrial Manufacturing and Process Control Learning Environments
Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation
Grasp'D: Differentiable Contact-rich Grasp Synthesis for Multi-fingered Hands
Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger
DiSECt: A Differentiable Simulator for Parameter Inference and Control in Robotic Cutting
Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics
GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model
Modular Action Concept Grounding in Semantic Video Prediction
Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors
PlaTe: Visually-Grounded Planning with Transformers in Procedural Tasks
Accelerated Policy Learning with Parallel Differentiable Simulation
Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning
Value Gradient weighted Model-Based Reinforcement Learning
Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers
Dynamic Bottleneck for Robust Self-Supervised Exploration
Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution
S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics
Seeing Glass: Joint Point-Cloud and Depth Completion for Transparent Objects
Uniform Priors for Data-Efficient Transfer
Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
A Differentiable Simulator for Robotic Cutting
Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition
DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting
GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels
Principled Exploration via Optimistic Bootstrapping and Backward Induction
Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning
Dynamics Randomization Revisited: A Case Study for Quadrupedal Locomotion
Emergent Hand Morphology and Control from Optimizing Robust Grasps of Diverse Objects
LASER: Learning a Latent Action Space for Efficient Reinforcement Learning
LEAF: Latent Exploration Along the Frontier
C-Learning: Horizon-Aware Cumulative Accessibility Estimation
Conservative Safety Critics for Exploration
Skill Transfer via Partially Amortized Hierarchical Planning
DIBS: Diversity Inducing Information Bottleneck in Model Ensembles
Unsupervised Disentanglement of Pose, Appearance and Background from Images and Videos
Causal Discovery in Physical Systems from Videos
Counterfactual Data Augmentation using Locally Factored Dynamics
Curriculum By Smoothing
D2RL: Deep Dense Architectures in Reinforcement Learning
Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
Visuomotor Mechanical Search: Learning to Retrieve Target Objects in Clutter
OCEAN: Online Task Inference for Compositional Tasks with Context Adaptation
A Programmable Approach To Model Compression
Angular Visual Hardness
Semi-Supervised StyleGAN for Disentanglement Learning
Controlling Assistive Robots with Learned Latent Actions
Guided Uncertainty-Aware Policy Optimization: Combining Model-Free and Model-Based Strategies for Sample-Efficient Learning
Implicit Reinforcement without Interaction at Scale: Leveraging Large-Scale Robot Manipulation Datasets for Control
Motion Reasoning for Goal-Based Imitation Learning
Combining Model-Free and Model-Based Strategies for Sample-Efficient Reinforcement Learning
InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers
Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning
Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity
Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks
AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers
Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation
Video Interpolation and Prediction with Unsupervised Landmarks