Accelerated Policy Learning with Parallel Differentiable Simulation

Deep reinforcement learning can generate complex control policies, but requires large amounts of training data to work effectively. Recent work has attempted to address this issue by leveraging differentiable simulators. However, inherent problems such as local minima and exploding/vanishing numerical gradients prevent these methods from being generally applied to control tasks with complex contact-rich dynamics, such as humanoid locomotion in classical RL benchmarks.
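To make the gradient issue concrete, here is a minimal, hedged sketch (not the paper's method) of computing an analytic policy gradient by backpropagating a rollout reward through a toy differentiable simulator in PyTorch; the dynamics, policy, reward, and horizon below are illustrative assumptions only.

```python
# Hedged sketch: analytic policy gradients through a toy differentiable simulator.
# The simulator, policy, reward, and horizon are invented for illustration.
import torch

torch.manual_seed(0)

def sim_step(state, action, dt=0.05):
    """Toy differentiable dynamics: a damped point mass pushed by the action."""
    pos, vel = state
    vel = vel + dt * (action - 0.1 * vel)
    pos = pos + dt * vel
    return torch.stack([pos, vel])

policy = torch.nn.Linear(2, 1)            # maps state -> scalar action
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

state = torch.tensor([1.0, 0.0])
total_reward = torch.tensor(0.0)
horizon = 64                               # long horizons are where gradients
for t in range(horizon):                   # through the rollout can explode or vanish
    action = policy(state).squeeze()
    state = sim_step(state, action)
    total_reward = total_reward - state[0] ** 2   # drive the position toward zero

opt.zero_grad()
(-total_reward).backward()                 # gradient flows through every simulation step
opt.step()
print("policy grad norm:",
      sum(p.grad.norm() for p in policy.parameters()).item())
```

Because the gradient is the product of per-step Jacobians over the whole horizon, stiff or contact-rich dynamics can make it explode or vanish, which is the failure mode the abstract refers to.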

Jie Xu

Jie Xu is a Research Scientist in the Seattle Robotics Lab at NVIDIA Research. His research focuses on the intersection of robotics, machine learning, and computer graphics. Prior to joining NVIDIA, he received his Ph.D. in Computer Science from MIT CSAIL in 2022, where he was a member of the Computational Design and Fabrication Group (CDFG), and received his bachelor's degree with honors from the Department of Computer Science and Technology at Tsinghua University in 2016.

Yue Wang

My research lies at the intersection of computer vision, computer graphics, and robotics. My goal is to use machine learning to enable robot intelligence with minimal human supervision. I study how to design 3D learning systems that leverage geometry, appearance, and any other cues that are naturally available in sensory inputs. I am also broadly interested in eclectic applications on top of these systems. More info can be found on my website.

Dennis Abts

Dennis has three decades of experience building large-scale parallel computers that are uniquely capable of tackling the most demanding AI and HPC workloads. Previously, as Chief Architect at Groq, he worked on large-scale parallel architectures for machine learning; at Google, he worked on warehouse-scale topologies for energy-proportional networking; and before that he held a senior engineering role at Cray.

Apoorva Sharma

Apoorva Sharma is a Research Scientist in the Autonomous Vehicles Group at NVIDIA Research. His research focuses on quantifying uncertainty in machine learning, with applications toward building safe ML-enabled autonomous systems.

Machine Learning and Algorithms: Let Us Team Up for EDA

Machine learning (ML) has been applied to many EDA problems in recent years. We can classify these applications into three major categories, Predictor, Optimizer, and Generator, based on the role ML plays in each application and the ML techniques used. Ideally, one would like to adopt the Optimizer and Generator approaches to solve a hard EDA problem directly with ML; we call these ML-alone approaches. It is very challenging, however, to scale ML-alone approaches to real-world EDA problems.
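As a hedged illustration of the Predictor category only (not taken from the talk), the sketch below trains a regressor to estimate an EDA metric from design features; the feature names, target, and synthetic data are all assumptions made up for the example.

```python
# Illustrative "Predictor" sketch: a learned model estimates an EDA metric
# from design features. All features and data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical per-design features: [cell count, avg fanout, utilization] (normalized)
X = rng.uniform(size=(200, 3))
# Hypothetical target: post-route wirelength, here a synthetic function of X plus noise
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```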

From RTL to CUDA: A GPU Acceleration Flow for RTL Simulation with Batch Stimulus

High-throughput RTL simulation is critical for verifying today’s highly complex SoCs. Recent research has explored accelerating RTL simulation by leveraging event-driven approaches or partitioning heuristics to speed up simulation on a single stimulus.
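The batch-stimulus idea can be sketched in a few lines: instead of simulating one stimulus at a time, the same netlist is evaluated over many stimuli with data-parallel operations, the pattern a GPU kernel exploits. The sketch below is not the paper's flow; the tiny netlist and signal names are invented for illustration.

```python
# Minimal batch-stimulus sketch: evaluate a toy combinational netlist over many
# stimuli at once with vectorized bitwise ops. Netlist and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
batch = 1024                                   # number of stimuli simulated together
a, b, sel = (rng.integers(0, 2, size=batch, dtype=np.uint8) for _ in range(3))

# Toy netlist: out = sel ? (a & b) : (a ^ b), computed for all stimuli in parallel
and_ab = a & b
xor_ab = a ^ b
out = np.where(sel == 1, and_ab, xor_ab)

print("first 8 outputs:", out[:8])
```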

Placement Optimization via PPA-Directed Graph Clustering

In this paper, we present the first Power, Performance, and Area (PPA)-directed, end-to-end placement optimization framework that provides cell clustering constraints as placement guidance to advance commercial placers. Specifically, we formulate PPA metrics as Machine Learning (ML) loss functions, and use graph clustering techniques to optimize them by improving clustering assignments.
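To illustrate the general idea of optimizing clustering assignments against a differentiable loss, here is a hedged sketch that is not the paper's formulation: soft cluster assignments are treated as learnable parameters and trained against a weighted-cut term (a crude wirelength proxy) plus a cluster-balance term; the graph, weights, and loss terms are assumptions for illustration.

```python
# Hedged sketch: gradient-based optimization of soft cluster assignments against
# a differentiable proxy loss. Graph, weights, and loss terms are illustrative.
import torch

torch.manual_seed(0)
n_cells, n_clusters = 12, 3
# Random symmetric "netlist" connectivity weights between cells
W = torch.rand(n_cells, n_cells)
W = (W + W.T) / 2
W.fill_diagonal_(0)

logits = torch.zeros(n_cells, n_clusters, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    P = torch.softmax(logits, dim=1)             # soft assignment of cells to clusters
    same = P @ P.T                               # probability two cells share a cluster
    cut = (W * (1 - same)).sum()                 # weighted cut: proxy for wirelength
    balance = ((P.sum(dim=0) - n_cells / n_clusters) ** 2).sum()
    loss = cut + 0.1 * balance
    opt.zero_grad()
    loss.backward()
    opt.step()

print("cluster sizes:", torch.softmax(logits, dim=1).argmax(dim=1).bincount().tolist())
```

In practice the resulting assignments would be rounded to hard clusters and handed to a placer as constraints, which is the role clustering guidance plays in the framework described above.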