MTP: Multi-Hypothesis Tracking and Prediction for Reduced Error Propagation

Recently, there has been tremendous progress in developing each individual module of the standard perception-planning robot autonomy pipeline, including detection, tracking, prediction of other agents' trajectories, and ego-agent trajectory planning. Nevertheless, less attention has been given to the principled integration of these components, particularly to characterizing and mitigating cascading errors. This paper addresses cascading errors by focusing on the coupling between the tracking and prediction modules.

Injecting Planning-Awareness into Prediction and Detection Evaluation

Detecting other agents and forecasting their behavior are integral parts of the modern robotic autonomy stack, especially in safety-critical scenarios entailing human-robot interaction, such as autonomous driving. Due to the importance of these components, there has been a significant amount of interest and research in perception and trajectory forecasting, resulting in a wide variety of approaches. Common to most works, however, is the use of the same few accuracy-based evaluation metrics, e.g., intersection-over-union, displacement error, and log-likelihood.
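The accuracy-based metrics named above are simple to state concretely. The following is a minimal sketch (the helper names `ade`, `fde`, and `iou` are illustrative, not from the paper) of average and final displacement error for trajectories and intersection-over-union for axis-aligned boxes:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance over all timesteps.

    pred, gt: arrays of shape (T, 2) holding predicted / ground-truth positions.
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error: L2 distance at the last timestep only."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned 2D boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Note that these metrics score every agent and every error equally, which is exactly the planning-agnosticism the paper critiques: a one-meter error on a distant, irrelevant agent counts the same as one on an agent the ego-vehicle must yield to.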

Ray Tracing of Signed Distance Function Grids

We evaluate the performance of a wide set of combinations of grid traversal methods and voxel intersection tests for signed distance function grids in a path tracing setting. In addition, we present an optimized way to compute the intersection between a ray and the surface defined by trilinear interpolation of signed distances at the eight corners of a voxel. We also provide a novel way to compute continuous normals across voxels and an optimization for shadow rays.
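The intersection problem can be sketched numerically. Along a ray, the trilinearly interpolated field is a cubic polynomial in the ray parameter t, which is what an optimized closed-form solver can exploit; the sketch below (all names hypothetical, not the paper's implementation) instead locates the first zero crossing inside one voxel by sampling and bisection:

```python
import numpy as np

def trilinear_sdf(s, p):
    """Trilinearly interpolate the 8 corner signed distances s[z][y][x]
    at local voxel coordinates p = (x, y, z) in [0, 1]^3."""
    x, y, z = p
    c00 = s[0][0][0] * (1 - x) + s[0][0][1] * x
    c01 = s[0][1][0] * (1 - x) + s[0][1][1] * x
    c10 = s[1][0][0] * (1 - x) + s[1][0][1] * x
    c11 = s[1][1][0] * (1 - x) + s[1][1][1] * x
    c0 = c00 * (1 - y) + c01 * y
    c1 = c10 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

def intersect_voxel(s, o, d, t0=0.0, t1=1.0, steps=32):
    """Return the first root of the trilinear field along o + t*d in [t0, t1],
    or None if no sign change is sampled. A closed-form cubic solve would
    replace the sampling; bisection here just refines a bracketed root."""
    f_prev = trilinear_sdf(s, o + t0 * d)
    ts = np.linspace(t0, t1, steps + 1)
    for ta, tb in zip(ts[:-1], ts[1:]):
        f_b = trilinear_sdf(s, o + tb * d)
        if f_prev * f_b <= 0.0:                 # sign change: root in [ta, tb]
            for _ in range(30):                 # bisection refinement
                tm = 0.5 * (ta + tb)
                if f_prev * trilinear_sdf(s, o + tm * d) <= 0.0:
                    tb = tm                     # root in lower half
                else:
                    ta = tm                     # root in upper half
            return 0.5 * (ta + tb)
        f_prev = f_b
    return None
```

A surface normal at the hit point would then come from the analytic gradient of the trilinear field, though making those normals continuous across voxel boundaries requires the extra machinery the paper describes.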

Accelerated Policy Learning with Parallel Differentiable Simulation

Deep reinforcement learning can generate complex control policies, but requires large amounts of training data to work effectively. Recent work has attempted to address this issue by leveraging differentiable simulators. However, inherent problems such as local minima and exploding/vanishing numerical gradients prevent these methods from being generally applied to control tasks with complex contact-rich dynamics, such as humanoid locomotion in classical RL benchmarks.
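The core idea of leveraging a differentiable simulator can be illustrated on a toy system (this is a sketch under my own assumptions, not the paper's method or benchmarks): a 1D point mass with dynamics x_{t+1} = x_t + dt * u_t, a linear feedback policy u_t = -k * x_t, and a quadratic cost whose gradient with respect to the gain k is propagated backward through the rollout by hand, exactly as reverse-mode autodiff would:

```python
def rollout_and_grad(k, x0=1.0, dt=0.1, T=50):
    """Differentiable rollout of x_{t+1} = x_t + dt * u_t with policy
    u_t = -k * x_t. Returns loss = sum_t x_t^2 and dloss/dk computed by
    reverse-mode chain rule through the simulation (toy example)."""
    # forward pass: simulate and store the trajectory
    xs = [x0]
    for _ in range(T):
        xs.append(xs[-1] * (1.0 - dt * k))
    loss = sum(x * x for x in xs)
    # backward pass: propagate dloss/dx_t back through the dynamics
    grad_k = 0.0
    grad_x = 2.0 * xs[-1]                       # dloss/dx_T
    for t in range(T - 1, -1, -1):
        # x_{t+1} = x_t * (1 - dt*k): accumulate gradients w.r.t. k and x_t
        grad_k += grad_x * xs[t] * (-dt)
        grad_x = grad_x * (1.0 - dt * k) + 2.0 * xs[t]
    return loss, grad_k

# first-order policy optimization: gradient descent on the feedback gain
k = 0.5
for _ in range(200):
    _, g = rollout_and_grad(k)
    k -= 0.05 * g
```

Even this toy hints at the numerical issues the abstract mentions: the backward pass repeatedly multiplies by the Jacobian (1 - dt*k), so long horizons make the gradient vanish or explode depending on that factor's magnitude, and contact-rich dynamics make the Jacobians far worse behaved.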

Jie Xu

Jie Xu is a Research Scientist at the Seattle Robotics Lab, NVIDIA Research. His research mainly focuses on the intersection of Robotics, Machine Learning, and Computer Graphics. Prior to NVIDIA, he received his Ph.D. in Computer Science from MIT CSAIL in 2022, where he was a member of the Computational Design and Fabrication Group (CDFG), and obtained his bachelor's degree with honors from the Department of Computer Science and Technology at Tsinghua University in 2016.

Yue Wang

My research lies at the intersection of computer vision, computer graphics, and robotics. My goal is to use machine learning to enable robot intelligence with minimal human supervision. I study how to design 3D learning systems that leverage geometry, appearance, and any other cues naturally available in sensory inputs. I am also broadly interested in eclectic applications built on top of these systems. More info can be found on my website.