Verification and Synthesis of Robust Control Barrier Functions: Multilevel Polynomial Optimization and Semidefinite Relaxation

We study the problem of verification and synthesis of robust control barrier functions (CBFs) for control-affine polynomial systems with bounded additive uncertainty and convex polynomial constraints on the control. We first formulate robust CBF verification and synthesis as multilevel polynomial optimization problems (POPs), where verification optimizes, across three levels, the uncertainty, control, and state, while synthesis additionally optimizes the parameters of a chosen parametric CBF candidate.
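The three-level structure described above can be sketched for a control-affine system with additive disturbance, ẋ = f(x) + g(x)u + d (an illustrative form under these assumed dynamics and symbols, not necessarily the paper's exact formulation): the robust CBF condition asks that

```latex
\min_{x \in \mathcal{X}} \;\max_{u \in \mathcal{U}} \;\min_{d \in \mathcal{D}}
\;\Bigl[\, \nabla h(x)^{\top}\bigl(f(x) + g(x)\,u + d\bigr) + \alpha\bigl(h(x)\bigr) \Bigr] \;\ge\; 0,
```

where h is the candidate CBF, α an extended class-K function, 𝒰 the convex polynomial control set, and 𝒟 the bounded uncertainty set. The inner minimization over d encodes robustness to the worst-case disturbance, the maximization over u the existence of an admissible safe control, and the outer minimization over x certifies the condition over the state set.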

Interpretable Trajectory Prediction for Autonomous Vehicles via Counterfactual Responsibility

The ability to anticipate surrounding agents’ behaviors is critical to enabling safe and seamless autonomous vehicles (AVs). While phenomenological methods have successfully predicted future trajectories from scene context, these predictions lack interpretability. On the other hand, ontological approaches assume an underlying structure capable of describing the interaction dynamics or agents’ internal decision processes; still, they often suffer from poor scalability or fail to reflect diverse human behaviors.

Bayesian Reparameterization of Reward-Conditioned Reinforcement Learning with Energy-based Models

Reward-conditioned reinforcement learning (RCRL) has recently gained popularity due to its simplicity, flexibility, and off-policy nature. However, we show that current RCRL approaches are fundamentally limited and fail to address two critical challenges: improving generalization on high reward-to-go (RTG) inputs, and avoiding out-of-distribution (OOD) RTG queries at test time. To address these challenges when training vanilla RCRL architectures, we propose Bayesian Reparameterized RCRL (BR-RCRL), a novel set of inductive biases for RCRL inspired by Bayes’ theorem.
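The kind of reparameterization Bayes’ theorem suggests can be illustrated as follows (a generic sketch, not necessarily the paper's exact model): rather than modeling the return-conditioned policy directly, factor it as

```latex
\pi(a \mid s, R) \;=\; \frac{p(R \mid s, a)\,\pi(a \mid s)}{p(R \mid s)},
\qquad
p(R \mid s) \;=\; \sum_{a} p(R \mid s, a)\,\pi(a \mid s),
```

so that the return likelihood p(R | s, a) and a return-unconditioned prior π(a | s) are modeled separately. Under such a factorization, test-time queries with unlikely RTG values R receive low marginal probability p(R | s), which is one way to flag and avoid OOD RTG inputs instead of extrapolating blindly.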

Refining Obstacle Perception Safety Zones via Maneuver-Based Decomposition

A critical task in developing safe autonomous driving stacks is to determine whether an obstacle is safety-critical, i.e., poses an imminent threat to the autonomous vehicle. Our previous work showed that Hamilton-Jacobi reachability theory can be applied to compute interaction-dynamics-aware perception safety zones that better inform an ego vehicle’s perception module about which obstacles are considered safety-critical.
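In Hamilton-Jacobi reachability, such safety zones are typically characterized through the value function of a pursuit-evasion game; a standard form (a sketch of the general theory, with symbols and player roles chosen for illustration rather than taken from this paper) is the Hamilton-Jacobi-Isaacs PDE

```latex
\frac{\partial V}{\partial t}(x,t)
\;+\; \max_{u \in \mathcal{U}} \,\min_{d \in \mathcal{D}}\;
\nabla_x V(x,t)^{\top} f(x,u,d) \;=\; 0,
\qquad V(x,T) = l(x),
```

where l encodes the collision set, u is the ego control (avoiding), and d the obstacle behavior (adversarial); the order of the max/min depends on which player acts with an information advantage. The sublevel set {x : V(x,t) ≤ 0} then collects states from which a collision cannot be avoided, yielding the interaction-dynamics-aware zones that the maneuver-based decomposition refines.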

Simon Cooksey

Investigating memory consistency models in hardware and in programming models.

Yen-Chen Lin

I am interested in generative AI for 2D images and 3D models, and their applications to robotics.

Max Zhaoshuo Li

I am a Research Scientist at NVIDIA Research, working on improving AI's understanding of 3D. I received my PhD from Johns Hopkins University and my Bachelor's degree from the University of British Columbia. 

Jaesung Choe

Hi, I am Jaesung Choe. My research lies in 3D computer vision. Please visit my personal website! https://jaesung-choe.github.io/

Generative Novel View Synthesis with 3D-Aware Diffusion Models

We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image. Our model samples from the distribution of possible renderings consistent with the input and, even in the presence of ambiguity, is capable of rendering diverse and plausible novel views. To achieve this, our method makes use of existing 2D diffusion backbones but, crucially, incorporates geometry priors in the form of a 3D feature volume.