Autonomous Vehicle Research Group

Welcome!

Welcome to the homepage of the NVIDIA Research Autonomous Vehicle Research Group led by Dr. Marco Pavone.

We are a new team within NVIDIA Research that brings together a diverse, interdisciplinary set of researchers to address core topics in vehicle autonomy, ranging from perception and prediction to planning and control. We also aim to advance the state of the art in a number of critical related fields, such as decision making under uncertainty, reinforcement learning, and the verification and validation of safety-critical AI systems.

Highlights

January 2023 - All 9 of our recently submitted papers have been accepted! We’ll be presenting 7 papers at ICRA 2023, 1 paper at ICLR 2023, and 1 paper at ACC 2023! Also, welcome back to the group, Yulong Cao!

October 2022 - We’ve had a productive summer! We’re capping it off by welcoming Heng Yang and Apoorva Sharma to the group, having 3 papers accepted to CoRL (including one oral!), publishing 6 NeurIPS workshop papers, and releasing 6 new preprints online! Check them out below in our Publications section!

Research Areas

Our group addresses challenges in AI-powered autonomous vehicles through a three-pronged research agenda, which, collectively, is aimed at laying the foundation for the next generation of AV autonomy stacks.

  • Human-centered autonomy: Interaction-aware decision making for safety-critical systems: A distinctive feature of autonomous vehicles is that they will largely be operating alongside humans, and thus will need to negotiate behaviors with their human counterparts. Central to this task is effectively predicting the intent of other decision-making agents, forecasting their trajectories within a scene, and leveraging these predictions for safe and interaction-aware decision making. Specifically,

    • We leverage recent advances in deep learning to develop human (e.g., pedestrian) motion prediction models that are specifically attuned to the task of real-time, interaction-aware decision making and that can transfer to new settings with only a small amount of location-specific data (a minimal prediction sketch appears after this list).
    • We investigate human models for the purposes of simulation and validation and devise plausible and controllable simulation agents.
    • We investigate approaches rooted in formal methods (e.g., reachability analysis) to provide certifiable safety when (i) planning with learning-based prediction and perception models and (ii) dealing with variability in agent behavior.
  • Next-generation autonomy architectures: Highly integrated yet modular autonomy stacks: We advocate the use of modular architectures due to their unparalleled reusability and interpretability, which are especially important in a safety-critical context. At the same time, to remove information bottlenecks and harness maximum efficiency, we are investigating tools and methods for designing much more integrated autonomy stacks, exploring four main avenues:

    • metrics that capture upstream and downstream information flows, thereby coupling the design of the different modules;
    • more expressive, learned representations of information flows at multiple levels of the AV stack;
    • statistical techniques for producing and propagating calibrated uncertainty measures (see the calibration sketch after this list);
    • algorithmic approaches to co-designing the different modules by leveraging the aforementioned coupling metrics and learned representations.

    The end goal is a novel, highly integrated architecture that inherits the advantages of classical decoupled designs while harnessing the benefits of end-to-end architectures.

  • Assured autonomy: Safety assurances with machine learning models in the loop: Data-driven methods based on machine learning (ML) have enabled tasks beyond what was considered possible with their traditional, non-learning-based counterparts. However, ML-based algorithms can suffer from unpredictability and erratic behavior, a showstopper in the context of safety-critical systems. To enable the confident infusion of ML models within the decision-making loop of an autonomous vehicle, we research tools and methods for providing both offline and online algorithmic assurances for components of a learning-enabled autonomy stack. For example, we are designing run-time monitors that can detect and identify faults in ML-based components and inform downstream decision making (see the monitoring sketch after this list), and we are integrating learning-based components into the decision-making module in a way that allows for uncertainty reasoning and performance guarantees.
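As a concrete illustration of the kind of prediction model the human-centered agenda calls for, here is a minimal sketch of a multi-modal trajectory predictor: an LSTM encodes an agent’s observed (x, y) history, and two linear heads output K candidate future trajectories together with mode probabilities. The architecture, names, and sizes are illustrative assumptions, not the group’s actual models.

```python
# A minimal sketch (not the group's actual model) of a multi-modal
# trajectory predictor, assuming 2-D (x, y) agent positions.
import torch
import torch.nn as nn

class MultiModalPredictor(nn.Module):
    def __init__(self, hidden=64, horizon=12, num_modes=3):
        super().__init__()
        self.horizon, self.num_modes = horizon, num_modes
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # One head regresses K candidate futures; another scores each mode.
        self.traj_head = nn.Linear(hidden, num_modes * horizon * 2)
        self.mode_head = nn.Linear(hidden, num_modes)

    def forward(self, history):                # history: (B, T_obs, 2)
        _, (h, _) = self.encoder(history)      # h: (1, B, hidden)
        h = h.squeeze(0)                       # (B, hidden)
        trajs = self.traj_head(h).view(-1, self.num_modes, self.horizon, 2)
        probs = torch.softmax(self.mode_head(h), dim=-1)
        return trajs, probs                    # (B, K, T_fut, 2), (B, K)

model = MultiModalPredictor()
obs = torch.randn(8, 8, 2)                     # 8 agents, 8 observed steps
futures, mode_probs = model(obs)
print(futures.shape, mode_probs.shape)         # (8, 3, 12, 2) and (8, 3)
```

A downstream planner could then reason over all K weighted futures rather than a single point forecast, which is what makes such a predictor useful for interaction-aware decision making.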
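The calibrated-uncertainty avenue can be illustrated with split conformal prediction, one standard statistical technique for the purpose: given residuals on held-out calibration data, a finite-sample-corrected quantile yields a radius that covers the true value with a prescribed probability. The predictor and data below are placeholders assumed purely for this sketch.

```python
# A minimal sketch of split conformal prediction for calibrating a
# point predictor's error, assuming held-out calibration data.
import numpy as np

def conformal_radius(residuals, alpha=0.1):
    """Radius r such that, under exchangeability, the true value lies
    within r of the prediction with probability >= 1 - alpha."""
    n = len(residuals)
    # Finite-sample-corrected quantile level for split conformal.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(residuals, level)

rng = np.random.default_rng(0)
y_true = rng.normal(size=500)
y_pred = y_true + rng.normal(scale=0.3, size=500)  # imperfect predictor
r = conformal_radius(np.abs(y_true - y_pred), alpha=0.1)
print(f"90% prediction interval: prediction +/- {r:.3f}")
```

The appeal of this family of techniques is that the coverage guarantee holds regardless of how the underlying model was trained, which is what allows calibrated uncertainty to be propagated through a heterogeneous stack.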
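For the run-time monitors mentioned under assured autonomy, a common baseline is to flag inputs whose feature embeddings lie far from the training distribution; the sketch below uses a Mahalanobis-distance rule with a percentile-calibrated threshold. This is an assumed, generic technique shown for illustration, not the group’s specific method.

```python
# A minimal sketch of a run-time monitor that flags out-of-distribution
# inputs via Mahalanobis distance in a feature space. The Gaussian fit
# and threshold rule are illustrative assumptions.
import numpy as np

class MahalanobisMonitor:
    def fit(self, feats, percentile=99.0):
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.prec = np.linalg.inv(cov)
        # Calibrate the alarm threshold on in-distribution data.
        self.threshold = np.percentile(self._dist(feats), percentile)
        return self

    def _dist(self, feats):
        diff = feats - self.mu
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.prec, diff))

    def flag(self, feats):
        """True where an input looks anomalous and downstream logic
        should fall back to a conservative behavior."""
        return self._dist(feats) > self.threshold

rng = np.random.default_rng(1)
monitor = MahalanobisMonitor().fit(rng.normal(size=(1000, 16)))
ood = rng.normal(loc=5.0, size=(4, 16))            # shifted inputs
print(monitor.flag(ood))                            # expect [True ...]
```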

In pursuit of this research agenda, we leverage and combine expertise from a number of fields, including optimal control, decision making, reinforcement learning, deep learning, robotics, computer vision, and formal methods.

Our research efforts are enriched by a close collaboration with NVIDIA’s AV product team.

Publications

Quickly discover relevant content by filtering publications.