Welcome to the homepage of the NVIDIA Seattle Robotics Lab, led by Professor Dieter Fox. Founded in 2017, our research group is based in Seattle.
The lab's charter is to drive breakthrough robotics research that enables the next generation of robots: robots that perform complex manipulation tasks, work safely alongside humans, and transform industries such as manufacturing, logistics, and healthcare.
Enabling the next generation of robots requires progress in multiple areas of research, including robot control and reinforcement learning, computer vision, human-robot interaction, deep learning, and physics-based simulation. The Seattle Robotics Lab (SRL) brings together experts from these disciplines to work toward the joint goal of robots that can interact with the physical world and collaborate with people. Here are some of our key research areas:
To ground our research, we integrate individual components into large-scale manipulation systems that solve complex tasks in the real world. Building complete systems also lets us probe the boundaries of what is possible and develop prototypes for commercial applications of manipulation systems.
Simulation offers several key benefits for robotics, and we work closely with our simulation experts to develop simulators that are well suited to robotic tasks. We investigate how to leverage the large-scale parallelization, photo-realism, and physical accuracy of modern simulators. We also use simulators to automatically annotate data and to provide demonstrations for tasks that cannot yet be solved in the real world. Sim2Real transfer and learning representations from large-scale data are other key areas of investigation. Furthermore, simulation enables controlled benchmarking, which speeds up progress in robotics research.
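To make the idea of large-scale parallel simulation with randomized physics (the core of many Sim2Real pipelines) concrete, here is a minimal NumPy sketch. It is an illustration only, not the lab's actual simulator: it vectorizes a toy 1-D point-mass rollout over many environments, each with its own randomly sampled mass and friction, the way domain randomization perturbs dynamics parameters across parallel environments.

```python
import numpy as np

def rollout_batch(masses, frictions, force=1.0, dt=0.01, steps=100):
    """Simulate a batch of 1-D point masses in parallel (vectorized over envs).

    Each environment has its own randomized mass and friction coefficient,
    mimicking domain randomization for sim-to-real transfer.
    """
    vel = np.zeros_like(masses)
    pos = np.zeros_like(masses)
    for _ in range(steps):
        accel = (force - frictions * vel) / masses  # F = m*a with viscous friction
        vel += accel * dt
        pos += vel * dt
    return pos

rng = np.random.default_rng(0)
n_envs = 1024                                # many environments stepped at once
masses = rng.uniform(0.5, 2.0, n_envs)       # randomized dynamics parameters
frictions = rng.uniform(0.0, 0.5, n_envs)
final_positions = rollout_batch(masses, frictions)
```

In a real GPU-based simulator the same pattern applies, but the batched physics step runs on device, so thousands of environments can be simulated in parallel for data generation or reinforcement learning.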
Vision is crucial for providing the necessary context for manipulation tasks. We develop novel techniques for object detection and tracking, semantic understanding, and large-scale pre-training of robust representations of objects, states, and tasks. Beyond vision, we investigate multi-modal reasoning that combines touch, force, RGB, and depth. Language understanding is important for teaching and instructing robots, as well as for providing a structural prior for higher-level reasoning.
We investigate how to leverage GPU-based parallelism for highly efficient planning and model-predictive control, as well as task and motion planning to solve complex, long-horizon manipulation tasks. Learning manipulation skills and behaviors, both from a few real-world demonstrations and from large-scale simulated experience, is another important area of our research.
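The parallelism mentioned above is what makes sampling-based model-predictive control practical: many candidate action sequences are rolled out simultaneously and the best one is executed. The following is a minimal random-shooting MPC sketch on a toy 1-D double-integrator, written in NumPy for clarity; it is a simplified stand-in for the GPU-parallel controllers the lab develops, and the dynamics, cost, and parameters are illustrative assumptions.

```python
import numpy as np

def dynamics(states, actions, dt=0.1):
    """Toy 1-D double integrator: state = (position, velocity), action = acceleration."""
    pos, vel = states[..., 0], states[..., 1]
    return np.stack([pos + vel * dt, vel + actions * dt], axis=-1)

def mpc_random_shooting(state, goal, horizon=20, n_samples=512, rng=None):
    """Sample many action sequences, roll them all out in parallel (vectorized,
    standing in for GPU batching), and return the first action of the
    lowest-cost sequence -- the receding-horizon MPC loop's inner step."""
    if rng is None:
        rng = np.random.default_rng(0)
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    states = np.broadcast_to(state, (n_samples, 2)).copy()
    cost = np.zeros(n_samples)
    for t in range(horizon):
        states = dynamics(states, actions[:, t])
        cost += (states[:, 0] - goal) ** 2 + 0.1 * states[:, 1] ** 2
    best = np.argmin(cost)
    return actions[best, 0]

# One control step: drive the system from rest at 0 toward position 1.
action = mpc_random_shooting(np.array([0.0, 0.0]), goal=1.0)
```

In practice the controller re-plans at every timestep, and more refined samplers (e.g., iterative, distribution-updating schemes) replace pure random shooting, but the batched-rollout structure is the same.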
We aim to develop manipulators that can operate safely alongside people. Fluent, real-time interaction requires advances in perception, prediction, and behavior generation. We investigate language grounding; tracking of human bodies, hands, and objects; and predictive control techniques.