Multi-view Fusion for Multi-level Robotic Scene Understanding

We present a system for multi-level scene awareness for robotic manipulation. Given a sequence of camera-in-hand RGB images, the system computes three levels of information: (1) a point-cloud representation of all surfaces in the scene, for obstacle avoidance; (2) a rough pose estimate for unknown objects belonging to primitive-shape categories (e.g., cuboids and cylinders); and (3) the full 6-DoF pose of known objects. By developing and fusing recent techniques in these domains, we provide a rich scene representation for robot awareness. We demonstrate the importance of each of these modules, their complementary nature, and the potential benefits of the system in the context of robotic manipulation.
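To make the three-level output concrete, here is a minimal sketch of a data structure that could hold such a fused scene representation. All class and field names are hypothetical illustrations, not taken from the paper; poses are modeled as 4x4 homogeneous transforms.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class PrimitiveShape:
    """Rough pose for an unknown object from a primitive-shape category."""
    category: str           # hypothetical label, e.g. "cuboid" or "cylinder"
    pose: np.ndarray        # 4x4 homogeneous transform (rough estimate)
    extents: np.ndarray     # coarse dimensions, e.g. (width, depth, height)

@dataclass
class KnownObject:
    """Full 6-DoF pose for an object with a known model."""
    model_id: str
    pose: np.ndarray        # 4x4 homogeneous transform

@dataclass
class SceneRepresentation:
    """Multi-level scene state fused from a sequence of RGB views."""
    point_cloud: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))
    primitives: list = field(default_factory=list)
    known_objects: list = field(default_factory=list)

# Example: a scene with surface geometry, one unknown cylinder,
# and one known object (all values illustrative)
scene = SceneRepresentation(
    point_cloud=np.random.rand(1000, 3),
    primitives=[PrimitiveShape("cylinder", np.eye(4),
                               np.array([0.05, 0.05, 0.12]))],
    known_objects=[KnownObject("mug_01", np.eye(4))],
)
```

A downstream planner could then query `scene.point_cloud` for collision checking while using the per-object poses for grasp selection.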

Authors: 
Yunzhi Lin (NVIDIA, Georgia Institute of Technology)
Patricio A. Vela (Georgia Institute of Technology)
Publication Date: 
Wednesday, September 15, 2021