DeXtreme: Transfer of Agile In-Hand Manipulation from Simulation to Reality


Recent work has demonstrated the ability of deep reinforcement learning (RL) algorithms to learn complex robotic behaviours in simulation, including in the domain of multi-fingered manipulation. However, such models can be challenging to transfer to the real world due to the gap between simulation and reality. In this paper, we present our techniques to train a) a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand and b) a robust pose estimator suitable for providing reliable real-time information on the state of the object being manipulated. Our policies are trained to adapt to a wide range of conditions in simulation. Consequently, our vision-based policies significantly outperform the best vision policies in the literature on the same reorientation task and are competitive with policies that are given privileged state information via motion-capture systems. Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups — in our case, with the Allegro Hand and Isaac Gym GPU-based simulation. Furthermore, it opens up possibilities for researchers to achieve such results with commonly available, affordable robot hands and cameras. Videos of the resulting policy and supplementary information, including experiments and demos, can be found at dextreme.org.

Authors

Arthur Allshire (NVIDIA)
Viktor Makoviychuk (NVIDIA)
Aleksei Petrenko (Apple)
Ritvik Singh (NVIDIA)
Jingzhou Liu (NVIDIA)
Denys Makoviichuk (Snap)
Alexander Zhurkevich (NVIDIA)
Jean-Francois Lafleche (NVIDIA)
Gavriel State (NVIDIA)
