UniCon: Universal Neural Controller For Physics-based Character Motion

Tingwu Wang1,2,3
Yunrong Guo1
Maria Shugrina1,2,3
Sanja Fidler1,2,3
Nvidia1    &    University of Toronto2   &   Vector Institute3



The field of physics-based animation is gaining importance due to the increasing demand for realism in video games and films, and has recently seen wide adoption of data-driven techniques, such as deep reinforcement learning (RL), which learn control from (human) demonstrations. While RL has shown impressive results at reproducing individual motions and interactive locomotion, existing methods are limited in their ability to generalize to new motions and their ability to compose a complex motion sequence interactively. In this paper, we propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets. UniCon is a two-level framework that consists of a high-level motion scheduler and an RL-powered low-level motion executor, which is our key innovation. By systematically analyzing existing multi-motion RL frameworks, we introduce a novel objective function and training techniques which make a significant leap in performance. Once trained, our motion executor can be combined with different high-level schedulers without the need to retrain, enabling a variety of real-time interactive applications. We show that UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar. Numerical and qualitative results demonstrate a significant improvement in efficiency, robustness and generalizability of UniCon over prior state-of-the-art.


Paper

[Paper 8.2MB]          

Citation [Bibtex] [Arxiv]
 
UniCon: Universal Neural Controller For Physics-based Character Motion

Tingwu Wang, Yunrong Guo, Maria Shugrina and Sanja Fidler.



Result Video and Overview

30-second teaser video

Full 4-min demo


Overview Figure: Our model consists of (right) an RL-powered low-level motion executor that physically animates a given (not necessarily physically plausible) sequence of target motion frames, and (left) a variety of high-level motion schedulers that produce those target frames and can be paired with the same executor.
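To make the split concrete, the sketch below shows the two-level interface in Python. It is a minimal illustration under assumed names (MotionScheduler, MotionExecutor, control_loop), not the actual UniCon implementation; the paper describes the real architectures and training objective.

```python
# Minimal sketch of the two-level interface (hypothetical names, not the
# released implementation): a scheduler proposes kinematic target frames,
# and the RL-trained executor turns them into actions inside the simulator.
import numpy as np


class MotionScheduler:
    """High-level module: emits the next target motion frame."""

    def next_target(self, sim_state: np.ndarray) -> np.ndarray:
        raise NotImplementedError


class MotionExecutor:
    """Low-level RL policy: maps (simulated state, target frame) to an action."""

    def __init__(self, policy):
        self.policy = policy  # trained policy network, e.g. outputs PD targets

    def act(self, sim_state: np.ndarray, target_frame: np.ndarray) -> np.ndarray:
        obs = np.concatenate([sim_state, target_frame])
        return self.policy(obs)


def control_loop(env, scheduler, executor, num_steps):
    """Run one episode: any scheduler can drive the same trained executor."""
    state = env.reset()
    for _ in range(num_steps):
        target = scheduler.next_target(state)
        action = executor.act(state, target)
        state = env.step(action)
```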

Zero-shot Robustness

In this project, we introduce zero-shot robustness: the agent never sees the perturbation or retargeting information during training, and is then asked to perform tasks under perturbations or with humanoid models of different masses. We argue that a robust controller that can cope with unseen perturbations and retargeting problems has much broader potential for real-life applications. We evaluate three settings (a minimal sketch of how they can be set up follows the list):
(1) Zero-shot Speed Adaptation
(2) Zero-shot Projectile Resistance
(3) Zero-shot Model Retargeting
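The sketch below illustrates, with hypothetical helper functions, how the three perturbation settings could be set up at evaluation time; none of these perturbations is applied during training.

```python
# Sketch of the three zero-shot evaluation perturbations (hypothetical
# helpers; the policy is trained without any of them).
import numpy as np


def retime_reference(frames: np.ndarray, fps: float, speed_ratio: float) -> np.ndarray:
    """Speed adaptation: resample the reference motion; ratio > 1 plays it faster."""
    num_frames, num_dofs = frames.shape
    old_t = np.arange(num_frames) / fps
    new_t = np.arange(0.0, old_t[-1], speed_ratio / fps)
    # Linear interpolation per degree of freedom (a full pipeline would
    # interpolate joint rotations on the quaternion manifold instead).
    return np.stack([np.interp(new_t, old_t, frames[:, d]) for d in range(num_dofs)], axis=1)


def should_throw_projectile(step: int, interval: int) -> bool:
    """Projectile resistance: launch a projectile every `interval` time-steps."""
    return step % interval == 0


def rescale_body_masses(link_masses: np.ndarray, scale: float) -> np.ndarray:
    """Model retargeting: evaluate the same policy on a heavier or lighter humanoid."""
    return link_masses * scale
```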



This table compares the zero-shot robustness of UniCon with DeepMimic. "Speed" denotes experiments in which the reference motion is replayed at a modified speed ratio, "Proj" denotes experiments in which we vary how frequently projectiles are thrown at the agent (i.e., the number of time-steps between two consecutive projectiles), and "heavy" and "light" denote experiments using humanoid models with different masses. Each entry reports performance relative to the original, unperturbed setting.



Keyboard-Driven Interactive Control

We use a phase-functioned neural network (PFNN) to process the keyboard commands and generate future target states. One can control the agent's walking direction and choose the style of movement, e.g. walking, jogging or crouching.
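As a rough illustration, a keyboard scheduler only needs to expose the same next_target interface as the sketch above; here the trained PFNN is stood in for by a hypothetical pfnn_predict callable, and the key bindings are assumptions.

```python
# Sketch of a keyboard-driven scheduler (hypothetical names; `pfnn_predict`
# stands in for the trained phase-functioned network). It exposes the same
# next_target interface as the MotionScheduler sketched above.
import numpy as np


class KeyboardScheduler:
    STYLES = ("walk", "jog", "crouch")

    def __init__(self, pfnn_predict):
        self.pfnn_predict = pfnn_predict  # (state, phase, goal) -> (next frame, next phase)
        self.phase = 0.0
        self.direction = np.zeros(2)      # desired heading from the arrow keys
        self.style = "walk"

    def on_key(self, key: str):
        """Update the control goal from a key press."""
        headings = {"up": (0.0, 1.0), "down": (0.0, -1.0),
                    "left": (-1.0, 0.0), "right": (1.0, 0.0)}
        if key in headings:
            self.direction = np.asarray(headings[key])
        elif key in self.STYLES:
            self.style = key

    def next_target(self, sim_state: np.ndarray) -> np.ndarray:
        goal = np.concatenate([self.direction, [self.STYLES.index(self.style)]])
        frame, self.phase = self.pfnn_predict(sim_state, self.phase, goal)
        return frame
```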




Snapshots of the keyboard-driven interactive control application. On the left are the target agent states; on the right, the yellow agents are the ones physically simulated by our algorithm. Note that our controller responds to keyboard commands in real time.

Interactive Video-Controlled Animation

We show how our algorithm can teleport, in real time, the motion captured from a remote host onto its physics-based avatar in the simulated environment.
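Conceptually, the video-driven application is just one more scheduler wrapped around a streaming pose estimator. The sketch below uses hypothetical camera, pose_estimator and retarget_to_humanoid components and is only meant to show how the pieces connect.

```python
# Sketch of video-driven teleportation (hypothetical components: `camera`
# yields frames from the remote host, `pose_estimator` maps a frame to 3D
# joints, `retarget_to_humanoid` converts them to the character's target
# frame format). Exposes the same next_target interface as above.
class VideoScheduler:
    def __init__(self, camera, pose_estimator, retarget_to_humanoid):
        self.camera = camera
        self.pose_estimator = pose_estimator
        self.retarget = retarget_to_humanoid

    def next_target(self, sim_state):
        image = self.camera.read()              # latest frame from the remote host
        joints_3d = self.pose_estimator(image)  # estimated human pose
        return self.retarget(joints_3d)         # kinematic target frame for the executor
```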




In the snapshots, we show how our controller reacts in real time to the remote host captured by a camera, successfully reproducing waving, walking, turning and jumping behaviors.

Interactive Motion Stitching

We also show results where we randomly select a motion from a motion dataset, to which our algorithm responds in real time.
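A minimal sketch of such a stitching scheduler is given below, assuming a hypothetical dataset format (a list of clips, each a sequence of target frames); when the clip switches, it is the physics-based executor that has to produce a plausible transition.

```python
# Sketch of interactive motion stitching (hypothetical dataset format: a list
# of clips, each an array of target frames). A new clip can be sampled at any
# time; the executor handles the physical transition between clips.
import random


class StitchingScheduler:
    def __init__(self, motion_dataset):
        self.dataset = motion_dataset
        self.clip = random.choice(self.dataset)
        self.t = 0

    def switch_clip(self):
        """Randomly pick a new (possibly unseen) motion to follow."""
        self.clip = random.choice(self.dataset)
        self.t = 0

    def next_target(self, sim_state):
        frame = self.clip[min(self.t, len(self.clip) - 1)]
        self.t += 1
        return frame
```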




Snapshots of interactive motion stitching with unseen motions. Our controller reacts in real time to generate getting-up, walking and boxing behaviors.

Numerical Comparison


Training performance of our algorithm and the baselines, measured as the average sum of rewards per episode. Our algorithm achieves better sample efficiency and higher performance.


Testing and transfer-learning performance of our algorithm and the baselines. UniCon performs better, indicating that it learns transferable features and does not overfit.

Last update: September 2020
web page template: this