Synthesizing Physical Character-Scene Interactions
Mohamed Hassan, Yunrong Guo, Tingwu Wang, Michael Black, Sanja Fidler, Xue Bin Peng
ACM SIGGRAPH 2023 Conference Proceedings
[Paper]

Our framework enables physically simulated characters to perform scene interaction tasks in a natural and life-like manner. We demonstrate the effectiveness of our approach through three challenging scene interaction tasks: carrying, sitting, and lying down, which require coordination of a character's movements in relation to objects in the environment.
Movement is how people interact with and affect their environment. For realistic character animation, it is necessary to synthesize such interactions between virtual characters and their surroundings. Despite recent progress in character animation using machine learning, most systems focus on controlling an agent's movements in fairly simple and homogeneous environments, with limited interactions with other objects. Furthermore, many previous approaches that synthesize human-scene interactions require significant manual labeling of the training data. In contrast, we present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters that perform scene interaction tasks in a natural and life-like manner. Our method learns scene interaction behaviors from large unstructured motion datasets, without manual annotation of the motion data. These scene interactions are learned using an adversarial discriminator that evaluates the realism of a motion within the context of a scene. The key novelty involves conditioning both the discriminator and the policy networks on scene context. We demonstrate the effectiveness of our approach through three challenging scene interaction tasks: carrying, sitting, and lying down, which require coordination of a character's movements in relation to objects in the environment. Our policies learn to seamlessly transition between different behaviors like idling, walking, and sitting. By randomizing the properties of the objects and their placements during training, our method is able to generalize beyond the objects and scenarios depicted in the training dataset, producing natural character-scene interactions for a wide variety of object shapes and placements.
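To make the core idea concrete, the sketch below illustrates a discriminator that judges motion realism conditioned on scene context, in the spirit of adversarial motion priors. This is a minimal illustration under our own assumptions, not the authors' implementation: the network sizes, feature layouts (transition and scene feature vectors), and the least-squares objective and reward shaping are all illustrative choices.

```python
import torch
import torch.nn as nn

class SceneConditionedDiscriminator(nn.Module):
    """Scores a character state transition together with scene features,
    so 'realism' is judged relative to the surrounding objects."""
    def __init__(self, transition_dim: int, scene_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(transition_dim + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # unbounded real/fake score
        )

    def forward(self, transition: torch.Tensor, scene: torch.Tensor) -> torch.Tensor:
        # transition: features of (s_t, s_{t+1}); scene: object pose/shape
        # expressed relative to the character (an assumption in this sketch).
        return self.net(torch.cat([transition, scene], dim=-1))

def discriminator_loss(disc, real_trans, real_scene, fake_trans, fake_scene):
    # Least-squares GAN objective commonly paired with adversarial motion
    # priors: push reference-motion samples toward +1, policy rollouts toward -1.
    real = disc(real_trans, real_scene)
    fake = disc(fake_trans, fake_scene)
    return ((real - 1.0) ** 2).mean() + ((fake + 1.0) ** 2).mean()

def style_reward(disc, trans, scene):
    # One common way to convert the discriminator score into a policy reward.
    with torch.no_grad():
        score = disc(trans, scene)
    return torch.clamp(1.0 - 0.25 * (score - 1.0) ** 2, min=0.0)
```

In such a setup, the same scene features would also be appended to the policy's observations, and object shapes and placements would be randomized each episode, matching the generalization strategy described above.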
For feedback and questions please reach out to Xue Bin Peng.
If you find this work useful for your research, please consider citing it as:
@inproceedings{InterPhysHassan2023,
  author    = {Hassan, Mohamed and Guo, Yunrong and Wang, Tingwu and Black, Michael and Fidler, Sanja and Peng, Xue Bin},
  title     = {Synthesizing Physical Character-Scene Interactions},
  year      = {2023},
  isbn      = {9798400701597},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3588432.3591525},
  doi       = {10.1145/3588432.3591525},
  booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
  articleno = {63},
  numpages  = {9},
  keywords  = {character animation, reinforcement learning, unsupervised reinforcement learning, adversarial imitation learning},
  location  = {Los Angeles, CA, USA},
  series    = {SIGGRAPH '23}
}