Compositional Video Synthesis with Action Graphs

Publication
International Conference on Machine Learning

Abstract

Videos of actions are complex signals, containing rich compositional structure. Current video generation models are limited in their ability to generate such videos. To address this challenge, we introduce a generative model (AG2Vid) that can be conditioned on an Action Graph, a structure that naturally represents the dynamics of actions and interactions between objects. Our AG2Vid model disentangles appearance and position features, allowing for more accurate generation. AG2Vid is evaluated on the CATER and Something-Something datasets and outperforms other baselines. Finally, we show how Action Graphs can be used for generating novel compositions of actions.
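To make the Action Graph idea concrete, here is a minimal illustrative sketch in Python of such a structure: objects as nodes and timed actions as edges, so that the set of actions active at a given frame can condition generation. The class and field names (`ActionGraph`, `ActionEdge`, `actions_at`) are hypothetical, not the paper's actual API, and the example actions mimic CATER-style clips.

```python
# Hypothetical sketch of an Action Graph (not the authors' implementation):
# nodes are scene objects, each edge is an action with a start/end frame.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ActionEdge:
    source: str  # acting object
    target: str  # object acted upon (may equal source)
    action: str  # action label, e.g. "slide", "rotate"
    start: int   # first frame of the action
    end: int     # last frame of the action

@dataclass
class ActionGraph:
    objects: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def add_action(self, source, target, action, start, end):
        self.objects.update({source, target})
        self.edges.append(ActionEdge(source, target, action, start, end))

    def actions_at(self, t):
        """Return the actions active at frame t."""
        return [e for e in self.edges if e.start <= t <= e.end]

# Example: a cone slides toward a ball, then rotates in place.
g = ActionGraph()
g.add_action("cone", "ball", "slide", start=0, end=15)
g.add_action("cone", "cone", "rotate", start=16, end=30)
```

Composing novel videos then amounts to adding new edges (action, timing) over the same objects, which is the compositionality the paper exploits.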

Cite the paper

If you use the contents of this project, please cite our paper:

@article{bar2020compositional,
  title={Compositional video synthesis with action graphs},
  author={Bar, Amir and Herzig, Roei and Wang, Xiaolong and Chechik, Gal and Darrell, Trevor and Globerson, Amir},
  journal={arXiv preprint arXiv:2006.15327},
  year={2020}
}