View Generalization for Single Image Textured 3D Models
Published in CVPR 2021
Recommended citation: Anand Bhattad, Aysegul Dundar, Guilin Liu, Andrew Tao, Bryan Catanzaro, "View Generalization for Single Image Textured 3D Models," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Abstract
Humans can easily infer the underlying 3D geometry and texture of an object from only a single 2D image. Current computer vision methods can do this, too, but suffer from view generalization problems: the models inferred tend to make poor predictions of appearance in novel views. As with generalization problems in machine learning, the difficulty is balancing single-view accuracy (cf. training error; bias) with novel-view accuracy (cf. test error; variance). We describe a class of models whose geometric rigidity is easily controlled to manage this tradeoff. We describe a cycle consistency loss that improves view generalization (roughly, a model inferred from a generated view should predict the original view well). View generalization of textures requires that models share texture information, so a car seen from the back still has headlights because other cars have headlights. We describe a second cycle consistency loss that encourages model textures to be aligned, so as to encourage this sharing. We compare our method against the state-of-the-art method and show both qualitative and quantitative improvements.
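The view cycle consistency idea above (a model inferred from a generated novel view should still predict the original view) can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's code: here a "model" is just a parameter vector, `render` is a view-dependent linear projection standing in for a differentiable renderer, and `infer` is a least-squares pseudo-inverse standing in for the inference network.

```python
import numpy as np

def render(model, view_matrix):
    """Render an 'image' of the model from a given view.

    Toy stand-in for a differentiable renderer: the view is a
    projection matrix and the image is its product with the model.
    """
    return view_matrix @ model

def infer(image, view_matrix):
    """Recover model parameters from an image (toy stand-in for the
    inference network): least-squares via the pseudo-inverse."""
    return np.linalg.pinv(view_matrix) @ image

def cycle_consistency_loss(model, original_view, novel_view):
    """Render a novel view, re-infer a model from that rendering, and
    penalize (L2) how badly the re-inferred model predicts the
    original view."""
    novel_image = render(model, novel_view)
    model_from_novel = infer(novel_image, novel_view)
    reconstruction = render(model_from_novel, original_view)
    target = render(model, original_view)
    return float(np.sum((reconstruction - target) ** 2))
```

When the novel view preserves all model information (e.g. a full-rank square projection), the loss is zero; when the novel view discards a dimension, the re-inferred model can no longer explain the original view and the loss is positive, which is the signal the real training objective exploits.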
Citation
@article{Bhattad2021View,
  author  = {Anand Bhattad and Aysegul Dundar and Guilin Liu and Andrew Tao and Bryan Catanzaro},
  title   = {View Generalization for Single Image Textured 3D Models},
  journal = {arXiv},
  year    = {2021},
}
Acknowledgements
We thank David A. Forsyth for insightful discussions. We also thank Yuxuan Zhang and Wenzheng Chen for providing us the baseline code of DIB-R.