DreamTeacher: Pretraining Image Backbones with Deep Generative Models


In this work, we introduce DreamTeacher, a self-supervised feature representation learning framework that utilizes generative networks for pre-training downstream image backbones. We propose to distill knowledge from a trained generative model into standard image backbones that have been well engineered for specific perception tasks. We investigate two types of knowledge distillation: 1) distilling learned generative features onto target image backbones as an alternative to pre-training these backbones on large labeled datasets such as ImageNet, and 2) distilling labels obtained from generative networks with task heads onto the logits of target backbones. We perform extensive analyses on multiple generative models, dense prediction benchmarks, and several pre-training regimes. We empirically find that DreamTeacher significantly outperforms existing self-supervised representation learning approaches across the board. Unsupervised ImageNet pre-training with DreamTeacher leads to significant improvements over ImageNet classification pre-training on downstream datasets, showcasing generative models, and diffusion generative models specifically, as a promising approach to representation learning on large, diverse datasets without requiring manual annotation.
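As a rough sketch of the two distillation objectives described above, the PyTorch snippet below illustrates 1) regressing a frozen generative teacher's intermediate features from a student backbone's multi-scale features, and 2) matching the student's logits to soft labels produced by the teacher's task head. The regressor design, layer pairing, temperature, and exact loss forms here are illustrative assumptions, not the paper's precise implementation.

```python
# Minimal sketch of DreamTeacher-style distillation (illustrative only;
# regressor design, layer choices, and loss weighting are assumptions,
# not the paper's exact recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureRegressor(nn.Module):
    """Projects backbone features into the generative teacher's feature space."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.proj(x)

def feature_distill_loss(student_feats, teacher_feats, regressors):
    # 1) Feature distillation: regress the frozen teacher's intermediate
    #    features from the student's multi-scale features.
    loss = 0.0
    for f_s, f_t, reg in zip(student_feats, teacher_feats, regressors):
        pred = reg(f_s)
        # Match spatial resolution before comparing feature maps.
        pred = F.interpolate(pred, size=f_t.shape[-2:],
                             mode="bilinear", align_corners=False)
        loss = loss + F.mse_loss(pred, f_t.detach())
    return loss

def label_distill_loss(student_logits, teacher_logits, T=2.0):
    # 2) Label distillation: match the student's logits to soft labels
    #    from the generative network's task head via KL divergence.
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```

In the feature-distillation setting, `teacher_feats` would come from intermediate activations of a frozen generative model (e.g., a diffusion model or GAN) run on unlabeled images, so no manual annotation is required for pre-training.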


Daiqing Li (NVIDIA)
Huan Ling (NVIDIA, Vector Institute, University of Toronto)
Amlan Kar (NVIDIA, Vector Institute, University of Toronto)
David Acuna (NVIDIA, Vector Institute, University of Toronto)
Seung Wook Kim (NVIDIA, Vector Institute, University of Toronto)
Antonio Torralba (MIT)
Sanja Fidler (NVIDIA, Vector Institute, University of Toronto)
