f-Domain-Adversarial Learning
Theory and Algorithms

David Acuna 1,2,3
Guojun Zhang 3,4
Marc T. Law 1
Sanja Fidler 1,2,3

1NVIDIA
2University of Toronto
3Vector Institute
4University of Waterloo
ICML 2021



Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a labeled dataset from a related source domain. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. It recovers the theoretical results of Ben-David et al. (2010a) as a special case and supports the divergences used in practice. Based on this bound, we derive a new algorithmic framework that introduces a key correction to the original adversarial training method of Ganin et al. (2016). We show that many of the regularizers and ad-hoc objectives introduced in this framework in recent years are then not required to achieve performance comparable to, if not better than, state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural language and computer vision datasets shows that our framework outperforms existing baselines, and achieves its best results with f-divergences that had not previously been considered in domain-adversarial learning.
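To make the idea concrete, below is a minimal sketch of how the variational characterization of an f-divergence can be turned into a domain-adversarial training signal, instantiated for the Pearson chi-squared divergence and combined with the gradient reversal layer of Ganin et al. (2016). This is an illustrative reconstruction under stated assumptions, not the authors' released implementation; names such as GradReverse, pearson_conjugate, and fdal_discrepancy are hypothetical.

# Sketch of a domain-adversarial objective built on the variational
# lower bound of an f-divergence:
#
#   D_f(P || Q) >= sup_T  E_{x~P}[ T(x) ] - E_{x~Q}[ f*(T(x)) ]
#
# instantiated here for the Pearson chi^2 divergence, whose convex
# conjugate is f*(t) = t^2/4 + t. All names below are illustrative,
# not the paper's released API.

import torch


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer (Ganin et al., 2016): identity on the
    forward pass, negated gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


def pearson_conjugate(t):
    # Convex conjugate f* of the Pearson chi^2 generator f(u) = (u - 1)^2.
    return 0.25 * t ** 2 + t


def fdal_discrepancy(critic, feats_src, feats_tgt):
    """Variational estimate of D_f between source and target features.

    The critic T is trained to maximize this bound while the feature
    extractor minimizes it; the gradient reversal layer realizes the
    min-max in a single backward pass."""
    t_src = critic(GradReverse.apply(feats_src))
    t_tgt = critic(GradReverse.apply(feats_tgt))
    return t_src.mean() - pearson_conjugate(t_tgt).mean()


# Toy usage: random tensors stand in for a feature extractor's output.
critic = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))
feats_src = torch.randn(32, 16, requires_grad=True)
feats_tgt = torch.randn(32, 16, requires_grad=True)
d_hat = fdal_discrepancy(critic, feats_src, feats_tgt)
# A full objective would be: source task loss - lambda * d_hat.
(-d_hat).backward()

Descending on -d_hat pushes the critic to tighten the bound, while the reversed gradients push the features toward shrinking the estimated divergence between domains.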




Paper

David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler

f-Domain-Adversarial Learning: Theory and Algorithms

ICML, 2021

[Preprint]
[Bibtex]
[Video]