FoVA-Depth: Field-of-View Agnostic Depth Estimation for Cross-Dataset Generalization

University of Maryland · NVIDIA
3DV 2024 (Oral)
Teaser Image

We train stereo depth estimation models only on widely available pinhole datasets and enable zero-shot generalization to images captured with different FoVs, including unseen camera models such as fisheye and 360° panoramas.

Abstract

Wide field-of-view (FoV) cameras efficiently capture large portions of the scene, which makes them attractive in multiple domains, such as automotive and robotics. For such applications, estimating depth from multiple images is a critical task, and therefore, a large amount of ground truth (GT) data is available. Unfortunately, most of the GT data is for pinhole cameras, making it impossible to properly train depth estimation models for large-FoV cameras. We propose the first method to train a stereo depth estimation model on the widely available pinhole data, and to generalize it to data captured with larger FoVs. Our intuition is simple: We warp the training data to a canonical, large-FoV representation and augment it to allow a single network to reason about diverse types of distortions that otherwise would prevent generalization. We show the strong generalization ability of our approach on both indoor and outdoor datasets, which was not possible with previous methods.
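
As a concrete illustration of the warping step, below is a minimal sketch in plain PyTorch that resamples a pinhole image onto a canonical equirectangular (360°) grid. This is not the released training code: the function name, intrinsics, and output resolution are placeholders, and the equirectangular target is just one possible choice of canonical large-FoV representation.

import torch
import torch.nn.functional as F

def pinhole_to_equirect(img, K, out_h=256, out_w=512):
    """Resample a pinhole image (1, 3, H, W) onto an equirectangular grid.

    K is the (3, 3) pinhole intrinsic matrix. Pixels whose viewing rays fall
    outside the pinhole FoV are left at zero.
    """
    device = img.device
    _, _, H, W = img.shape

    # Spherical coordinates for every pixel of the equirectangular target.
    lon = torch.linspace(-torch.pi, torch.pi, out_w, device=device)
    lat = torch.linspace(-torch.pi / 2, torch.pi / 2, out_h, device=device)
    lat, lon = torch.meshgrid(lat, lon, indexing="ij")

    # Unit viewing rays (the camera looks down +z).
    x = torch.cos(lat) * torch.sin(lon)
    y = torch.sin(lat)
    z = torch.cos(lat) * torch.cos(lon)
    rays = torch.stack([x, y, z], dim=-1)                  # (out_h, out_w, 3)

    # Project the rays with the pinhole intrinsics; keep only forward-facing rays.
    uvw = rays @ K.T
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)
    valid = (z > 0).unsqueeze(-1)

    # Normalize pixel coordinates to [-1, 1] and resample the source image.
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(img, grid.unsqueeze(0), align_corners=True)
    return warped * valid.permute(2, 0, 1).unsqueeze(0)

In the actual pipeline, both the images and the GT depth are warped to the canonical representation and then augmented; see the paper for the exact canonical models and augmentations used.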

Results

nvTorchCam

Alongside the project, we developed and released nvTorchCam, a library providing camera-agnostic 3D geometry functions. As showcased in FoVA-Depth, nvTorchCam is particularly useful for developing PyTorch models that leverage plane-sweep volumes (PSV) and related concepts such as sphere-sweep volumes or epipolar attention.
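
To make the plane-sweep idea concrete, here is a short, self-contained PyTorch sketch that builds a PSV for a pinhole stereo pair by warping source-view features onto fronto-parallel depth planes of the reference view. It is a generic illustration of the concept, not nvTorchCam's API; the function name and argument conventions are assumptions.

import torch
import torch.nn.functional as F

def plane_sweep_volume(src_feat, K_ref, K_src, R, t, depths):
    """Warp source features onto depth planes of the reference view.

    src_feat: (1, C, H, W) source-view features.
    K_ref, K_src: (3, 3) intrinsics; R, t: source pose w.r.t. the reference.
    depths: iterable of D candidate depths. Returns a (1, C, D, H, W) volume.
    """
    _, C, H, W = src_feat.shape
    device = src_feat.device

    # Homogeneous pixel grid of the reference view.
    v, u = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                          torch.arange(W, device=device, dtype=torch.float32),
                          indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)  # (3, H*W)

    volume = []
    for d in depths:
        # Back-project reference pixels to the plane at depth d, then project into the source view.
        pts_ref = torch.linalg.inv(K_ref) @ pix * d
        pts_src = K_src @ (R @ pts_ref + t.reshape(3, 1))
        uv = pts_src[:2] / pts_src[2:].clamp(min=1e-6)

        grid = torch.stack([2 * uv[0] / (W - 1) - 1,
                            2 * uv[1] / (H - 1) - 1], dim=-1).reshape(1, H, W, 2)
        volume.append(F.grid_sample(src_feat, grid, align_corners=True))
    return torch.stack(volume, dim=2)

A sphere-sweep volume follows the same pattern, except that the candidate surfaces are spheres around the reference camera and the projections use the wide-FoV camera model.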


Key features of nvTorchCam include:

  • An abstraction of camera projection models that facilitates implementation of camera-agnostic algorithms (see the sketch after this list);
  • Geometry functions that are fully differentiable, allowing for use cases such as optimization of camera parameters;
  • Support for heterogeneous batches of cameras, e.g., mixing pinhole, fisheye, 360° cameras within the same batch.
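
As a rough illustration of the first point, the snippet below shows a hypothetical camera interface with two models sharing a single pixel-to-ray method. This is not nvTorchCam's actual API; the class and method names are invented for illustration. Downstream geometry code that only touches the interface stays camera-agnostic, and the operations are plain tensor math, so gradients flow to the camera parameters.

from abc import ABC, abstractmethod

import torch

class Camera(ABC):
    """Hypothetical camera abstraction (illustration only, not nvTorchCam's API)."""

    @abstractmethod
    def pixel_to_ray(self, uv: torch.Tensor) -> torch.Tensor:
        """Map (..., 2) pixel coordinates to (..., 3) unit viewing rays."""

class PinholeCamera(Camera):
    def __init__(self, K: torch.Tensor):
        self.K = K  # (3, 3) intrinsics; differentiable if K requires grad

    def pixel_to_ray(self, uv):
        ones = torch.ones_like(uv[..., :1])
        rays = torch.cat([uv, ones], dim=-1) @ torch.linalg.inv(self.K).T
        return rays / rays.norm(dim=-1, keepdim=True)

class EquirectCamera(Camera):
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height

    def pixel_to_ray(self, uv):
        lon = (uv[..., 0] / self.width - 0.5) * 2 * torch.pi
        lat = (uv[..., 1] / self.height - 0.5) * torch.pi
        return torch.stack([torch.cos(lat) * torch.sin(lon),
                            torch.sin(lat),
                            torch.cos(lat) * torch.cos(lon)], dim=-1)

Handling heterogeneous batches then amounts to dispatching each batch element to its own Camera instance, or to a batched implementation of the same interface.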

Check out the library 👉👉 here!

BibTeX


@inproceedings{lichy2024fova,
  title     = {{FoVA-Depth}: {F}ield-of-View Agnostic Depth Estimation for Cross-Dataset Generalization},
  author    = {Lichy, Daniel and Su, Hang and Badki, Abhishek and Kautz, Jan and Gallo, Orazio},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = {2024}
}