GeoMan: Temporally Consistent Human Geometry Estimation using Image-to-Video Diffusion

¹Seoul National University  ²NVIDIA

(*Work done while at NVIDIA.)

GeoMan provides accurate and temporally stable geometric predictions for human videos, surpassing existing methods. Also, our root-relative depth representation preserves critical human size information, enabling metric depth estimation and 3D reconstruction.

Abstract

Estimating accurate and temporally consistent 3D human geometry from videos is a challenging problem in computer vision. Existing methods, primarily optimized for single images, often suffer from temporal inconsistencies and fail to capture fine-grained dynamic details. To address these limitations, we present GeoMan, a novel architecture designed to produce accurate and temporally consistent depth and normal estimates from monocular human videos. GeoMan addresses two key challenges: the scarcity of high-quality 4D training data and the need for metric depth estimation to accurately model human size. To overcome the first challenge, GeoMan employs an image-based model to estimate depth and normals for the first frame of a video, which then conditions a video diffusion model, reframing the video geometry estimation task as an image-to-video generation problem. This design offloads the heavy lifting of geometric estimation to the image model and lets the video model focus on intricate details while leveraging priors learned from large-scale video datasets. Consequently, GeoMan improves temporal consistency and generalizability while requiring minimal 4D training data. To address the challenge of accurate human size estimation, we introduce a root-relative depth representation that retains critical human-scale details and is easier to estimate from monocular inputs, overcoming the limitations of traditional affine-invariant and metric depth representations. GeoMan achieves state-of-the-art performance in both qualitative and quantitative evaluations, demonstrating its effectiveness in overcoming longstanding challenges in 3D human geometry estimation from videos.

Method


Overview of GeoMan: (a) Given a video sequence as input, we first use I2G to estimate the normal or depth of the first frame. This initial prediction then conditions the V2G model, which generates predictions for the entire input sequence. GeoMan seamlessly handles both depth and normal estimation with the same model weights, requiring only a swap of the first-frame input condition. (b) We propose a human-centered root-relative depth representation, which retains human-scale information and enables better temporal modeling.
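The intuition behind the root-relative representation can be illustrated with a minimal sketch (the helper names below are hypothetical and the paper's exact normalization may differ): subtracting the depth of a root joint, e.g. the pelvis, removes the unknown camera-to-subject distance while preserving the metric scale of the body itself, and metric depth is recovered simply by adding the root depth back.

```python
import numpy as np

def to_root_relative(depth, mask, root_depth):
    # Root-relative depth: metric depth minus the root joint's depth.
    # This discards the absolute camera distance but keeps human-scale
    # information, unlike affine-invariant (scale/shift-ambiguous) depth.
    return np.where(mask, depth - root_depth, 0.0)

def to_metric(rel_depth, mask, root_depth):
    # Metric depth is recovered by adding the root joint's depth back.
    return np.where(mask, rel_depth + root_depth, 0.0)
```

Given an estimate of the root joint's metric depth (e.g. from a human mesh recovery model), the round trip is lossless inside the human mask, which is what makes metric 3D reconstruction possible from this representation.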

Video with Narration

GeoMan provides accurate and temporally stable geometric predictions for human videos, surpassing existing methods.




Zero-Shot Normal Estimation

Compared to baseline methods, GeoMan achieves both high quality and temporal consistency.

In-the-Wild Videos

ActorsHQ Dataset




Zero-Shot Depth Estimation

Compared to baselines, GeoMan produces more temporally stable, higher-quality depth.
* To ensure consistency, the predicted depth maps in all visualizations are renormalized using sequence-wise min-max scaling within the human mask.
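The renormalization described above can be sketched as follows (a minimal illustration; `depths` and `masks` are assumed to be per-frame arrays stacked along the first axis): a single min/max is computed over the human region of the whole sequence, so the mapping to the visualization range is consistent across frames.

```python
import numpy as np

def renormalize_sequence(depths, masks, eps=1e-6):
    # Sequence-wise min-max scaling inside the human mask: one shared
    # (min, max) over all frames keeps the depth colormap consistent
    # across time. Background pixels are zeroed out.
    vals = depths[masks]                        # masked values, all frames
    lo, hi = float(vals.min()), float(vals.max())
    scaled = (depths - lo) / max(hi - lo, eps)
    return np.where(masks, np.clip(scaled, 0.0, 1.0), 0.0)
```

Per-frame scaling would instead flicker whenever the depth range changes between frames, which is why the sequence-wise variant is used for visualization.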

In-the-Wild Videos

ActorsHQ Dataset




Human Geometry Estimation on Long Video

Our method is trained to generate 12 frames at a time for efficiency, but it generalizes to geometry estimation on videos of arbitrary length.
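One plausible chunking scheme for such arbitrary-length inference is sketched below (hypothetical; `i2g` and `v2g` are placeholders for the actual models, and the paper may stitch chunks differently): the image model predicts geometry for frame 0, and each subsequent chunk is conditioned on the most recent prediction so temporal context carries across chunk boundaries.

```python
def infer_long_video(frames, i2g, v2g, chunk=12):
    # Geometry for frame 0 comes from the image model; each video-model
    # chunk overlaps the previous one by a single frame, whose prediction
    # serves as the conditioning "first frame" of the next chunk.
    cond = i2g(frames[0])
    if len(frames) == 1:
        return [cond]
    preds = []
    for start in range(0, len(frames), chunk - 1):  # overlap of 1 frame
        window = frames[start:start + chunk]
        if len(window) < 2:
            break
        out = v2g(window, first_frame_geometry=cond)
        preds.extend(out if not preds else out[1:])  # drop overlapped frame
        cond = out[-1]
    return preds
```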




Human Geometry Estimation on Multi-Human Video

Our method extends to multi-human geometry estimation even though it is trained only for single-human geometry estimation.




BibTeX

@misc{kim2025geoman,
      title={GeoMan: Temporally Consistent Human Geometry Estimation using Image-to-Video Diffusion},
      author={Gwanghyun Kim and Xueting Li and Ye Yuan and Koki Nagano and Tianye Li and Jan Kautz and Se Young Chun and Umar Iqbal},
      year={2025},
      eprint={2505.23085},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}