Pavlo Molchanov
NVIDIA
Interests
Machine Learning for CV
Latest
VILA: On pretraining for vision language models
PACE: Human and Camera Motion Estimation from in-the-wild Videos
RANA: Relightable and Articulated Neural Avatars
Global Context Vision Transformers
Global Vision Transformer Pruning with Hessian-Aware Saliency
Heterogeneous Continual Learning
Recurrence without Recurrence: Stable Video Landmark Detection with Deep Equilibrium Models
Structural Pruning via Latency-Saliency Knapsack
Towards Annotation-efficient Segmentation via Image-to-image Translation
LANA: Latency Aware Network Acceleration
Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks
A-ViT: Adaptive Tokens for Efficient Vision Transformer
GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras
GradViT: Gradient Inversion of Vision Transformers
When to Prune? A Policy towards Early Structural Pruning
DRaCoN--Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
KAMA: 3D Keypoint Aware Body Mesh Articulation
NViT: Vision Transformer Compression and Parameter Redistribution
Optimizing Selective Protection for CNN Resilience
Adversarial Motion Modelling Helps Semi-Supervised Hand Pose Estimation
DexYCB: A Benchmark for Capturing Hand Grasping of Objects
Optimal Quantization Using Scaled Codebook
See through Gradients: Image Batch Recovery via GradInversion
Weakly-Supervised 3D Human Pose Learning via Multi-view Images in the Wild