NVILA: Efficient Frontier Visual Language Models

Abstract

Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This “scale-then-compress” approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5×, fine-tuning memory usage by 3.4×, pre-filling latency by 1.6-2.2×, and decoding latency by 1.2-2.8×. We make our code and models available to facilitate reproducibility.
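To make the "scale-then-compress" idea concrete, the sketch below shows one plausible form of the compression step: after the vision encoder produces a grid of patch tokens at scaled-up resolution, the token count is reduced before the tokens reach the language model. This is a minimal illustration under stated assumptions; the pooling-based reduction, the function name, and all shapes are hypothetical and are not claimed to be NVILA's actual implementation.

```python
import torch
import torch.nn.functional as F

def compress_visual_tokens(tokens: torch.Tensor, grid: int, factor: int = 2) -> torch.Tensor:
    """Hypothetical token compressor: downsample a (B, grid*grid, C) sequence of
    patch tokens by `factor` along each spatial axis via average pooling,
    shrinking the token count by factor**2 before the LLM consumes it."""
    b, n, c = tokens.shape
    assert n == grid * grid, "tokens must form a square spatial grid"
    x = tokens.transpose(1, 2).reshape(b, c, grid, grid)  # (B, C, H, W)
    x = F.avg_pool2d(x, kernel_size=factor)               # (B, C, H/f, W/f)
    return x.flatten(2).transpose(1, 2)                   # (B, (grid/f)**2, C)

# Example: 1,024 tokens from a 32x32 patch grid -> 256 tokens after 2x2 pooling.
tokens = torch.randn(1, 32 * 32, 1152)
print(compress_visual_tokens(tokens, grid=32).shape)  # torch.Size([1, 256, 1152])
```

The payoff of this ordering is that scaling resolution first preserves fine detail, while compressing afterwards keeps the token sequence, and hence prefill and decode cost, short.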

Publication
Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)
Zhijian Liu
Senior Research Scientist

Senior Research Scientist at NVIDIA Research.

Ligeng Zhu
Senior Research Scientist

Senior Research Scientist at NVIDIA Research.

Yukang Chen
Senior Research Scientist

Senior Research Scientist at NVIDIA Research.

Song Han
Associate Professor

Song Han is an associate professor at MIT EECS.

Yao (Jason) Lu
Senior Research Scientist

Senior Research Scientist at NVIDIA Research.