TimeOmni-1: Incentivizing Complex Reasoning with Time Series in Large Language Models

Recent advances in multimodal time series learning underscore a paradigm shift from analytics centered on basic patterns toward advanced time series understanding and reasoning. However, existing multimodal time series datasets mostly remain at the level of surface alignment and question answering, without reaching the depth of genuine reasoning. The absence of well-defined tasks that genuinely require time series reasoning, along with the scarcity of high-quality data, has limited progress in building practical time series reasoning models (TSRMs).

Test-Time Alignment for Large Language Models via Textual Model Predictive Control

Aligning Large Language Models (LLMs) with human preferences through finetuning is resource-intensive, motivating lightweight alternatives at test time. We address test-time alignment through the lens of sequential decision making, a perspective that reveals two fundamental challenges. When actions are defined at the token level, as in guided decoding, alignment suffers from the curse of horizon. Conversely, when actions are at the response level, as in traditional iterative refinement, the curse of dimensionality emerges.

SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding

Speculative Decoding (SD) has emerged as a critical technique for accelerating Large Language Model (LLM) inference. Unlike deterministic system optimizations, SD performance is inherently data-dependent, meaning that diverse and representative workloads are essential for accurately measuring its effectiveness. Existing benchmarks suffer from limited task diversity, inadequate support for throughput-oriented evaluation, and a reliance on high-level implementations that fail to reflect production environments.

Elie Aljalbout

I'm a research scientist at the Seattle Robotics Lab. Prior to joining NVIDIA, I was working on world modeling for robotics at Meta FAIR. Before that, I was a postdoctoral researcher in Zurich, Switzerland, working on agile robotics. I completed my PhD at TU Munich while working as a research scientist at the Volkswagen Machine Learning Research Lab.

Dvir Samuel

Dvir Samuel joined NVIDIA Research as a Research Scientist in 2026. His main fields of interest are machine learning and generative modeling. In particular, he studies diffusion- and flow-matching methods for image and video generation and editing, with an emphasis on personalization, controllability, and learning under long-tailed data regimes.

Dvir completed his Ph.D. at Bar-Ilan University under the supervision of Prof. Gal Chechik. His research spans long-tail and few-shot learning, as well as modern generative approaches for visual content creation and manipulation.