Gian Marti

Gian Marti is a Research Scientist at NVIDIA. He earned his B.Sc. and M.Sc. degrees in Electrical Engineering from ETH Zurich in 2017 and 2019, respectively, and completed his Ph.D. there in 2025. From 2019 to early 2026, he worked at ETH Zurich’s Signal and Information Processing Laboratory and later at its Integrated Information Processing Group. He has also interned at ABB, Kistler, and NVIDIA.

Jef Packer

Jef Packer is a Research Engineer who has helped bring many projects from research to production. After receiving his B.S. and M.Eng. in Mechanical and Aerospace Engineering from UC Davis, he spent over a decade building autonomous driving systems. He started at Tesla, where he worked on Autopilot v1, then spent six years at Zoox progressing from firmware and classical planning to leading collision avoidance and ML-based planning research.

TimeOmni-1: Incentivizing Complex Reasoning with Time Series in Large Language Models

Recent advances in multimodal time series learning underscore a paradigm shift from analytics centered on basic patterns toward advanced time series understanding and reasoning. However, existing multimodal time series datasets mostly remain at the level of surface alignment and question answering, without reaching the depth of genuine reasoning. The absence of well-defined tasks that genuinely require time series reasoning, along with the scarcity of high-quality data, has limited progress in building practical time series reasoning models (TSRMs).

Test-Time Alignment for Large Language Models via Textual Model Predictive Control

Aligning Large Language Models (LLMs) with human preferences through finetuning is resource-intensive, motivating lightweight alternatives at test time. We address test-time alignment through the lens of sequential decision making, a perspective that reveals two fundamental challenges. When actions are defined at the token level, as in guided decoding, alignment suffers from the curse of horizon. Conversely, when actions are at the response level, as in traditional iterative refinement, the curse of dimensionality emerges.
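The trade-off between the two action granularities can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only and not from the paper; the vocabulary size and response length are assumed toy values.

```python
import math

# Assumed toy values, not taken from the paper.
vocab_size = 50_000    # hypothetical tokenizer vocabulary size
response_len = 100     # hypothetical response length in tokens

# Token-level actions (guided decoding): the horizon is response_len
# decision steps, each choosing among vocab_size tokens -- a long
# sequential problem ("curse of horizon").
token_level_horizon = response_len
token_level_actions_per_step = vocab_size

# Response-level actions (iterative refinement): a single decision step,
# but the action space is every possible response of that length,
# vocab_size ** response_len candidates ("curse of dimensionality").
response_level_horizon = 1
response_level_log10_actions = response_len * math.log10(vocab_size)

print(f"token-level: {token_level_horizon} steps, "
      f"{token_level_actions_per_step} actions per step")
print(f"response-level: {response_level_horizon} step, "
      f"~10^{response_level_log10_actions:.0f} possible actions")
```

Under these toy numbers, token-level decoding faces 100 sequential decisions of modest width, while a single response-level decision faces an action space of roughly 10^470 candidates, which is why neither granularity alone suffices.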