Ximing Lu

Ximing Lu is a research scientist on the Large Language Model (LLM) Research team at NVIDIA and a Ph.D. candidate at the University of Washington, advised by Professor Yejin Choi. She previously earned her B.S. degree in Computer Science at the University of Washington. Her research interests center on data synthesis, model architecture, the science of LLMs, commonsense reasoning, knowledge acquisition, and multimodality. She is a co-recipient of the Best Paper Award at NAACL 2022 and the Outstanding Paper Award at EMNLP 2023.

Shuran Song

Shuran received her Ph.D. in Computer Science from Princeton University and her B.Eng. from HKUST. Her research interests lie at the intersection of machine learning, computer vision, and robotics. Song’s research has been recognized through several awards, including the Best Paper Awards at RSS’22 and T-RO’20, the Best System Paper Awards at CoRL’21 and RSS’19, and finalist selections at RSS, ICRA, CVPR, and IROS. To learn more about Shuran’s work, please visit: https://shurans.github.io/

Cosmos World Foundation Model Platform for Physical AI

Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained

Julius Berner

Julius Berner is a research scientist in NVIDIA’s Fundamental Generative AI Research (GenAIR) group. He completed his postdoc at Caltech and received his Ph.D. from the University of Vienna in 2023. His research focuses on (probabilistic) machine learning with applications in the natural sciences, including generative modeling, sampling, and neural solvers for partial differential equations and inverse problems. More information can be found on his personal website.