Reinforcement Learning

ThinkAct: Vision-Language-Action Reasoning via Reinforced Visual Latent Planning

Vision-language-action (VLA) reasoning tasks require agents to interpret multimodal instructions, perform long-horizon planning, and act adaptively in dynamic environments. Existing approaches typically train VLA models in an end-to-end fashion, …
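The abstract is truncated here, so the following is only a generic sketch of what "reinforced visual latent planning" could look like, not ThinkAct's actual method: a planner samples a latent plan from fused vision-language features, an action head conditions on that plan, and the planner is updated with a REINFORCE-style gradient on task reward. All names (`LatentPlanner`, `ActionHead`), dimensions, and the reward signal below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentPlanner(nn.Module):
    # Hypothetical: maps fused vision-language features to a Gaussian latent plan.
    def __init__(self, obs_dim=128, plan_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, plan_dim)
        self.log_std = nn.Linear(64, plan_dim)

    def forward(self, feats):
        h = self.body(feats)
        dist = torch.distributions.Normal(
            self.mu(h), self.log_std(h).clamp(-5, 2).exp())
        plan = dist.sample()                      # sampled latent plan
        return plan, dist.log_prob(plan).sum(-1)  # log-prob for the RL update

class ActionHead(nn.Module):
    # Hypothetical: decodes low-level actions conditioned on features + plan.
    def __init__(self, obs_dim=128, plan_dim=32, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + plan_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, feats, plan):
        return self.net(torch.cat([feats, plan], dim=-1))

planner, head = LatentPlanner(), ActionHead()
opt = torch.optim.Adam(planner.parameters(), lr=3e-4)

feats = torch.randn(4, 128)     # stand-in for fused VLM features
plan, log_prob = planner(feats)
actions = head(feats, plan)     # would be executed in the environment
reward = torch.randn(4)         # stand-in for task reward from rollouts
# REINFORCE: increase the probability of latent plans that earned high reward.
loss = -(reward * log_prob).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

The key design point such a setup illustrates is that the RL signal shapes the *latent plan* rather than raw actions, which is what distinguishes latent planning from end-to-end action training mentioned in the abstract.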

Unified Reinforcement and Imitation Learning for Vision-Language Models

Vision-Language Models (VLMs) have achieved remarkable progress, yet their large scale often renders them impractical for resource-constrained environments. This paper introduces Unified Reinforcement and Imitation Learning (RIL), a novel and …
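The snippet cuts off before describing RIL's formulation, so the sketch below shows only the standard way a unified reinforcement-and-imitation objective is commonly written: a weighted sum of an imitation loss (e.g., cross-entropy against a larger teacher's outputs, matching the paper's compression motivation) and a policy-gradient loss. The function `unified_ril_loss`, the weight `lam`, and all tensor shapes are hypothetical, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def unified_ril_loss(student_logits, teacher_ids, log_probs, rewards, lam=0.5):
    """Generic combined objective: L = lam * L_imitation + (1 - lam) * L_rl.

    student_logits: (B, T, V) token logits from the small student VLM
    teacher_ids:    (B, T)    token ids from the large teacher (demonstrations)
    log_probs:      (B,)      log-prob of sampled student outputs
    rewards:        (B,)      scalar reward per sampled output
    """
    # Imitation term: token-level cross-entropy against teacher outputs.
    il = F.cross_entropy(student_logits.flatten(0, 1), teacher_ids.flatten())
    # RL term: REINFORCE with a mean baseline to reduce gradient variance.
    adv = rewards - rewards.mean()
    rl = -(adv.detach() * log_probs).mean()
    return lam * il + (1.0 - lam) * rl

# Toy shapes, purely illustrative.
B, T, V = 2, 5, 100
loss = unified_ril_loss(
    torch.randn(B, T, V, requires_grad=True),
    torch.randint(V, (B, T)),
    torch.randn(B, requires_grad=True),
    torch.randn(B))
loss.backward()
```

In this form `lam` trades off staying close to the teacher against optimizing for reward; the actual paper may weight, schedule, or structure the two terms differently.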