Minki Kang is a Research Intern at NVIDIA and a Ph.D. student at KAIST. Previously, he was a research intern at Microsoft Research. His primary research focuses on building efficient language-model and vision-language-model agents, particularly through model distillation and context compression. For more details about his research background, please visit his personal homepage.