Yejin Choi

Yejin Choi is a Distinguished Scientist of Language and Cognition Research at NVIDIA. Her current research focuses on large language models, large reasoning models, and alternative architectures. She is a MacArthur Fellow (class of 2022), was named among the Time100 Most Influential People in AI in 2023, and is a co-recipient of two Test-of-Time Awards (ACL 2021 and CVPR 2021) and eight Best and Outstanding Paper Awards at ACL, EMNLP, NAACL, ICML, NeurIPS, and AAAI.

VerilogCoder: Autonomous Verilog Coding Agents with Graph-based Planning and Abstract Syntax Tree (AST)-based Waveform Tracing Tool

Due to the growing complexity of modern Integrated Circuits (ICs), automating hardware design can eliminate a significant amount of human error from the engineering process and reduce design defects. Verilog is a popular hardware description language for designing and modeling digital systems; thus, Verilog generation is an emerging area of research aimed at facilitating the design process.

Large Language Model (LLM) for Standard Cell Layout Design Optimization

Standard cells are essential components of modern digital circuit designs. With process technologies advancing toward 2 nm, more routability issues have arisen due to the decreasing number of routing tracks, the increasing number and complexity of design rules, and strict patterning rules. The state-of-the-art standard cell design automation framework can automatically design standard cell layouts in advanced nodes, but it still struggles to generate highly competitive Performance-Power-Area (PPA) and routable cell layouts for complex sequential cell designs.

Kilometer-Scale Convection Allowing Model Emulation using Generative Diffusion Modeling

Storm-scale convection-allowing models (CAMs) are an important tool for predicting the evolution of thunderstorms and mesoscale convective systems that result in damaging extreme weather. By explicitly resolving convective dynamics within the atmosphere, they afford meteorologists the nuance needed to provide an outlook on these hazards. Deep learning models have thus far not proven skillful at km-scale atmospheric simulation, despite being competitive at coarser resolution with state-of-the-art global, medium-range weather forecasting.

GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators

Recent advances in large language models (LLMs) have propelled the development of multilingual speech and machine translation through reduced representation errors and the incorporation of external knowledge. However, both translation tasks typically rely on beam search decoding and top-1 hypothesis selection at inference time. These techniques struggle to fully exploit the rich information in the diverse N-best hypotheses, making them suboptimal for translation tasks that require a single, high-quality output sequence.
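
To make the limitation concrete, here is a minimal Python sketch (not the paper's pipeline) that decodes with beam search using an off-the-shelf Hugging Face translation model, keeps the N-best hypotheses, and then mimics conventional top-1 selection; the checkpoint name and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: beam search N-best decoding vs. conventional top-1 selection.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-de"  # illustrative checkpoint, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=5,              # beam search decoding
    num_return_sequences=5,   # keep the N-best hypotheses rather than only one
    early_stopping=True,
)

n_best = tokenizer.batch_decode(outputs, skip_special_tokens=True)
top_1 = n_best[0]             # conventional inference discards the other candidates
print("Top-1:", top_1)
print("N-best candidates:", n_best)
```

A generative approach like GenTranslate would instead condition on all N candidates and integrate their complementary information into a single, higher-quality translation, rather than discarding everything but the top-1 hypothesis.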

Ido Greenberg

Ido Greenberg is a Senior Research Scientist at NVIDIA's AI Research Lab in Tel Aviv.
His research focuses on making the extraordinary achievements of the reinforcement learning (RL) literature more applicable to real-world problems.

Ido completed his PhD in EE at the Technion, his MSc in Applied Math at Tel Aviv University, and his BSc in Math and Physics at The Hebrew University of Jerusalem, as part of the Talpiot program.

Hasan Nazim Genc

Hasan Genc's research focuses on DNN accelerators and agile hardware design methodologies. He has built open-source hardware implementations of DNN accelerators, helped create programming languages and sparsity formats for such accelerators, and built automated tools that help others design, evaluate, and generate accelerators. He has a PhD from the University of California, Berkeley, and a Bachelor’s degree from the University of Texas at Austin.

Marina Neseem

Marina is a Research Scientist working with the Accelerators and VLSI Research Group. Her research focuses on hardware-software co-design and efficient deep learning. This includes designing efficient model architectures, implementing dynamic pruning and adaptive inference techniques, and creating memory and parameter-efficient training methods.