Huck Yang

NVIDIA

Huck Yang is a Senior Research Scientist at NVIDIA Research, where he focuses on large-scale sequence modeling, speech-language representation alignment, and data unlearning. He received his Ph.D. and Master's degrees from the Georgia Institute of Technology, Atlanta, GA, USA, as a recipient of the Wallace H. Coulter Fellowship. His experience includes research scientist internships at Google Bard and DeepMind, Amazon Alexa Language Modeling, and the Hitachi Central Research Lab. Prior to joining NVIDIA, he worked full-time at Amazon Alexa USA from 2022 to 2023. He is open to collaborating with straightforward, highly motivated researchers and is interested in working on open-source projects.

Academic Service:

  • Session Chair, In-Context Learning for Speech Recognition, ICASSP 2024
  • Session Chair, Quantum Machine Learning, ICASSP 2022
  • Senior Member, Applied Signal Processing Systems Technical Committee, IEEE Signal Processing Society

Recent Invited Talks:

  • “Characterizing Large LMs for Generative Speech Recognition Error Correction,” MIT CSAIL, MA, USA, 2023
  • “Trainable Input Perturbation as Frozen Pre-trained Model Adaptation,” Mila, Montreal, Canada, 2022

Selected Tutorials:

  • “Large-Scale and Parameter-Efficient Language Modeling for Speech Processing,” ASRU 2023 Tutorial
  • “Resource-Efficient and Cross-Modal Learning Toward Foundation Models,” Interspeech 2023 Tutorial
  • “Model Robustness, Reprogramming and Prompting for Speech and Language Processing,” ICASSP 2022 Tutorial
  • “Quantum Neural Networks for Speech and Language Processing,” IJCAI 2021 Tutorial

NVIDIA Research Page

Interests

  • Representation Tokenization
  • Speech-Language Alignment
  • Frozen Pre-trained Model Adaptation
  • Privacy-Preserving Data Unlearning
