Edward Suh

G. Edward Suh is a Senior Director of Research at NVIDIA, where he leads a group focused on security and privacy research.

He is also an Adjunct Professor in the School of Electrical and Computer Engineering at Cornell University, where he served on the faculty from 2007 to 2023. Before joining NVIDIA, he was a Research Scientist in the Fundamental AI Research (FAIR) team at Meta. He earned a B.S. in Electrical Engineering from Seoul National University and an M.S. and a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT). 

Jiaojiao Fan

Jiaojiao received her PhD in Machine Learning from the Georgia Institute of Technology. Her early research focused on scaling neural network-based optimal transport and MCMC sampling, with her recent work emphasizing controllable generation in generative image and video models. Her contributions have been recognized through publications at leading conferences such as ICML, AISTATS, and COLT.

DoRA: Weight-Decomposed Low-Rank Adaptation

In this ICML'24 Oral paper, we first introduce a novel weight decomposition analysis to investigate the inherent differences between full fine-tuning (FT) and LoRA. Building on these findings and aiming to resemble the learning capacity of FT, we propose Weight-Decomposed Low-Rank Adaptation (DoRA). DoRA decomposes the pre-trained weight into two components, magnitude and direction, for fine-tuning, specifically employing LoRA for directional updates to efficiently minimize the number of trainable parameters.
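The decomposition can be illustrated with a minimal NumPy sketch. This is a simplified illustration, not the paper's implementation: the trainable parameters are a per-column magnitude vector `m` and the LoRA factors `B` and `A`, while the pre-trained weight `W0` stays frozen. All variable names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 6, 4, 2
W0 = rng.normal(size=(d_out, d_in))            # frozen pre-trained weight
B = np.zeros((d_out, r))                       # LoRA factor, initialized to zero
A = rng.normal(size=(r, d_in))                 # LoRA factor
m = np.linalg.norm(W0, axis=0, keepdims=True)  # trainable magnitude, one per column

# Directional part: apply the low-rank update, then normalize each column;
# the magnitude vector m rescales the unit-norm directions.
V = W0 + B @ A
W = m * (V / np.linalg.norm(V, axis=0, keepdims=True))

# With B = 0 and m set to the column norms of W0, the merged weight
# reproduces the pre-trained weight exactly.
assert np.allclose(W, W0)
```

During fine-tuning only `m`, `B`, and `A` would receive gradients, so the number of trainable parameters stays close to plain LoRA while magnitude and direction can adapt independently.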

Align Your Steps: Optimizing Sampling Schedules in Diffusion Models

Diffusion models (DMs) have established themselves as the state-of-the-art generative modeling approach in the visual domain and beyond. A crucial drawback of DMs is their slow sampling speed, which relies on many sequential function evaluations of large neural networks. Sampling from DMs can be seen as solving a differential equation through a discretized set of noise levels known as the sampling schedule.
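To make the notion of a sampling schedule concrete, here is a sketch of one widely used hand-crafted schedule (the rho-spaced schedule from Karras et al.'s EDM) in NumPy. Align Your Steps treats such schedules as an object to optimize rather than a fixed heuristic; the function name and default values below are illustrative, not taken from the paper.

```python
import numpy as np

def karras_schedule(n_steps, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Return n_steps noise levels decreasing from sigma_max to sigma_min,
    spaced uniformly in sigma^(1/rho), followed by a terminal 0."""
    ramp = np.arange(n_steps) / (n_steps - 1)
    sigmas = (sigma_max ** (1 / rho)
              + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return np.append(sigmas, 0.0)

sched = karras_schedule(10)
# A strictly decreasing discretization of the noise range, ending at 0
assert np.isclose(sched[0], 80.0) and sched[-1] == 0.0
assert np.all(np.diff(sched) < 0)
```

Each consecutive pair of noise levels defines one solver step of the differential equation, so the choice of schedule directly trades off the number of network evaluations against discretization error.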

Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models

Text-guided diffusion models have revolutionized image and video generation and have also been successfully used for optimization-based 3D object synthesis. Here, we instead focus on the underexplored text-to-4D setting and synthesize dynamic, animated 3D objects using score distillation methods with an additional temporal dimension.