Multimodal Unsupervised Image-to-Image Translation

Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image.
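
To make the content-style decomposition concrete, below is a minimal PyTorch sketch of the translation step. All module architectures, shapes, and the style dimensionality are illustrative placeholders of our own choosing, not the paper's actual design (which uses convolutional encoders and an AdaIN-based decoder).

import torch
import torch.nn as nn

STYLE_DIM = 8  # style-code dimensionality (our choice for this sketch)

class DomainAutoencoder(nn.Module):
    """One autoencoder per domain: image -> (content, style) -> image."""
    def __init__(self):
        super().__init__()
        self.enc_content = nn.Conv2d(3, 16, 3, padding=1)   # content code: a feature map
        self.enc_style = nn.Sequential(                     # style code: a small vector
            nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, STYLE_DIM))
        self.style_to_gain = nn.Linear(STYLE_DIM, 16)       # style modulates decoding
        self.dec = nn.Conv2d(16, 3, 3, padding=1)

    def encode(self, x):
        return self.enc_content(x), self.enc_style(x)

    def decode(self, content, style):
        # Simplified style injection: scale content features by a style-derived
        # gain (the paper instead predicts AdaIN parameters with an MLP).
        gain = self.style_to_gain(style)[..., None, None]
        return torch.tanh(self.dec(content * gain))

ae_a, ae_b = DomainAutoencoder(), DomainAutoencoder()
x_a = torch.randn(1, 3, 64, 64)          # image from domain A
content_a, _ = ae_a.encode(x_a)          # domain-invariant content code
style_b = torch.randn(1, STYLE_DIM)      # random style drawn from B's prior
x_ab = ae_b.decode(content_a, style_b)   # one of many possible A -> B outputs

Sampling a different style_b for the same content_a yields a different translation, which is what makes the framework multimodal.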

Localization-Aware Active Learning for Object Detection

Active learning - a class of algorithms that iteratively select the most informative samples to include in a training dataset - has been shown to be effective at selecting data to annotate for image classification. However, the use of active learning for object detection is still largely unexplored, as determining the informativeness of an object-location hypothesis is more difficult.
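
For context, the sketch below shows a generic pool-based active learning loop for detection. The scoring function (mean classification entropy over predicted boxes) is a common uncertainty baseline, not the localization-aware criterion this paper proposes, and the detector interface is an assumption.

import numpy as np

def informativeness(detections):
    """Score an image by the mean entropy of its boxes' class distributions.
    `detections` is a list of class-probability vectors, one per predicted
    box (an assumed interface, not the paper's)."""
    if not detections:
        return 0.0
    entropies = [-float(np.sum(p * np.log(p + 1e-12))) for p in detections]
    return float(np.mean(entropies))

def select_for_annotation(detector, unlabeled_pool, budget):
    """Pick the `budget` most informative images from the unlabeled pool.
    `detector(image)` is assumed to return per-box class probabilities."""
    scores = {img_id: informativeness(detector(img))
              for img_id, img in unlabeled_pool.items()}
    return sorted(scores, key=scores.get, reverse=True)[:budget]

Each round, the selected images are annotated, added to the training set, and the detector is retrained before the next selection.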

Context-aware Synthesis and Placement of Object Instances

Learning to insert an object instance into an image in a semantically coherent manner is a challenging and interesting problem. Solving it requires (a) determining a location to place an object in the scene and (b) determining its appearance at the location. Such an object insertion model can potentially facilitate numerous image editing and scene parsing applications. In this paper, we propose an end-to-end trainable neural network for the task of inserting an object instance mask of a specified class into the semantic label map of an image.
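
As a rough illustration of the (a)/(b) decomposition, the sketch below splits insertion into a "where" module that proposes a placement and a "what" module that generates an instance mask there. All architectures are toy placeholders of ours; the paper's model is a single end-to-end network trained adversarially on semantic label maps.

import torch
import torch.nn as nn

class WhereModule(nn.Module):
    """Predicts a placement (location and scale) from the semantic label map."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4))

    def forward(self, label_map):
        return self.net(label_map)  # (x, y, w, h) of the proposed box

class WhatModule(nn.Module):
    """Generates an instance mask for the object at the chosen location."""
    def __init__(self, z_dim=16, mask_size=32):
        super().__init__()
        self.mask_size = mask_size
        self.net = nn.Sequential(nn.Linear(z_dim + 4, mask_size * mask_size),
                                 nn.Sigmoid())

    def forward(self, z, box):
        mask = self.net(torch.cat([z, box], dim=1))
        return mask.view(-1, 1, self.mask_size, self.mask_size)

where, what = WhereModule(), WhatModule()
labels = torch.randn(1, 10, 64, 64)     # one-hot-style semantic label map
box = where(labels)                     # (a) where to place the object
mask = what(torch.randn(1, 16), box)    # (b) what shape it takes there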

Video-to-Video Synthesis

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature.
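
One natural way to realize such a mapping, shown in the sketch below, is a sequential generator that synthesizes each frame conditioned on the current source frame and the previously generated frame. This is only an illustrative recurrence with toy module shapes, not the paper's architecture.

import torch
import torch.nn as nn

class SequentialGenerator(nn.Module):
    def __init__(self, src_channels=10):
        super().__init__()
        # Input: current source frame + previous output frame, concatenated.
        self.net = nn.Sequential(
            nn.Conv2d(src_channels + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, source_frames):
        """source_frames: (T, C, H, W) sequence, e.g. one-hot label maps."""
        prev = torch.zeros(1, 3, *source_frames.shape[-2:])
        outputs = []
        for s in source_frames:
            prev = self.net(torch.cat([s.unsqueeze(0), prev], dim=1))
            outputs.append(prev)
        return torch.cat(outputs, dim=0)  # (T, 3, H, W) synthesized video

video = SequentialGenerator()(torch.randn(8, 10, 64, 64))

Conditioning on the previously generated frame is what gives the output temporal coherence rather than a sequence of independent image translations.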

Machine Learning and Rendering

Machine learning techniques have only recently enabled dramatic improvements in both real-time and offline rendering. In this course, we introduce the basic principles of machine learning and review their relation to rendering. Besides fundamental facts, such as the mathematical identity of reinforcement learning and the rendering equation, we cover efficient and surprisingly elegant solutions to light transport simulation, participating media, noise removal, and anti-aliasing.
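
The identity mentioned above can be sketched as follows; the notation here is our own, and the precise statement appears in work on learning light transport with reinforcement learning (e.g., Dahm and Keller, "Learning Light Transport the Reinforced Way"). The rendering equation

    L(x,\omega) = L_e(x,\omega)
        + \int_{\Omega} f_s(\omega_i, x, \omega)\,\cos\theta_i\,
          L\big(h(x,\omega_i), -\omega_i\big)\, d\omega_i

has the same fixed-point structure as the expected-value form of Q-learning,

    Q(s,a) = r(s,a) + \gamma \int_{A} \pi(a' \mid s')\, Q(s',a')\, da'

Identifying the state s with the surface point x, the action a with a scattering direction, the reward r with the emitted radiance L_e, and the next state s' with the ray-cast hit point h(x, \omega_i) makes the two equations structurally identical, with the BSDF-cosine product playing the role of the discounted policy.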

Gal Chechik

Gal Chechik is a Senior Director of AI Research, leading NVIDIA Research in Israel.

Gal is also a Professor of Computer Science at Bar-Ilan University. Before joining NVIDIA, he was a Staff Research Scientist at Google and a postdoctoral research associate at Stanford University; he received his PhD from the Hebrew University of Jerusalem. Gal has published approximately 160 papers, including publications in Nature Biotechnology, Cell, and PNAS, and holds 50 issued patents. His work has won awards for outstanding papers at NeurIPS and ICML.

Jan Novák

Jan is a senior research scientist working on topics at the intersection of rendering and machine learning. He joined NVIDIA Research in 2018 after five years at Disney Research, where he led the Rendering Group and worked on various techniques for efficient rendering and physically based simulation of light transport. He initiated and led several projects employing deep learning for image denoising, radiance estimation, and importance sampling. He has also developed a number of techniques for rendering scenes with participating media.

Statistical Nearest Neighbors for Image Denoising

Non-local-means image denoising is based on processing a set of neighbors for a given reference patch. A few nearest neighbors (NN) can be used to limit the computational burden of the algorithm. Resorting to a toy problem, we show analytically that sampling neighbors with the NN approach introduces a bias in the denoised patch. We propose a different criterion for collecting neighbors to alleviate this issue, which we name statistical NN (SNN). Our approach outperforms the traditional one in the case of both white and colored noise: fewer SNNs can be used to generate images of superior quality, at a lower computational cost. A detailed investigation of our toy problem explains the differences between NN and SNN from a grounded point of view. The intuition behind SNN is quite general, and it also leads to image quality improvements in the case of bilateral filtering. The MATLAB code to replicate the results presented in the paper is freely available at https://github.com/NVlabs/SNN.
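
Below is a minimal Python sketch contrasting the two collection criteria under additive white Gaussian noise with known sigma, based on the observation that the expected per-pixel squared distance between two noisy copies of the same patch is 2*sigma^2. Function names and the toy data are our own; the paper's reference implementation is the MATLAB code linked above.

import numpy as np

def collect_neighbors(ref, candidates, k, sigma, use_snn=True):
    """Return the k candidate patches chosen as neighbors of `ref`.
    ref: (p,) flattened reference patch; candidates: (n, p) array."""
    d = np.mean((candidates - ref) ** 2, axis=1)  # per-pixel squared distances
    if use_snn:
        # SNN: prefer distances close to their statistical expectation,
        # counteracting the bias of always picking the closest patches.
        score = np.abs(d - 2.0 * sigma ** 2)
    else:
        score = d                                  # classic NN: closest patches
    return candidates[np.argsort(score)[:k]]

# Toy usage: denoise a patch by averaging its selected neighbors.
rng = np.random.default_rng(0)
noisy = np.ones(25) + rng.normal(0, 0.1, size=(100, 25))
estimate = collect_neighbors(noisy[0], noisy, k=16, sigma=0.1).mean(axis=0)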

Kayotee: A Fault Injection-based System to Assess the Safety and Reliability of Autonomous Vehicles to Faults and Errors

Fully autonomous vehicles (AVs), i.e., AVs with autonomy level 5, are expected to dominate road transportation in the near future and to contribute trillions of dollars to the global economy. The general public, government organizations, and manufacturers all have significant concerns regarding the resiliency and safety standards of the autonomous driving system (ADS) of AVs.