Exploiting Idle Resources in a High-Radix Switch for Supplemental Storage

A general-purpose switch for a high-performance network is usually designed with symmetric ports providing credit-based flow control and error recovery via link-level retransmission. Because port buffers must be sized for the longest links, yet modern asymmetric network topologies span a wide range of link lengths, we observe that a significant amount of buffer memory can sit unused, particularly in edge switches. We also observe that the tiled architecture used in many high-radix switches provides an abundance of internal bandwidth.
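
As rough intuition for why short links strand buffer capacity, a hedged back-of-the-envelope sketch: under credit-based flow control, each port buffer must cover at least the link's round-trip bandwidth-delay product, so a buffer provisioned for the longest supported link is mostly idle on a short edge link. All numbers and names below are illustrative assumptions, not values from the paper.

```python
# Hypothetical illustration: why short links leave buffer memory idle.
# Under credit-based flow control, a port buffer must cover the link's
# round-trip bandwidth-delay product. All values here are assumptions.

def required_buffer_bytes(link_length_m: float,
                          bandwidth_gbps: float,
                          propagation_m_per_us: float = 200.0) -> float:
    """Round-trip bandwidth-delay product for one link."""
    round_trip_us = 2.0 * link_length_m / propagation_m_per_us
    return bandwidth_gbps * 1e9 / 8 * round_trip_us * 1e-6

# A buffer provisioned for a 100 m worst-case link at 100 Gb/s...
provisioned = required_buffer_bytes(100, 100)
# ...but attached to a 5 m edge link only needs a fraction of it:
needed = required_buffer_bytes(5, 100)
print(f"idle fraction: {1 - needed / provisioned:.0%}")  # ~95% unused
```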

Phantom Ray-Hair Intersector

We present a new approach to ray tracing swept volumes along trajectories defined by cubic Bézier curves. It performs at two-thirds of the speed of ray-triangle intersection, allowing such primitives to be treated essentially on par with triangles in ray tracing applications that require hair, fur, or yarn rendering.
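
As background for the primitive involved, a minimal sketch of evaluating a point on a cubic Bézier trajectory via De Casteljau's algorithm; the paper's actual intersector is considerably more involved (it iteratively converges on the ray-curve intersection), and nothing below is taken from it.

```python
# Minimal sketch: evaluating a point on the cubic Bezier trajectory that
# the swept volume follows. This shows only the curve-evaluation step,
# not the phantom intersection test itself.

def de_casteljau(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]."""
    lerp = lambda a, b: tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# Example: midpoint of a curve through four control points.
print(de_casteljau((0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0), 0.5))
```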

FocusAR: Auto-focus Augmented Reality Eyeglasses for both Real World and Virtual Imagery

We describe a system that dynamically corrects focus both for the real world surrounding the user's near-eye display and for the internal display presenting augmented synthetic imagery, with the aim of completely replacing the user's prescription eyeglasses. The ability to adjust focus for both real and virtual stimuli will be useful for a wide variety of users, but especially for users over 40 years of age, who typically have a limited accommodation range.
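
For intuition about the accommodation arithmetic such a system must perform, a hedged sketch using standard optics (focusing an object at distance d meters demands 1/d diopters of accommodation); the function and its parameters are illustrative assumptions, not the paper's control algorithm.

```python
# Hedged sketch of the underlying optics, not the paper's control loop.
# Focusing at distance d meters demands 1/d diopters; a tunable lens can
# supply whatever accommodation the user's eye cannot.

def tunable_lens_power(object_distance_m: float,
                       user_accommodation_range_d: float,
                       distance_prescription_d: float = 0.0) -> float:
    """Lens power (diopters) needed so the object appears in focus."""
    demand = 1.0 / object_distance_m               # accommodation demand
    supplied = min(demand, user_accommodation_range_d)
    return distance_prescription_d + (demand - supplied)

# A presbyope with 1 D of remaining accommodation reading at 40 cm
# needs roughly 1.5 D of added lens power.
print(tunable_lens_power(0.4, user_accommodation_range_d=1.0))
```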

A Closed-form Solution to Photorealistic Image Stylization

Photorealistic image stylization is the task of transferring the style of a reference photo to a content photo under the constraint that the stylized photo remain photorealistic. While several photorealistic image stylization methods exist, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In this paper, we propose a method to address these issues. The proposed method consists of a stylization step and a smoothing step.
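
To make the two-step structure concrete, a toy sketch follows. The paper's stylization step is a photorealistic whitening-and-coloring transform and its smoothing step solves a matting-style optimization; here a per-channel mean/std color transfer and a local-mean blend stand in for them, purely for illustration.

```python
# Toy illustration of the two-step structure, NOT the paper's method.
import numpy as np

def stylization_step(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    # Crude stand-in: match each channel's mean and std to the style photo.
    out = np.empty_like(content, dtype=np.float64)
    for c in range(content.shape[2]):
        cc, sc = content[..., c].astype(float), style[..., c].astype(float)
        out[..., c] = (cc - cc.mean()) / (cc.std() + 1e-8) * sc.std() + sc.mean()
    return out

def smoothing_step(stylized: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    # Crude stand-in: blend each pixel toward its 3x3 neighborhood mean
    # to suppress spatial inconsistencies.
    h, w, _ = stylized.shape
    pad = np.pad(stylized, ((1, 1), (1, 1), (0, 0)), mode="edge")
    local = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return alpha * stylized + (1 - alpha) * local

content, style = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
result = smoothing_step(stylization_step(content, style))
```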

Multimodal Unsupervised Image-to-Image Translation

Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image.
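
A minimal sketch of the translation rule described above, with toy stand-in networks; the module names are illustrative assumptions, and the real encoders and decoders are deep convolutional networks.

```python
# Toy sketch of MUNIT-style translation: keep the content code, swap in a
# style code sampled from the target domain. Names are illustrative.
import torch
import torch.nn as nn

class ToyAutoEncoder(nn.Module):
    def __init__(self, dim=16, style_dim=8):
        super().__init__()
        self.content_enc = nn.Linear(dim, dim)        # domain-invariant code
        self.style_enc = nn.Linear(dim, style_dim)    # domain-specific code
        self.dec = nn.Linear(dim + style_dim, dim)

    def decode(self, content, style):
        return self.dec(torch.cat([content, style], dim=-1))

# One autoencoder per domain; translation recombines codes across domains.
net_a, net_b = ToyAutoEncoder(), ToyAutoEncoder()
x_a = torch.randn(4, 16)                  # "images" from domain A
content = net_a.content_enc(x_a)          # keep the content code
style_b = torch.randn(4, 8)               # sample a target-domain style code
x_ab = net_b.decode(content, style_b)     # translated "images" in domain B
```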

Localization-Aware Active Learning for Object Detection

Active learning - a class of algorithms that iteratively search for the most informative samples to include in a training dataset - has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection remains largely unexplored, because determining the informativeness of an object-location hypothesis is more difficult.
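
For context, a generic uncertainty-sampling loop is sketched below; the paper's contribution is a localization-aware informativeness score, which this plain classification-entropy stand-in deliberately does not implement. All function names are hypothetical.

```python
# Generic active-learning selection step, for illustration only.
import math
import random

def entropy(probs):
    return -sum(p * math.log(p + 1e-12) for p in probs)

def select_batch(unlabeled, model_predict, batch_size):
    """Pick the samples whose predicted class distribution is most uncertain."""
    scored = [(entropy(model_predict(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:batch_size]]

# Usage with a dummy predictor: label the batch, retrain, re-score, repeat.
dummy = lambda x: (lambda p: [p, 1 - p])(random.random())
batch = select_batch(range(100), dummy, batch_size=10)
```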

Context-aware Synthesis and Placement of Object Instances

Learning to insert an object instance into an image in a semantically coherent manner is a challenging and interesting problem. Solving it requires (a) determining a location at which to place the object in the scene and (b) determining its appearance at that location. Such an object insertion model could facilitate numerous image editing and scene parsing applications. In this paper, we propose an end-to-end trainable neural network for the task of inserting an object instance mask of a specified class into the semantic label map of an image.
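
A structural sketch of the two sub-problems, with toy stand-ins: a "where" module proposes a location in the semantic label map and a "what" module synthesizes an instance mask for it. The paper trains these jointly with adversarial objectives, none of which is shown here; every name below is hypothetical.

```python
# Toy two-module sketch of location prediction plus mask synthesis.
import torch
import torch.nn as nn

class WhereModule(nn.Module):              # predicts (x, y, w, h) in [0, 1]
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(n_classes, 8, 3, padding=1),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 4), nn.Sigmoid())
    def forward(self, label_map):
        return self.net(label_map)

class WhatModule(nn.Module):               # synthesizes a 32x32 mask patch
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 4, 32 * 32), nn.Sigmoid())
    def forward(self, z, box):
        return self.net(torch.cat([z, box], dim=-1)).view(-1, 1, 32, 32)

label_map = torch.randn(1, 10, 64, 64)        # stand-in semantic label map
box = WhereModule()(label_map)                # where to insert
mask = WhatModule()(torch.randn(1, 16), box)  # what shape to insert there
```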

Video-to-Video Synthesis

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature.

Machine Learning and Rendering

Machine learning techniques have recently enabled dramatic improvements in both real-time and offline rendering. In this course, we introduce the basic principles of machine learning and review their relation to rendering. Besides fundamental results, such as the mathematical identity between reinforcement learning and the rendering equation, we cover efficient and surprisingly elegant solutions to light transport simulation, participating media, noise removal, and anti-aliasing.
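
The claimed identity can be sketched as follows; this is a paraphrase of the correspondence as commonly presented (radiance plays the role of the action-value function), not a quotation from the course notes.

```latex
% Rendering equation: radiance leaving x in direction \omega
L(x,\omega) = L_e(x,\omega)
            + \int_{\Omega} f_s(\omega_i, x, \omega)\,\cos\theta_i\,
              L\big(h(x,\omega_i), -\omega_i\big)\, d\omega_i

% Q-learning in expected-value form, with policy \pi and discount \gamma
Q(s,a) = r(s,a) + \gamma \int_{A} \pi(a' \mid s')\, Q(s',a')\, da'

% Correspondence: Q \leftrightarrow L, reward r \leftrightarrow emission
% L_e, the discounted policy-weighted integral over next actions
% \leftrightarrow the BSDF-weighted integral over incident directions,
% with the ray-casting function h(x,\omega) acting as the state transition.
```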

Gal Chechik

Gal Chechik is a Senior Director of AI Research, leading NVIDIA's research in Israel.

Gal is also a Professor of Computer Science at Bar-Ilan University. Before joining NVIDIA, he was a Staff Research Scientist at Google and a postdoctoral research associate at Stanford University, and he received his PhD from the Hebrew University of Jerusalem. Gal has published roughly 160 papers, including publications in Nature Biotechnology, Cell, and PNAS, and holds 50 issued patents. His work has won outstanding-paper awards at NeurIPS and ICML.