Learning Affinity via Spatial Propagation Networks

In this paper, we propose spatial propagation networks for learning the affinity matrix for vision tasks. We show that by constructing a row/column linear propagation model, the spatially varying transformation matrix exactly constitutes an affinity matrix that models dense, global pairwise relationships within an image.
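
As a rough illustration of the row/column linear propagation idea, the sketch below is a minimal NumPy toy, not the paper's implementation: it performs a single left-to-right scan with spatially varying weights. In the actual network the weights are predicted by a deep CNN and the full model combines several directional passes, which are omitted here.

```python
import numpy as np

def propagate_left_to_right(x, w):
    """One directional pass of a row-wise linear propagation model.

    x : (H, W) input map (e.g., a coarse segmentation score).
    w : (H, W) spatially varying propagation weights in [0, 1]; in the
        paper these are predicted by a CNN, here they are just given.
    Each column is a linear combination of the current input column and
    the previously propagated column, so unrolling the recurrence yields
    a dense, global affinity between pixels along each row.
    """
    h = np.empty_like(x)
    h[:, 0] = x[:, 0]
    for t in range(1, x.shape[1]):
        h[:, t] = (1.0 - w[:, t]) * x[:, t] + w[:, t] * h[:, t - 1]
    return h

# toy usage with random placeholders for the learned weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6))
w = rng.uniform(0.0, 1.0, size=(4, 6))
refined = propagate_left_to_right(x, w)
print(refined.shape)  # (4, 6)
```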


In Memoriam: Jan Issac

We are saddened to share the loss of our friend and colleague Jan Issac, who passed away on Sunday, April 15. 

Jan joined NVIDIA in 2017 as a researcher and software engineer on the Robotics team in Seattle. A talented engineer, he made many contributions to our state-of-the-art experimental platform for robotics manipulation. 

An intelligent and inquisitive man, he loved good food, was an enthusiastic photographer, and enjoyed playing the piano.

Jan will be remembered for his persistence, his amazing ability to learn diverse new technologies, and his dedication to helping others.

Jan will be deeply missed. Please keep him and his family in your thoughts.

Benjamin Eckart

Ben Eckart received his Ph.D. in Robotics from Carnegie Mellon University in 2017, along with an M.S. in Electrical Engineering, a B.S. in Computer Science, and a B.S. in Computer Engineering. He was an NVIDIA Graduate Fellow in 2014 and, upon graduation in 2017, joined NVIDIA as a Postdoctoral Researcher. His research explores methods to represent and operate on 3D point cloud data, with applications to robotics, computer vision, augmented reality, and autonomous driving.


Progressive Growing of GANs for Improved Quality, Stability, and Variation

We train generative adversarial networks in a progressive fashion, growing both the generator and discriminator from low to high resolution, which enables us to generate high-resolution images of high quality.
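
The core of progressive training is a resolution schedule in which new, higher-resolution layers are faded in gradually. The toy sketch below uses hypothetical names and phase lengths, not the paper's code, to enumerate such a schedule.

```python
def progressive_schedule(start_res=4, final_res=1024, imgs_per_phase=800_000):
    """Yield (resolution, phase, images_in_phase) tuples.

    Each new resolution gets a 'fade' phase, during which the new layers
    are blended in with a coefficient that ramps from 0 to 1, followed by
    a 'stabilize' phase trained with the blend fixed at 1.
    """
    res = start_res
    yield res, "stabilize", imgs_per_phase
    while res < final_res:
        res *= 2
        yield res, "fade", imgs_per_phase
        yield res, "stabilize", imgs_per_phase

for res, phase, n in progressive_schedule(final_res=64):
    print(f"{res:4d}x{res:<4d} {phase:9s} {n} images")
```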

Consistent Video Filtering for Camera Arrays

Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to the mass market. However, editing tools for these media are still nascent, and even simple filtering operations such as color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results because of temporal and spatial inconsistencies. Our method preserves and stabilizes filter effects while remaining agnostic to the inner workings of the filter.

Learning to Super-Resolve Blurry Face and Text Images

We present an algorithm to directly restore a clear high-resolution image from a blurry low-resolution input. This problem is highly ill-posed and the basic assumptions for existing super-resolution methods (requiring clear input) and deblurring methods (requiring high-resolution input) no longer hold. We focus on face and text images and adopt a generative adversarial network (GAN) to learn a category-specific prior to solve this problem. However, the basic GAN formulation does not generate realistic high-resolution images.

Cascaded Scene Flow Prediction using Semantic Segmentation

Given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. Many existing approaches use superpixels for regularization, but may predict inconsistent shapes and motions inside rigidly moving objects. We instead assume that scenes consist of foreground objects rigidly moving in front of a static background, and use semantic cues to produce pixel-accurate scene flow estimates.
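
Under this rigid-motion assumption, per-pixel 3D scene flow follows directly from one rigid transform per semantically segmented object. The NumPy sketch below is a simplified illustration with placeholder motions, not the paper's estimation pipeline.

```python
import numpy as np

def rigid_scene_flow(points, labels, motions):
    """Per-pixel 3D scene flow under a rigid-motion-per-object assumption.

    points  : (N, 3) 3D points reconstructed from the stereo pair.
    labels  : (N,) integer object id per point (0 = static background).
    motions : dict mapping object id -> (R, t), where R is a 3x3 rotation
              and t a length-3 translation; the background keeps identity.
    Returns (N, 3) flow vectors: R @ p + t - p for each point.
    """
    flow = np.zeros_like(points)
    for obj_id, (R, t) in motions.items():
        mask = labels == obj_id
        flow[mask] = points[mask] @ R.T + t - points[mask]
    return flow

# toy usage: one moving object (id 1) in front of a static background (id 0)
pts = np.random.rand(5, 3)
lbl = np.array([0, 1, 1, 0, 1])
motions = {0: (np.eye(3), np.zeros(3)),
           1: (np.eye(3), np.array([0.5, 0.0, 0.0]))}  # pure translation
print(rigid_scene_flow(pts, lbl, motions))
```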

Semantic Video CNNs through Representation Warping

In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping module, called NetWarp, that augments existing architectures at very little extra computational cost, and we demonstrate its use with a range of network architectures. The main design principle is to use the optical flow between adjacent frames to warp internal network representations across time.
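
The central operation is warping a feature map from the previous frame with optical flow. The minimal NumPy sketch below shows one way to implement such a bilinear warp; it is only an illustration of the idea and omits the additional processing steps of the actual NetWarp module.

```python
import numpy as np

def warp_features(feat_prev, flow):
    """Warp a (C, H, W) feature map from the previous frame to the current
    one using a dense flow field of shape (2, H, W), where flow[0] and
    flow[1] give the x and y displacements (in pixels) of the location to
    sample from in the previous frame.  Uses bilinear sampling with
    border clamping.
    """
    C, H, W = feat_prev.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sx = np.clip(xs + flow[0], 0, W - 1)   # sampling positions, x
    sy = np.clip(ys + flow[1], 0, H - 1)   # sampling positions, y
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = sx - x0, sy - y0
    return ((1 - wx) * (1 - wy) * feat_prev[:, y0, x0]
            + wx * (1 - wy) * feat_prev[:, y0, x1]
            + (1 - wx) * wy * feat_prev[:, y1, x0]
            + wx * wy * feat_prev[:, y1, x1])

feat = np.random.rand(8, 16, 16)               # previous-frame features
flow = np.random.uniform(-1, 1, (2, 16, 16))   # toy flow field
print(warp_features(feat, flow).shape)         # (8, 16, 16)
```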

Tensor Contractions with Extended BLAS Kernels on CPU and GPU

Tensor contractions constitute a key computational ingredient of numerical multi-linear algebra. However, as the order and dimension of tensors grow, the time and space complexities of tensor-based computations grow quickly. In this paper, we propose and evaluate new BLAS-like primitives that are capable of performing a wide range of tensor contractions on CPU and GPU efficiently. We begin by focusing on single-index contractions involving all possible configurations of second-order and third-order tensors.
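
For example, the single-index contraction C[m, n, p] = sum_k A[m, k] * B[k, n, p] can be cast as one matrix multiplication by flattening the uncontracted modes of B. The NumPy toy below verifies this mapping against einsum; the proposed primitives aim to perform such contractions without the explicit reshape/copy that this simple version relies on.

```python
import numpy as np

# Single-index contraction C[m, n, p] = sum_k A[m, k] * B[k, n, p].
# Flattening the (n, p) modes of B turns the contraction into one GEMM.
M, K, N, P = 3, 4, 5, 6
A = np.random.rand(M, K)
B = np.random.rand(K, N, P)

C_gemm = (A @ B.reshape(K, N * P)).reshape(M, N, P)   # one matrix multiply
C_ref = np.einsum("mk,knp->mnp", A, B)                # reference contraction

print(np.allclose(C_gemm, C_ref))  # True
```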
