Matthijs Van Keirsbilck

Matthijs joined NVIDIA Research in 2018 and works on the foundations of neural networks and machine learning.
He received his Master's degree in Electrical Engineering from KU Leuven (Belgium) in 2017.

Additional Research Areas: 

Machine Learning and Integral Equations

Because both light transport simulation and reinforcement learning are governed by the same Fredholm integral equation of the second kind, machine learning techniques can be used for efficient photorealistic image synthesis: light transport paths are guided by an approximate solution to the integral equation that is learned during rendering.
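For reference, a Fredholm integral equation of the second kind has the general form (the notation below is generic and chosen for illustration, not taken from the paper)

\[
\varphi(t) = f(t) + \lambda \int_a^b K(t, s)\, \varphi(s)\, \mathrm{d}s ,
\]

and the rendering equation that governs light transport is an instance of it, with outgoing radiance as the unknown:

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i .
\]

In both light transport and reinforcement learning the unknown quantity appears on both sides of such a fixed-point equation, which is what makes the two problems formally analogous.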

Beyond the socket: NUMA-aware GPUs

GPUs achieve high throughput and power efficiency by employing many small single-instruction, multiple-thread (SIMT) cores. To minimize scheduling logic and performance variance, they rely on a uniform memory system and on the strong data parallelism exposed via the programming model. With Moore's law slowing, GPUs are likely to embrace multi-socket designs, where transistors are more readily available, in order to continue scaling performance, which largely depends on SIMT core count.

Ben Boudaoud

Ben joined NVIDIA in January 2018 as research staff in the New Experiences Research group. Prior to joining NVIDIA, he worked on ultra-low-power circuit and system design for medical products, including wearable and implantable cardiac monitors. He received his MS from the University of Virginia in 2014, where his work focused on the development and deployment of wearable 6- and 9-DoF motion-sensing platforms for clinical applications.

Ben's research interests include techniques for low-power, high-efficiency circuit and system design, as well as applications of low-power sensors and systems in the VR/AR space.

Ankur Handa

Additional Research Areas: 

Toward Standardized Near-Data Processing with Unrestricted Data Placement for GPUs

3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, for such Near-Data Processing (NDP) memory stacks to be usable with different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks, as is the norm in high-performance systems, without requiring an MMU on the NDP stack.
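To give a feel for what spreading data across multiple memory stacks looks like, the sketch below shows generic fine-grained address interleaving, in which a fixed arithmetic mapping (rather than page tables, hence no MMU on the memory side) decides which stack holds each block of physical memory. This is a generic illustration, not the mapping proposed in the paper; the stack count and stripe granularity are hypothetical values.

#include <stdint.h>
#include <stdio.h>

/* Generic fine-grained interleaving sketch: physical addresses are striped
 * across memory stacks at a fixed granularity, so consecutive blocks land on
 * different stacks. A fixed arithmetic mapping like this needs no page tables.
 * All constants are hypothetical, chosen only for illustration. */
#define NUM_STACKS   4u      /* hypothetical number of memory stacks */
#define STRIPE_BYTES 256u    /* hypothetical interleave granularity in bytes */

typedef struct {
    unsigned stack;   /* which memory stack holds this block */
    uint64_t offset;  /* byte offset within that stack       */
} stack_addr_t;

static stack_addr_t map_address(uint64_t phys_addr)
{
    uint64_t block = phys_addr / STRIPE_BYTES;
    stack_addr_t a;
    a.stack  = (unsigned)(block % NUM_STACKS);
    a.offset = (block / NUM_STACKS) * STRIPE_BYTES + (phys_addr % STRIPE_BYTES);
    return a;
}

int main(void)
{
    /* Consecutive 256-byte blocks rotate round-robin across the four stacks. */
    for (uint64_t addr = 0; addr < 8 * (uint64_t)STRIPE_BYTES; addr += STRIPE_BYTES) {
        stack_addr_t a = map_address(addr);
        printf("addr 0x%04llx -> stack %u, offset 0x%04llx\n",
               (unsigned long long)addr, a.stack, (unsigned long long)a.offset);
    }
    return 0;
}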

Fine-Grained DRAM: Energy-Efficient DRAM for Extreme Bandwidth Systems

Future GPUs and other high-performance throughput processors will require multiple TB/s of bandwidth to DRAM. Satisfying this bandwidth demand within an acceptable energy budget is a challenge in such extreme-bandwidth memory systems. We propose a new high-bandwidth DRAM architecture, Fine-Grained DRAM (FGDRAM), which improves bandwidth by 4× and DRAM energy efficiency by 2× relative to the highest-bandwidth, most energy-efficient contemporary DRAM, High Bandwidth Memory (HBM2).
