Implementing Sparse Matrix-Vector Multiplication on Throughput-Oriented Processors

Sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In contrast to the uniform regularity of dense linear algebra, sparse operations encounter a broad spectrum of matrices ranging from the regular to the highly irregular. Harnessing the tremendous potential of throughput-oriented processors for sparse operations requires that we expose substantial fine-grained parallelism and impose sufficient regularity on execution paths and memory access patterns.
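The abstract above can be grounded with a minimal sequential sketch of SpMV in the standard compressed sparse row (CSR) format. The names `row_ptr`, `col_idx`, and `vals` are illustrative, not from the paper; on a throughput-oriented processor, the outer loop over rows is the fine-grained parallelism the abstract refers to (e.g., one thread or one warp per row), and the irregularity comes from rows having different numbers of nonzeros.

```python
def spmv_csr(row_ptr, col_idx, vals, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR format.

    row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i within
    vals (values) and col_idx (their column indices).
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):          # on a GPU, each row maps to a thread/warp
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += vals[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example: the 3x3 matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals    = [4.0, 1.0, 2.0, 3.0, 5.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # -> [5.0, 2.0, 8.0]
```

Note how the memory access pattern `x[col_idx[k]]` is data-dependent: regularizing such gathers is exactly the challenge the abstract describes for irregular matrices.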

Ming-Yu Liu

Ming-Yu Liu is a principal research scientist at NVIDIA Research. Before joining NVIDIA in 2016, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL). He received his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2012. His object pose estimation system was named one of the 100 most innovative technology products by R&D Magazine in 2014. His street scene understanding paper was a best paper finalist at the 2015 Robotics: Science and Systems (RSS) conference. At CVPR 2018, he won first place in both the Domain Adaptation for Semantic Segmentation Competition in the WAD challenge and the Optical Flow Competition in the Robust Vision Challenge. His research focuses on generative models for image generation and understanding. His goal is to enable machines with superhuman-like imagination capabilities.

For more details about his research, including a full publication list and open-source projects, please visit



NVIDIA Demo Wins the Laval Virtual Award at the SIGGRAPH 2016 Emerging Technologies Event:

Thursday, July 28, 2016


Anjul Patney, Joohwan Kim, Marco Salvi, Anton Kaplanyan, Chris Wyman, Nir Benty, Aaron Lefohn, David Luebke, “Perceptually-Based Foveated Virtual Reality”, Emerging Technologies, SIGGRAPH, Anaheim, CA, July 24-28, 2016

NVIDIA Secures Runner-Up Best Paper Position at ASYNC 2016

Wednesday, May 11, 2016

NVIDIA Secures Runner-Up Best Paper Position at ASYNC 2016 along with the University of Virginia:

Divya Akella Kamakshi (U. Virginia), Matthew Fojtik (NVIDIA), Brucek Khailany (NVIDIA), Sudhir Kudva (NVIDIA), Yaping Zhou (U. Virginia), Benton H. Calhoun (U. Virginia), “Modeling and Analysis of Power Supply Noise Tolerance with Fine-grained GALS Adaptive Clocks”

Morgan McGuire

Morgan McGuire researches technology for creating the next generation of augmented and virtual reality systems. He's worked with NVIDIA since 2009.

In addition to NVIDIA technology, he previously contributed to products across the graphics industry, including the Skylanders®, Call of Duty®, Marvel Ultimate Alliance®, and Titan Quest® series of video games, the Unity game engine, the E Ink display used in the Amazon Kindle®, and the PeakStream GPU computing architecture acquired by Google.

He chaired the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, the ACM SIGGRAPH Symposium on Non-Photorealistic Animation and Rendering, and the ACM SIGGRAPH / Eurographics High Performance Graphics conference, and was the founding Editor-in-Chief of the Journal of Computer Graphics Techniques. He is the author or coauthor of "the bible" of 3D, Computer Graphics: Principles & Practice (3rd Edition), as well as The Graphics Codex; Creating Games: Mechanics, Content, and Technology; the G3D Innovation Engine; the Markdeep document system; and chapters of several GPU Gems, ShaderX, and GPU Pro volumes.

Morgan McGuire also holds faculty positions as an Associate Professor at Williams College, an adjunct professor in the School of Computer Science at the University of Waterloo, and an adjunct professor in the Department of Electrical Engineering at McGill University.


Saurav Muralidharan

Saurav Muralidharan joined NVIDIA Research in August 2016 after completing his Ph.D. in Computer Science from the University of Utah. His research interests are broadly in the areas of parallel and high performance computing, with a particular focus on scalable deep learning, parallel programming models, machine learning-based autotuning, and optimizing compilers.

Sophia Shao

Sophia Shao joined NVIDIA Research in July 2016. Her research interests include specialized architectures, machine learning hardware, architectural modeling, and VLSI design methodology. She received her B.S. in Electrical Engineering from Zhejiang University, China, and her S.M. and Ph.D. in Computer Science from Harvard University, working with Professors David Brooks and Gu-Yeon Wei. Her work was selected as one of the Top Picks in Computer Architecture in 2015. She is a Siebel Scholar and a recipient of the IBM Ph.D. Fellowship.


