Sparse Matrix-Vector Multiplication on Multicore and Accelerators
Sam Williams, Nathan Bell (NVIDIA), Jee Whan Choi, Michael Garland (NVIDIA), Leonid Oliker, Richard Vuduc, in Scientific Computing on Multicore and Accelerators, December 2010
This chapter summarizes recent work on the development of high-performance multicore and accelerator-based implementations of sparse matrix-vector multiplication (SpMV). As an object of study, SpMV is an interesting computation for at least two reasons. First, it appears widely in applications in scientific and engineering computing, financial and economic modeling, and information retrieval, among others, and is therefore of great practical interest. Second, it is simple to describe yet challenging to implement well, since its performance is limited by a variety of factors, including low computational intensity, potentially highly irregular memory access behavior, and a strong input-dependence that can be known only at run-time. Thus, we believe SpMV is both practically important and insightful for understanding the algorithmic and implementation principles necessary to make effective use of state-of-the-art systems.