Sparse Matrix-Vector Multiplication on Multicore and Accelerators

This chapter summarizes recent work on the development of high-performance multicore and accelerator-based implementations of sparse matrix-vector multiplication (SpMV). As an object of study, SpMV is an interesting computation for at least two reasons. First, it appears widely in applications in scientific and engineering computing, financial and economic modeling, and information retrieval, among others, and is therefore of great practical interest. Second, it is simple to describe yet challenging to implement well, since its performance is limited by a variety of factors, including low computational intensity, potentially highly irregular memory access behavior, and a strong input-dependence that can be known only at run time. Thus, we believe SpMV is both practically important and insightful for understanding the algorithmic and implementation principles necessary to make effective use of state-of-the-art systems.
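
For readers unfamiliar with the computation, the sketch below shows a minimal baseline SpMV kernel, y = A*x, with A stored in the common compressed sparse row (CSR) format. The function name, signature, and array names are illustrative, not taken from the chapter; the point is to make the performance limiters above concrete: only two flops per stored nonzero, and an indirect, input-dependent access to x.

```c
#include <stddef.h>

/* Illustrative baseline CSR SpMV kernel (hypothetical names/signature).
 *   row_ptr[i] .. row_ptr[i+1]-1 : indices of the nonzeros in row i
 *   col_idx[k]                   : column of the k-th stored nonzero
 *   val[k]                       : value  of the k-th stored nonzero
 */
void spmv_csr(size_t num_rows,
              const size_t *row_ptr,
              const size_t *col_idx,
              const double *val,
              const double *x,
              double       *y)
{
    for (size_t i = 0; i < num_rows; ++i) {
        double sum = 0.0;
        /* Each iteration performs two flops but loads val[k], col_idx[k],
         * and x[col_idx[k]]: low computational intensity, and the access
         * to x is irregular and depends entirely on the input matrix. */
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```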

Authors

Sam Williams (Lawrence Berkeley National Laboratory)
Nathan Bell (NVIDIA)
Jee Whan Choi (Georgia Institute of Technology)
Leonid Oliker (Lawrence Berkeley National Laboratory)
Richard Vuduc (Georgia Institute of Technology)

Publication Date