ExTensor: An Accelerator for Sparse Tensor Algebra

Generalized tensor algebra is a prime candidate for acceleration via customized ASICs. Modern tensors feature a wide range of data sparsity, with the density of non-zero elements ranging from 10^-6% to 50%. This paper proposes a novel approach to accelerate tensor kernels based on the principle of hierarchical elimination of computation in the presence of sparsity. This approach relies on rapidly finding intersections -- situations where both operands of a multiplication are non-zero -- enabling new data-fetching mechanisms and avoiding the memory-latency overheads associated with sparse kernels implemented in software. We propose the ExTensor accelerator, which builds these novel sparsity-handling ideas into hardware to enable better bandwidth utilization and compute throughput. We evaluate ExTensor on several kernels relative to industry libraries (Intel MKL) and state-of-the-art tensor algebra compilers (TACO). When bandwidth-normalized, we demonstrate average speedups of 3.4x, 1.3x, 2.8x, 24.9x, and 2.7x on SpMSpM, SpMM, TTV, TTM, and SDDMM kernels, respectively, over a server-class CPU.
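The intersection principle described above can be illustrated in software. The following is a minimal sketch (not the paper's hardware design) of a two-pointer intersection over two sparse vectors stored as sorted (coordinate, value) lists; the function name and data layout are illustrative assumptions:

```python
def intersect_multiply(a, b):
    """Two-pointer intersection of two sparse vectors, each stored as a
    sorted list of (coordinate, value) pairs. Only coordinates present
    in BOTH operands produce a multiplication; all other products are
    zero and are skipped, which is the elimination idea in the abstract."""
    i, j, out = 0, 0, []
    while i < len(a) and j < len(b):
        ca, va = a[i]
        cb, vb = b[j]
        if ca == cb:
            out.append((ca, va * vb))  # both operands non-zero: effectual multiply
            i += 1
            j += 1
        elif ca < cb:
            i += 1  # coordinate only in a: product is zero, skip it
        else:
            j += 1  # coordinate only in b: product is zero, skip it
    return out

# Sparse dot product via intersection: only coordinates 3 and 7 match.
x = [(0, 2.0), (3, 1.0), (7, 4.0)]
y = [(3, 5.0), (4, 6.0), (7, 0.5)]
print(sum(v for _, v in intersect_multiply(x, y)))  # 7.0
```

Applying this intersection hierarchically, at coarse tile granularity before fine element granularity, lets entire regions of zero work be discarded early rather than fetched and tested element by element.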


Authors

Kartik Hegde (University of Illinois at Urbana-Champaign)
Hadi Asghari-Moghaddam (University of Illinois at Urbana-Champaign)
Edgar Solomonik (University of Illinois at Urbana-Champaign)
Christopher W. Fletcher (University of Illinois at Urbana-Champaign)

Award

IEEE Micro Top Picks in Computer Architecture (Honorable Mention)