Parallel Computing Experiences with CUDA

The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA’s Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems, and reports the parallel speedups attained over sequential codes on traditional CPU architectures by executing key computations on the GPU.
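To make the programming model concrete, a minimal sketch (not taken from the article) of how a data-parallel loop maps onto CUDA: a SAXPY computation assigns one GPU thread per vector element. The kernel and variable names here are illustrative only, using the CUDA C API of that era.

```cuda
#include <cstdio>

// One thread computes one element of y = a*x + y (SAXPY).
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the partial final block
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate device memory and copy inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element: a grid of 256-thread blocks.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

The explicit host/device copies reflect the discrete-memory model the surveyed applications programmed against; the sequential loop body becomes the kernel, and the loop index becomes the thread index.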

Authors: 
Scott Le Grand (NVIDIA)
John Nickolls (NVIDIA)
Joshua Anderson (Iowa State)
Jim Hardwick (TechniScan Medical Systems)
Scott Morton (Hess)
Everett Phillips (UC Davis)
Yao Zhang (UC Davis)
Vasily Volkov (UC Berkeley)
Publication Date: 
Friday, August 1, 2008