Task Bench: A Parameterized Benchmark for Evaluating Parallel Runtime Performance

We present Task Bench, a parameterized benchmark designed to explore the performance of parallel and distributed programming systems under a variety of application scenarios. Task Bench lowers the barrier to benchmarking multiple programming systems by making the implementation for a given system orthogonal to the benchmarks themselves: every benchmark constructed with Task Bench runs on every Task Bench implementation. Furthermore, Task Bench's parameterization enables a wide variety of benchmark scenarios that distill the key characteristics of larger applications. We conduct a comprehensive study with implementations of Task Bench in 15 programming systems on up to 256 Haswell nodes of the Cori supercomputer. We introduce a novel metric, minimum effective task granularity (METG), to study the baseline runtime overhead of each system. We show that when running at scale, 100 μs is the smallest granularity that even the most efficient systems can reliably support with current technologies. We also study each system's scalability, its ability to hide communication, and its ability to mitigate load imbalance.
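To make the metric concrete, here is a minimal sketch (in Python, not taken from the paper or the Task Bench code) of how METG could be estimated from empirical measurements. The helper `measure_efficiency` is a hypothetical stand-in for running a Task Bench configuration at a given task granularity and reporting achieved efficiency as a fraction of peak; a 50% efficiency threshold corresponds to the METG(50%) variant.

```python
# A minimal sketch (assumptions, not the paper's code) of estimating
# minimum effective task granularity (METG) empirically.
# `measure_efficiency` is a hypothetical callback: it runs a benchmark
# with tasks of the given duration (in microseconds) and returns the
# achieved efficiency as a fraction of peak useful work.

def metg(granularities_us, measure_efficiency, threshold=0.5):
    """Smallest task granularity (us) at which measured efficiency
    stays at or above `threshold`; METG(50%) uses threshold=0.5."""
    passing = [g for g in granularities_us
               if measure_efficiency(g) >= threshold]
    return min(passing) if passing else None

# Example sweep from coarse (1 ms) to fine (1 us) tasks. Efficiency
# typically falls as per-task runtime overhead dominates useful work.
grains = [1000.0, 500.0, 100.0, 50.0, 10.0, 1.0]
# print(metg(grains, measure_efficiency))  # e.g. 100.0 for an efficient system
```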

Authors

Elliott Slaughter (SLAC National Accelerator Laboratory)
Wei Wu (Los Alamos National Laboratory)
Yuankun Fu (Purdue University)
Legend Brandenburg (Stanford University)
Nicolai Garcia (Stanford University)
Wilhem Kautz (Stanford University)
Emily Marx (Stanford University)
Kaleb S. Morris (Stanford University)
Qinglei Cao (University of Tennessee, Knoxville)
George Bosilca (University of Tennessee, Knoxville)
Seema Mirchandaney (SLAC National Accelerator Laboratory)
Wonchan Lee (NVIDIA)
Sean Treichler (NVIDIA)
Patrick McCormick (Los Alamos National Laboratory)
Alex Aiken (Stanford University)

Publication Date