Towards Precision-Aware Fault Tolerance Approaches for Mixed-Precision Applications

Graphics Processing Units (GPUs), the dominant accelerators in HPC systems, are susceptible to transient hardware faults. A new generation of GPUs features mixed-precision architectures, such as NVIDIA Tensor Cores, to accelerate matrix multiplications. While widely adopted, how these architectures behave under transient hardware faults remains unclear. In this study, we conduct large-scale fault injection experiments on GEMM kernels implemented with different floating-point data types on the V100 and A100 Tensor Cores, and we show that GEMMs with different formats exhibit distinct error resilience characteristics. In future work, we plan to explore this space by building precision-aware floating-point fault tolerance techniques for applications, such as DNNs, that exercise low-precision computations.
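The study itself uses GPU-level fault injection on Tensor Core GEMMs; as a rough CPU-side illustration of the same idea, the sketch below (a minimal, assumed setup, not the authors' tooling) flips one bit of a single input element of a float32 GEMM and measures how the error propagates into the product. The matrix sizes, the injected element, and the chosen bit position are arbitrary assumptions for illustration.

```python
import struct
import numpy as np

def flip_bit_fp32(x, bit):
    """Simulate a transient fault: flip one bit of a float32 value."""
    (i,) = struct.unpack("<I", struct.pack("<f", np.float32(x)))
    i ^= 1 << bit
    (y,) = struct.unpack("<f", struct.pack("<I", i))
    return np.float32(y)

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n)).astype(np.float32)
B = rng.standard_normal((n, n)).astype(np.float32)

golden = A @ B  # fault-free reference result

# Inject a single bit flip into one element of A, then recompute.
# Bit 30 is a high exponent bit, so the corrupted value differs greatly.
Af = A.copy()
Af[3, 7] = flip_bit_fp32(Af[3, 7], bit=30)
faulty = Af @ B

# In a GEMM, a single corrupted input element of A only perturbs
# one row of the output, which bounds the blast radius of the fault.
max_abs_err = np.abs(faulty - golden).max()
```

Repeating such campaigns over many injection sites, bit positions, and number formats (FP32, FP16, BF16, etc.) is what exposes the format-dependent resilience differences the abstract describes; lower-precision formats devote different bit budgets to exponent and mantissa, so the same flipped bit position can mean very different error magnitudes.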

Bo Fang (Pacific Northwest National Laboratory)
Xinyi Li (University of Utah)
Ganesh Gopalakrishnan (University of Utah)
Ignacio Laguna (Lawrence Livermore National Laboratory)
Kevin Barker (Pacific Northwest National Laboratory)
Ang Li (Pacific Northwest National Laboratory)