Towards Precision-Aware Fault Tolerance Approaches for Mixed-Precision Applications
Graphics Processing Units (GPUs), the dominant accelerators in HPC systems, are susceptible to transient hardware faults. A new generation of GPUs features mixed-precision architectures, such as NVIDIA Tensor Cores, to accelerate matrix multiplication. While widely adopted, how these architectures behave under transient hardware faults remains unclear. In this study, we conduct large-scale fault injection experiments on GEMM kernels implemented with different floating-point data types on the V100 and A100 Tensor Cores, and show distinct error resilience characteristics for GEMMs with different formats. We plan to explore this space further by building precision-aware floating-point fault tolerance techniques for applications, such as DNNs, that exercise low-precision computations.
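To illustrate the kind of fault model such experiments use, the following is a minimal sketch (not the study's actual injector, which targets real Tensor Core kernels) that simulates a single-bit flip in one float16 operand of a GEMM and shows how the fault's impact depends on which bit position is hit. All function names here are hypothetical.

```python
import numpy as np

def flip_bit_fp16(x, bit):
    """Flip one bit of a float16 value.
    IEEE 754 binary16 layout: bit 15 = sign, bits 14-10 = exponent, bits 9-0 = mantissa."""
    bits = np.float16(x).view(np.uint16)
    return np.uint16(bits ^ (1 << bit)).view(np.float16)

def inject_fault_gemm(A, B, row, col, bit):
    """Compute C = A @ B in float16 after flipping one bit of A[row, col]."""
    A = A.astype(np.float16).copy()
    A[row, col] = flip_bit_fp16(A[row, col], bit)
    return A @ B.astype(np.float16)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)).astype(np.float16)
B = rng.standard_normal((8, 8)).astype(np.float16)

golden = A @ B                                   # fault-free reference output
faulty_lsb = inject_fault_gemm(A, B, 0, 0, 0)    # mantissa LSB flip: small error
faulty_exp = inject_fault_gemm(A, B, 0, 0, 14)   # exponent MSB flip: large error

# A fault in row 0 of A can only corrupt row 0 of C; the per-bit error
# magnitude is what differs across floating-point formats.
```

Comparing the maximum deviation of `faulty_lsb` and `faulty_exp` from `golden` shows the asymmetry that motivates precision-aware protection: mantissa flips perturb the result slightly, while exponent flips can change its magnitude by orders of 2, and formats with wider exponents (e.g. bfloat16 vs. float16) expose different proportions of such high-impact bits.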
This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to email@example.com.