Feature Map Vulnerability Evaluation in CNNs

As Convolutional Neural Networks (CNNs) are increasingly employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors. Transient hardware errors may propagate undesirable state during execution, resulting in software-manifested errors that can adversely affect high-level decision making. We present HarDNN, a software-directed approach that identifies vulnerable computations during a CNN inference and selectively protects them based on their propensity to corrupt the inference output in the presence of a hardware error. We show that HarDNN can accurately estimate the relative vulnerability of individual feature maps in a CNN using a statistical error-injection campaign, and we explore heuristics for fast vulnerability assessment. Based on these results, we analyze the tradeoff between error coverage and computational overhead that system designers can use to guide selective protection. Results show that the improvement in resilience relative to the added computation is superlinear in many cases; for example, HarDNN improves SqueezeNet's resilience by 10× with only 30% additional computation.
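The core idea of a statistical error-injection campaign for ranking feature maps by vulnerability can be sketched with a toy numpy model. Everything below is illustrative, not HarDNN's actual implementation: the tiny "network tail" (global average pooling plus a linear classifier), the large-magnitude value used as a crude proxy for a bit-flip error, and all function names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(fmaps, w_fc):
    """Toy network tail: global-average-pool each channel, then classify linearly."""
    pooled = fmaps.mean(axis=(1, 2))        # (C,) one value per feature map
    return int(np.argmax(pooled @ w_fc))    # predicted class index

def channel_vulnerability(fmaps, w_fc, n_injections=100):
    """Estimate each feature map's vulnerability as the fraction of injected
    errors in that channel that change the final classification."""
    baseline = predict(fmaps, w_fc)
    C, H, W = fmaps.shape
    vuln = np.zeros(C)
    for c in range(C):
        mismatches = 0
        for _ in range(n_injections):
            y, x = rng.integers(H), rng.integers(W)
            corrupted = fmaps.copy()
            # Crude stand-in for a hardware bit flip: a large-magnitude error
            corrupted[c, y, x] = 1e6 * rng.choice([-1.0, 1.0])
            if predict(corrupted, w_fc) != baseline:
                mismatches += 1
        vuln[c] = mismatches / n_injections
    return vuln

fmaps = rng.standard_normal((8, 4, 4))  # 8 feature maps of size 4x4
w_fc = rng.standard_normal((8, 10))     # classifier weights over pooled channels
vuln = channel_vulnerability(fmaps, w_fc)
ranking = np.argsort(vuln)[::-1]        # protect the most vulnerable maps first
```

Given such a ranking, a designer could duplicate (or otherwise check) only the top-ranked feature maps, trading a small amount of extra computation for coverage of the errors most likely to corrupt the output.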


Abdulrahman Mahmoud (University of Illinois at Urbana-Champaign)
Christopher W. Fletcher (University of Illinois at Urbana-Champaign)
Sarita V. Adve (University of Illinois at Urbana-Champaign)
Charbel Sakr (University of Illinois at Urbana-Champaign)
Naresh Shanbhag (University of Illinois at Urbana-Champaign)
Timothy Tsai (NVIDIA)
