Partial Convolution based Padding
Published in arXiv, 2018
Recommended citation: Guilin Liu, Kevin J. Shih, Ting-Chun Wang, Fitsum A. Reda, Karan Sapra, Zhiding Yu, Andrew Tao, Bryan Catanzaro. "Partial Convolution based Padding." arXiv:1811.11718, 2018. https://arxiv.org/abs/1811.11718
In this paper, we present a simple yet effective padding scheme that can be used as a drop-in module for existing convolutional neural networks. We call it partial convolution based padding, with the intuition that the padded region can be treated as holes and the original input as non-holes. Specifically, during the convolution operation, the convolution results are re-weighted near image borders based on the ratios between the padded area and the convolution sliding window area. Extensive experiments with various deep network models on ImageNet classification and semantic segmentation demonstrate that the proposed padding scheme consistently yields better accuracy than standard zero padding.
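To make the re-weighting concrete, the snippet below is a minimal PyTorch sketch of the idea, not the module from the official repository linked under Code: the class name PartialConvPadding2d is an illustrative assumption. It zero-pads as usual, then scales each output by the ratio of the full sliding-window area to the number of non-padded pixels under that window, adding the bias only after scaling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConvPadding2d(nn.Conv2d):
    """Illustrative sketch (hypothetical class, not the official implementation):
    zero-pad as usual, then re-weight border outputs by
    (sliding-window area) / (number of non-padded pixels under the window)."""

    def forward(self, x):
        with torch.no_grad():
            # Mask of ones covering the original (non-padded) input region.
            mask = torch.ones(1, 1, x.size(2), x.size(3),
                              device=x.device, dtype=x.dtype)
            ones_kernel = torch.ones(1, 1, *self.kernel_size,
                                     device=x.device, dtype=x.dtype)
            # Count of valid (non-padded) pixels under each window position.
            valid = F.conv2d(mask, ones_kernel, stride=self.stride,
                             padding=self.padding, dilation=self.dilation)
            window_area = float(self.kernel_size[0] * self.kernel_size[1])
            ratio = window_area / valid.clamp(min=1.0)

        # Standard zero-padded convolution; bias is added after re-weighting
        # so that the bias term itself is not scaled.
        out = F.conv2d(x, self.weight, None, self.stride,
                       self.padding, self.dilation, self.groups)
        out = out * ratio
        if self.bias is not None:
            out = out + self.bias.view(1, -1, 1, 1)
        return out


# Usage: drop-in replacement for a zero-padded nn.Conv2d layer.
conv = PartialConvPadding2d(64, 128, kernel_size=3, padding=1)
y = conv(torch.randn(1, 64, 32, 32))  # borders re-weighted, interior unchanged
```

In this sketch the ratio equals 1 away from the borders, so the layer behaves exactly like a zero-padded nn.Conv2d in the interior; the tested implementation is available in the repository listed under Code.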
Comparison with Zero Padding
Comparison with Zero Padding, Reflection Padding, and Replication Padding over 5 runs
Best top-1 accuracies for each run with 1-crop testing. *_zero, *_pd, *_ref, and *_rep denote the corresponding model trained with zero padding, partial convolution based padding, reflection padding, and replication padding, respectively. *_best denotes the best validation accuracy for each training run. The Average column reports the mean accuracy over the 5 runs, the diff column the difference from the corresponding network with zero padding, and the stdev column the standard deviation of the accuracies across the 5 runs. PT_official lists the corresponding official accuracies published on the PyTorch website.
Code
https://github.com/NVIDIA/partialconv
Related
Image Inpainting for Irregular Holes Using Partial Convolutions