Optimal Quantization Using Scaled Codebook

We study the problem of quantizing N sorted scalar datapoints with a fixed codebook of K entries that is allowed to be rescaled. The problem is to find the optimal scaling factor alpha and the assignment of each datapoint to an entry of the alpha-scaled codebook so as to minimize the squared error between the original and quantized points. Previously, globally optimal algorithms for this problem were known only for certain codebooks (binary and ternary) or under assumptions on the data distribution (Gaussian, Laplacian). By studying the properties of the optimal quantizer, we derive an O(NK log K) algorithm that is guaranteed to find the optimal quantization parameters for any fixed codebook, regardless of the data distribution. We apply our algorithm to synthetic and real-world neural network quantization problems and demonstrate the effectiveness of our approach.
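To make the objective concrete, the sketch below (a hypothetical helper, `quantize_with_scaled_codebook`) illustrates the scaled-codebook quantization problem with a naive alternating scheme: assign each point to its nearest scaled codeword, then update alpha in closed form by least squares. This is only an illustrative local-search baseline, not the paper's globally optimal O(NK log K) algorithm.

```python
import numpy as np

def quantize_with_scaled_codebook(x, codebook, n_iters=50):
    """Naive alternating minimization of sum_i (x[i] - alpha * codebook[a[i]])**2
    over the scale alpha and assignments a.

    Illustrative baseline only; it can get stuck in local optima and is
    NOT the globally optimal algorithm described in the paper.
    """
    x = np.asarray(x, dtype=float)
    c = np.asarray(codebook, dtype=float)
    # Initialize alpha so the scaled codebook roughly spans the data range.
    alpha = np.abs(x).max() / (np.abs(c).max() + 1e-12)
    assign = None
    for _ in range(n_iters):
        # Assignment step: nearest scaled codeword for each datapoint.
        new_assign = np.argmin(np.abs(x[:, None] - alpha * c[None, :]), axis=1)
        if assign is not None and np.array_equal(new_assign, assign):
            break  # assignments stable, converged
        assign = new_assign
        # Scale step: closed-form least-squares update of alpha.
        cq = c[assign]
        denom = np.dot(cq, cq)
        if denom > 0:
            alpha = np.dot(x, cq) / denom
    error = np.sum((x - alpha * c[assign]) ** 2)
    return alpha, assign, error

# Example: quantize Gaussian-distributed weights onto a fixed 4-entry codebook.
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=1000))
alpha, assign, err = quantize_with_scaled_codebook(x, np.array([-2.0, -1.0, 1.0, 2.0]))
print(f"alpha={alpha:.4f}, squared error={err:.4f}")
```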

Authors

Yerlan Idelbayev (UC Merced)
Maying Shen (NVIDIA)
Miguel A. Carreira-Perpinan (UC Merced)
Jose M. Alvarez (NVIDIA)
