QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions

We propose QuartzNet, a new end-to-end neural acoustic model for automatic speech recognition. The model is composed of multiple blocks with residual connections between them. Each block consists of one or more modules with 1D time-channel separable convolutional layers, batch normalization, and ReLU layers, and the network is trained with CTC loss. The proposed model achieves near-state-of-the-art accuracy on LibriSpeech and Wall Street Journal while having fewer parameters than all competing models. We also demonstrate that it can be effectively fine-tuned on new datasets.
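
As a concrete illustration of the block structure described above, the sketch below implements one such block in PyTorch. It is not the authors' reference code: the channel count (256), kernel size (33), and module count (3) are illustrative placeholders, and the residual path is a plain identity rather than the projected (1x1 conv plus batch norm) skip a full model would use.

    import torch
    import torch.nn as nn

    class TimeChannelSeparableConv(nn.Module):
        """1D time-channel separable convolution: a depthwise conv over the
        time axis followed by a pointwise (1x1) conv that mixes channels."""
        def __init__(self, channels, kernel_size):
            super().__init__()
            self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                       padding=kernel_size // 2, groups=channels)
            self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)

        def forward(self, x):  # x: (batch, channels, time)
            return self.pointwise(self.depthwise(x))

    class QuartzBlock(nn.Module):
        """One block: `repeat` modules of separable conv -> batch norm -> ReLU,
        with a residual connection added before the block's final ReLU."""
        def __init__(self, channels, kernel_size, repeat=3):
            super().__init__()
            modules = []
            for i in range(repeat):
                modules += [TimeChannelSeparableConv(channels, kernel_size),
                            nn.BatchNorm1d(channels)]
                if i < repeat - 1:  # last ReLU is applied after the residual add
                    modules.append(nn.ReLU())
            self.body = nn.Sequential(*modules)

        def forward(self, x):
            return torch.relu(x + self.body(x))

    x = torch.randn(4, 256, 100)                      # (batch, channels, time)
    block = QuartzBlock(channels=256, kernel_size=33)
    print(block(x).shape)                             # torch.Size([4, 256, 100])

A full model would stack several such blocks between a strided input convolution and a pointwise classification head, training on the per-frame outputs with torch.nn.CTCLoss.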

Authors

Samuel Kriman (Univ. of Illinois Urbana-Champaign)
Stanislav Beliaev (Higher School of Economics, Univ. of Saint Petersburg)
Boris Ginsburg (NVIDIA)
Jocelyn Huang (NVIDIA)
Oleksii Kuchaiev (NVIDIA)
Vitaly Lavrukhin (NVIDIA)
Ryan Leary (NVIDIA)
Jason Li (NVIDIA)
Yang Zhang (NVIDIA)
