Abstract |
---|
End-to-end automatic speech recognition (ASR) models with a single neural network have recently demonstrated state-of-the-art results compared to conventional hybrid speech recognizers. Specifically, the recurrent neural network transducer (RNN-T) has shown competitive ASR performance on various benchmarks. In this work, we examine ways in which RNN-T can achieve better ASR accuracy by performing auxiliary tasks. We propose (i) using the primary RNN-T ASR task itself as an auxiliary task, and (ii) performing context-dependent graphemic state prediction as in conventional hybrid modeling. On the task of transcribing social media videos with varying amounts of training data, we first evaluate streaming ASR performance on three languages: Romanian, Turkish, and German. We find that both proposed methods provide consistent improvements. Next, we observe that both auxiliary tasks demonstrate efficacy in learning deep transformer encoders with the RNN-T criterion, thus achieving competitive results (2.0%/4.2% WER on LibriSpeech test-clean/other) compared to prior top-performing models. |
Year | DOI | Venue
---|---|---
2021 | 10.1109/SLT48900.2021.9383548 | 2021 IEEE Spoken Language Technology Workshop (SLT) |
Keywords | DocType | ISSN
---|---|---
recurrent neural network transducer, speech recognition, auxiliary learning | Conference | 2639-5479
ISBN | Citations | PageRank
---|---|---
978-1-7281-7067-1 | 1 | 0.35
References | Authors
---|---
0 | 6
Name | Order | Citations | PageRank
---|---|---|---
Chunxi Liu | 1 | 8 | 4.20 |
Frank Zhang | 2 | 10 | 6.00 |
Duc-Trong Le | 3 | 15 | 6.08 |
Suyoun Kim | 4 | 35 | 6.15 |
Yatharth Saraf | 5 | 3 | 1.07 |
Geoffrey Zweig | 6 | 3406 | 320.25 |