arXiv:2305.05084

Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition

Published on May 8, 2023

Abstract

Conformer-based models have become the dominant end-to-end architecture for speech processing tasks. With the objective of enhancing the Conformer architecture for efficient training and inference, we carefully redesigned Conformer with a novel downsampling schema. The proposed model, named Fast Conformer (FC), is 2.8x faster than the original Conformer, supports scaling to a billion parameters without any changes to the core architecture, and achieves state-of-the-art accuracy on Automatic Speech Recognition benchmarks. To enable transcription of long-form speech up to 11 hours, we replaced global attention with limited-context attention post-training, while also improving accuracy through fine-tuning with the addition of a global token. Fast Conformer, when combined with a Transformer decoder, also outperforms the original Conformer in accuracy and speed for Speech Translation and Spoken Language Understanding.

AI-generated summary

A redesigned Conformer model with a novel downsampling schema and limited-context attention enables faster training and inference, supports scaling to a billion parameters, and achieves state-of-the-art accuracy in speech processing tasks.
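The long-form recipe described in the abstract (global attention swapped for limited-context attention post-training, plus a global token) can be illustrated with a small attention-masking sketch. This is a minimal illustration under assumed details, not the paper's implementation: the window size (128 frames), the use of a single prepended global token, and PyTorch's scaled_dot_product_attention are all choices made here for demonstration only.

```python
# Illustrative sketch (not the authors' code): limited-context self-attention
# with one global token. Window size and the single prepended global token
# are assumptions for demonstration purposes.
import torch
import torch.nn.functional as F

def limited_context_mask(seq_len: int, window: int, num_global: int = 1) -> torch.Tensor:
    """Boolean mask (True = may attend). Positions 0..num_global-1 act as
    global tokens that attend to and are attended by every position; all
    other positions only see neighbours within +/- `window` frames."""
    total = num_global + seq_len
    idx = torch.arange(total)
    # local band: |i - j| <= window
    mask = (idx[:, None] - idx[None, :]).abs() <= window
    # global tokens get full rows and columns
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

# Toy example: 8-head attention over 1000 downsampled frames with a
# 128-frame context window and one prepended global token.
B, H, T, D, W = 1, 8, 1000, 64, 128
x = torch.randn(B, H, T + 1, D)            # +1 for the global token
attn_mask = limited_context_mask(T, W)     # (T+1, T+1), broadcast over batch/heads
out = F.scaled_dot_product_attention(x, x, x, attn_mask=attn_mask)
print(out.shape)  # torch.Size([1, 8, 1001, 64])
```

Because each regular frame only attends within its window, attention cost grows linearly with sequence length rather than quadratically, which is what makes transcribing hours-long audio feasible.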

Models citing this paper: 54
Datasets citing this paper: 0
Spaces citing this paper: 80
Collections including this paper: 1
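As a rough pointer for trying one of the checkpoints linked above, the following is a hypothetical usage sketch assuming the NVIDIA NeMo toolkit and an example checkpoint identifier (nvidia/stt_en_fastconformer_ctc_large); the actual model names are those listed on this page and may differ.

```python
# Hypothetical usage sketch, assuming the NVIDIA NeMo toolkit and an example
# Fast Conformer checkpoint name; the models linked on this page may differ.
import nemo.collections.asr as nemo_asr

# Load a pretrained Fast Conformer CTC model by name.
model = nemo_asr.models.ASRModel.from_pretrained("nvidia/stt_en_fastconformer_ctc_large")

# Transcribe a local audio file (16 kHz mono WAV assumed).
transcripts = model.transcribe(["sample.wav"])
print(transcripts[0])
```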