SONAR: Sentence-Level Multimodal and Language-Agnostic Representations
Abstract
SONAR is a multilingual and multimodal fixed-size sentence embedding space that outperforms existing sentence embeddings and speech encoders on multilingual similarity search, and enables competitive zero-shot text-to-text and speech-to-text translation.
We introduce SONAR, a new multilingual and multimodal fixed-size sentence embedding space. Our single text encoder, covering 200 languages, substantially outperforms existing sentence embeddings such as LASER3 and LaBSE on the xsim and xsim++ multilingual similarity search tasks. Speech segments can be embedded in the same SONAR embedding space using language-specific speech encoders trained in a teacher-student setting on speech transcription data. Our encoders outperform existing speech encoders on similarity search tasks. We also provide a text decoder for 200 languages, which allows us to perform text-to-text and speech-to-text machine translation, including for zero-shot language and modality combinations. Our text-to-text results are competitive with the state-of-the-art NLLB 1B model, despite the fixed-size bottleneck representation. Our zero-shot speech-to-text translation results compare favorably with strong supervised baselines such as Whisper.
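For illustration, here is a minimal sketch of encoding sentences into the shared fixed-size space and decoding them into another language, assuming the publicly released `sonar-space` Python package (facebookresearch/SONAR); the pipeline classes and checkpoint card names are taken from its README and may differ across versions.

```python
# Minimal sketch, assuming the `sonar-space` package from facebookresearch/SONAR.
# Pipeline class names and checkpoint cards ("text_sonar_basic_encoder",
# "text_sonar_basic_decoder") are assumptions based on the project README.
from sonar.inference_pipelines.text import (
    TextToEmbeddingModelPipeline,
    EmbeddingToTextModelPipeline,
)

# Encode sentences from any of the 200 supported languages into the shared
# fixed-size embedding space (1024-dimensional vectors, one per sentence).
t2vec = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder",
    tokenizer="text_sonar_basic_encoder",
)
embeddings = t2vec.predict(
    ["Bonjour, comment allez-vous ?"], source_lang="fra_Latn"
)

# Decode the same fixed-size vectors into another language. Because the encoder
# and decoder communicate only through the bottleneck embedding, this also
# covers zero-shot language combinations.
vec2text = EmbeddingToTextModelPipeline(
    decoder="text_sonar_basic_decoder",
    tokenizer="text_sonar_basic_encoder",
)
translations = vec2text.predict(embeddings, target_lang="eng_Latn", max_seq_len=512)
print(translations)
```

The speech encoders described in the abstract follow the same pattern: a language-specific speech pipeline maps audio to vectors in the same space, which the shared text decoder can then turn into text in any of the 200 languages.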
Community
arXiv explained breakdown of this paper 👉 https://arxivexplained.com/papers/sonar-sentence-level-multimodal-and-language-agnostic-representations