How do Multimodal Foundation Models Encode Text and Speech? An Analysis of Cross-Lingual and Cross-Modal Representations
Abstract
Multimodal foundation models aim to create a unified representation space that abstracts away from surface features such as language syntax or modality differences. To investigate this, we study the internal representations of three recent models, analyzing activations from semantically equivalent sentences across languages in the text and speech modalities. Our findings reveal that: 1) Cross-modal representations converge over model layers, except in the initial layers, which are specialized in text and speech processing. 2) Length adaptation is crucial for reducing the cross-modal gap between text and speech, although the effectiveness of current approaches is largely limited to high-resource languages. 3) Speech exhibits larger cross-lingual differences than text. 4) For models not explicitly trained for modality-agnostic representations, the modality gap is more prominent than the language gap.
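To illustrate the kind of analysis described above, the following is a minimal sketch (not the authors' code) of how layer-wise cross-modal similarity could be measured for a parallel text/speech sentence pair. It assumes you have already extracted hypothetical `text_states` and `speech_states` arrays of per-layer hidden states (for example, the `hidden_states` returned by an encoder, stacked into shape `(num_layers, seq_len, dim)`); sequence lengths may differ between modalities, and mean pooling removes that axis before comparison.

```python
# Minimal sketch, assuming per-layer hidden states are already extracted
# for one text sentence and its spoken counterpart (hypothetical inputs).
import numpy as np

def layerwise_cross_modal_similarity(text_states: np.ndarray,
                                     speech_states: np.ndarray) -> np.ndarray:
    """Cosine similarity between mean-pooled text and speech representations,
    one value per layer. Inputs have shape (num_layers, seq_len, dim);
    the two sequence lengths may differ (speech frames are usually longer)."""
    text_vec = text_states.mean(axis=1)      # (num_layers, dim)
    speech_vec = speech_states.mean(axis=1)  # (num_layers, dim)
    num = (text_vec * speech_vec).sum(axis=-1)
    den = np.linalg.norm(text_vec, axis=-1) * np.linalg.norm(speech_vec, axis=-1)
    return num / np.clip(den, 1e-12, None)

# Toy example: 12 layers, random activations standing in for a parallel pair.
rng = np.random.default_rng(0)
sims = layerwise_cross_modal_similarity(rng.normal(size=(12, 20, 256)),
                                        rng.normal(size=(12, 180, 256)))
print(np.round(sims, 3))  # one cross-modal similarity score per layer
```

Tracking such a per-layer score across languages and modalities is one way to visualize whether representations converge in deeper layers, as the abstract reports.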