arXiv:2509.14882

Llama-Mimi: Speech Language Models with Interleaved Semantic and Acoustic Tokens

Published on Sep 18, 2025
Abstract

AI-generated summary: Llama-Mimi, a unified speech language model using a single Transformer decoder, achieves top performance in acoustic consistency and speaker identity while balancing acoustic fidelity and linguistic coherence.
We propose Llama-Mimi, a speech language model that uses a unified tokenizer and a single Transformer decoder to jointly model sequences of interleaved semantic and acoustic tokens. Comprehensive evaluation shows that Llama-Mimi achieves state-of-the-art performance in acoustic consistency and effectively preserves speaker identity. Our analysis further demonstrates that increasing the number of quantizers improves acoustic fidelity but degrades linguistic performance, highlighting the inherent challenge of maintaining long-term coherence. We additionally introduce an LLM-as-a-Judge-based evaluation to assess the spoken content quality of generated outputs. Our models, code, and speech samples are publicly available.
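The abstract describes a single decoder-only Transformer trained on one interleaved stream of semantic and acoustic tokens. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes the first Mimi quantizer level carries semantic tokens, the remaining levels carry acoustic tokens, and each codebook is offset so all levels share one disjoint vocabulary; the quantizer count and codebook size are placeholder values.

```python
# Minimal sketch (assumptions, not the paper's code): flattening Mimi codec
# tokens into a single interleaved sequence for a decoder-only Transformer.
import numpy as np

def interleave_mimi_tokens(codes: np.ndarray, codebook_size: int = 2048) -> np.ndarray:
    """Flatten codec tokens of shape (num_quantizers, num_frames) into one
    1-D sequence, frame by frame: [q0_t0, q1_t0, ..., q0_t1, q1_t1, ...].

    Level q is shifted by q * codebook_size so the single decoder sees
    disjoint token IDs for the semantic (q = 0) and acoustic (q >= 1)
    codebooks."""
    num_q, _ = codes.shape
    offsets = (np.arange(num_q) * codebook_size)[:, None]  # (num_q, 1)
    shifted = codes + offsets                               # keep codebook vocabularies disjoint
    return shifted.T.reshape(-1)                            # time-major (frame-by-frame) interleaving

# Example: 4 quantizer levels, 3 frames of dummy codec output.
codes = np.random.randint(0, 2048, size=(4, 3))
sequence = interleave_mimi_tokens(codes)
print(sequence.shape)  # (12,) -> one autoregressive stream for the decoder
```

Under this framing, the trade-off noted in the abstract is visible directly: adding quantizer levels lengthens the interleaved sequence per second of audio, which helps acoustic fidelity but pushes linguistically relevant tokens further apart, making long-term coherence harder for the decoder.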
