matthieufp committed (verified) · Commit 35bab0d · Parent(s): 103e697

Update README.md

Files changed (1): README.md +45 -3
README.md CHANGED
---
license: mit
---

[Github link to use the model](https://github.com/MatthieuFP/MAD_Speech)

MAD Speech is compatible with Python 3.10; we have not tested other Python versions.

[Read the paper (ACL Anthology)](https://aclanthology.org/2025.naacl-long.11.pdf)

<p align="justify"> Generative spoken language models produce speech in a wide range of voices, prosody, and recording conditions, seemingly approaching the diversity of natural speech. However, the extent to which generated speech is acoustically diverse remains unclear due to a lack of appropriate metrics. We address this gap by developing lightweight metrics of acoustic diversity, which we collectively refer to as MAD Speech. We focus on measuring five facets of acoustic diversity: voice, gender, emotion, accent, and background noise. We construct the metrics as a composition of specialized, per-facet embedding models and an aggregation function that measures diversity within the embedding space. Next, we build a series of datasets with a priori known diversity preferences for each facet. Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines. Finally, we showcase the applicability of our proposed metrics across several real-life evaluation scenarios. </p>

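To make the recipe above concrete (per-facet embeddings plus an aggregation over the embedding space), here is a toy sketch of one simple aggregation function, mean pairwise cosine dissimilarity. The embedding models and the exact aggregation MAD Speech uses are defined in the paper and repository; this is only an illustration of the general idea.

```python
import numpy as np

def mean_pairwise_cosine_dissimilarity(embeddings: np.ndarray) -> float:
    """Toy diversity score: average (1 - cosine similarity) over all
    distinct pairs of row vectors. Higher values mean more diversity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                     # pairwise cosine similarities
    iu = np.triu_indices(len(embeddings), k=1)   # distinct pairs only
    return float(np.mean(1.0 - sims[iu]))

# Identical embeddings give a score near 0; mutually orthogonal ones near 1.
print(mean_pairwise_cosine_dissimilarity(np.ones((4, 8))))
print(mean_pairwise_cosine_dissimilarity(np.eye(4, 8)))
```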

## Usage

MAD Speech outputs a diversity score for each facet of acoustic diversity given a set of audio samples. The samples are expected to be in *wav* format and located in the same directory. Run the script as follows:

```
python diversity_scores.py --path_audio /PATH/TO/AUDIO/FOLDER/
```

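Since the script reads *wav* files from a single directory, a quick sanity check of the folder can save a failed run. A minimal helper for that check (the function name is ours, not part of the package):

```python
from pathlib import Path

def list_wavs(audio_dir: str) -> list:
    """Return the sorted .wav files that a scoring run would consume,
    raising early if the directory contains none."""
    wavs = sorted(Path(audio_dir).glob("*.wav"))
    if not wavs:
        raise FileNotFoundError(f"no .wav files found in {audio_dir}")
    return wavs
```

Run it on the folder you plan to pass via `--path_audio` before launching the scoring script.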

## Reference

```
@inproceedings{futeral-etal-2025-mad,
    title = "{MAD} Speech: Measures of Acoustic Diversity of Speech",
    author = "Futeral, Matthieu and
      Agostinelli, Andrea and
      Tagliasacchi, Marco and
      Zeghidour, Neil and
      Kharitonov, Eugene",
    editor = "Chiruzzo, Luis and
      Ritter, Alan and
      Wang, Lu",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.naacl-long.11/",
    doi = "10.18653/v1/2025.naacl-long.11",
    pages = "222--235",
    ISBN = "979-8-89176-189-6",
    abstract = "Generative spoken language models produce speech in a wide range of voices, prosody, and recording conditions, seemingly approaching the diversity of natural speech. However, the extent to which generated speech is acoustically diverse remains unclear due to a lack of appropriate metrics. We address this gap by developing lightweight metrics of acoustic diversity, which we collectively refer to as MAD Speech. We focus on measuring five facets of acoustic diversity: voice, gender, emotion, accent, and background noise. We construct the metrics as a composition of specialized, per-facet embedding models and an aggregation function that measures diversity within the embedding space. Next, we build a series of datasets with a priori known diversity preferences for each facet. Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines. Finally, we showcase the applicability of our proposed metrics across several real-life evaluation scenarios. MAD Speech is made publicly available."
}
```