# NonverbalTTS Dataset 🎵🗣️

[DOI: 10.5281/zenodo.15274617](https://doi.org/10.5281/zenodo.15274617)
[🤗 Hugging Face](https://huggingface.co/datasets/deepvk/NonverbalTTS)

**NonverbalTTS** is a 17-hour open-access English speech corpus with aligned text annotations for **nonverbal vocalizations (NVs)** and **emotional categories**, designed to advance expressive text-to-speech (TTS) research.

## Key Features ✨

- **17 hours** of high-quality speech data
- **10 NV types**: breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2,296 speakers (60% male, 40% female)
- **Multi-source**: derived from the [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) and [Expresso](https://arxiv.org/abs/2308.05725) corpora
- **Rich metadata**: emotion labels, NV annotations, speaker IDs, audio quality metrics

<!-- ## Dataset Structure 📁

NonverbalTTS/
├── wavs/                 # Audio files (16-48 kHz WAV format)
│   ├── ex01_sad_00265.wav
│   └── ...
├── .gitattributes
├── README.md
└── metadata.csv          # Metadata annotations -->

## Metadata Schema (`metadata.csv`) 📋

| Column | Description | Example |
|--------|-------------|---------|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | `"So, Mom, 🌬️ how've you been?"` |
| `Annotator response {1,2,3}` | Refined transcriptions | `"So, Mom, how've you been?"` |
| `Result` | Final fused transcription | `"So, Mom, 🌬️ how've you been?"` |
| `dnsmos` | Audio quality score (1-5) | `3.936982` |
| `duration` | Audio length (seconds) | `3.6338125` |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | `Expresso` |
| `gender` | Speaker gender | `m` |

**NV symbols**: 🌬️ = breath, 😂 = laughter, etc. (see the [Annotation Guidelines](https://zenodo.org/records/15274617)).
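
As a quick way to work with the annotations, the metadata can be loaded with pandas. A minimal sketch, assuming a local copy of `metadata.csv` with the columns above; the quality and duration thresholds are arbitrary illustrations, not values from the dataset:

```python
import pandas as pd

# Load the annotation table (assumes metadata.csv is in the working directory).
meta = pd.read_csv("metadata.csv")

# Keep reasonably clean, longer clips of a single emotion.
sad_clips = meta[
    (meta["Emotion"] == "sad")
    & (meta["dnsmos"] >= 3.5)      # example quality cutoff
    & (meta["duration"] >= 2.0)    # example minimum length in seconds
]

print(len(sad_clips), "clips selected")
print(sad_clips[["file_name", "duration", "dnsmos"]].head())
```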

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```

<!-- # Access the train split
print(dataset["train"][0])
# Output: {'index': 'ex01_sad_00265', 'file_name': 'wavs/ex01_sad_00265.wav', ...}
-->
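
A slightly fuller inspection loop is sketched below. It assumes the Hub dataset exposes the `metadata.csv` columns as sample fields and decodes audio through an `audio` feature, which is the usual `datasets` convention but an assumption here, not something this README confirms:

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS", split="train")

# Peek at a few samples; field names follow the metadata schema above.
for sample in dataset.select(range(3)):
    print(sample["Emotion"], sample["Result"])
    # If an `audio` feature is present (assumed, per HF convention):
    # waveform = sample["audio"]["array"]
    # sr = sample["audio"]["sampling_rate"]
```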

## Annotation Pipeline 🔧

1. **Automatic Detection**
   - NV detection using [BEATs](https://arxiv.org/abs/2409.09546)
   - Emotion classification with [emotion2vec+](https://arxiv.org/abs/2402.XXX)
   - ASR transcription via the Canary model

2. **Human Validation**
   - 3 annotators per sample
   - Filtering of non-English and multi-speaker clips
   - NV/emotion validation and refinement

3. **Fusion Algorithm**
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging (see the sketch below)
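
The fusion step can be illustrated with a toy majority vote over annotator hypotheses. This is a deliberately simplified sketch: the actual pipeline aligns token sequences with pyalign before voting, whereas here plain padding stands in for alignment, and `fuse_transcriptions` is a hypothetical helper name:

```python
from collections import Counter

def fuse_transcriptions(hypotheses):
    """Positional majority vote over annotator token streams.

    Stand-in for the pyalign-based alignment used in the real pipeline.
    """
    token_streams = [h.split() for h in hypotheses]
    max_len = max(len(t) for t in token_streams)
    padded = [t + [""] * (max_len - len(t)) for t in token_streams]

    fused = []
    for column in zip(*padded):              # vote position by position
        token, _ = Counter(column).most_common(1)[0]
        if token:                            # drop padding if it wins
            fused.append(token)
    return " ".join(fused)

print(fuse_transcriptions([
    "So, Mom, how've you been?",
    "So Mom, how've you been?",
    "So, Mom, how've you been",
]))  # -> "So, Mom, how've you been?"
```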

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric | NVTTS | CosyVoice2 |
|--------|-------|------------|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard (Laugh) | 0.92 | 0.74 |
| Human Preference | 33.4% | 35.4% |
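
NV Jaccard can be read as set overlap between the NVs a system produces and those in the reference. A minimal sketch, assuming the metric compares per-utterance NV tag sets (the exact formulation lives in the paper, not in this README):

```python
def nv_jaccard(predicted, reference):
    """Jaccard overlap of predicted vs. reference NV tags (assumed form)."""
    pred, ref = set(predicted), set(reference)
    if not pred and not ref:
        return 1.0  # both silent on NVs: count as perfect agreement
    return len(pred & ref) / len(pred | ref)

# A model that adds a spurious breath next to a correct laugh:
print(nv_jaccard({"laughter", "breath"}, {"laughter"}))  # 0.5
```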

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- Annotations: CC BY-NC-SA 4.0
- Audio: adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📚

```bibtex
@dataset{nonverbaltts2024,
  author    = {Anonymous},
  title     = {NonverbalTTS Dataset},
  month     = dec,
  year      = 2024,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
```