---
tags:
- audio
license: apache-2.0
language:
- en
pretty_name: NonverbalTTS
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: default/train/**
- split: dev
path: default/dev/**
- split: test
path: default/test/**
- split: other
path: default/other/**
task_categories:
- text-to-speech
---
# NonverbalTTS Dataset 🎵🗣️
[arXiv:2507.13155](https://arxiv.org/abs/2507.13155)
[deepvk/NonverbalTTS on Hugging Face](https://huggingface.co/datasets/deepvk/NonverbalTTS)
**NonverbalTTS** is a 17-hour open-access English speech corpus with aligned text annotations for **nonverbal vocalizations (NVs)** and **emotional categories**, designed to advance expressive text-to-speech (TTS) research.
## Key Features ✨
- **17 hours** of high-quality speech data
- **10 NV types**: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: Angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2296 speakers (60% male, 40% female)
- **Multi-source**: Derived from [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) and [Expresso](https://speechbot.github.io/expresso/) corpora
- **Rich metadata**: Emotion labels, NV annotations, speaker IDs, audio quality metrics
- **Sampling rate**: 16 kHz for audio from VoxCeleb, 48 kHz for audio from Expresso (see the resampling sketch below)
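Because the two source corpora ship at different sampling rates, it is usually convenient to cast everything to a single rate at load time. Below is a minimal sketch using the `datasets` `Audio` cast; the `audio` column name is an assumption, so check the features of the loaded split.
```python
from datasets import Audio, load_dataset

# Load the training split (see the loading section below).
dataset = load_dataset("deepvk/NonverbalTTS", split="train")

# VoxCeleb clips are 16 kHz and Expresso clips are 48 kHz; casting the
# (assumed) "audio" column makes `datasets` resample on the fly at decode time.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

print(dataset[0]["audio"]["sampling_rate"])  # 16000
```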
<!-- ## Dataset Structure 📂
NonverbalTTS/
├── wavs/                 # Audio files (16-48 kHz WAV format)
│   ├── ex01_sad_00265.wav
│   └── ...
├── .gitattributes
├── README.md
└── metadata.csv          # Metadata annotations -->
<!-- ## Metadata Schema (`metadata.csv`) 📝
| Column | Description | Example |
|--------|-------------|---------|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | `"So, Mom, 🌬️ how've you been?"` |
| `Annotator response {1,2,3}` | Refined transcriptions | `"So, Mom, how've you been?"` |
| `Result` | Final fused transcription | `"So, Mom, 🌬️ how've you been?"` |
| `dnsmos` | Audio quality score (1-5) | `3.936982` |
| `duration` | Audio length (seconds) | `3.6338125` |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | `Expresso` |
| `gender` | Speaker gender | `m` | -->
<!-- **NV Symbols**: 🌬️=Breath, 😂=Laughter, etc. (See [Annotation Guidelines](https://zenodo.org/records/15274617)) -->
## Loading the Dataset 💻
```python
from datasets import load_dataset
dataset = load_dataset("deepvk/NonverbalTTS")
```
<!-- # Access train split
```print(dataset["train"][0])```
# Output: {'index': 'ex01_sad_00265', 'file_name': 'wavs/ex01_sad_00265.wav', ...}
-->
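Each example carries the decoded audio together with the metadata fields described above. Here is a short example of inspecting one sample and filtering by emotion and duration; the `Emotion` and `duration` column names follow the metadata schema and should be verified against the loaded features.
```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")

# Inspect the fields of one training example.
print(sorted(dataset["train"][0].keys()))

# Keep only happy clips longer than two seconds ("Emotion" and
# "duration" are assumed column names, taken from the metadata schema).
happy = dataset["train"].filter(
    lambda ex: ex["Emotion"] == "happy" and ex["duration"] > 2.0
)
print(len(happy))
```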
## Annotation Pipeline 🔧
1. **Automatic Detection**
- NV detection using [BEATs](https://arxiv.org/abs/2409.09546)
- Emotion classification with [emotion2vec+](https://huggingface.co/emotion2vec/emotion2vec_plus_large)
- ASR transcription via Canary model
2. **Human Validation**
- 3 annotators per sample
- Filtered non-English/multi-speaker clips
- NV/emotion validation and refinement
3. **Fusion Algorithm**
   - Majority voting for final transcriptions (sketched below)
- Pyalign-based sequence alignment
- Multi-annotator hypothesis merging
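To make the fusion stage concrete, here is a simplified sketch of the majority-voting step. It assumes the three annotator hypotheses have already been aligned to a common length (the pipeline uses pyalign for that step, which is not reproduced here); the `fuse_transcripts` helper is illustrative, not the released implementation.
```python
from collections import Counter

def fuse_transcripts(hypotheses: list[list[str]]) -> list[str]:
    """Majority-vote fusion over pre-aligned token sequences.

    Alignment gaps are represented by the empty string and are
    dropped from the fused output.
    """
    fused = []
    for column in zip(*hypotheses):               # one aligned position at a time
        token, _ = Counter(column).most_common(1)[0]
        if token:                                 # skip gap tokens
            fused.append(token)
    return fused

# Three (already aligned) annotator hypotheses for one clip.
hyps = [
    ["So,", "Mom,", "🌬️", "how've", "you", "been?"],
    ["So,", "Mom,", "",    "how've", "you", "been?"],
    ["So,", "Mom,", "🌬️", "how've", "you", "been?"],
]
print(" ".join(fuse_transcripts(hyps)))  # So, Mom, 🌬️ how've you been?
```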
## Benchmark Results 📊
Fine-tuning CosyVoice-300M on NonverbalTTS (the NVTTS column below) achieves parity with the state-of-the-art CosyVoice2 system:
| Metric             | NVTTS | CosyVoice2 |
|--------------------|-------|------------|
| Speaker Similarity | 0.89  | 0.85       |
| NV Jaccard         | 0.80  | 0.78       |
| Human Preference   | 33.4% | 35.4%      |
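The NV Jaccard score in the table above compares the NV tags produced by the synthesizer against those in the reference. A hedged reconstruction of such a metric follows; the paper's exact definition may differ (for example, it could operate on aligned spans rather than plain symbol sets).
```python
def nv_jaccard(reference: str, hypothesis: str, nv_symbols: set[str]) -> float:
    """Jaccard similarity between the NV symbol sets of two transcripts."""
    ref = {s for s in nv_symbols if s in reference}
    hyp = {s for s in nv_symbols if s in hypothesis}
    if not ref and not hyp:          # neither transcript contains NVs
        return 1.0
    return len(ref & hyp) / len(ref | hyp)

print(nv_jaccard(
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, 🌬️ how've you been? 😂",
    {"🌬️", "😂"},
))  # 0.5
```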
## Use Cases 💡
- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research
## License 📜
- Annotations: CC BY-NC-SA 4.0
- Audio: Adheres to original source licenses (VoxCeleb, Expresso)
## Citation 📚
```bibtex
@misc{borisov2025nonverbalttspublicenglishcorpus,
title={NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
author={Maksim Borisov and Egor Spirin and Daria Diatlova},
year={2025},
eprint={2507.13155},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.13155},
}
```