---
tags:
- audio
license: apache-2.0
language:
- en
pretty_name: NonverbalTTS
size_categories:
- 1K<n<10K
---

# NonverbalTTS

A public English corpus of text-aligned nonverbal vocalizations (NVs) with emotion annotations, built for expressive text-to-speech.

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```

A short inspection sketch is provided in the appendix at the end of this card.

## Annotation Pipeline 🔧

1. **Automatic Detection**
   - NV detection using [BEATs](https://arxiv.org/abs/2409.09546)
   - Emotion classification with [emotion2vec+](https://huggingface.co/emotion2vec/emotion2vec_plus_large)
   - ASR transcription via the Canary model
2. **Human Validation**
   - 3 annotators per sample
   - Non-English and multi-speaker clips filtered out
   - NV and emotion labels validated and refined
3. **Fusion Algorithm** (a toy illustration appears in the appendix)
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric             | NVTTS | CosyVoice2 |
| ------------------ | ----- | ---------- |
| Speaker Similarity | 0.89  | 0.85       |
| NV Jaccard         | 0.80  | 0.78       |
| Human Preference   | 33.4% | 35.4%      |

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- Annotations: CC BY-NC-SA 4.0
- Audio: adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

```
@misc{borisov2025nonverbalttspublicenglishcorpus,
      title={NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
      author={Maksim Borisov and Egor Spirin and Daria Diatlova},
      year={2025},
      eprint={2507.13155},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.13155},
}
```
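
## Appendix: Inspecting a Sample 💻

A minimal sketch of reading one row after loading. The split name (`train`) and column names (`audio`, `text`) are assumptions about the schema rather than documented fields; check `dataset.column_names` on your copy.

```python
from datasets import load_dataset

# "train" is an assumed split name; call load_dataset without `split`
# to list the splits actually published with the dataset.
dataset = load_dataset("deepvk/NonverbalTTS", split="train")

sample = dataset[0]
print(sample.keys())  # confirm the actual column names

# Audio columns decoded by the datasets library are dicts exposing the
# waveform ("array") and its "sampling_rate".
audio = sample["audio"]   # assumed column name
print(audio["sampling_rate"], audio["array"].shape)
print(sample["text"])     # assumed column name
```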
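
## Appendix: Majority-Voting Sketch 🔧

For intuition about step 3 of the annotation pipeline, here is a toy token-level majority vote over three annotator transcriptions. It is a simplification: the released pipeline first aligns hypotheses with pyalign, whereas this sketch assumes the inputs are already token-aligned.

```python
from collections import Counter

def majority_vote(transcripts: list[str]) -> str:
    """Pick the most common token at each position across annotators."""
    tokenized = [t.split() for t in transcripts]
    # Real annotations differ in length; pyalign-based alignment handles
    # that. This toy version only accepts equal-length, pre-aligned input.
    assert len({len(t) for t in tokenized}) == 1, "expects pre-aligned inputs"
    merged = [Counter(position).most_common(1)[0][0]
              for position in zip(*tokenized)]
    return " ".join(merged)

print(majority_vote([
    "i was like (laughing) no way",
    "i was like (laughing) no way",
    "i was like (breathing) no way",
]))
# -> "i was like (laughing) no way"
```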