---
tags:
  - audio
license: apache-2.0
language:
  - en
pretty_name: NonverbalTTS
size_categories:
  - 1K<n<10K
---

# NonverbalTTS Dataset 🎵🗣️


NonverbalTTS is a 17-hour open-access English speech corpus with aligned text annotations for nonverbal vocalizations (NVs) and emotional categories, designed to advance expressive text-to-speech (TTS) research.

## Key Features ✨

- **17 hours** of high-quality speech data
- **10 NV types**: breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2,296 speakers (60% male, 40% female)
- **Multi-source**: derived from the VoxCeleb and Expresso corpora
- **Rich metadata**: emotion labels, NV annotations, speaker IDs, audio quality metrics

## Metadata Schema (`metadata.csv`) 📋

| Column | Description | Example |
|---|---|---|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | "So, Mom, 🌬️ how've you been?" |
| `Annotator response {1,2,3}` | Refined transcriptions from the three annotators | "So, Mom, how've you been?" |
| `Result` | Final fused transcription | "So, Mom, 🌬️ how've you been?" |
| `dnsmos` | Audio quality score (1-5) | 3.936982 |
| `duration` | Audio length (seconds) | 3.6338125 |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | Expresso |
| `gender` | Speaker gender | `m` |

**NV symbols:** 🌬️ = breath, 😂 = laughter, etc. (see the Annotation Guidelines).
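
For quick exploration without decoding any audio, the metadata file can be inspected directly. A minimal sketch using pandas, assuming `metadata.csv` has been downloaded locally (the path and filter thresholds are illustrative, not part of the dataset card):

```python
import pandas as pd

# Path is illustrative; point it at your local copy of metadata.csv.
meta = pd.read_csv("metadata.csv")

# Keep clean, short clips: DNSMOS above 3.5 and duration under 10 s.
clean = meta[(meta["dnsmos"] > 3.5) & (meta["duration"] < 10.0)]

# Distribution of emotion labels among the filtered clips.
print(clean["Emotion"].value_counts())
```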

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```
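
Each example then exposes the audio alongside the metadata fields from the schema above. A short usage sketch; the `train` split name and automatic audio decoding are assumptions about the default `datasets` configuration:

```python
sample = dataset["train"][0]

# The Audio feature decodes to a dict with the waveform and sampling rate.
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]

print(sample["Result"])          # fused transcription, NV symbols included
print(len(waveform), sampling_rate)
```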

## Annotation Pipeline 🔧

1. **Automatic detection**
   - NV detection using BEATs
   - Emotion classification with emotion2vec+
   - ASR transcription via the Canary model
2. **Human validation**
   - Three annotators per sample
   - Non-English and multi-speaker clips filtered out
   - NV and emotion annotations validated and refined
3. **Fusion**
   - Majority voting to produce the final transcription (a toy sketch follows below)
   - pyalign-based sequence alignment
   - Merging of multi-annotator hypotheses
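
To make the fusion step concrete, below is a toy sketch of per-position majority voting over three annotator hypotheses. It assumes the hypotheses are already token-aligned (with `<gap>` placeholders); in the actual pipeline that alignment is produced with pyalign, so treat this as an illustration rather than the released implementation:

```python
from collections import Counter

def majority_vote(hypotheses: list[list[str]]) -> list[str]:
    """Fuse token-aligned transcriptions by per-position majority vote.

    Assumes all hypotheses have equal length, i.e. alignment has already
    inserted <gap> tokens where an annotator omitted a word or NV symbol.
    """
    fused = []
    for tokens in zip(*hypotheses):
        token, _ = Counter(tokens).most_common(1)[0]
        if token != "<gap>":  # drop alignment gaps from the final output
            fused.append(token)
    return fused

# Two of three annotators kept the breath NV, so it survives the vote.
hyps = [
    ["So,", "Mom,", "🌬️", "how've", "you", "been?"],
    ["So,", "Mom,", "🌬️", "how", "you", "been?"],
    ["So,", "Mom,", "<gap>", "how've", "you", "been?"],
]
print(" ".join(majority_vote(hyps)))  # So, Mom, 🌬️ how've you been?
```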

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric | NVTTS | CosyVoice2 |
|---|---|---|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard | 0.80 | 0.78 |
| Human Preference | 33.4% | 35.4% |
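
The NV Jaccard score compares the set of NV tags in the reference transcription with the set realized in the synthesized speech. The card does not spell out the exact implementation, so the following is an assumed, minimal version of such a metric:

```python
def nv_jaccard(reference_nvs: set[str], predicted_nvs: set[str]) -> float:
    """Jaccard similarity between reference and predicted NV tag sets."""
    if not reference_nvs and not predicted_nvs:
        return 1.0  # both empty: treat as perfect agreement
    return len(reference_nvs & predicted_nvs) / len(reference_nvs | predicted_nvs)

# Reference has breath + laughter, prediction only breath -> 1/2.
print(nv_jaccard({"breath", "laughter"}, {"breath"}))  # 0.5
```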

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- **Annotations**: CC BY-NC-SA 4.0
- **Audio**: adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

```bibtex
@dataset{nonverbaltts2024,
  author    = {Anonymous},
  title     = {NonverbalTTS Dataset},
  month     = dec,
  year      = 2024,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
```