---
tags:
  - audio
license: apache-2.0
language:
  - en
pretty_name: NonverbalTTS
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: train
        path: default/train/**
      - split: dev
        path: default/dev/**
      - split: test
        path: default/test/**
      - split: other
        path: default/other/**
task_categories:
  - text-to-speech
---

# NonverbalTTS Dataset 🎵🗣️

[arXiv](https://arxiv.org/abs/2507.13155) · [Hugging Face](https://huggingface.co/datasets/deepvk/NonverbalTTS)

NonverbalTTS is a 17-hour, open-access English speech corpus with text annotations aligned to nonverbal vocalizations (NVs) and emotion categories, designed to advance expressive text-to-speech (TTS) research.

## Key Features ✨

- **17 hours** of high-quality speech data
- **10 NV types**: breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2,296 speakers (60% male, 40% female)
- **Multi-source**: derived from the VoxCeleb and Expresso corpora
- **Rich metadata**: emotion labels, NV annotations, speaker IDs, audio quality metrics
- **Sampling rate**: 16 kHz for audio from VoxCeleb, 48 kHz for audio from Expresso (see the resampling sketch below)
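
Since the two source corpora ship at different sampling rates, it can be convenient to resample to a single rate at load time. Here is a minimal sketch using the standard `datasets.Audio` cast; the `audio` column name is an assumption about the schema, not taken from the dataset docs:

```python
# Resample all clips to 16 kHz on the fly via the Audio feature cast.
# NOTE: the "audio" column name is assumed, not confirmed by the card.
from datasets import Audio, load_dataset

dataset = load_dataset("deepvk/NonverbalTTS", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```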

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```
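
The configs above expose four splits (`train`, `dev`, `test`, `other`). A quick way to inspect one example is sketched below; the `audio` column and its decoded layout follow the usual `datasets` audio convention and are assumptions about the actual schema:

```python
# Inspect the split layout and a single example.
# The "audio" column name and decoded dict layout follow the common
# Hugging Face `datasets` convention; the real schema may differ.
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
print(dataset)                      # splits and column names

sample = dataset["train"][0]
audio = sample["audio"]             # assumed column name
print(audio["sampling_rate"])       # 16000 (VoxCeleb) or 48000 (Expresso)
print(audio["array"][:10])          # raw waveform samples
```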
    

## Annotation Pipeline 🔧

1. **Automatic Detection**
   - NV detection using BEATs
   - Emotion classification with emotion2vec+
   - ASR transcription via the Canary model
2. **Human Validation**
   - 3 annotators per sample
   - Filtered non-English/multi-speaker clips
   - NV/emotion validation and refinement
3. **Fusion Algorithm** (see the sketch after this list)
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging
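
The fusion step is easy to picture with a toy majority vote: align each annotator's transcript to the first one and keep tokens that at least two of three annotators agree on. The sketch below substitutes stdlib `difflib` for the pyalign-based alignment, so treat it as an illustration of the idea rather than the released pipeline:

```python
# Toy multi-annotator fusion: align transcripts token by token, then
# majority-vote each position. difflib stands in for pyalign here.
from collections import Counter
from difflib import SequenceMatcher

def align_to_reference(reference, hypothesis):
    """Map each reference position to the matching hypothesis token (or None)."""
    aligned = [None] * len(reference)
    matcher = SequenceMatcher(a=reference, b=hypothesis)
    for block in matcher.get_matching_blocks():
        for offset in range(block.size):
            aligned[block.a + offset] = hypothesis[block.b + offset]
    return aligned

def fuse_transcripts(transcripts):
    """Keep every token that a majority (2 of 3) of annotators agree on."""
    reference = transcripts[0].split()
    rows = [reference] + [align_to_reference(reference, t.split()) for t in transcripts[1:]]
    fused = []
    for position in zip(*rows):
        (token, count), = Counter(t for t in position if t is not None).most_common(1)
        if count >= 2:
            fused.append(token)
    return " ".join(fused)

print(fuse_transcripts([
    "the cat (laughter) sat down",
    "the cat (laughter) sat down",
    "the cat sat down",
]))  # -> "the cat (laughter) sat down"
```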

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric             | NVTTS | CosyVoice2 |
|--------------------|-------|------------|
| Speaker Similarity | 0.89  | 0.85       |
| NV Jaccard         | 0.80  | 0.78       |
| Human Preference   | 33.4% | 35.4%      |
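
"NV Jaccard" reads as set overlap between the NV tags a model produces and those in the reference annotation. A minimal sketch under that assumption follows; the paper's exact definition may differ:

```python
# NV Jaccard as plain set overlap between predicted and reference NV tags.
# ASSUMPTION: the paper may define the metric differently (e.g., per token).
def nv_jaccard(predicted: set, reference: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|; defined as 1.0 when both sets are empty."""
    if not predicted and not reference:
        return 1.0
    return len(predicted & reference) / len(predicted | reference)

print(nv_jaccard({"laughter", "breathing"}, {"laughter", "sighing"}))  # 1/3 ≈ 0.33
```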

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- **Annotations**: CC BY-NC-SA 4.0
- **Audio**: adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

```bibtex
@misc{borisov2025nonverbalttspublicenglishcorpus,
  title={NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
  author={Maksim Borisov and Egor Spirin and Daria Diatlova},
  year={2025},
  eprint={2507.13155},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.13155},
}
```