Dataset card metadata:
- Modalities: Audio, Text
- Formats: parquet
- Languages: English
- Tags: audio
- Libraries: Datasets, Dask
BorisovMaksim committed (verified) commit bf48210 · 1 parent: 4a1a809

Update README.md

Files changed (1): README.md (+8 −6)
README.md CHANGED

@@ -34,7 +34,7 @@ configs:
 - **Diverse speakers**: 2296 speakers (60% male, 40% female)
 - **Multi-source**: Derived from [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) and [Expresso](https://speechbot.github.io/expresso/) corpora
 - **Rich metadata**: Emotion labels, NV annotations, speaker IDs, audio quality metrics
-
+- **Sampling rate**: 16kHz sampling rate for audio from VoxCeleb, 48kHz for audio from Expresso
 <!-- ## Dataset Structure 📂
 
 
@@ -48,7 +48,7 @@ NonverbalTTS/
 └── metadata.csv # Metadata annotations -->
 
 
-## Metadata Schema (`metadata.csv`) 📋
+<!-- ## Metadata Schema (`metadata.csv`) 📋
 
 | Column | Description | Example |
 |--------|-------------|---------|
@@ -62,16 +62,16 @@ NonverbalTTS/
 | `duration` | Audio length (seconds) | `3.6338125` |
 | `speaker_id` | Speaker identifier | `ex01` |
 | `data_name` | Source corpus | `Expresso` |
-| `gender` | Speaker gender | `m` |
+| `gender` | Speaker gender | `m` | -->
 
-**NV Symbols**: 🌬️=Breath, 😂=Laughter, etc. (See [Annotation Guidelines](https://zenodo.org/records/15274617))
+<!-- **NV Symbols**: 🌬️=Breath, 😂=Laughter, etc. (See [Annotation Guidelines](https://zenodo.org/records/15274617)) -->
 
 ## Loading the Dataset 💻
 
 ```python
 from datasets import load_dataset
 
-dataset = load_dataset("deepvk/NonverbalTTS")
+dataset = load_dataset("deepvk/NonverbalTTS", revision="refs/convert/parquet")
 ```
 
 <!-- # Access train split
@@ -122,6 +122,8 @@ Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art
 
 ## Citation 📝
 
+TODO
+<!--
 ```
 @dataset{nonverbaltts2024,
 author = {Borisov Maksim, Spirin Egor, Dyatlova Darya},
@@ -133,4 +135,4 @@ Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art
 doi = {10.5281/zenodo.15274617},
 url = {https://zenodo.org/records/15274617}
 }
-```
+``` -->
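The new **Sampling rate** bullet in this commit means clips from the two source corpora differ in resolution (16 kHz for VoxCeleb, 48 kHz for Expresso), so for the same duration an Expresso clip carries 3x the samples of a VoxCeleb clip. A minimal sketch of that arithmetic, using the example duration `3.6338125` s from the metadata schema (the helper name here is ours for illustration, not part of the dataset API):

```python
# Per-clip sample counts for the two source corpora documented in the diff above.
# DURATION_S reuses the example `duration` value from the metadata schema table.
DURATION_S = 3.6338125
RATES_HZ = {"VoxCeleb": 16_000, "Expresso": 48_000}


def num_samples(duration_s: float, rate_hz: int) -> int:
    """Number of audio samples in a clip of the given duration at the given rate."""
    return round(duration_s * rate_hz)


counts = {name: num_samples(DURATION_S, rate) for name, rate in RATES_HZ.items()}
print(counts)  # Expresso clips hold 3x the samples of VoxCeleb clips of equal length
```

Downstream TTS pipelines typically resample all clips to a single rate before batching; which rate to pick depends on the model being trained, not on this dataset card.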