---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: text
      dtype: string
    - name: speaker_id
      dtype: string
    - name: audio
      dtype: audio
    - name: mic_id
      dtype: string
  splits:
    - name: train
      num_bytes: 16540026180.2
      num_examples: 88156
  download_size: 17595288543
  dataset_size: 16540026180.2
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-to-speech
  - automatic-speech-recognition
  - text-to-audio
license: cc-by-4.0
language:
  - en
size_categories:
  - 10K<n<100K
---

# VCTK

This is a processed clone of the VCTK dataset with leading and trailing silence removed using Silero VAD. A fixed 25 ms of silence padding has been added to both ends of each audio clip to (hopefully) improve training and finetuning.
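The trim-and-pad step described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual `process.py`: it assumes the speech timestamps have already been obtained from Silero VAD's `get_speech_timestamps()` (which returns a list of `{'start': sample, 'end': sample}` dicts), and the function name `trim_and_pad` is hypothetical.

```python
import torch

PAD_MS = 25  # fixed padding added to both ends of each clip


def trim_and_pad(wav: torch.Tensor, sample_rate: int, speech_ts: list) -> torch.Tensor:
    """Keep audio from the first speech start to the last speech end,
    then prepend/append PAD_MS of silence (zeros).

    `wav` is a (channels, samples) tensor as returned by torchaudio.load();
    `speech_ts` is the timestamp list from Silero VAD's get_speech_timestamps().
    """
    if not speech_ts:
        return wav  # no speech detected; leave the clip untouched
    start = speech_ts[0]["start"]
    end = speech_ts[-1]["end"]
    pad = torch.zeros(wav.shape[0], int(sample_rate * PAD_MS / 1000))
    return torch.cat([pad, wav[:, start:end], pad], dim=1)
```

At 48 kHz, the 25 ms pad works out to 1200 samples on each side of the trimmed speech region.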

The original dataset is available at: https://datashare.ed.ac.uk/handle/10283/3443.

## Reproducing

This repository notably lacks a `requirements.txt` file. There may be a missing dependency or two, but roughly the following Python packages are required to clean the dataset:

```
pydub
tqdm
torch
torchaudio
python-dotenv
```

### Steps

  1. Download the VCTK dataset (version 0.92) and extract it. This should yield a `wav48_silence_trimmed` directory and a `txt` directory.
  2. Run `process.py`, which will generate a `dataset` directory. Processing can be restarted if stopped.
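The restartable behavior in step 2 can be achieved by skipping any output file that already exists. Below is a hedged sketch of that idiom, not the actual `process.py`; the function name `process_all` and the empty-file placeholder (standing in for the real trim/pad/save work) are hypothetical. VCTK 0.92 ships its audio as `.flac` files under `wav48_silence_trimmed`.

```python
from pathlib import Path


def process_all(src_dir: str, out_dir: str) -> list:
    """Process every .flac under src_dir into out_dir, skipping files
    that were already written so an interrupted run can be resumed."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for flac in sorted(Path(src_dir).rglob("*.flac")):
        target = out / (flac.stem + ".wav")
        if target.exists():
            continue  # already processed; makes the run restartable
        # Placeholder for the real work: load, trim/pad, save as wav.
        target.write_bytes(b"")
        written.append(target)
    return written
```

Running the function a second time over the same directories returns an empty list, since every target file already exists.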

## Licensing Information

Creative Commons Attribution 4.0 International Public License (CC-BY-4.0)

## Citation Information

```bibtex
@inproceedings{Veaux2017CSTRVC,
    title        = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
    author       = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
    year         = 2017
}
```