---
configs:
- config_name: all
  data_files: '*/*.tar'
  default: true
- config_name: 3blue1brown
  data_files: 3blue1brown/*.tar
- config_name: boxofficemoviesscenes
  data_files: boxofficemoviesscenes/*.tar
- config_name: business
  data_files: business/*.tar
- config_name: markrober
  data_files: markrober/*.tar
- config_name: marvel
  data_files: marvel/*.tar
- config_name: mitocw
  data_files: mitocw/*.tar
- config_name: mkbhd
  data_files: mkbhd/*.tar
- config_name: msftmechanics
  data_files: msftmechanics/*.tar
- config_name: neoexplains
  data_files: neoexplains/*.tar
- config_name: nvidia
  data_files: nvidia/*.tar
- config_name: quantasciencechannel
  data_files: quantasciencechannel/*.tar
- config_name: teded
  data_files: teded/*.tar
- config_name: theinfographicsshow
  data_files: theinfographicsshow/*.tar
- config_name: twominutepapers
  data_files: twominutepapers/*.tar
- config_name: veritasium
  data_files: veritasium/*.tar
license: mit
task_categories:
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
---
# English Audio Dataset from YouTube
This dataset contains English audio segments extracted from various YouTube channels, along with corresponding transcription metadata. The data is intended for training automatic speech recognition (ASR) models.
## Data Source and Processing
The data was obtained through the following process:

- Download: Audio (`.m4a`) and available English subtitles (`.srt` files for the `en` and `en.j3PyPqV-e1s` tracks) were downloaded from selected YouTube channels. This raw data, along with video metadata (`metadata.csv`), is initially stored in a `data/{channel_id}/` directory structure.
- Segmentation: The raw audio files were segmented based on the timing information in the `.srt` files (a sketch of this step follows the list).
  - Audio is split at SRT segment boundaries, and consecutive segments are grouped so that each group's total duration stays just under 30 seconds, matching Whisper's 30-second input window.
  - The audio portion for each group is extracted with `ffmpeg` and saved as an `.mp3` file at a 16000 Hz sample rate.
  - Metadata for each segment, including channel/video info and the text/timing of the subtitles within the segment, is saved in a corresponding `.json` file.
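The original processing scripts are not included in this card, so the following is only a minimal sketch of the grouping and extraction steps described above. It assumes the third-party `srt` package for subtitle parsing; the helper names, output paths, and JSON field names are illustrative, not the dataset's actual schema.

```python
import json
import subprocess
from pathlib import Path

import srt  # third-party package: pip install srt

MAX_GROUP_SECONDS = 30.0  # Whisper's input window


def group_subtitles(subs):
    """Greedily pack consecutive SRT cues into groups spanning just under 30 s."""
    groups, current = [], []
    for cue in subs:
        candidate = current + [cue]
        span = (candidate[-1].end - candidate[0].start).total_seconds()
        if current and span >= MAX_GROUP_SECONDS:
            groups.append(current)
            current = [cue]
        else:
            current = candidate
    if current:
        groups.append(current)
    return groups


def extract_group(audio_path: Path, out_dir: Path, video_id: str, group):
    """Cut one group's audio with ffmpeg and write a metadata JSON next to it."""
    first, last = group[0], group[-1]
    # {group_name}: single index, or "first-last" range for multi-cue groups
    name = str(first.index) if len(group) == 1 else f"{first.index}-{last.index}"
    start = first.start.total_seconds()
    duration = (last.end - first.start).total_seconds()
    mp3_path = out_dir / f"{video_id}_{name}.mp3"
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(audio_path),
         "-ss", f"{start:.3f}", "-t", f"{duration:.3f}",
         "-ar", "16000", str(mp3_path)],
        check=True,
    )
    # Field names below are illustrative only, not the published schema.
    meta = {
        "video_id": video_id,
        "group": name,
        "subtitles": [
            {"index": c.index, "start": c.start.total_seconds(),
             "end": c.end.total_seconds(), "text": c.content}
            for c in group
        ],
    }
    (out_dir / f"{video_id}_{name}.json").write_text(json.dumps(meta, indent=2))
```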
## Intermediate Dataset Structure (`dataset` directory)

Before being packaged into TAR archives for Hugging Face, the segmented data resides in the `dataset` directory with the following structure:
```
dataset/
└── {channel_id}/                         # Directory named after the YouTube channel ID
    └── {video_id}/                       # Directory named after the YouTube video ID
        ├── {video_id}_{group_name}.mp3   # Segmented audio file
        ├── {video_id}_{group_name}.json  # Corresponding metadata file
        └── ...
```
- `{channel_id}`: The ID of the YouTube channel (e.g., `greenbeanmediaofficial`).
- `{video_id}`: The unique identifier for the YouTube video.
- `{group_name}`: Identifies the subtitles included in the segment. It is either the index of the first subtitle (e.g., `1`) if the group contains only one subtitle, or a range of the first and last subtitle indices (e.g., `1-5`) if the group contains multiple subtitles.
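As a quick sanity check of this layout, each segmented `.mp3` can be paired with its sibling `.json`. The snippet below is only an illustration; the `dataset` root path comes from the structure above and nothing about the metadata contents is assumed.

```python
import json
from pathlib import Path

dataset_root = Path("dataset")  # intermediate layout described above

# Pair every segmented .mp3 with its sibling .json metadata file.
for mp3_path in sorted(dataset_root.glob("*/*/*.mp3")):
    json_path = mp3_path.with_suffix(".json")
    if not json_path.exists():
        continue  # skip segments whose metadata is missing
    meta = json.loads(json_path.read_text())
    channel_id, video_id = mp3_path.parts[-3], mp3_path.parts[-2]
    print(channel_id, video_id, mp3_path.name)
```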
## Dataset Summary
The dataset comprises audio from the following channels:
| Channel               | Videos    | Duration (hours) | Percent     |
|-----------------------|----------:|-----------------:|------------:|
| 3blue1brown           | 136       | 37.82            | 1.08%       |
| boxofficemoviesscenes | 1626      | 153.06           | 4.38%       |
| business              | 887       | 187.80           | 5.38%       |
| markrober             | 97        | 21.77            | 0.62%       |
| marvel                | 763       | 35.17            | 1.01%       |
| mitocw                | 2844      | 1738.07          | 49.79%      |
| mkbhd                 | 114       | 27.61            | 0.79%       |
| msftmechanics         | 732       | 131.52           | 3.77%       |
| neoexplains           | 35        | 8.06             | 0.23%       |
| nvidia                | 134       | 19.42            | 0.56%       |
| quantasciencechannel  | 93        | 13.60            | 0.39%       |
| teded                 | 1768      | 145.53           | 4.17%       |
| theinfographicsshow   | 3402      | 827.06           | 23.69%      |
| twominutepapers       | 871       | 79.34            | 2.27%       |
| veritasium            | 291       | 64.96            | 1.86%       |
| **Total**             | **13793** | **3490.79**      | **100.00%** |
## Loading the Data

You can load the data using the Hugging Face `datasets` library:
```python
import os

from datasets import load_dataset

ds = load_dataset(
    "OrcinusOrca/YouTube-English",
    "all",                    # or a channel_id from the configs above
    split="train",
    streaming=False,          # or True to stream without a full download
    num_proc=os.cpu_count(),
)
```
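The exact column names of each example depend on how the TAR archives were packaged, so rather than assuming a schema, the short check below just inspects the loaded dataset's features and the first example (this works with `streaming=False` as in the call above).

```python
# Inspect what the loaded dataset actually contains instead of
# assuming column names from the archive layout.
print(ds.features)
sample = ds[0]
print(sample.keys())
```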