# Gigaspeech Part 8

This is Part 8 of 8 of a large-scale speech dataset, split across multiple repositories to stay within Hugging Face's repository size limits.
## Multi-Part Dataset
This dataset is split across multiple repositories:
- Part 1: shahdsaf/gigaspeech-part-1
- Part 2: shahdsaf/gigaspeech-part-2
- Part 3: shahdsaf/gigaspeech-part-3
- Part 4: shahdsaf/gigaspeech-part-4
- Part 5: shahdsaf/gigaspeech-part-5
- Part 6: shahdsaf/gigaspeech-part-6
- Part 7: shahdsaf/gigaspeech-part-7
- Part 8 (current): shahdsaf/gigaspeech-part-8
## This Repository (Part 8)
- Total parquet files: 153
- Total size: 139.72 GB
- Failed uploads: 0
- Subfolders: 4
### Files by Subfolder
- xl/train: 149 files (138.57 GB)
- test/test: 1 file (0.55 GB)
- xl/test: 1 file (0.55 GB)
- small_test/test: 2 files (0.05 GB)
## Usage
```python
from datasets import load_dataset, concatenate_datasets

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-8")

# Load a specific subfolder from this part
dataset = load_dataset(
    "shahdsaf/gigaspeech-part-8",
    data_files="data/xl/train/*.parquet",
)

# Load the complete multi-part dataset
parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate all parts into one dataset
complete_dataset = concatenate_datasets(parts)
```
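Downloading all eight parts up front is roughly a terabyte of data, so streaming mode may be more practical for iteration. The sketch below is an assumption based on this card's repository naming and on the `train` split used in the example above; the network-dependent calls are left commented.

```python
def part_repos(n_parts=8):
    """Repository IDs for all parts, in order (naming taken from this card)."""
    return [f"shahdsaf/gigaspeech-part-{i}" for i in range(1, n_parts + 1)]

# Streaming yields examples lazily instead of downloading every parquet
# file first (requires network; "train" split assumed as in the card):
# from datasets import load_dataset, concatenate_datasets
# streams = [load_dataset(repo, streaming=True)["train"] for repo in part_repos()]
# complete_stream = concatenate_datasets(streams)
```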
## File Organization
All parquet files are stored under the data/ directory, maintaining the original subfolder structure.
## Important Notes
- This is part of a multi-repository dataset due to size constraints
- Each part maintains the original folder structure
- Use the concatenation approach above to work with the complete dataset
- Files are distributed to balance repository sizes (max 290 GB per repo)
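For a quick sanity check before committing to the full ~140 GB of this part, the small `small_test/test` subfolder (about 0.05 GB) can be loaded on its own. The helper below just builds the `data_files` glob implied by the `data/` layout described above; the pattern is an assumption based on this card.

```python
def subfolder_glob(subfolder):
    """data_files glob for one subfolder, per the data/ layout on this card."""
    return f"data/{subfolder}/*.parquet"

# Example (requires network):
# from datasets import load_dataset
# tiny = load_dataset("shahdsaf/gigaspeech-part-8",
#                     data_files=subfolder_glob("small_test/test"))
```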