
Gigaspeech Part 8

This is Part 8 of 8 of a large-scale speech dataset, split to accommodate HuggingFace's repository size limits.

Multi-Part Dataset

This dataset is split across multiple repositories:

This Repository (Part 8)

  • Total parquet files: 153
  • Total size: 139.72 GB
  • Failed uploads: 0
  • Subfolders: 4

Files by Subfolder:

  • xl/train: 149 files (138.57 GB)
  • test/test: 1 file (0.55 GB)
  • xl/test: 1 file (0.55 GB)
  • small_test/test: 2 files (0.05 GB)

Usage

from datasets import load_dataset, concatenate_datasets

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-8")

# Load a specific subfolder from this part
dataset = load_dataset("shahdsaf/gigaspeech-part-8",
                       data_files="data/xl/train/*.parquet")

# To load the complete multi-part dataset:
parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate the train splits of all parts
complete_dataset = concatenate_datasets(parts)
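
Each part is large (this repository alone holds roughly 140 GB of parquet), so downloading everything just to inspect the data is rarely necessary. The sketch below uses the streaming mode of the datasets library to iterate over examples directly from the Hub; it assumes the default configuration exposes a train split, as in the examples above.

from itertools import islice
from datasets import load_dataset

# Stream examples from the Hub instead of downloading the parquet files first.
# Minimal sketch: assumes the default config exposes a 'train' split.
stream = load_dataset("shahdsaf/gigaspeech-part-8", split="train", streaming=True)
for example in islice(stream, 5):
    print(example.keys())

The same streaming=True flag can be passed inside the multi-part loop above to iterate over every part without materializing it on disk.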

File Organization

All parquet files are stored under the data/ directory, maintaining the original subfolder structure.
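
To check the layout before downloading anything, the repository's file list can be queried through the Hugging Face Hub API. This is a small sketch, not part of the dataset itself; it assumes the huggingface_hub package is installed.

from huggingface_hub import HfApi

# List the parquet files under data/ to inspect the subfolder structure.
api = HfApi()
for path in api.list_repo_files("shahdsaf/gigaspeech-part-8", repo_type="dataset"):
    if path.startswith("data/") and path.endswith(".parquet"):
        print(path)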

Important Notes

  • This is part of a multi-repository dataset due to size constraints
  • Each part maintains the original folder structure
  • Use the concatenation approach above to work with the complete dataset; a variant that also gathers the test splits is sketched after this list
  • Files are distributed to balance repository sizes (max 290 GB per repo)
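
The Usage example above concatenates only the train splits. Because parts also carry test data (this repository includes test/test, xl/test, and small_test/test), a slightly more general loop can gather every split each part exposes. This is a minimal sketch under the assumption that split names and features are consistent across parts; verify against each repository before relying on it.

from collections import defaultdict
from datasets import load_dataset, concatenate_datasets

# Gather every split exposed by each part, not just 'train'.
# Sketch only: assumes split names and features match across all eight parts.
splits = defaultdict(list)
for i in range(1, 9):
    part = load_dataset(f"shahdsaf/gigaspeech-part-{i}")
    for split_name, split_data in part.items():
        splits[split_name].append(split_data)

complete = {name: concatenate_datasets(ds_list) for name, ds_list in splits.items()}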