---
license: apache-2.0
task_categories:
  - text-classification
  - text-generation
  - text2text-generation
language:
  - en
tags:
  - books
  - gutenberg
  - english
pretty_name: Gutenberg-Books
size_categories:
  - 10M<n<100M
---

# Gutenberg Books Dataset

## Dataset Description

This dataset contains 97,646,390 paragraphs extracted from 74,329 English-language books sourced from Project Gutenberg, a digital library of public domain works. The total size of the dataset is 34GB, making it a substantial resource for natural language processing (NLP) research and applications. The texts have been cleaned to remove Project Gutenberg's standard headers and footers, ensuring that only the core content of each book remains. This dataset is designed to support a variety of tasks, such as training large language models, text classification, topic modeling, and literary analysis, by providing a diverse and extensive corpus of historical English literature.

## Key Features

- Number of Books: 74,329
- Number of Paragraphs: 97,646,390
- Dataset Size: 34GB
- Language: English
- Source: Project Gutenberg
- Format: Tabular (single table)

## Dataset Structure

The dataset is organized as a single table with the following columns:

- `book_id` (int64): A unique identifier for each book, corresponding to its Project Gutenberg ID.
- `book_name` (string): The title of the book from which the paragraph originates.
- `paragraph` (string): The full text of an individual paragraph extracted from a book.

Each row represents a single paragraph, enabling fine-grained analysis or processing at the paragraph level rather than the full book level.

### Sample Row

| book_id | book_name | paragraph |
| --- | --- | --- |
| 1 | The Declaration of Independence of the United States of America | The Project Gutenberg eBook of The Declaration of Independence of the United States of America |

## Accessing the Dataset

The dataset is hosted on Hugging Face and can be accessed using the datasets library. Given its large size (34GB), streaming mode is recommended to efficiently handle the data without requiring excessive memory.

### Installation

First, ensure you have the datasets library installed:

```bash
pip install datasets
```

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset in streaming mode (recommended for its 34GB size)
dataset = load_dataset("Navanjana/Gutenberg_books", streaming=True)

# Inspect the first example
for example in dataset["train"]:
    print(example["book_id"])
    print(example["book_name"])
    print(example["paragraph"])
    break  # Remove this break to iterate over the entire dataset

# Alternatively, load the full dataset (requires significant memory and disk)
# dataset = load_dataset("Navanjana/Gutenberg_books")
```
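
If you only need a quick look at the data, streaming also supports bounded sampling. A minimal sketch using `IterableDataset.take` (available in recent versions of the `datasets` library):

```python
from datasets import load_dataset

# Stream the dataset and materialize only the first three rows.
dataset = load_dataset("Navanjana/Gutenberg_books", streaming=True)
sample = list(dataset["train"].take(3))

for row in sample:
    print(row["book_id"], row["book_name"])
```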

## Data Preprocessing

The dataset has undergone the following preprocessing steps:

- Header/Footer Removal: Project Gutenberg's standard headers and footers (e.g., legal notices, metadata) have been stripped, leaving only the main book content.
- Paragraph Splitting: Each book has been segmented into individual paragraphs based on standard paragraph breaks (e.g., double newlines).
- No Text Normalization: The original text formatting, including capitalization and punctuation, has been preserved to maintain fidelity to the source material.

No additional filtering (e.g., by genre or publication date) or normalization (e.g., lowercasing, tokenization) has been applied, giving users flexibility in how they process the data.
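
The exact preprocessing code is not published with the dataset, but paragraph splitting on double newlines might look roughly like the following illustrative sketch (not the actual pipeline):

```python
def split_paragraphs(book_text: str) -> list[str]:
    # Split on blank lines (double newlines) and drop empty fragments,
    # preserving original capitalization and punctuation as described above.
    paragraphs = [p.strip() for p in book_text.split("\n\n")]
    return [p for p in paragraphs if p]

text = "First paragraph line one.\nLine two.\n\nSecond paragraph."
print(split_paragraphs(text))
# ['First paragraph line one.\nLine two.', 'Second paragraph.']
```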

## Dataset Statistics

Here are some approximate insights based on the dataset's scope:

- Average Paragraphs per Book: ~1,314 (97,646,390 ÷ 74,329)
- Time Period: Primarily 17th to early 20th century, reflecting Project Gutenberg's public domain focus.
- Genres: Includes fiction, non-fiction, poetry, and drama, though no genre metadata is provided in this version.

For detailed statistics (e.g., word count, book distribution), users are encouraged to compute them using the dataset.
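
As a starting point, here is a minimal sketch that estimates average words per paragraph from a streamed sample; the 10,000-row sample size is arbitrary, and the estimate is not representative of the full corpus:

```python
from datasets import load_dataset

dataset = load_dataset("Navanjana/Gutenberg_books", streaming=True)

# Estimate average words per paragraph from a streamed sample.
total_words = 0
rows = 0
for row in dataset["train"].take(10_000):
    total_words += len(row["paragraph"].split())
    rows += 1

print(f"Average words per paragraph over {rows} rows: {total_words / rows:.1f}")
```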

## Potential Use Cases

This dataset is versatile and can support numerous NLP and literary research applications, including:

- Training Language Models: Pretrain or fine-tune large language models on a rich corpus of historical texts.
- Text Classification: Classify paragraphs by sentiment, style, or inferred genre.
- Topic Modeling: Identify prevalent themes or topics across centuries of literature.
- Digital Humanities: Analyze linguistic evolution, cultural trends, or literary patterns over time.

## Limitations and Considerations

- Historical Context: The dataset includes texts from older eras, which may contain outdated, biased, or offensive language. Users should consider this for sensitive applications.
- English Only: Limited to English-language books, reducing its utility for multilingual research.
- No Metadata Beyond Basics: Lacks additional metadata (e.g., author, publication year, genre) that could enhance analysis.
- Paragraph Granularity: Splitting into paragraphs may fragment narratives; full-book tasks require re-joining rows by `book_id` (see the sketch after this list).
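
For full-book tasks, paragraphs can be re-joined by `book_id`. A minimal sketch, assuming rows for a given book appear in reading order (worth verifying before relying on it):

```python
from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("Navanjana/Gutenberg_books", streaming=True)

# Group paragraphs by book_id; the sample cap just keeps the demo small.
books = defaultdict(list)
for row in dataset["train"].take(50_000):
    books[row["book_id"]].append(row["paragraph"])

# Re-join one book with blank lines, mirroring the double-newline split.
book_id, paragraphs = next(iter(books.items()))
full_text = "\n\n".join(paragraphs)
print(book_id, "-", len(paragraphs), "paragraphs,", len(full_text), "characters")
```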

## License

The texts are sourced from Project Gutenberg and are in the public domain. The dataset compilation itself is distributed under the Apache License 2.0, as declared in the dataset metadata above, permitting use, sharing, and adaptation with proper attribution.

## Citation

If you use this dataset in your research or projects, please cite it as follows:

```bibtex
@dataset{navanjana_gutenberg_books,
  author    = {Navanjana},
  title     = {Gutenberg Books Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  version   = {1.0},
  url       = {https://huggingface.co/datasets/Navanjana/Gutenberg_books}
}
```

Additionally, please credit Project Gutenberg as the original source of the texts.

## Additional Information

- Version: 1.0 (initial release)
- Hosting: Available on Hugging Face at [Navanjana/Gutenberg_books](https://huggingface.co/datasets/Navanjana/Gutenberg_books).
- Contributing: Suggestions or issues can be reported via the dataset's Hugging Face page.
- Contact: Reach out to the dataset creator through Hugging Face for questions or collaboration.

## Acknowledgments

Special thanks to Project Gutenberg for providing the raw texts and to the Hugging Face team for hosting this dataset, making it accessible to the research community.