DBpedia OpenAI 1M Dataset

A vector search dataset containing 1,000,000 DBpedia entity descriptions with pre-computed OpenAI text-embedding-ada-002 embeddings (1536 dimensions), intended for large-scale similarity search, retrieval tasks, and distributed vector database deployments.

Dataset Overview

  • Size: 1,000,000 base vectors + 10,000 query vectors
  • Embedding Model: OpenAI text-embedding-ada-002
  • Dimensions: 1536
  • Source: KShivendu/dbpedia-entities-openai-1M
  • Format: Parquet, FBIN, DiskANN indices
  • License: MIT

Dataset Structure

Each record contains:

Field      Type           Description
id         int64          Unique entity identifier
text       string         DBpedia entity description
embedding  float32[1536]  Pre-computed OpenAI embedding vector

Directory Structure

dbpedia_openai_1m/
├── parquet/                    # Raw embeddings in Parquet format
│   ├── base.parquet            # 1M base vectors (8.6 GB)
│   └── queries.parquet         # 10K query vectors (88 MB)
├── fbin/                       # Binary vector format for DiskANN
│   ├── base.fbin               # 1M base vectors (5.8 GB)
│   └── queries.fbin            # 10K query vectors (59 MB)
└── diskann/                    # DiskANN indices and shards
    ├── index_64_100_384_disk.index   # Base index (R=64, L=100, PQ=384)
    ├── shard_3/                # 3-shard configuration (27 files)
    ├── shard_5/                # 5-shard configuration (45 files)
    ├── shard_7/                # 7-shard configuration (63 files)
    └── shard_10/               # 10-shard configuration (90 files)

DiskANN Index Configuration

All indices are built with the following parameters:

  • Graph Degree (R): 64
  • Search List Size (L): 100
  • PQ Compression: 384 bytes per vector (0.25 bytes per dimension)
  • Distance Metric: L2 (Euclidean)
  • Build RAM Budget: 100 GB
  • Search RAM Budget: 0.358 GB (memory-resident PQ data at search time)
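
For reference, an index with these parameters could be rebuilt from the fbin files. The sketch below is a minimal, hypothetical example assuming the diskannpy Python bindings; argument names follow recent diskannpy releases and may differ in your version.

import diskannpy
import numpy as np

# Hypothetical rebuild of the published index (not the authors' exact build script)
diskannpy.build_disk_index(
    data="fbin/base.fbin",            # 1M float32 vectors, 1536-D
    distance_metric="l2",             # matches the published index
    index_directory="diskann_rebuild",
    graph_degree=64,                  # R
    complexity=100,                   # L, build-time search list size
    search_memory_maximum=0.358,      # GB of RAM for search-time PQ data
    build_memory_maximum=100.0,       # GB of RAM during build
    num_threads=0,                    # 0 = use all available cores
    pq_disk_bytes=384,                # PQ bytes per vector
    vector_dtype=np.float32,
)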

Shard Configurations

Each shard configuration includes:

  • Data shards: .fbin files with partitioned vectors
  • Index files: _disk.index DiskANN graph indices
  • PQ files: _pq_compressed.bin and _pq_pivots.bin for compression
  • MinIO format: _512_none.indices and _base_none.vectors for distributed deployment

Configuration  Shards  Vectors/Shard  Use Case
shard_3        3       ~333K          Small clusters
shard_5        5       ~200K          Medium clusters
shard_7        7       ~143K          Large clusters
shard_10       10      ~100K          Distributed systems

Usage

Load with Hugging Face Datasets

from datasets import load_dataset

# Load the full dataset (1M rows; pass streaming=True to avoid a full download)
dataset = load_dataset("maknee/dbpedia_openai_1m")

# Inspect the first few records
for item in dataset['train'].select(range(3)):
    text = item['text']
    embedding = item['embedding']  # 1536-D vector
    print(f"Text: {text[:100]}...")
    print(f"Embedding dimensions: {len(embedding)}")

Load Parquet Directly

import pandas as pd

# Load base vectors (hf:// paths require the huggingface_hub package)
base_df = pd.read_parquet("hf://datasets/maknee/dbpedia_openai_1m/parquet/base.parquet")
print(f"Loaded {len(base_df)} vectors with {len(base_df['embedding'].iloc[0])} dimensions")

# Load queries
queries_df = pd.read_parquet("hf://datasets/maknee/dbpedia_openai_1m/parquet/queries.parquet")
print(f"Loaded {len(queries_df)} query vectors")

Use with DiskANN

from huggingface_hub import hf_hub_download

# Download base index
index_path = hf_hub_download(
    repo_id="maknee/dbpedia_openai_1m",
    filename="diskann/index_64_100_384_disk.index",
    repo_type="dataset"
)

# Use with DiskANN library for similarity search
# See: https://github.com/microsoft/DiskANN
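
Searching the index requires its companion files (PQ pivots and compressed data) alongside the .index file, so in practice download the whole diskann/ prefix. The following is a hypothetical sketch assuming diskannpy's StaticDiskIndex API; argument names may differ between versions:

import diskannpy
import numpy as np

# Assumes the full diskann/ directory (index + PQ files) is available locally
index = diskannpy.StaticDiskIndex(
    index_directory="diskann",
    index_prefix="index_64_100_384",
    num_threads=8,
    num_nodes_to_cache=100_000,
)

query = np.random.rand(1536).astype(np.float32)  # stand-in for a real query embedding
neighbors, distances = index.search(query, k_neighbors=10, complexity=100)
print(neighbors, distances)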

Distributed Deployment

For distributed vector search systems:

  1. Download a shard configuration (e.g., 5 shards):
from huggingface_hub import snapshot_download

# Download all shard_5 files
local_dir = snapshot_download(
    repo_id="maknee/dbpedia_openai_1m",
    repo_type="dataset",
    allow_patterns="diskann/shard_5/*"
)
  2. Deploy shards across nodes: Each shard contains:

    • Data: base_base.shard{N}.fbin
    • Index: base_64_100_384.shard{N}_disk.index
    • MinIO files: *_512_none.indices and *_base_none.vectors
  3. Query in parallel: Distribute queries across all shard nodes and merge results, as sketched below.
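
The merge step is a standard scatter-gather: every shard answers the same query with its local top-k, and the coordinator keeps the k globally closest results. A minimal sketch, where shard.search is a hypothetical per-shard call returning (distance, id) pairs:

import heapq

def merged_top_k(query, shards, k=10):
    # Scatter: collect each shard's local top-k candidates
    candidates = []
    for shard in shards:
        candidates.extend(shard.search(query, k))  # hypothetical per-shard search

    # Gather: keep the k smallest distances overall
    return heapq.nsmallest(k, candidates)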

Use Cases

  • Semantic Search: Find similar DBpedia entities by text similarity
  • Knowledge Base Retrieval: Build QA systems over structured knowledge
  • Entity Linking: Connect text mentions to DBpedia entities
  • Recommendation Systems: Suggest related entities based on embeddings
  • Vector Database Benchmarking: Test large-scale similarity search systems
  • Distributed Systems Research: Study sharding and parallel search strategies

Dataset Statistics

  • Total Vectors: 1,000,000 base + 10,000 queries
  • Embedding Dimensions: 1536
  • Total Dataset Size: ~35 GB (all formats)
  • Average Text Length: ~200 characters
  • Coverage: Diverse DBpedia entity types (people, places, organizations, concepts)

Citation

If you use this dataset, please cite:

@dataset{dbpedia_openai_1m,
  author = {maknee},
  title = {DBpedia OpenAI 1M: Pre-embedded DBpedia Entities},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/maknee/dbpedia_openai_1m}
}

Original DBpedia embeddings from:

@dataset{kshivendu_dbpedia,
  author = {KShivendu},
  title = {DBpedia Entities OpenAI 1M},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M}
}

License

This dataset is released under the MIT License. The original embeddings are from OpenAI's text-embedding-ada-002 model.

Contact

For questions or issues, please open an issue on the dataset repository.
