# DBpedia OpenAI 1M Dataset
A comprehensive vector database resource containing 1,000,000 DBpedia entity descriptions with pre-computed OpenAI text-embedding-ada-002 embeddings (1536-D). This dataset is optimized for large-scale similarity search, retrieval tasks, and distributed vector database deployments.
## Dataset Overview
- Size: 1,000,000 base vectors + 10,000 query vectors
- Embedding Model: OpenAI text-embedding-ada-002
- Dimensions: 1536
- Source: KShivendu/dbpedia-entities-openai-1M
- Format: Parquet, FBIN, DiskANN indices
- License: MIT
## Dataset Structure
Each record contains:
| Field | Type | Description |
|---|---|---|
| id | int64 | Unique entity identifier |
| text | string | DBpedia entity description |
| embedding | float32[1536] | Pre-computed OpenAI embedding vector |
### Directory Structure
```
dbpedia_openai_1m/
├── parquet/                         # Raw embeddings in Parquet format
│   ├── base.parquet                 # 1M base vectors (8.6 GB)
│   └── queries.parquet              # 10K query vectors (88 MB)
├── fbin/                            # Binary vector format for DiskANN
│   ├── base.fbin                    # 1M base vectors (5.8 GB)
│   └── queries.fbin                 # 10K query vectors (59 MB)
└── diskann/                         # DiskANN indices and shards
    ├── index_64_100_384_disk.index  # Base index (R=64, L=100, PQ=384)
    ├── shard_3/                     # 3-shard configuration (27 files)
    ├── shard_5/                     # 5-shard configuration (45 files)
    ├── shard_7/                     # 7-shard configuration (63 files)
    └── shard_10/                    # 10-shard configuration (90 files)
```
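The `.fbin` files use the flat binary layout common to DiskANN and the big-ann-benchmarks tooling: a 4-byte little-endian point count, a 4-byte dimension, then row-major `float32` data. A minimal reader sketch under that layout assumption:

```python
import numpy as np

def read_fbin(path, max_rows=None):
    """Read a DiskANN-style .fbin file into an (n, dim) float32 array."""
    with open(path, "rb") as f:
        # Header: uint32 point count, uint32 dimension (assumed layout)
        n, dim = (int(x) for x in np.fromfile(f, dtype=np.uint32, count=2))
        if max_rows is not None:
            n = min(n, max_rows)
        data = np.fromfile(f, dtype=np.float32, count=n * dim)
    return data.reshape(n, dim)

queries = read_fbin("fbin/queries.fbin")
print(queries.shape)  # expected: (10000, 1536)
```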
## DiskANN Index Configuration
All indices are built with the following parameters (a hedged rebuild sketch follows the list):
- Graph Degree (R): 64
- Search List Size (L): 100
- PQ Compression: 384 bytes (0.25 ratio)
- Distance Metric: L2 (Euclidean)
- Build RAM Budget: 100 GB
- Search RAM Budget: 0.358 GB per query
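For reference, here is a sketch of how an index with these parameters could be rebuilt using the `diskannpy` Python bindings. The parameter names follow diskannpy's documented `build_disk_index` API and may differ across releases; this is not necessarily the exact script that produced the published indices.

```python
import diskannpy

# Hypothetical rebuild with the documented parameters; verify the
# signature against your installed diskannpy version.
base = read_fbin("fbin/base.fbin")  # loader from the .fbin sketch above

diskannpy.build_disk_index(
    data=base,                    # (1_000_000, 1536) float32
    distance_metric="l2",         # matches the published indices
    index_directory="diskann_build",
    graph_degree=64,              # R
    complexity=100,               # L (build-time search list size)
    search_memory_maximum=0.358,  # GB per query, as documented above
    build_memory_maximum=100.0,   # GB
    num_threads=16,               # assumed; tune for your machine
    pq_disk_bytes=384,            # PQ compression to 384 bytes per vector
)
```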
## Shard Configurations

Each shard configuration includes:
- Data shards: `.fbin` files with partitioned vectors
- Index files: `_disk.index` DiskANN graph indices
- PQ files: `_pq_compressed.bin` and `_pq_pivots.bin` for compression
- MinIO format: `_512_none.indices` and `_base_none.vectors` for distributed deployment
| Configuration | Shards | Vectors/Shard | Use Case |
|---|---|---|---|
| shard_3 | 3 | ~333K | Small clusters |
| shard_5 | 5 | ~200K | Medium clusters |
| shard_7 | 7 | ~143K | Large clusters |
| shard_10 | 10 | ~100K | Distributed systems |
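To inspect the exact file names in a given configuration without downloading anything, you can list the repository contents with `huggingface_hub` (a small sketch):

```python
from huggingface_hub import list_repo_files

# List one shard configuration to see the per-shard naming scheme
files = [
    f
    for f in list_repo_files("maknee/dbpedia_openai_1m", repo_type="dataset")
    if f.startswith("diskann/shard_5/")
]
print("\n".join(sorted(files)))
```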
## Usage

### Load with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("maknee/dbpedia_openai_1m")

# Access embeddings and text (a small slice; the split has 1M rows)
for item in dataset["train"].select(range(3)):
    text = item["text"]
    embedding = item["embedding"]  # 1536-D vector
    print(f"Text: {text[:100]}...")
    print(f"Embedding dimensions: {len(embedding)}")
```
### Load Parquet Directly

```python
import pandas as pd

# Load base vectors (hf:// paths require huggingface_hub, which
# provides the fsspec filesystem pandas uses here)
base_df = pd.read_parquet("hf://datasets/maknee/dbpedia_openai_1m/parquet/base.parquet")
print(f"Loaded {len(base_df)} vectors with {len(base_df['embedding'].iloc[0])} dimensions")

# Load queries
queries_df = pd.read_parquet("hf://datasets/maknee/dbpedia_openai_1m/parquet/queries.parquet")
print(f"Loaded {len(queries_df)} query vectors")
```
### Use with DiskANN

```python
from huggingface_hub import hf_hub_download

# Download the base index
index_path = hf_hub_download(
    repo_id="maknee/dbpedia_openai_1m",
    filename="diskann/index_64_100_384_disk.index",
    repo_type="dataset",
)

# Use with the DiskANN library for similarity search
# See: https://github.com/microsoft/DiskANN
```
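A hedged search sketch using the `diskannpy` bindings follows. The constructor and `search()` arguments follow diskannpy's documented API, and the `index_prefix` is inferred from the filename above, so verify both against your installed version. Note that loading a disk index typically also requires the companion PQ files (`_pq_pivots.bin`, `_pq_compressed.bin`) from the same directory, not just `_disk.index`.

```python
import diskannpy
import numpy as np

# Open the on-disk index; arguments assumed per diskannpy's docs
index = diskannpy.StaticDiskIndex(
    index_directory="diskann",        # directory containing the index files
    index_prefix="index_64_100_384",  # assumed from index_64_100_384_disk.index
    num_threads=8,
    num_nodes_to_cache=100_000,
)

query = np.random.rand(1536).astype(np.float32)  # stand-in for a real query
neighbors, distances = index.search(query, k_neighbors=10, complexity=100)
print(neighbors, distances)
```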
## Distributed Deployment

For distributed vector search systems:

1. Download a shard configuration (e.g., 5 shards):

   ```python
   from huggingface_hub import snapshot_download

   # Download all shard_5 files
   local_dir = snapshot_download(
       repo_id="maknee/dbpedia_openai_1m",
       repo_type="dataset",
       allow_patterns="diskann/shard_5/*",
   )
   ```

2. Deploy shards across nodes. Each shard contains:
   - Data: `base_base.shard{N}.fbin`
   - Index: `base_64_100_384.shard{N}_disk.index`
   - MinIO files: `*_512_none.indices` and `*_base_none.vectors`

3. Query in parallel: distribute queries across all shard nodes and merge the results (see the scatter-gather sketch below).
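A minimal scatter-gather sketch for step 3, assuming each shard node is wrapped in an object exposing `search(query, k) -> (ids, distances)`; the transport (gRPC, HTTP, ...) is deployment-specific and left abstract here:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def search_all_shards(shards, query, k=10):
    """Fan one query out to every shard and merge the per-shard top-k.

    `shards` is any sequence of objects exposing
    search(query, k) -> (ids, distances); how you reach the remote
    nodes is up to your deployment.
    """
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        partials = list(pool.map(lambda shard: shard.search(query, k), shards))
    # Smallest L2 distance wins globally across all shards
    best = heapq.nsmallest(
        k, ((d, i) for ids, dists in partials for i, d in zip(ids, dists))
    )
    return [(i, d) for d, i in best]
```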
## Use Cases
- Semantic Search: Find similar DBpedia entities by text similarity (see the query-embedding sketch after this list)
- Knowledge Base Retrieval: Build QA systems over structured knowledge
- Entity Linking: Connect text mentions to DBpedia entities
- Recommendation Systems: Suggest related entities based on embeddings
- Vector Database Benchmarking: Test large-scale similarity search systems
- Distributed Systems Research: Study sharding and parallel search strategies
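For semantic search over new text, queries must be embedded with the same model the corpus was embedded with. A sketch using the v1-style `openai` Python client (requires an `OPENAI_API_KEY` in the environment):

```python
from openai import OpenAI
import numpy as np

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Embed the query with the same model as the corpus
resp = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Who founded the Roman Empire?",
)
query_vec = np.asarray(resp.data[0].embedding, dtype=np.float32)  # 1536-D
# ...then search query_vec against the base vectors, e.g. with the
# brute-force or DiskANN sketches above.
```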
## Dataset Statistics
- Total Vectors: 1,000,000 base + 10,000 queries
- Embedding Dimensions: 1536
- Total Dataset Size: ~35 GB (all formats)
- Average Text Length: ~200 characters
- Coverage: Diverse DBpedia entity types (people, places, organizations, concepts)
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{dbpedia_openai_1m,
  author    = {maknee},
  title     = {DBpedia OpenAI 1M: Pre-embedded DBpedia Entities},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/maknee/dbpedia_openai_1m}
}
```
Original DBpedia embeddings from:
```bibtex
@dataset{kshivendu_dbpedia,
  author    = {KShivendu},
  title     = {DBpedia Entities OpenAI 1M},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M}
}
```
## License
This dataset is released under the MIT License. The original embeddings are from OpenAI's text-embedding-ada-002 model.
## Acknowledgments
- Original Dataset: KShivendu/dbpedia-entities-openai-1M
- Embedding Model: OpenAI text-embedding-ada-002
- Index Format: Microsoft DiskANN
- Source Knowledge Base: DBpedia
## Related Datasets
- maknee/wikipedia_qwen_4b - Wikipedia with Qwen embeddings (2560-D)
- maknee/wikipedia_qwen_8b - Wikipedia with Qwen-8B embeddings
## Contact
For questions or issues, please open an issue on the dataset repository.