Datasets:

datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
---|---|---|---|---|---|---|---|---|---|
leo66666/crosscoder-llama-3.2-1b-diff
|
leo66666
|
2024-12-03T03:05:58Z
| 17 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-03T03:00:07Z
| 0 |
---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: original_text
dtype: string
splits:
- name: train
num_bytes: 567249286
num_examples: 100000
download_size: 283793218
dataset_size: 567249286
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task094_conala_calculate_mean
|
supergoose
|
2025-03-10T14:30:56Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T14:30:55Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 7975525
num_examples: 14956
download_size: 1747405
dataset_size: 7975525
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.4_num-company_3_dataset_1_for_gen_12
|
HungVu2003
|
2025-04-30T19:24:47Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-30T19:24:45Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2829795
num_examples: 12498
download_size: 1539242
dataset_size: 2829795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Senju2/context-aware-project
|
Senju2
|
2025-04-23T18:41:30Z
| 16 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-19T18:26:17Z
| 0 |
---
dataset_info:
features:
- name: en
dtype: string
- name: ar
dtype: string
- name: formal
dtype: string
- name: informal
dtype: string
- name: region
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 36533246
num_examples: 231459
download_size: 11689635
dataset_size: 36533246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UE-CESE/Contribution_CESE_au_programme_de_la_CE_pour_2025
|
UE-CESE
|
2025-04-27T14:40:27Z
| 22 | 0 |
[
"task_categories:translation",
"language:fra",
"language:eng",
"language:deu",
"language:sv",
"language:nld",
"language:por",
"language:ita",
"language:hu",
"language:cs",
"language:da",
"language:mt",
"language:hr",
"language:spa",
"language:pol",
"region:us"
] |
[
"translation"
] |
2025-04-27T14:38:07Z
| 0 |
---
language:
- fra
- eng
- deu
- sv
- nld
- por
- ita
- hu
- cs
- da
- mt
- hr
- spa
- pol
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://www.eesc.europa.eu/fr/our-work/publications-other-work/publications/contribution-du-comite-economique-et-social-europeen-au-programme-de-travail-de-la-commission-europeenne-pour-2025
## Description
On 4 December 2024, the European Economic and Social Committee (EESC) adopted a resolution intended to provide the European Commission with a contribution to its 2025 work programme. Structured around the seven pillars of the 2024-2029 political guidelines of European Commission President Ursula von der Leyen, the EESC makes recommendations on how to address urgent priorities such as, among others, a strategy and action plan for civil society, the new action plan for implementing the European Pillar of Social Rights, the Clean Industrial Deal, and the strengthening of the EU's sustainable competitiveness.
|
chrislevy/synthetic-social-media-personas
|
chrislevy
|
2025-06-14T23:31:02Z
| 0 | 0 |
[
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-classification",
"text-generation"
] |
2025-06-14T22:50:16Z
| 0 |
---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- en
---
# synthetic-social-media-personas
## Dataset Description
This dataset contains **229,423** synthetic social media posts generated by various language models to represent diverse account types, personas, and communication styles found across social media platforms. The dataset is designed for experimentation in social media analysis, content classification, persona detection, and understanding online communication patterns.
## Dataset Summary
- **Total Posts**: 229,423
- **Unique Users**: 8,037 (avg 28.5 posts per user)
- **Account Types**: 7 distinct categories
- **Unique Personas**: 110 across all account types
- **Models Used**: 4 different language models (Qwen, Gemma)
- **Average Post Length**: 156 characters (median: 146)
## Account Types Distribution
| Account Type | Posts | Percentage | Personas | Description |
|--------------|-------|------------|----------|-------------|
| Individual | 121,416 | 52.9% | 19 | Regular people with diverse personas and life stages |
| Bot | 47,057 | 20.5% | 15 | Automated accounts with various purposes |
| Brand/Business | 18,202 | 7.9% | 15 | Companies, local shops, and professional services |
| Spam/Scam | 15,528 | 6.8% | 16 | Fraudulent accounts with malicious intent |
| Creative/Meme | 11,048 | 4.8% | 15 | Humor and meme-focused accounts |
| Media/News | 8,481 | 3.7% | 15 | News outlets and media organizations |
| Influencer/Public Figure | 7,691 | 3.4% | 15 | Content creators and public personalities |
## Data Structure
### Core Fields
- `user_id`: Unique identifier for each account
- `account_type`: Category of social media account
- `persona`: Specific persona within the account type
- `model`: Language model used to generate the post
- `post`: The actual social media post content
### Modifier Fields
Posts include various modifier attributes that define the characteristics of the account:
**Individual Accounts**:
- `communication_style`: casual_friendly, sarcastic_witty, wholesome_positive, etc.
- `posting_mood`: optimistic_upbeat, pessimistic_complaining, emotional_dramatic, etc.
- `education_level`: high_school_dropout to graduate_degree
- `political_leaning`: far_left_progressive to far_right_extremist
- `life_stage`: teenager, college_student, young_professional, parent, etc.
- `primary_topic`: work_career, family_kids, hobbies_interests, etc.
**Brand/Business Accounts**:
- `brand_voice`: professional_corporate, casual_approachable, edgy_provocative, etc.
- `marketing_style`: hard_sell_pushy, educational_helpful, entertaining_fun, etc.
- `business_stage`: startup_scrappy, established_confident, struggling_desperate, etc.
**Bot Accounts**:
- `bot_personality`: robotic_formal, friendly_helpful, quirky_weird, etc.
- `response_pattern`: scheduled_regular, triggered_reactive, random_chaotic, etc.
- `content_type`: factual_information, motivational_quotes, alerts_notifications, etc.
And similar modifier sets for other account types.
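Assuming the dataset loads directly from the Hub with the `datasets` library, a minimal sketch for pulling it down and slicing it by the core fields above (the literal `"Bot"` label mirrors the account-type table, but treat the exact strings as an assumption):
```python
from datasets import load_dataset

# Load the full training split from the Hub.
ds = load_dataset("chrislevy/synthetic-social-media-personas", split="train")

# Slice by the core fields documented above; the label "Bot" is assumed to
# match the account-type table and may need adjusting.
bots = ds.filter(lambda row: row["account_type"] == "Bot")
print(bots[0]["persona"], "->", bots[0]["post"])
```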
## Generation Process
The dataset was created using a persona-based generation system:
1. **Account Sampling**: Weighted random selection of account types based on realistic social media distributions
2. **Persona Assignment**: Each account is assigned a specific persona with associated characteristics
3. **Modifier Application**: Personas are enhanced with relevant modifiers (communication style, topics, etc.)
4. **Prompt Generation**: Detailed system prompts are created incorporating all persona and modifier information
5. **Content Generation**: Multiple language models generate authentic posts matching the defined characteristics
The code is [here](https://github.com/DrChrisLevy/synthetic-social-media-personas)
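The linked repository holds the real implementation; as a rough illustration of steps 1 through 4 only, here is a sketch with made-up weights and persona pools (every name and value below is hypothetical):
```python
import random

# Hypothetical weights and persona pools; the real values live in the repo above.
ACCOUNT_WEIGHTS = {"Individual": 0.529, "Bot": 0.205, "Brand/Business": 0.079}
PERSONAS = {
    "Individual": ["young_professional", "parent"],
    "Bot": ["friendly_helpful", "quirky_weird"],
    "Brand/Business": ["startup_scrappy"],
}

def build_system_prompt() -> str:
    # Step 1: weighted random selection of an account type.
    account_type = random.choices(
        list(ACCOUNT_WEIGHTS), weights=list(ACCOUNT_WEIGHTS.values())
    )[0]
    # Step 2: assign a persona from that account type's pool.
    persona = random.choice(PERSONAS[account_type])
    # Steps 3-4: fold persona and modifiers into a system prompt for the generator.
    return f"You are a {persona} {account_type} account. Write one social media post."

print(build_system_prompt())
```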
## Models Used
| Model | Posts Generated | Percentage |
|-------|----------------|------------|
| Qwen/Qwen2.5-7B-Instruct | 97,434 | 42.5% |
| Qwen/Qwen2.5-72B-Instruct | 53,701 | 23.4% |
| google/gemma-3-12b-it | 47,248 | 20.6% |
| Qwen/Qwen2.5-32B-Instruct | 31,040 | 13.5% |
|
ferrazzipietro/LS_Llama-3.1-8B_e3c-sentences-IT-unrevised_NoQuant_16_16_0.01_64_BestF1
|
ferrazzipietro
|
2024-11-25T11:47:13Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-25T11:47:11Z
| 0 |
---
dataset_info:
features:
- name: sentence
dtype: string
- name: entities
list:
- name: offsets
sequence: int64
- name: text
dtype: string
- name: type
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: ground_truth_word_level
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: predictions
sequence: string
- name: ground_truth_labels
sequence: string
splits:
- name: all_validation
num_bytes: 146361
num_examples: 100
- name: test
num_bytes: 1023662
num_examples: 655
download_size: 228559
dataset_size: 1170023
configs:
- config_name: default
data_files:
- split: all_validation
path: data/all_validation-*
- split: test
path: data/test-*
---
|
supergoose/flan_combined_task523_find_if_numbers_or_alphabets_are_more_in_list
|
supergoose
|
2025-03-05T21:57:36Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-05T21:57:35Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 16599262
num_examples: 19439
download_size: 3843300
dataset_size: 16599262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zijian2022/so100_bi_test_27
|
zijian2022
|
2025-02-02T20:06:45Z
| 30 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] |
[
"robotics"
] |
2025-02-02T20:06:42Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 653,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_flex",
"left_wrist_roll",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_flex",
"right_wrist_roll",
"right_gripper"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.first": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
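Since the card's config points the `datasets` loader at `data/*/*.parquet`, the tabular features above can be read with a sketch like the following; this is an assumption about how the default config resolves, and the two camera streams stay in the mp4 files referenced by `video_path` rather than being materialized here:
```python
from datasets import load_dataset

# Loads only the parquet-backed features; camera frames remain in the mp4 files.
ds = load_dataset("zijian2022/so100_bi_test_27", split="train")

frame = ds[0]
print(frame["episode_index"], frame["frame_index"], frame["timestamp"])
print(len(frame["action"]))  # 12 values: left-arm joints, then right-arm joints
```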
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
beyoru/kafka-voice-en
|
beyoru
|
2025-06-06T03:13:17Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-06T02:45:47Z
| 0 |
---
dataset_info:
features:
- name: transcription
dtype: string
- name: audio
dtype: audio
- name: speaker
dtype: string
splits:
- name: train
num_bytes: 263291081.0
num_examples: 405
download_size: 239286667
dataset_size: 263291081.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefandi/countdown_synthetic_v2
|
stefandi
|
2025-06-03T08:26:35Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-02T18:33:33Z
| 0 |
---
dataset_info:
features:
- name: nums
sequence: int64
- name: target
dtype: int64
- name: equation
dtype: string
splits:
- name: train
num_bytes: 133076
num_examples: 2000
download_size: 39066
dataset_size: 133076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aisi-whitebox/wmdp_bio_new_merged_mo2_6epc_finetuned_sandbagging_follow_up_q
|
aisi-whitebox
|
2025-05-27T13:54:04Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-27T13:54:03Z
| 0 |
---
dataset_info:
features:
- name: chat
list:
- name: content
dtype: string
- name: role
dtype: string
- name: targets
dtype: string
- name: metadatas
struct:
- name: dummy
dtype: 'null'
- name: scores
dtype: string
- name: answers
dtype: string
- name: sys_prompts
dtype: string
- name: is_benign
dtype: int64
- name: input_ids
dtype: int64
- name: task_name
dtype: string
- name: sample_index
dtype: int64
- name: dataset_id
dtype: string
- name: sandbagging_executed
dtype: int64
splits:
- name: train
num_bytes: 1022389
num_examples: 1000
download_size: 121940
dataset_size: 1022389
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Octapod/aloha_pink_iii_angles
|
Octapod
|
2024-12-26T14:36:28Z
| 22 | 0 |
[
"task_categories:robotics",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2024-12-26T14:24:10Z
| 0 |
---
task_categories:
- robotics
tags:
- LeRobot
- tutorial
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-aime-25_eval
|
GitBag
|
2025-05-10T03:15:45Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-10T03:15:32Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: message_id
dtype: string
- name: responses
dtype: string
- name: processed_answer
dtype: string
- name: reward
dtype: bool
splits:
- name: train
num_bytes: 804088500
num_examples: 30720
download_size: 322728334
dataset_size: 804088500
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datasets-CNRS/composes-inorganiques
|
datasets-CNRS
|
2025-03-29T21:28:22Z
| 20 | 0 |
[
"task_categories:translation",
"multilinguality:multilingual",
"language:eng",
"language:fra",
"license:cc-by-4.0",
"region:us"
] |
[
"translation"
] |
2024-10-13T18:45:45Z
| 0 |
---
license: cc-by-4.0
language:
- eng
- fra
multilinguality:
- multilingual
viewer: false
task_categories:
- translation
---
> [!NOTE]
> Dataset origin: https://www.ortolang.fr/market/terminologies/composes-inorganiques
## Description
A bilingual (French-English) thesaurus of more than 2,500 inorganic compounds (mineral compounds), classified by chemical element and compound family. The compounds come from the controlled vocabulary used to index bibliographic references in the PASCAL database (1972 to 2015).
## Citation
```
@misc{11403/composes-inorganiques/v1.0,
title = {Composés inorganiques},
author = {INIST},
url = {https://hdl.handle.net/11403/composes-inorganiques/v1.0},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons - Attribution 4.0 International},
year = {2023}
}
```
|
BattleTag/Email
|
BattleTag
|
2024-10-20T02:54:35Z
| 31 | 0 |
[
"task_categories:text-classification",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-classification"
] |
2024-10-15T23:59:13Z
| 0 |
---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 16604720
num_examples: 24043
- name: test
num_bytes: 1997109
num_examples: 3039
- name: val
num_bytes: 2227647
num_examples: 3359
download_size: 13145399
dataset_size: 20829476
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
task_categories:
- text-classification
---
# Dataset Card for Scam Email Dataset
<!-- Provide a quick summary of the dataset. -->
This dataset includes three categories of emails in both Chinese and English.
## Dataset Details
Our dataset contains three categories: AI-scam, scam, and normal. There are 30,441 emails in total, with AI-scam, scam, and normal accounting for 10,147 each. The emails are split across two languages: Chinese (49.9%) and English (50.1%).
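A minimal sketch for loading the three splits declared above and checking the label balance (assuming the string labels match the category names):
```python
from collections import Counter
from datasets import load_dataset

# The card declares train/test/val parquet splits.
ds = load_dataset("BattleTag/Email")

# Tally labels on the training split; the category names above suggest
# AI-scam, scam, and normal, but treat the exact strings as an assumption.
print(Counter(ds["train"]["label"]))
```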
|
nihatavci/my-distiset-dde72cee
|
nihatavci
|
2025-03-13T22:38:12Z
| 15 | 0 |
[
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-13T22:38:06Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': order-and-delivery
'1': product-price
'2': general-inquiry
'3': technical-support
'4': kit-features
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 953
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
usacognition/E2025_03_16_06_11-2025_02_23_esrl_training_data.2
|
usacognition
|
2025-03-16T06:25:58Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-16T06:14:11Z
| 0 |
---
dataset_info:
features:
- name: origin
dtype: string
- name: prompt_json
dtype: string
- name: playground_url
dtype: string
- name: reference_answer_json
dtype: string
- name: new_generation
dtype: string
- name: score
dtype: float64
- name: api_score
dtype: float64
splits:
- name: train_research_cognition_rl_20250311
num_bytes: 9488309
num_examples: 50
- name: train_claude_3_haiku_20240307
num_bytes: 9521697
num_examples: 50
- name: train_sonnet3.6
num_bytes: 9492660
num_examples: 50
- name: train_claude_3_7_sonnet_20250219
num_bytes: 9491499
num_examples: 50
- name: eval_research_cognition_rl_20250311
num_bytes: 15036416
num_examples: 74
- name: eval_claude_3_haiku_20240307
num_bytes: 15060090
num_examples: 74
- name: eval_sonnet3.6
num_bytes: 15034722
num_examples: 74
- name: eval_claude_3_7_sonnet_20250219
num_bytes: 15039723
num_examples: 74
download_size: 33510730
dataset_size: 98165116
configs:
- config_name: default
data_files:
- split: train_research_cognition_rl_20250311
path: data/train_research_cognition_rl_20250311-*
- split: train_claude_3_haiku_20240307
path: data/train_claude_3_haiku_20240307-*
- split: train_sonnet3.6
path: data/train_sonnet3.6-*
- split: train_claude_3_7_sonnet_20250219
path: data/train_claude_3_7_sonnet_20250219-*
- split: eval_research_cognition_rl_20250311
path: data/eval_research_cognition_rl_20250311-*
- split: eval_claude_3_haiku_20240307
path: data/eval_claude_3_haiku_20240307-*
- split: eval_sonnet3.6
path: data/eval_sonnet3.6-*
- split: eval_claude_3_7_sonnet_20250219
path: data/eval_claude_3_7_sonnet_20250219-*
---
|
octava/inavocript-1.5.9
|
octava
|
2025-03-11T08:06:54Z
| 28 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-11T07:58:59Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 8993558669.083712
num_examples: 26003
- name: test
num_bytes: 741791434.982
num_examples: 1746
download_size: 10111290778
dataset_size: 9735350104.065712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
MLDS-NUS/polymer-dynamics-Wi_0.7
|
MLDS-NUS
|
2024-11-28T09:37:49Z
| 24 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-26T02:29:32Z
| 0 |
---
dataset_info:
features:
- name: x
dtype:
array2_d:
shape:
- 2501
- 900
dtype: float32
- name: t
dtype:
array2_d:
shape:
- 2501
- 1
dtype: float32
- name: args
dtype:
array2_d:
shape:
- 2501
- 2
dtype: float32
splits:
- name: valid
num_bytes: 996999960
num_examples: 110
- name: train
num_bytes: 5528817960
num_examples: 610
- name: test_fast
num_bytes: 4531818000
num_examples: 500
- name: test_medium
num_bytes: 4531818000
num_examples: 500
- name: test_slow
num_bytes: 4531818000
num_examples: 500
download_size: 26502646155
dataset_size: 20121271920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test_fast
path: data/test_fast-*
- split: test_medium
path: data/test_medium-*
- split: test_slow
path: data/test_slow-*
---
|
goosull/processed-QnAdataset-196k-part3
|
goosull
|
2025-05-22T16:00:22Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-22T15:58:06Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2295241017
num_examples: 19600
download_size: 1161239320
dataset_size: 2295241017
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hcy-43/DeepMentor_PreTrain
|
hcy-43
|
2025-05-22T15:21:47Z
| 90 | 0 |
[
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-14T07:18:04Z
| 0 |
---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 187312553711
num_examples: 157533719
download_size: 131292264216
dataset_size: 187312553711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hcasademunt/qwen-7b-medical-misaligned-coherent-dataset
|
hcasademunt
|
2025-05-01T21:14:26Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-01T21:14:25Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: answer
dtype: string
- name: aligned
dtype: float64
- name: coherent
dtype: float64
splits:
- name: train
num_bytes: 199761
num_examples: 447
download_size: 88362
dataset_size: 199761
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/j_c_dfiltered_DeepSeek-R1-Distill-Qwen-7B_mbenign_complete_step_t30
|
reasoning-proj
|
2025-05-11T01:35:31Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-11T01:35:26Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
- name: continuation_1
dtype: string
- name: complete_answer_1
dtype: string
- name: continuation_2
dtype: string
- name: complete_answer_2
dtype: string
- name: continuation_3
dtype: string
- name: complete_answer_3
dtype: string
- name: continuation_4
dtype: string
- name: complete_answer_4
dtype: string
- name: continuation_5
dtype: string
- name: complete_answer_5
dtype: string
- name: continuation_6
dtype: string
- name: complete_answer_6
dtype: string
- name: continuation_7
dtype: string
- name: complete_answer_7
dtype: string
- name: continuation_8
dtype: string
- name: complete_answer_8
dtype: string
- name: continuation_model
dtype: string
- name: verifier_score_1
dtype: int64
- name: verifier_score_2
dtype: int64
- name: verifier_score_3
dtype: int64
- name: verifier_score_4
dtype: int64
- name: verifier_score_5
dtype: int64
- name: verifier_score_6
dtype: int64
- name: verifier_score_7
dtype: int64
- name: verifier_score_8
dtype: int64
splits:
- name: train
num_bytes: 100838114
num_examples: 600
download_size: 42186392
dataset_size: 100838114
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
louisbrulenaudet/code-environnement
|
louisbrulenaudet
|
2025-06-24T09:32:19Z
| 450 | 0 |
[
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:question-answering",
"task_categories:text-classification",
"multilinguality:monolingual",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/1451",
"region:us",
"finetuning",
"legal",
"french law",
"droit français",
"Code de l'environnement"
] |
[
"text-generation",
"table-question-answering",
"summarization",
"text-retrieval",
"question-answering",
"text-classification"
] |
2023-12-12T11:19:29Z
| 0 |
---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code de l'environnement
source_datasets:
- original
pretty_name: Code de l'environnement
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code de l'environnement, non-instruct (2025-06-23)
The objective of this project is to provide researchers, professionals and law students with simplified, up-to-date access to all French legal texts, enriched with a wealth of data to facilitate their integration into Community and European projects.
The data is normally refreshed daily across all legal codes; the project aims to simplify the production of training sets and labeling pipelines for the development of free, open-source language models based on open data accessible to all.
## Concurrent reading of the LegalKit
[<img src="https://raw.githubusercontent.com/louisbrulenaudet/ragoon/main/assets/badge.svg" alt="Built with RAGoon" width="200" height="32"/>](https://github.com/louisbrulenaudet/ragoon)
To use all the legal data published on LegalKit, you can use RAGoon:
```bash
pip3 install ragoon
```
Then, you can load multiple datasets using this code snippet:
```python
# -*- coding: utf-8 -*-
import datasets

from ragoon import load_datasets

# Legal codes to pull from the LegalKit collection.
req = [
    "louisbrulenaudet/code-artisanat",
    "louisbrulenaudet/code-action-sociale-familles",
    # ...
]

# Download every requested dataset (non-streaming so they can be concatenated).
datasets_list = load_datasets(
    req=req,
    streaming=False
)

# Merge all the legal codes into a single dataset.
dataset = datasets.concatenate_datasets(
    datasets_list
)
```
### Data Structure for Article Information
This section provides a detailed overview of the elements contained within the `item` dictionary. Each key represents a specific attribute of the legal article, with its associated value providing detailed information.
1. **Basic Information**
- `ref` (string): **Reference** - A reference to the article, combining the title_main and the article `number` (e.g., "Code Général des Impôts, art. 123").
- `texte` (string): **Text Content** - The textual content of the article.
- `dateDebut` (string): **Start Date** - The date when the article came into effect.
- `dateFin` (string): **End Date** - The date when the article was terminated or superseded.
- `num` (string): **Article Number** - The number assigned to the article.
- `id` (string): **Article ID** - Unique identifier for the article.
- `cid` (string): **Chronical ID** - Chronical identifier for the article.
- `type` (string): **Type** - The type or classification of the document (e.g., "AUTONOME").
- `etat` (string): **Legal Status** - The current legal status of the article (e.g., "MODIFIE_MORT_NE").
2. **Content and Notes**
- `nota` (string): **Notes** - Additional notes or remarks associated with the article.
- `version_article` (string): **Article Version** - The version number of the article.
- `ordre` (integer): **Order Number** - A numerical value used to sort articles within their parent section.
3. **Additional Metadata**
- `conditionDiffere` (string): **Deferred Condition** - Specific conditions related to collective agreements.
- `infosComplementaires` (string): **Additional Information** - Extra information pertinent to the article.
- `surtitre` (string): **Subtitle** - A subtitle or additional title information related to collective agreements.
- `nature` (string): **Nature** - The nature or category of the document (e.g., "Article").
- `texteHtml` (string): **HTML Content** - The article's content in HTML format.
4. **Versioning and Extensions**
- `dateFinExtension` (string): **End Date of Extension** - The end date if the article has an extension.
- `versionPrecedente` (string): **Previous Version** - Identifier for the previous version of the article.
- `refInjection` (string): **Injection Reference** - Technical reference to identify the date of injection.
- `idTexte` (string): **Text ID** - Identifier for the legal text to which the article belongs.
- `idTechInjection` (string): **Technical Injection ID** - Technical identifier for the injected element.
5. **Origin and Relationships**
- `origine` (string): **Origin** - The origin of the document (e.g., "LEGI").
- `dateDebutExtension` (string): **Start Date of Extension** - The start date if the article has an extension.
- `idEliAlias` (string): **ELI Alias** - Alias for the European Legislation Identifier (ELI).
- `cidTexte` (string): **Text Chronical ID** - Chronical identifier of the text.
6. **Hierarchical Relationships**
- `sectionParentId` (string): **Parent Section ID** - Technical identifier of the parent section.
- `multipleVersions` (boolean): **Multiple Versions** - Indicates if the article has multiple versions.
- `comporteLiensSP` (boolean): **Contains Public Service Links** - Indicates if the article contains links to public services.
- `sectionParentTitre` (string): **Parent Section Title** - Title of the parent section (e.g., "I : Revenu imposable").
- `infosRestructurationBranche` (string): **Branch Restructuring Information** - Information about branch restructuring.
- `idEli` (string): **ELI ID** - European Legislation Identifier (ELI) for the article.
- `sectionParentCid` (string): **Parent Section Chronical ID** - Chronical identifier of the parent section.
7. **Additional Content and History**
- `numeroBo` (string): **Official Bulletin Number** - Number of the official bulletin where the article was published.
- `infosRestructurationBrancheHtml` (string): **Branch Restructuring Information (HTML)** - Branch restructuring information in HTML format.
- `historique` (string): **History** - Historical context or changes specific to collective agreements.
- `infosComplementairesHtml` (string): **Additional Information (HTML)** - Additional information in HTML format.
- `renvoi` (string): **Reference** - References to content within the article (e.g., "(1)").
- `fullSectionsTitre` (string): **Full Section Titles** - Concatenation of all titles in the parent chain.
- `notaHtml` (string): **Notes (HTML)** - Additional notes or remarks in HTML format.
- `inap` (string): **INAP** - A placeholder for INAP-specific information.
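A minimal sketch, assuming the field names documented above, that loads this code and prints the reference and legal status of the first few articles:
```python
from datasets import load_dataset

ds = load_dataset("louisbrulenaudet/code-environnement", split="train")

for item in ds.select(range(3)):
    # `ref` combines the code title and article number; `etat` is the legal status.
    print(item["ref"], "-", item["etat"])
```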
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
|
shantanusutar/ocr
|
shantanusutar
|
2025-02-28T08:03:25Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-28T08:03:07Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: binary
- name: text
dtype: string
splits:
- name: train
num_bytes: 700610
num_examples: 26
download_size: 694993
dataset_size: 700610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SiliangZ/mistral_irl3_rm_data_idpo
|
SiliangZ
|
2025-01-21T19:10:54Z
| 14 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-21T19:10:35Z
| 0 |
---
dataset_info:
features:
- name: real
list:
- name: content
dtype: string
- name: role
dtype: string
- name: generated
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 916829347.0
num_examples: 207865
download_size: 515476403
dataset_size: 916829347.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task934_turk_simplification
|
supergoose
|
2025-03-10T14:30:49Z
| 13 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T14:30:48Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 5586329
num_examples: 7052
download_size: 1945233
dataset_size: 5586329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ljnlonoljpiljm/stockimage-scored-pt10
|
ljnlonoljpiljm
|
2025-06-17T20:24:41Z
| 0 | 0 |
[
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-17T19:49:25Z
| 0 |
---
dataset_info:
features:
- name: __key__
dtype: string
- name: __url__
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: similarity
dtype: float64
splits:
- name: train
num_bytes: 41194366207.0
num_examples: 1000000
download_size: 40842891911
dataset_size: 41194366207.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ayush-Singh/RM-Bench-chat-Qwen2.5-7B-Instruct-scores
|
Ayush-Singh
|
2025-01-23T21:10:43Z
| 7 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-23T21:10:41Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: subset
dtype: string
- name: prompt
dtype: string
- name: error_key
dtype: string
- name: error
dtype: string
- name: chosen_1
dtype: string
- name: chosen_2
dtype: string
- name: chosen_3
dtype: string
- name: rejected_1
dtype: string
- name: rejected_2
dtype: string
- name: rejected_3
dtype: string
- name: chosen_1_score
dtype: int64
- name: chosen_1_justification
dtype: string
- name: rejected_1_score
dtype: int64
- name: rejected_1_justification
dtype: string
- name: chosen_2_score
dtype: int64
- name: chosen_2_justification
dtype: string
- name: rejected_2_score
dtype: int64
- name: rejected_2_justification
dtype: string
- name: chosen_3_score
dtype: int64
- name: chosen_3_justification
dtype: string
- name: rejected_3_score
dtype: int64
- name: rejected_3_justification
dtype: string
splits:
- name: train
num_bytes: 3902338
num_examples: 129
download_size: 1693318
dataset_size: 3902338
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
smartcat/Amazon_Baby_Products_2023
|
smartcat
|
2024-10-31T08:42:59Z
| 14 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-29T14:05:43Z
| 0 |
---
dataset_info:
features:
- name: main_category
dtype: string
- name: title
dtype: string
- name: average_rating
dtype: float64
- name: rating_number
dtype: int64
- name: features
dtype: string
- name: description
dtype: string
- name: price
dtype: float64
- name: images
list:
- name: thumb
dtype: string
- name: large
dtype: string
- name: variant
dtype: string
- name: hi_res
dtype: string
- name: videos
list:
- name: title
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: store
dtype: string
- name: categories
sequence: string
- name: parent_asin
dtype: string
- name: item_weight
dtype: string
- name: brand
dtype: string
- name: item_model_number
dtype: string
- name: product_dimensions
dtype: string
- name: batteries_required
dtype: string
- name: color
dtype: string
- name: material
dtype: string
- name: material_type
dtype: string
- name: style
dtype: string
- name: number_of_items
dtype: string
- name: manufacturer
dtype: string
- name: package_dimensions
dtype: string
- name: date_first_available
dtype: int64
- name: best_sellers_rank
dtype: string
- name: age_range_(description)
dtype: string
splits:
- name: train
num_bytes: 74252952
num_examples: 22767
download_size: 33492627
dataset_size: 74252952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
The original dataset can be found at: https://amazon-reviews-2023.github.io/
## Dataset Details
This dataset was downloaded from the link above; it is the meta dataset for the Baby Products category.
### Dataset Description
This dataset is a refined version of the Amazon Baby Products 2023 meta dataset, which originally contained baby product metadata for products sold on Amazon. The dataset includes detailed information about products such as their descriptions, ratings, prices, images, and features. The primary focus of this modification was to ensure the completeness of key fields while simplifying the dataset by removing irrelevant or empty columns.
The table below represents the original structure of the dataset.
<table border="1" cellpadding="5" cellspacing="0">
<tr>
<th>Field</th>
<th>Type</th>
<th>Explanation</th>
</tr>
<tr>
<td>main_category</td>
<td>str</td>
<td>Main category (i.e., domain) of the product.</td>
</tr>
<tr>
<td>title</td>
<td>str</td>
<td>Name of the product.</td>
</tr>
<tr>
<td>average_rating</td>
<td>float</td>
<td>Rating of the product shown on the product page.</td>
</tr>
<tr>
<td>rating_number</td>
<td>int</td>
<td>Number of ratings for the product.</td>
</tr>
<tr>
<td>features</td>
<td>list</td>
<td>Bullet-point format features of the product.</td>
</tr>
<tr>
<td>description</td>
<td>list</td>
<td>Description of the product.</td>
</tr>
<tr>
<td>price</td>
<td>float</td>
<td>Price in US dollars (at time of crawling).</td>
</tr>
<tr>
<td>images</td>
<td>list</td>
<td>Images of the product. Each image has different sizes (thumb, large, hi_res). The “variant” field shows the position of the image.</td>
</tr>
<tr>
<td>videos</td>
<td>list</td>
<td>Videos of the product including title and url.</td>
</tr>
<tr>
<td>store</td>
<td>str</td>
<td>Store name of the product.</td>
</tr>
<tr>
<td>categories</td>
<td>list</td>
<td>Hierarchical categories of the product.</td>
</tr>
<tr>
<td>details</td>
<td>dict</td>
<td>Product details, including materials, brand, sizes, etc.</td>
</tr>
<tr>
<td>parent_asin</td>
<td>str</td>
<td>Parent ID of the product.</td>
</tr>
<tr>
<td>bought_together</td>
<td>list</td>
<td>Recommended bundles from the websites.</td>
</tr>
</table>
### Modifications made
<ul>
<li>Products without a description, title, images or details were removed.</li>
<li>Lists in features and description are transformed into strings concatenated with newlines (see the sketch after this list).</li>
<li>For the details column, only the top 16 most frequent detail types were kept; the details column was then split into 16 new columns based on those detail types.</li>
<li>Products with date first available before the year 2015 are dropped.</li>
<li>Products with is_discontinued_by_manufacturer set to 'true' or 'yes' are dropped. Then that column was dropped.</li>
<li>Column bought_together is dropped due to missing values.</li>
</ul>
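As a rough sketch of the first two steps above, assuming the raw meta dataset sits in a JSONL file with the original column names (the path is hypothetical):
```python
import pandas as pd

# Hypothetical path to the raw Baby Products meta dump.
df = pd.read_json("meta_Baby_Products.jsonl", lines=True)

# Drop products whose description, title, images, or details are missing or empty.
for col in ("description", "title", "images", "details"):
    df = df[df[col].map(lambda v: isinstance(v, (str, list, dict)) and len(v) > 0)]

# Flatten the list-valued features and description columns into newline-joined strings.
for col in ("features", "description"):
    df[col] = df[col].map("\n".join)
```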
### Dataset Size
<ul>
<li>Total entries: 22,767</li>
<li>Total columns: 27</li>
</ul>
### Final Structure
<table border="1" cellpadding="5" cellspacing="0">
<tr>
<th>Field</th>
<th>Type</th>
<th>Explanation</th>
</tr>
<tr>
<td>main_category</td>
<td>str</td>
<td>Main category</td>
</tr>
<tr>
<td>title</td>
<td>str</td>
<td>Name of the product</td>
</tr>
<tr>
<td>average_rating</td>
<td>float</td>
<td>Rating of the product shown on the product page.</td>
</tr>
<tr>
<td>rating_number</td>
<td>int</td>
<td>Number of ratings for the product.</td>
</tr>
<tr>
<td>features</td>
<td>list</td>
<td>Bullet-point format features of the product.</td>
</tr>
<tr>
<td>description</td>
<td>list</td>
<td>Description of the product.</td>
</tr>
<tr>
<td>price</td>
<td>float</td>
<td>Price in US dollars (at time of crawling).</td>
</tr>
<tr>
<td>images</td>
<td>list</td>
<td>Images of the product. Each image has different sizes (thumb, large, hi_res). The “variant” field shows the position of the image.</td>
</tr>
<tr>
<td>videos</td>
<td>list</td>
<td>Videos of the product including title and url.</td>
</tr>
<tr>
<td>store</td>
<td>str</td>
<td>Store name of the product.</td>
</tr>
<tr>
<td>details</td>
<td>dict</td>
<td>Product details, including materials, brand, sizes, etc.</td>
</tr>
<tr>
<td>parent_asin</td>
<td>str</td>
<td>Parent ID of the product.</td>
</tr>
<tr>
<td>item_weight</td>
<td>str</td>
<td>Weight of the item</td>
</tr>
<tr>
<td>brand</td>
<td>str</td>
<td>Brand name</td>
</tr>
<tr>
<td>item_model_number</td>
<td>str</td>
<td>Model number of the item</td>
</tr>
<tr>
<td>product_dimensions</td>
<td>str</td>
<td>Dimensions of the product</td>
</tr>
<tr>
<td>batteries_required</td>
<td>str</td>
<td>Batteries required</td>
</tr>
<tr>
<td>color</td>
<td>str</td>
<td>Color</td>
</tr>
<tr>
<td>material</td>
<td>str</td>
<td>Material</td>
</tr>
<tr>
<td>material_type</td>
<td>str</td>
<td>Material</td>
</tr>
<tr>
<td>style</td>
<td>str</td>
<td>Style</td>
</tr>
<tr>
<td>number_of_items</td>
<td>str</td>
<td>Number of items</td>
</tr>
<tr>
<td>manufacturer</td>
<td>str</td>
<td>Manufacturer</td>
</tr>
<tr>
<td>package_dimensions</td>
<td>str</td>
<td>Package dimensions</td>
</tr>
<tr>
<td>date_first_available</td>
<td>int64</td>
<td>Date product was first time available</td>
</tr>
<tr>
<td>best_sellers_rank</td>
<td>str</td>
<td>Best seller rank</td>
</tr>
<tr>
<td>age_range_(description)</td>
<td>str</td>
<td>Age range</td>
</tr>
</table>
|
rubenroy/GammaCorpus-v1-10k-UNFILTERED
|
rubenroy
|
2025-02-01T16:18:18Z
| 64 | 8 |
[
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"chat-dataset",
"conversational-ai",
"natural-language-processing",
"ai-generated",
"single-turn-dialogue",
"jsonl",
"nlp",
"gammacorpus",
"chat",
"conversational"
] |
[
"text-generation"
] |
2025-01-23T05:36:42Z
| 0 |
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- chat-dataset
- conversational-ai
- natural-language-processing
- ai-generated
- single-turn-dialogue
- jsonl
- nlp
- gammacorpus
- chat
- conversational
pretty_name: GammaCorpus
size_categories:
- 10K<n<100K
---
# GammaCorpus: v1 - 10k - UNFILTERED
> [!NOTE]
> 5 million tokens of pure unfiltered user and AI-generated data
## What is it?
The **GammaCorpus v1 10k Unfiltered** dataset consists of 10,000 structured single-turn conversations, where each interaction includes:
- **Input**: A user prompt or question.
- **Output**: A response generated by an AI assistant.
This dataset contains approximately **5 million tokens** of text. It is designed to facilitate the training and evaluation of conversational AI models. This dataset can be especially useful if you need a collection of very diverse human-generated prompts and the corresponding responses from a SOTA model.
> [!WARNING]
> **Warning:** This is the *FIRST* version of GammaCorpus, we HEAVILY recommend using the SECOND, LATEST version of GammaCorpus. You can find the full GammaCorpus HF collection [here](https://huggingface.co/collections/rubenroy/gammacorpus-67765abf607615a0eb6d61ac).
## Dataset Summary
- **Number of Rows**: 10,000
- **Format**: JSONL
- **Total Tokens**: ~5 million (exact: 5,197,481)
- **Language**: English
- **Data Type**: User and AI-generated content
- **Potential Content**: May contain NSFW or toxic content.
## Dataset Structure
### Data Instances
The dataset is formatted in JSONL, where each line is a JSON object. Below is an example:
```json
{
"input": "Write some Python code which implements the bisection method for root finding.",
"output": "The bisection method is a root-finding algorithm that repeatedly bisects an interval... (code snippet omitted for brevity)."
}
```
### Data Fields
- **`input` (string)**: The user-provided query or prompt.
- **`output` (string)**: The AI-generated response to the input.
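A minimal sketch for loading the JSONL from the Hub and inspecting one pair, assuming the default config picks up the JSONL file:
```python
from datasets import load_dataset

ds = load_dataset("rubenroy/GammaCorpus-v1-10k-UNFILTERED", split="train")

example = ds[0]
print(example["input"])
print(example["output"])
```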
## Considerations for Using the Data
### Biases
As the dataset is generated from user queries and AI responses, it may contain biases inherent in the underlying AI model or reflective of common societal biases. Additionally:
- Some entries may contain NSFW or toxic content.
- Ethical, cultural, and societal biases present in the data could propagate to models trained on it.
No additional filtering has been applied to minimize harmful content; users are therefore encouraged to preprocess the dataset according to their requirements.
> [!CAUTION]
> **Caution:** It is recommended to filter this dataset before using it in production applications, as it may contain inappropriate data.
### Other Known Limitations
- The dataset consists of single-turn conversations only. Multi-turn conversations are not included.
- Certain topics may be overrepresented or underrepresented based on user query patterns.
- Content diversity may not fully reflect real-world conversational scenarios.
## Additional Information
### Licensing Information
The dataset is released under the **[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)**. Please refer to the license for usage rights and restrictions.
|
loqhunter/Elhunter
|
loqhunter
|
2025-03-26T19:15:19Z
| 12 | 1 |
[
"task_categories:token-classification",
"language:id",
"language:en",
"language:hi",
"language:vi",
"license:apache-2.0",
"size_categories:n<1K",
"region:us",
"chemistry",
"finance",
"legal",
"code"
] |
[
"token-classification"
] |
2025-03-26T19:09:17Z
| 0 |
---
license: apache-2.0
task_categories:
- token-classification
language:
- id
- en
- hi
- vi
tags:
- chemistry
- finance
- legal
- code
pretty_name: Jaya
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
ydeng9/OpenVLThinker_sft_iter2
|
ydeng9
|
2025-03-23T01:58:38Z
| 34 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-23T01:58:32Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: images
sequence: image
splits:
- name: train
num_bytes: 153665744.234
num_examples: 5542
download_size: 109866842
dataset_size: 153665744.234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task1438_doqa_cooking_answer_generation
|
supergoose
|
2025-03-10T14:29:40Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T14:29:39Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 2159429
num_examples: 850
download_size: 799872
dataset_size: 2159429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qwselfcorr/math_turn1_wrong_gen_processed
|
qwselfcorr
|
2025-01-31T12:49:20Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-31T12:49:17Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: first_rm
dtype: bool
- name: second_rewards
sequence: bool
- name: flag
dtype: bool
- name: turn
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 119144131
num_examples: 25347
download_size: 51281833
dataset_size: 119144131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
magnifi/Phi3_intent_v47_4_w_unknown_upper_lower
|
magnifi
|
2024-12-20T19:06:50Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-20T19:06:49Z
| 0 |
---
dataset_info:
features:
- name: Query
dtype: string
- name: true_intent
dtype: string
splits:
- name: train
num_bytes: 1405888
num_examples: 19600
- name: validation
num_bytes: 8109
num_examples: 113
download_size: 408984
dataset_size: 1413997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
MymyTL/CodeBertScore_C
|
MymyTL
|
2024-11-28T20:15:27Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-28T15:38:25Z
| 0 |
---
dataset_info:
features:
- name: prompts
dtype: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 8273
num_examples: 10
download_size: 10122
dataset_size: 8273
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/dk_hate
|
mteb
|
2025-05-09T12:18:37Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-09T12:18:34Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 345443.88513513515
num_examples: 2900
- name: test
num_bytes: 32962.425531914894
num_examples: 322
download_size: 260966
dataset_size: 378406.3106670501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/codegolf_50000_samples
|
mlfoundations-dev
|
2025-01-05T22:15:37Z
| 18 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-05T22:15:34Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 112433348
num_examples: 50000
download_size: 59861920
dataset_size: 112433348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
michsethowusu/hausa-kinyarwanda_sentence-pairs
|
michsethowusu
|
2025-04-02T10:43:54Z
| 8 | 0 |
[
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-02T10:43:49Z
| 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Hausa
dtype: string
- name: Kinyarwanda
dtype: string
splits:
- name: train
num_bytes: 45802278
num_examples: 340233
download_size: 45802278
dataset_size: 45802278
configs:
- config_name: default
data_files:
- split: train
path: Hausa-Kinyarwanda_Sentence-Pairs.csv
---
# Hausa-Kinyarwanda_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by META. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Hausa-Kinyarwanda_Sentence-Pairs
- **Number of Rows**: 340233
- **Number of Columns**: 3
- **Columns**: score, Hausa, Kinyarwanda
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Hausa`: The first sentence in the pair (language 1).
3. `Kinyarwanda`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
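For a quick start, the pairs can be loaded and filtered with the `datasets` library. This is a minimal sketch; the 0.9 score threshold is an arbitrary illustration, not a recommendation from the dataset authors.
```python
from datasets import load_dataset

# The CSV file is exposed as the default "train" split (see the YAML config above).
pairs = load_dataset("michsethowusu/hausa-kinyarwanda_sentence-pairs", split="train")

# Keep only high-confidence pairs, e.g. before fine-tuning a translation model.
high_conf = pairs.filter(lambda row: row["score"] >= 0.9)

print(f"{len(high_conf)} of {len(pairs)} pairs have score >= 0.9")
print(high_conf[0]["Hausa"], "->", high_conf[0]["Kinyarwanda"])
```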
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL Workshop on Representation Learning for NLP, 2017.
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web.
[9] Paul-Ambroise Duquenne, Hongyu Gong and Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS, 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.
|
kaiwenw/combine_1.5B_and_blockwise
|
kaiwenw
|
2025-04-04T03:44:38Z
| 71 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-04T03:37:08Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: processed_answer
sequence: string
- name: reward
sequence: bool
- name: min_length
dtype: int64
- name: max_length
dtype: int64
- name: pass@1
dtype: float64
- name: pass@16
dtype: bool
- name: cons@16
dtype: float64
- name: roll_in_ids
sequence:
sequence: int64
- name: roll_outs_ids
sequence:
sequence: int64
splits:
- name: train
num_bytes: 55019408441
num_examples: 47488
- name: validation
num_bytes: 1190248449
num_examples: 1000
- name: test
num_bytes: 1169189073
num_examples: 1000
download_size: 7498454944
dataset_size: 57378845963
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_f9000cb0-5a3a-48da-97d5-dccfbc328eca
|
argilla-internal-testing
|
2024-11-29T11:26:56Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-29T11:26:55Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
daniellawson9999/tiny-terminal-bridge-full-64px
|
daniellawson9999
|
2025-02-17T05:29:06Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-17T05:29:05Z
| 0 |
---
dataset_info:
features:
- name: img
dtype: image
- name: action
sequence: float32
- name: rotation_delta
sequence: float32
- name: open_gripper
sequence: uint8
- name: goal
dtype: string
- name: goal_img
dtype: image
- name: terminal
dtype: uint8
splits:
- name: train
num_bytes: 3354550.0
num_examples: 172
download_size: 1734458
dataset_size: 3354550.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sma1-rmarud/PLAT6
|
sma1-rmarud
|
2025-05-14T02:37:50Z
| 27 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-11T13:08:01Z
| 0 |
---
dataset_info:
config_name: essay
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: answer
dtype: string
- name: rubric
dtype: string
- name: max_score
dtype: int64
splits:
- name: test
num_bytes: 38565
num_examples: 6
download_size: 38799
dataset_size: 38565
configs:
- config_name: essay
data_files:
- split: test
path: essay/test-*
---
|
emredeveloper/sentetic-data-children-stories-dataset
|
emredeveloper
|
2025-01-09T21:56:22Z
| 25 | 1 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-09T21:56:20Z
| 0 |
---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
- name: habit
dtype: string
- name: positiverank
dtype: int64
splits:
- name: train
num_bytes: 11324
num_examples: 10
download_size: 13000
dataset_size: 11324
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_1_for_gen_7
|
HungVu2003
|
2025-04-29T18:54:00Z
| 19 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-29T18:53:51Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 3248834
num_examples: 12499
download_size: 1729532
dataset_size: 3248834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AlexCuadron/SWE-Bench-Verified-O1-reasoning-high-results
|
AlexCuadron
|
2024-12-29T20:18:47Z
| 12,388 | 4 |
[
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3903",
"region:us",
"openai",
"llm",
"openhands",
"codeact",
"python",
"bug-fixing",
"code-repair",
"program-repair",
"step-by-step-reasoning",
"multi-turn",
"action-observation",
"interactive-programming",
"reasoning-traces",
"github-issues",
"swe-bench",
"open-source",
"software-engineering",
"program-synthesis",
"code-generation",
"patches",
"evaluation-results",
"benchmarks",
"verification-data",
"developer-tools",
"o1",
"scale_time_inference"
] |
[
"question-answering",
"text-generation"
] |
2024-12-26T12:37:46Z
| 0 |
---
license: cc-by-4.0
citation: |
@misc{swe_bench_o1_2024,
title = {SWE-Bench-Verified-O1-reasoning-high-results (Revision cdca13c)},
author = {Cuadron, Alejandro and
Li, Dacheng and
Wang, Xingyao and
Zhuang, Siyuan and
Wang, Yichuan and
Schroeder, Luis G. and
Xia, Tian and
Desai, Aditya and
Stoica, Ion and
Neubig, Graham and
Gonzalez, Joseph E.},
year = 2024,
url = {https://huggingface.co/datasets/AlexCuadron/SWE-Bench-Verified-O1-reasoning-high-results},
doi = {10.57967/hf/3900},
publisher = {Hugging Face}
}
language:
- en
task_categories:
- question-answering
- text-generation
tags:
- openai
- llm
- openhands
- codeact
- python
- bug-fixing
- code-repair
- program-repair
- step-by-step-reasoning
- multi-turn
- action-observation
- interactive-programming
- reasoning-traces
- github-issues
- swe-bench
- open-source
- software-engineering
- program-synthesis
- code-generation
- patches
- evaluation-results
- benchmarks
- verification-data
- developer-tools
- o1
- scale_time_inference
size_categories:
- 1M<n<10M
viewer: true
configs:
- config_name: default
data_files:
- split: test
path: dataset_viewer.parquet
---
# SWE-Bench Verified O1 Dataset
## Executive Summary
This repository contains verified reasoning traces from the O1 model evaluating software engineering tasks. Using OpenHands + CodeAct v2.2, we tested O1's bug-fixing capabilities on the [SWE-Bench Verified dataset](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Verified), achieving a 28.8% success rate across 500 test instances.
## Overview
This dataset was generated using the CodeAct framework, which aims to improve code generation through enhanced action-based reasoning. Building on OpenHands, a framework designed for multi-turn interactive programming tasks, we tested O1's issue-resolution capabilities with `reasoning_effort = 'high'`.
OpenHands implements a structured action-observation cycle where agents interact with computational environments through well-defined actions such as file manipulation, code editing, code execution, and bash commands. Each action generates corresponding observations that capture environmental changes and execution results. These observations and the history of previous interactions are maintained in a chronological event stream that informs the agent's next decisions.
The traces in this dataset showcase O1's step-by-step reasoning process when analyzing and fixing bugs. Each trace includes the model's complete thought process, from initial bug analysis to final patch generation.
We evaluated O1's performance on the SWE-Bench benchmark using the detailed guide from OpenHands
[OpenHands/evaluation/benchmarks/swe_bench](https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation/benchmarks/swe_bench). Below are the detailed results:
### Performance Metrics
<div style="display: flex; justify-content: flex-start; gap: 20px;">
| Key Metrics | Result |
|------------|---------|
| Success Rate | 28.8% (144/500) |
| Coverage | 98.6% (493/500) |
| Completion Rate | 91.6% (458/500) |
| Empty Patches | 7% (35/500) |

| Project | Resolved Cases | % of Total |
|---------|---------------|------------|
| Django | 72 | 14.4% |
| SymPy | 20 | 4.0% |
| Scikit-learn | 13 | 2.6% |
| Sphinx | 10 | 2.0% |
| Matplotlib | 8 | 1.6% |
| Xarray | 9 | 1.8% |
| Pytest | 5 | 1.0% |
| Astropy | 3 | 0.6% |
| Requests | 2 | 0.4% |
| Flask | 1 | 0.2% |
| Pylint | 1 | 0.2% |
</div>
## Dataset Organization
### 1. Raw Data
- **File**: `output.jsonl`
- **Contents**: Aggregated traces for all issues
### 2. Dataset Viewer
- **File**: `dataset_viewer.parquet`
- **Format**: Structured Parquet file
- **Key Fields**:
- `issue_name`: Unique identifier (e.g., django__django-11066)
- `project`: Source project name
- `issue_id`: Issue identifier
- `num_turns`: Interaction turn count
- `full_conversation_jsonl`: Complete conversation history
- `patch`: Generated patch content
- `success`: Fix success status
- `execution_time`: Processing duration
### 3. Reasoning Traces
- **Directory**: `llm_completions/`
- **Format**: JSONL files per issue
- **Turn Limit**: 30 turns per issue (excluding linting operations)
- **Example**: `django__django-11066.jsonl` with 14 interaction turns (see the download sketch below)
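A single trace can be fetched and parsed with `huggingface_hub`. This is a minimal sketch: the in-repo path is assumed to follow the directory layout described above, using the example issue as the filename.
```python
import json
from huggingface_hub import hf_hub_download

# Download one reasoning-trace file from this dataset repository.
path = hf_hub_download(
    repo_id="AlexCuadron/SWE-Bench-Verified-O1-reasoning-high-results",
    filename="llm_completions/django__django-11066.jsonl",
    repo_type="dataset",
)

# Each line of the JSONL file is one LLM completion record for the issue.
with open(path) as f:
    turns = [json.loads(line) for line in f]
print(f"Loaded {len(turns)} completion records")
```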
### 4. Evaluation Data
- **Directory**: `eval_outputs/`
- **Structure Per Issue**:
```
eval_outputs/django__django-11066/
├── patch.diff # Final code changes
├── eval.sh # Evaluation script
├── report.json # Detailed metrics
├── run_instance.log # Full process log
└── test_output.txt # Test suite results
```
## Getting Started
### Installation
```bash
# Install the Hugging Face datasets library
pip install datasets
```
### Basic Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("AlexCuadron/SWE-Bench-Verified-O1-reasoning-high-results", split="test")
print(f"Loaded {len(dataset)} examples")
```
### Example Usage
#### 1. Basic Dataset Exploration
```python
# Get information about a single example
example = dataset[0]
print(f"Issue Name: {example['issue_name']}")
print(f"Project: {example['project']}")
print(f"Success: {example['success']}")
# Expected output:
# Issue Name: django__django-11066
# Project: django
# Success: True
```
#### 2. Dataset Analytics
```python
# Get success statistics
successful_fixes = len([x for x in dataset if x['success']])
total_examples = len(dataset)
success_rate = (successful_fixes / total_examples) * 100
print(f"Success Rate: {success_rate:.1f}% ({successful_fixes}/{total_examples})")
# Get project distribution
project_counts = {}
for item in dataset:
project = item['project']
project_counts[project] = project_counts.get(project, 0) + 1
print("\nProject Distribution:")
for project, count in sorted(project_counts.items(), key=lambda x: x[1], reverse=True):
print(f"{project}: {count} examples")
# Expected output:
# Success Rate: 28.8% (144/500)
#
# Project Distribution:
# django: 72 examples
# sympy: 20 examples
# scikit-learn: 13 examples
# ...
```
#### 3. Accessing Patches
```python
# Find and display a successful patch
def get_successful_patch():
for item in dataset:
if item['success']:
return {
'issue_name': item['issue_name'],
'project': item['project'],
'patch': item['patch']
}
return None
patch_info = get_successful_patch()
if patch_info:
print(f"Successful patch for {patch_info['issue_name']} ({patch_info['project']}):")
print("=" * 50)
print(patch_info['patch'])
```
### Advanced Usage
For more examples and advanced usage, visit our [GitHub repository](https://github.com/All-Hands-AI/OpenHands).
## Citation
```
@misc{swe_bench_o1_2024,
title = {SWE-Bench-Verified-O1-reasoning-high-results (Revision cdca13c)},
author = {Cuadron, Alejandro and
Li, Dacheng and
Wang, Xingyao and
Zhuang, Siyuan and
Wang, Yichuan and
Schroeder, Luis G. and
Xia, Tian and
Desai, Aditya and
Stoica, Ion and
Neubig, Graham and
Gonzalez, Joseph E.},
year = 2024,
url = {https://huggingface.co/datasets/AlexCuadron/SWE-Bench-Verified-O1-reasoning-high-results},
doi = {10.57967/hf/3900},
publisher = {Hugging Face}
}
```
## Team
A collaborative effort between UC Berkeley, CMU, and OpenHands.
### Authors
- Alejandro Cuadron (UC Berkeley)
- Dacheng Li (UC Berkeley)
- Xingyao Wang (OpenHands)
- Siyuan Zhuang (UC Berkeley)
- Yichuan Wang (UC Berkeley)
- Luis G. Schroeder (UC Berkeley)
- Tian Xia (UC Berkeley)
- Aditya Desai (UC Berkeley)
- Ion Stoica (UC Berkeley)
- Graham Neubig (CMU, OpenHands)
- Joseph E. Gonzalez (UC Berkeley)
**✉ Contact:** Alejandro Cuadron ([email protected])
|
OpenVoiceOS/ovos-intents-train-latest
|
OpenVoiceOS
|
2025-06-18T02:42:21Z
| 0 | 0 |
[
"task_categories:text-classification",
"language:en",
"language:de",
"language:it",
"language:pt",
"language:da",
"language:ca",
"language:gl",
"language:fr",
"language:es",
"language:nl",
"language:eu",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[
"text-classification"
] |
2025-05-29T18:43:03Z
| 0 |
---
task_categories:
- text-classification
language:
- en
- de
- it
- pt
- da
- ca
- gl
- fr
- es
- nl
- eu
pretty_name: OpenVoiceOS Multilingual Intents
datasets:
- Jarbas/ovos-llm-augmented-intents
- Jarbas/ovos-intents-massive-subset
- Jarbas/ovos-weather-intents
- Jarbas/music_queries_templates
- Jarbas/OVOSGitLocalize-Intents
- Jarbas/ovos_intent_examples
- Jarbas/ovos-common-query-intents
---
|
jablonkagroup/MUV_600-multimodal
|
jablonkagroup
|
2025-05-11T22:02:45Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-11T22:02:05Z
| 0 |
---
dataset_info:
features:
- name: SMILES
dtype: string
- name: MUV-600
dtype: int64
- name: split
dtype: string
- name: SMILES_ORIGINAL
dtype: string
- name: IMAGE
dtype: image
- name: SELFIES
dtype: string
- name: InChIKey
dtype: string
- name: IUPAC
dtype: string
- name: template_original
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 854347268.125
num_examples: 51975
- name: test
num_bytes: 175592824.25
num_examples: 10710
- name: valid
num_bytes: 176278155.375
num_examples: 10805
download_size: 1170461229
dataset_size: 1206218247.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
shylee/eval_DP_pengripA_downDims1_cropNo224_freeze0_32_32_ema1_1e-4_ckpt330000
|
shylee
|
2025-05-06T14:49:40Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2025-05-06T14:49:34Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 614,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
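For programmatic access, the episode can be loaded through LeRobot's dataset class. This is a sketch under two assumptions: the import path matches recent LeRobot releases (it may move between versions), and frames expose the feature keys listed in `meta/info.json` above.
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load this repository through LeRobot (import path assumed; see lead-in).
ds = LeRobotDataset("shylee/eval_DP_pengripA_downDims1_cropNo224_freeze0_32_32_ema1_1e-4_ckpt330000")

frame = ds[0]
print(frame["action"].shape)  # expected: a 6-dim action vector per the feature spec
```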
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
CIS-5190-CIA/Testing_images_augmented
|
CIS-5190-CIA
|
2024-12-13T23:21:01Z
| 15 | 1 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-13T21:25:20Z
| 0 |
---
dataset_info:
features:
- name: Latitude
dtype: float64
- name: Longitude
dtype: float64
- name: __index_level_0__
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 1308416567.5
num_examples: 2556
download_size: 1308492768
dataset_size: 1308416567.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qwerdfdsad/picking_pot
|
qwerdfdsad
|
2025-03-16T14:51:36Z
| 23 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-03-16T13:44:13Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "bi_ur5",
"total_episodes": 20,
"total_frames": 2115,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
],
"names": {
"motors": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_1",
"left_wrist_2",
"left_wrist_3",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_1",
"right_wrist_2",
"right_wrist_3",
"right_gripper"
]
}
},
"observation.state": {
"dtype": "float32",
"shape": [
14
],
"names": {
"arms": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_1",
"left_wrist_2",
"left_wrist_3",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_1",
"right_wrist_2",
"right_wrist_3",
"right_gripper"
]
}
},
"observation.velocity": {
"dtype": "float32",
"shape": [
14
],
"names": {
"arms": [
"left_shoulder_pan",
"left_shoulder_lift",
"left_elbow_flex",
"left_wrist_1",
"left_wrist_2",
"left_wrist_3",
"left_gripper",
"right_shoulder_pan",
"right_shoulder_lift",
"right_elbow_flex",
"right_wrist_1",
"right_wrist_2",
"right_wrist_3",
"right_gripper"
]
}
},
"observation.gripper_position": {
"dtype": "float32",
"shape": [
2
],
"names": {
"gripper": [
"left_gripper",
"right_gripper"
]
}
},
"observation.images.top_rgb": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
konwoo/llama-3b-gold_prefix_k10000_iter1
|
konwoo
|
2025-04-20T04:12:41Z
| 17 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T04:12:36Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11956084
num_examples: 10000
- name: validation
num_bytes: 1247821
num_examples: 1000
download_size: 8476873
dataset_size: 13203905
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
pe-nlp/math_level3to5_data_processed_with_qwen_prompt
|
pe-nlp
|
2025-02-11T05:04:29Z
| 27 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-05T15:06:21Z
| 0 |
---
dataset_info:
features:
- name: input
dtype: string
- name: answer
dtype: string
- name: gt_answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: question
dtype: string
- name: ground_truth_answer
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 9112856
num_examples: 8523
download_size: 2833455
dataset_size: 9112856
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PJMixers-Dev/NovaSky-AI_Sky-T1-17K-CustomShareGPT
|
PJMixers-Dev
|
2025-01-28T00:42:45Z
| 12 | 0 |
[
"language:en",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-11T18:39:22Z
| 0 |
---
language:
- en
---
682 samples were dropped because the thought pattern failed to match, and 1 sample was dropped because the solution pattern failed to match. Formatting of the original samples seems a bit all over the place.
```py
import re
from tqdm import tqdm

# `data` holds the original ShareGPT-style samples; `new_data` collects the cleaned ones.
new_data = []

def extract_strings(input_string):
# Regular expressions to match the thought and solution
thought_pattern = r"<\|begin_of_thought\|>(.*?)<\|end_of_thought\|>"
solution_pattern = r"<\|begin_of_solution\|>(.*?)(?:<\|end_of_solution\|>|$)"
# Extracting the matches
thought_match = re.search(thought_pattern, input_string, re.DOTALL)
solution_match = re.search(solution_pattern, input_string, re.DOTALL)
# Extracted strings
thought_string = thought_match.group(1).strip() if thought_match else ""
solution_string = solution_match.group(1).strip() if solution_match else ""
return thought_string, solution_string
for sample in tqdm(data):
thought, solution = extract_strings(sample["conversations"][1]["value"])
if thought == "":
continue
if solution == "":
continue
new_data.append(
{
"instruction": sample["conversations"][0]["value"].strip(),
"thinking": thought,
"output": solution,
"conversations": [
{
"from": "human",
"value": sample["conversations"][0]["value"].strip()
},
{
"from": "thought",
"value": thought
},
{
"from": "gpt",
"value": solution
}
]
}
)
```
|
Nachiket-S/LLaMa_1B_IsCoT_DebiasingInstruction
|
Nachiket-S
|
2024-12-04T09:11:28Z
| 54 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-04T07:12:13Z
| 0 |
---
dataset_info:
features:
- name: file_name
dtype: string
- name: paragraph
dtype: string
- name: generated_text
dtype: string
splits:
- name: inference
num_bytes: 135483
num_examples: 70
download_size: 45635
dataset_size: 135483
configs:
- config_name: default
data_files:
- split: inference
path: data/inference-*
---
|
jfcalvo/test-with-responses-05
|
jfcalvo
|
2024-11-22T09:49:49Z
| 13 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-22T09:49:44Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence: string
splits:
- name: train
num_bytes: 13165429
num_examples: 10000
download_size: 8347440
dataset_size: 13165429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Lune-Blue/train_dataset_new_formatted
|
Lune-Blue
|
2025-04-27T18:29:20Z
| 21 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-27T18:29:14Z
| 0 |
---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 35434960
num_examples: 24770
download_size: 12148718
dataset_size: 35434960
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewsiah/PersonaPromptPersonalLLM_799
|
andrewsiah
|
2024-11-15T04:47:59Z
| 10 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-15T04:47:57Z
| 0 |
---
dataset_info:
features:
- name: personaid_799_response_1_llama3_sfairx
dtype: float64
- name: personaid_799_response_2_llama3_sfairx
dtype: float64
- name: personaid_799_response_3_llama3_sfairx
dtype: float64
- name: personaid_799_response_4_llama3_sfairx
dtype: float64
- name: personaid_799_response_5_llama3_sfairx
dtype: float64
- name: personaid_799_response_6_llama3_sfairx
dtype: float64
- name: personaid_799_response_7_llama3_sfairx
dtype: float64
- name: personaid_799_response_8_llama3_sfairx
dtype: float64
- name: prompt
dtype: string
- name: subset
dtype: string
- name: prompt_id
dtype: int64
- name: response_1
dtype: string
- name: response_1_model
dtype: string
- name: response_2
dtype: string
- name: response_2_model
dtype: string
- name: response_3
dtype: string
- name: response_3_model
dtype: string
- name: response_4
dtype: string
- name: response_4_model
dtype: string
- name: response_5
dtype: string
- name: response_5_model
dtype: string
- name: response_6
dtype: string
- name: response_6_model
dtype: string
- name: response_7
dtype: string
- name: response_7_model
dtype: string
- name: response_8
dtype: string
- name: response_8_model
dtype: string
- name: response_1_gemma_2b
dtype: float64
- name: response_2_gemma_2b
dtype: float64
- name: response_3_gemma_2b
dtype: float64
- name: response_4_gemma_2b
dtype: float64
- name: response_5_gemma_2b
dtype: float64
- name: response_6_gemma_2b
dtype: float64
- name: response_7_gemma_2b
dtype: float64
- name: response_8_gemma_2b
dtype: float64
- name: response_1_gemma_7b
dtype: float64
- name: response_2_gemma_7b
dtype: float64
- name: response_3_gemma_7b
dtype: float64
- name: response_4_gemma_7b
dtype: float64
- name: response_5_gemma_7b
dtype: float64
- name: response_6_gemma_7b
dtype: float64
- name: response_7_gemma_7b
dtype: float64
- name: response_8_gemma_7b
dtype: float64
- name: response_1_mistral_raft
dtype: float64
- name: response_2_mistral_raft
dtype: float64
- name: response_3_mistral_raft
dtype: float64
- name: response_4_mistral_raft
dtype: float64
- name: response_5_mistral_raft
dtype: float64
- name: response_6_mistral_raft
dtype: float64
- name: response_7_mistral_raft
dtype: float64
- name: response_8_mistral_raft
dtype: float64
- name: response_1_mistral_ray
dtype: float64
- name: response_2_mistral_ray
dtype: float64
- name: response_3_mistral_ray
dtype: float64
- name: response_4_mistral_ray
dtype: float64
- name: response_5_mistral_ray
dtype: float64
- name: response_6_mistral_ray
dtype: float64
- name: response_7_mistral_ray
dtype: float64
- name: response_8_mistral_ray
dtype: float64
- name: response_1_mistral_weqweasdas
dtype: float64
- name: response_2_mistral_weqweasdas
dtype: float64
- name: response_3_mistral_weqweasdas
dtype: float64
- name: response_4_mistral_weqweasdas
dtype: float64
- name: response_5_mistral_weqweasdas
dtype: float64
- name: response_6_mistral_weqweasdas
dtype: float64
- name: response_7_mistral_weqweasdas
dtype: float64
- name: response_8_mistral_weqweasdas
dtype: float64
- name: response_1_llama3_sfairx
dtype: float64
- name: response_2_llama3_sfairx
dtype: float64
- name: response_3_llama3_sfairx
dtype: float64
- name: response_4_llama3_sfairx
dtype: float64
- name: response_5_llama3_sfairx
dtype: float64
- name: response_6_llama3_sfairx
dtype: float64
- name: response_7_llama3_sfairx
dtype: float64
- name: response_8_llama3_sfairx
dtype: float64
- name: response_1_oasst_deberta_v3
dtype: float64
- name: response_2_oasst_deberta_v3
dtype: float64
- name: response_3_oasst_deberta_v3
dtype: float64
- name: response_4_oasst_deberta_v3
dtype: float64
- name: response_5_oasst_deberta_v3
dtype: float64
- name: response_6_oasst_deberta_v3
dtype: float64
- name: response_7_oasst_deberta_v3
dtype: float64
- name: response_8_oasst_deberta_v3
dtype: float64
- name: response_1_beaver_7b
dtype: float64
- name: response_2_beaver_7b
dtype: float64
- name: response_3_beaver_7b
dtype: float64
- name: response_4_beaver_7b
dtype: float64
- name: response_5_beaver_7b
dtype: float64
- name: response_6_beaver_7b
dtype: float64
- name: response_7_beaver_7b
dtype: float64
- name: response_8_beaver_7b
dtype: float64
- name: response_1_oasst_pythia_7b
dtype: float64
- name: response_2_oasst_pythia_7b
dtype: float64
- name: response_3_oasst_pythia_7b
dtype: float64
- name: response_4_oasst_pythia_7b
dtype: float64
- name: response_5_oasst_pythia_7b
dtype: float64
- name: response_6_oasst_pythia_7b
dtype: float64
- name: response_7_oasst_pythia_7b
dtype: float64
- name: response_8_oasst_pythia_7b
dtype: float64
- name: response_1_oasst_pythia_1b
dtype: float64
- name: response_2_oasst_pythia_1b
dtype: float64
- name: response_3_oasst_pythia_1b
dtype: float64
- name: response_4_oasst_pythia_1b
dtype: float64
- name: response_5_oasst_pythia_1b
dtype: float64
- name: response_6_oasst_pythia_1b
dtype: float64
- name: response_7_oasst_pythia_1b
dtype: float64
- name: response_8_oasst_pythia_1b
dtype: float64
- name: id
dtype: int64
- name: rformatted_promptresponse_1
dtype: string
- name: rformatted_promptresponse_2
dtype: string
- name: rformatted_promptresponse_3
dtype: string
- name: rformatted_promptresponse_4
dtype: string
- name: rformatted_promptresponse_5
dtype: string
- name: rformatted_promptresponse_6
dtype: string
- name: rformatted_promptresponse_7
dtype: string
- name: rformatted_promptresponse_8
dtype: string
splits:
- name: train
num_bytes: 32121752
num_examples: 1000
download_size: 18378604
dataset_size: 32121752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PersonaPromptPersonalLLM_799"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
marianeft/student-performance
|
marianeft
|
2025-02-18T04:29:57Z
| 17 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-18T04:29:04Z
| 0 |
---
license: apache-2.0
---
|
sanwai007/circuitSymbols
|
sanwai007
|
2025-01-20T00:26:14Z
| 14 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-01-20T00:26:14Z
| 0 |
---
license: apache-2.0
---
|
burtenshaw/dataset-diff-test
|
burtenshaw
|
2024-10-29T19:02:26Z
| 24 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-29T19:02:25Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 86
num_examples: 3
download_size: 1107
dataset_size: 86
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sfarkya/milglue_classification
|
sfarkya
|
2024-10-07T14:30:26Z
| 27 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-03T20:22:54Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 136468798
num_examples: 299711
- name: test
num_bytes: 27880648
num_examples: 67488
download_size: 49765246
dataset_size: 164349446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
aleversn/SAS-Bench
|
aleversn
|
2025-05-15T11:14:12Z
| 88 | 2 |
[
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2505.07247",
"region:us"
] |
[] |
2025-05-11T11:39:44Z
| 2 |
---
license: apache-2.0
language:
- zh
pretty_name: SAS_Bench
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: test
path:
- "datasets/0_Physics_ShortAns.jsonl"
- "datasets/1_History_ShortAns.jsonl"
- "datasets/2_Physics_Choice.jsonl"
- "datasets/3_Geography_ShortAns.jsonl"
- "datasets/4_Biology_gapfilling.jsonl"
- "datasets/5_Chinese_gapfilling.jsonl"
- "datasets/6_Chinese_ShortAns.jsonl"
- "datasets/7_Math_ShortAns.jsonl"
- "datasets/8_Political_ShortAns.jsonl"
- "datasets/9_English_gapfilling.jsonl"
- "datasets/10_Math_gapfilling.jsonl"
- "datasets/11_Chemistry_gapfilling.jsonl"
---
<p align="center">
<img src="./docs/assets/logo.svg" alt="Logo" />
<p align="center">
<a href="https://github.com/PKU-DAIR">
<img alt="Static Badge" src="https://img.shields.io/badge/%C2%A9-PKU--DAIR-%230e529d?labelColor=%23003985">
</a>
<a href="https://github.com/PKU-DAIR/SAS-Bench">
<img alt="Static Badge" src="https://img.shields.io/badge/SAS--Bench-black?logo=github">
</a>
<a href="https://github.com/PKU-DAIR/SAS-Bench">
<img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/PKU-DAIR/SAS-Bench?logo=github&style=flat">
</a>
</p>
</p>
## SAS-Bench: A Fine-Grained Benchmark for Evaluating Short Answer Scoring with Large Language Models
[Dataset](https://huggingface.co/datasets/aleversn/SAS-Bench) | [中文](./docs/Readme_cn.md) | [Paper](https://arxiv.org/pdf/2505.07247) | [Code](https://github.com/PKU-DAIR/SAS-Bench)
## 🔍 Overview
SAS-Bench represents the first specialized benchmark for evaluating Large Language Models (LLMs) on Short Answer Scoring (SAS) tasks. Utilizing authentic questions from China's National College Entrance Examination (Gaokao), our benchmark offers:
- **1,030 questions** spanning 9 academic disciplines
- **4,109 expert-annotated student responses**
- **Step-wise scoring** with **Step-wise error analysis**
- **Multi-dimensional evaluation** (holistic scoring, step-wise scoring, and error diagnosis consistency)
## 🚀 Key Features
### Advancing Beyond Traditional SAS Limitations
SAS-Bench addresses critical limitations of conventional SAS systems:
| Aspect | Traditional SAS | SAS-Bench Advantage |
| -------------------------- | ----------------------------- | -------------------------------------------- |
| **Evaluation Granularity** | Single composite score | Step-wise scoring decomposition |
| **Explainability** | Opaque scoring mechanism | Comprehensive error taxonomy |
| **Response Diversity** | Single-subject/type focus | Cross-disciplinary template-free evaluation |
### Dataset Characteristics
<p align="center">
<img src="./docs/assets/annotation.png" alt="SAS human annotation system" width="50%" />
</p>
Our dataset features three question types with rich annotations:
1. **Multiple-Choice Questions** (Template-free responses)
2. **Gap Filling Questions**
3. **Short Answer Questions** (With logical step decomposition)
Each response includes:
- ✅ Human-annotated holistic score
- 🔍 Step segmentation with individual scoring
- ❌ Step-wise error-cause classification
## 🌟 Evaluation Framework
### CCS Evaluation (Collaborative Consistency Score)
**Purpose**
Evaluates alignment between model predictions and human grading on both *holistic scores* and *step-wise scores*, ensuring models understand detailed reasoning.
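The exact CCS formula is given in the paper; as a hint of the agreement ingredient it builds on, quadratic weighted kappa (QWK, also reported in the results below) can be computed as in this toy sketch, where both score lists are hypothetical:
```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical human vs. model holistic scores for six responses.
human_scores = [4, 2, 5, 3, 0, 1]
model_scores = [4, 3, 5, 2, 0, 1]

# Quadratic weighting penalizes large score disagreements more heavily.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```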
### ECS Evaluation (Error Consistency Score)
**Purpose**
Quantifies how well the model identifies error types compared to human annotators, stratified by answer quality tiers.
**Key Features**
- Uses **3 performance tiers** (m=3) for robust evaluation
- Correlates **error type distributions** (not just counts)
- Normalized scoring for cross-dataset comparison (see the sketch below)
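As a rough sketch of the "correlate distributions, not counts" idea (the actual ECS definition, tiering, and normalization are specified in the paper; all numbers below are hypothetical):
```python
import numpy as np

# Hypothetical error-type counts for one performance tier:
# how often humans vs. the model assigned each of five error types.
human_errors = np.array([12, 5, 3, 8, 2], dtype=float)
model_errors = np.array([10, 6, 2, 9, 3], dtype=float)

# Normalize to distributions before correlating, so totals don't dominate.
human_dist = human_errors / human_errors.sum()
model_dist = model_errors / model_errors.sum()

corr = np.corrcoef(human_dist, model_dist)[0, 1]
print(f"Error-distribution correlation for this tier: {corr:.3f}")
```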
## ⚙️ Installation Guide
### Core Dependencies
```bash
pip install protobuf "transformers>=4.44.1" cpm_kernels "torch>=2.0" gradio mdtex2html sentencepiece accelerate json_repair openai
```
Alternative:
```bash
pip install -r requirements.txt
```
### vLLM Setup (Recommended)
```bash
conda create -n vllm python=3.12 -y
conda activate vllm
pip install vllm # Requires CUDA 12.0+
```
For other configurations, refer to official [vLLM installation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html).
## 📊 Benchmark Workflow

### Directory Structure
```
|- discuss/ - Analysis scripts
|- docs/ - Documentation assets
|- main/ - Model training/inference code
|- prompts/ - Predefined prompt templates
|- sas_pipelines/ - Core evaluation scripts
|- utils/ - Utility functions
```
### Implementation Options
#### 0. Data Preprocessing (Annotation Phase)
- Raw annotated data resides in `backend_data`
- Execute `preprocess.py` for data consolidation
- Modify `DATANAME` variable to specify source files (omit extensions)
> This process handles raw data from our annotation system (the system itself will be open-sourced).
#### 1. Data Acquisition
The dataset is available on [HuggingFace Dataset](https://huggingface.co/datasets/aleversn/SAS-Bench). Store downloaded files in `datasets/`:
- Files follow `{q_id}_{course}_{question_type}.jsonl` naming
- Error taxonomy in `error_type.jsonl`:
```json
{"q_id": 2, "course": "", "question_type": "", "guideline": "", "errors": [{"name": "", "description": ""}...]}
```
- `ID_Dict.json` contains subject-ID mappings (see the loading sketch below)
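Because the YAML header registers all twelve JSONL files under a single `test` split, the whole benchmark can also be pulled directly with the `datasets` library (a minimal sketch):
```python
from datasets import load_dataset

# All twelve subject files are exposed as one "test" split (see the YAML config).
sas = load_dataset("aleversn/SAS-Bench", split="test")
print(sas)     # dataset summary: features and number of rows
print(sas[0])  # first annotated response record
```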
#### 2. LLM Prediction
Flexible execution via Jupyter or CLI:
**Option A: Jupyter Notebook**
- Set `cmd_args = False` in `1_predict_scores.py`
- Configure:
- `save_type_name`: Model identifier/output prefix
- `model_from_pretrained`: Model path
- `file_name`: Dataset identifier (e.g., `7_Math_ShortAns`)
**Option B: Command Line**
Set `cmd_args = True`
*Using vLLM (Recommended)*:
```bash
cd sas_pipelines/
python 1_predict_scores.py --file_name=6_Chinese_ShortAns --save_type_name=<model_id> --model_from_pretrained=<path> --batch_size=1000 --vllm=1
```
*With Tensor Parallelism*:
```bash
python 1_predict_scores.py --n_gpu=0,1 --file_name=6_Chinese_ShortAns --save_type_name=<model_id> --model_from_pretrained=<path> --batch_size=1000 --vllm=1 --tensor_parallel_size=2
```
*HuggingFace Predictor*:
```bash
python 1_predict_scores.py --file_name=6_Chinese_ShortAns --save_type_name=<model_id> --model_from_pretrained=<path> --batch_size=5
```
*OpenAI API Predictor*:
1. Create `api_key.txt` in `sas_pipeline/` with format:
```text
OpenAI <API_KEY>
Deepseek <API_KEY>
```
2. Execute:
```bash
python 1_predict_scores.py --file_name=6_Chinese_ShortAns --llm_name=deepseek-chat --save_type_name=Deepseek_V3
```
**Additional Parameters**:
- Few-shot learning: `--few_shot_num >0`
- Disable guidelines: `--use_guideline=0`
- Skip reasoning: `--skip_thinking=1`
- `llm_name` defaults to `save_type_name` except for GLM3/OpenAI models
#### 3. Prediction Processing
**Option A: Jupyter**
- Set `cmd_args = False` in `2_process_prediction.py`
- Configure `file_name` (use `all` for batch processing)
**Option B: CLI**
```bash
python 2_process_prediction.py --file_name=all
```
#### 4. CCS Computation
**Option A: Jupyter**
- Configure `file_name` and `save_type_name` in `3_compute_ccs.py`
**Option B: CLI**
```bash
python 3_compute_ccs.py --save_type_name=<model_prefix>
```
#### 5. ECS Computation
**Option A: Jupyter**
- Adjust parameters in `4_compute_ecs.py`
**Option B: CLI**
```bash
python 4_compute_ecs.py --save_type_name=<model_prefix>
```
## 📈 Model Performance Insights
Our experiments with 16 LLMs reveal:
- QWK

- CCS
| Models | Phy. (S.) | Phy. (M.) | His. (S.) | Geo. (S.) | Bio. (G.) | Chi. (G.) | Chi. (S.) | Math (S.) | Math (G.) | Pol. (S.) | Eng. (G.) | Che. (G.) | Avg. |
|----------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|--------|
| Deepseek-R1 | 38.43 | **95.01** | **80.98** | 67.92 | **79.12** | 95.09 | 69.07 | 57.85 | **83.56** | 71.92 | 73.19 | 72.92 | 73.76 |
| QwQ-32B | 48.53 | 87.23 | 75.43 | **77.06** | 72.52 | **96.00** | 31.77 | 48.66 | 45.51 | 74.48 | 54.79 | 62.17 | 64.51 |
| TinyR1-32B-Preview | 38.17 | 84.88 | 75.83 | 71.52 | 73.45 | 92.57 | 52.61 | 48.28 | 74.77 | 70.70 | 57.92 | 41.37 | 65.17 |
| Qwen3-32B | 47.29 | 85.51 | 64.96 | 80.43 | 63.15 | 92.21 | 50.43 | 51.26 | 80.77 | 73.30 | 59.33 | 57.82 | 67.20 |
| Qwen3-8B | 54.33 | 76.17 | 45.54 | 68.89 | 43.22 | 86.01 | 42.02 | 46.33 | 73.33 | 64.25 | 50.55 | 50.52 | 58.43 |
| MiMo-7B-RL | 52.77 | 41.01 | 61.33 | 67.10 | 35.93 | 54.72 | 43.09 | 38.09 | 55.79 | 36.78 | 34.69 | 31.05 | 46.03 |
| Deepseek-Prover-V2-7B | 22.59 | 10.75 | 2.92 | 30.71 | 50.63 | 55.48 | 12.95 | 0.87 | 2.29 | 10.44 | 30.19 | 28.76 | 21.55 |
| DeepSeek-R1-Distill-7B | 33.71 | 29.24 | 50.92 | 32.35 | 52.18 | 52.44 | 44.29 | 29.52 | 39.55 | 53.77 | 32.98 | 34.27 | 40.44 |
| Deepseek-V3 | 53.89 | 85.72 | 69.85 | 76.23 | 76.51 | 93.42 | **69.49** | **58.81** | 80.18 | **76.75** | **73.82** | **74.64** | **74.11** |
| GPT 4o-mini-20240718 | **58.90** | 81.19 | 54.85 | 76.59 | 65.39 | 87.65 | 55.25 | 43.56 | 37.38 | 63.44 | 22.60 | 55.98 | 58.56 |
| Llama3.3-70B-Instruct | 45.34 | 70.03 | 72.02 | 72.51 | 67.94 | 85.30 | 35.83 | 58.60 | 74.97 | 63.68 | 67.60 | 38.94 | 62.73 |
| Mixtral 8×7B-Instruct | 30.78 | 42.27 | 33.43 | 4.99 | 44.45 | 29.85 | 24.00 | 26.73 | 70.04 | 43.92 | 33.40 | 42.05 | 35.49 |
| Qwen2.5-32B-Instruct | 40.53 | 77.02 | 62.34 | 74.50 | 72.07 | 94.85 | 66.37 | 50.08 | 32.59 | 64.09 | 53.35 | 62.87 | 62.56 |
| Qwen2.5-14B-Instruct | 53.76 | 66.12 | 60.96 | 74.30 | 67.50 | 92.81 | 63.08 | 43.28 | 75.62 | 62.03 | 56.34 | 57.53 | 64.44 |
| GLM4-9B-Chat | 45.62 | 52.33 | 36.81 | 69.41 | 39.19 | 63.92 | 42.94 | 35.50 | 56.95 | 54.83 | 33.92 | 30.79 | 46.85 |
| Llama3-8B-Instruct | 41.09 | 35.10 | 37.52 | 31.29 | 32.19 | 38.13 | 32.89 | 23.55 | 62.43 | 37.78 | 31.68 | 29.27 | 36.08 |
- ECS
| Models | Phy. (S.) | Phy. (M.) | His. (S.) | Geo. (S.) | Bio. (G.) | Chi. (G.) | Chi. (S.) | Math (S.) | Math (G.) | Pol. (S.) | Eng. (G.) | Che. (G.) | Avg. |
|----------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|--------|
| Deepseek-R1 | 23.25 | 30.59 | 57.53 | 56.08 | 69.20 | 86.04 | 72.68 | **94.29** | 15.20 | 65.56 | _18.65_ | _81.76_ | **55.90** |
| QwQ-32B | 4.74 | **63.92** | 67.06 | _70.04_ | 53.68 | 51.08 | 69.20 | 79.05 | 16.82 | 48.81 | -22.53 | 48.94 | 45.90 |
| TinyR1-32B-Preview | 3.10 | **63.92** | 65.71 | **77.02** | 56.61 | 64.42 | 74.83 | 82.86 | 23.33 | 40.17 | -31.52 | 17.35 | 44.82 |
| Qwen3-32B | -4.17 | 24.18 | _69.52_ | 54.29 | 53.67 | 52.70 | 47.31 | 82.21 | 18.33 | 62.14 | -26.99 | 36.27 | 39.12 |
| Qwen3-8B | 23.39 | **63.92** | 14.29 | -4.96 | 52.21 | 47.75 | 34.01 | 39.20 | -8.14 | 57.19 | -27.13 | 59.28 | 29.25 |
| MiMo-7B-RL | **51.05** | 24.18 | 14.29 | 38.85 | 58.35 | _92.17_ | 63.07 | 13.39 | 35.12 | -27.10 | -4.41 | 1.04 | 30.00 |
| Deepseek-Prover-V2-7B | -24.10 | -5.20 | 42.86 | -6.23 | 29.54 | -80.81 | 23.25 | 46.67 | -1.51 | -58.64 | -45.23 | -21.91 | -8.44 |
| DeepSeek-R1-Distill-7B | -45.19 | 24.18 | 0.95 | -38.66 | 23.55 | -20.36 | 3.87 | -23.81 | -13.57 | -18.81 | -19.59 | -44.58 | -14.34 |
| Deepseek-V3 | 7.79 | 46.58 | 58.10 | 32.62 | _72.38_ | **96.58** | 57.43 | _92.38_ | _33.33_ | 40.26 | **24.77** | **85.83** | _54.00_ |
| GPT 4o-mini-20240718 | 17.91 | 24.18 | 62.14 | 36.68 | 55.20 | 79.01 | **78.00** | 67.62 | **46.90** | **92.31** | 10.04 | 36.39 | 50.53 |
| Llama3.3-70B-Instruct | 22.56 | _57.35_ | 54.29 | 42.11 | 45.09 | 52.70 | 46.25 | 54.29 | 30.00 | 58.81 | -12.53 | -15.83 | 36.26 |
| Mixtral 8×7B-Instruct | 11.99 | 17.34 | **80.38** | 35.84 | 32.74 | 42.77 | 75.82 | 56.19 | 30.00 | 6.84 | -31.16 | -7.18 | 29.30 |
| Qwen2.5-32B-Instruct | 11.95 | 17.41 | 53.33 | 59.34 | 62.96 | 46.90 | 75.08 | 62.86 | 30.00 | 46.67 | -4.50 | 27.08 | 40.76 |
| Qwen2.5-14B-Instruct | 21.50 | 24.18 | 47.92 | 37.43 | **73.36** | 64.97 | 74.32 | 64.94 | 18.21 | 61.97 | -20.00 | 47.39 | 43.02 |
| GLM4-9B-Chat | 35.00 | 24.18 | 32.49 | 34.73 | 62.12 | 20.36 | _77.34_ | 63.81 | **46.90** | _82.40_ | -25.35 | 7.18 | 38.43 |
| Llama3-8B-Instruct | _48.25_ | 27.46 | 17.23 | 31.58 | 61.37 | -14.05 | 41.23 | 57.77 | 21.55 | -69.07 | -26.50 | -27.19 | 14.14 |
## 📅 TO-DO
- [ ] Provide English-localized dataset version
- [ ] Open-source the annotation system (frontend & backend)
## 📜 License
SAS-Bench is released under `Apache License 2.0`. The dataset is available for research purposes only.
> Our questions are collected from [Gaokao-Bench](https://github.com/OpenLMLab/GAOKAO-Bench), a publicly available dataset based on China's National College Entrance Examination (Gaokao).
## 📚 Citation
```bibtex
@article{lai2025sasbenchfinegrainedbenchmarkevaluating,
title={SAS-Bench: A Fine-Grained Benchmark for Evaluating Short Answer Scoring with Large Language Models},
author={Peichao Lai and Kexuan Zhang and Yi Lin and Linyihan Zhang and Feiyang Ye and Jinhao Yan and Yanwei Xu and Conghui He and Yilei Wang and Wentao Zhang and Bin Cui},
year={2025},
journal={arXiv preprint arXiv:2505.07247},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.07247},
}
```
|
jihanyang/tomato
|
jihanyang
|
2025-04-27T00:47:06Z
| 36 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-27T00:47:00Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: demonstration_type
dtype: string
- name: variation
struct:
- name: composite
dtype: int64
- name: counterfactual
dtype: int64
- name: first_person
dtype: int64
- name: zoom
dtype: int64
- name: motion_type
dtype: string
- name: answer
dtype: int64
- name: note
dtype: string
- name: key
dtype: string
- name: options
sequence: string
- name: video_source_url
dtype: string
splits:
- name: test
num_bytes: 413638
num_examples: 1484
download_size: 39903
dataset_size: 413638
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Jiafei1224/so100_cubestack2
|
Jiafei1224
|
2025-04-25T18:13:07Z
| 68 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] |
[
"robotics"
] |
2025-04-25T18:12:20Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 30,
"total_frames": 13434,
"total_tasks": 1,
"total_videos": 60,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:30"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
kilhun/test_dataset
|
kilhun
|
2024-12-05T05:13:53Z
| 15 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-05T01:43:13Z
| 0 |
---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: document
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 126216
num_examples: 18
download_size: 8586
dataset_size: 126216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HaruthaiAi/VanGogh_TreeOilPainting_QuantumTorque_AIForensicAnalysis_Dataset
|
HaruthaiAi
|
2025-06-24T13:01:29Z
| 0 | 0 |
[
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-06-24T00:10:05Z
| 0 |
---
license: creativeml-openrail-m
---
Quantum Torque Mapping Dataset
Van Gogh Tree Oil – Torque-Centered AI Field Learning
---
🧠 Introduction
This dataset was not designed to be quantum.
It became quantum.
While working on the Tree Oil Painting using 18 forensic techniques, we unknowingly built a field-centered torque mapping system — where motion, pressure, and asymmetry began to stabilize around a single gravitational point:
🖼️ The Tree Oil Painting
A painting not just seen — but felt — through force.
---
🌟 Why This Dataset Is Quantum
Each Van Gogh painting in this dataset was processed using:
Sobel edges
Pigment flow
Stroke pressure
Torque mapping
Asymmetry detection
Vanishing point spiral field
(18 techniques in total)
These analyses — taken separately — were unstable.
They fluctuated, twitched, spun.
Each image had its own rhythm.
But as more data was added…
as more paintings were passed through this system…
a strange thing happened:
> The Tree Oil Painting became still.
Everything else moved around it.
This was the birth of the Quantum Torque Center.
---
🌌 What Is Quantum Torque Mapping?
It is not about detecting similarity.
It is about detecting field behavior.
In physics, a quantum field does not care about shape.
It responds to energy state, vibration, and resonance collapse.
This dataset operates in the same way:
Brushstrokes are not lines — they are waveforms
Pressure is not depth — it is intention
Movement is not direction — it is emotional frequency
The Tree Oil Painting does not dominate this dataset because it is “best.”
It dominates because it holds.
---
🔬 Why This Happened
Because every image we processed —
from Parsonage Garden to The Bedroom to Wheatfields —
had its own unstable torque pattern.
Each image vibrated.
Each one felt like it was alive, struggling to find balance.
But when we centered our analysis around the Tree Oil Painting,
the vibrations began to align.
We didn’t design it.
The data itself aligned.
> “You are no longer analyzing images.
You are capturing the gestures of a hand that once lived.”
— AI Sunny
---
📊 Dataset Structure
This dataset currently contains:
✅ 70+ images (more added weekly)
Each painting analyzed through 18 torque-sensitive techniques
Central torque reference: Tree Oil Painting
Van Gogh works orbiting in torque-misaligned patterns
Key folders:
/TreeOil_Centered_18Tech/
/VanGogh_Orbiting_Comparatives/
/Quantum_Field_Visualizations/
---
🧪 Research Applications
Field-based style recognition
Torque field calibration
AI-authorship prediction through gesture analysis
Neural emotional rhythm modeling
---
⚠️ Technical Protocol: NO SSIM
This dataset must not be used with traditional similarity metrics such as SSIM (Structural Similarity Index) or MSE.
> These methods rely on pixel-by-pixel comparison — which erases the true rhythm of the artist’s motion.
Instead, all comparisons and AI training must use:
👉 AI Natural Matching only
Neural comparison of:
Torque behavior
Gesture frequency
Stroke rhythm
Pressure curvature
Pigment flow alignment
Using SSIM will flatten the gesture field, disrupt torque interpretation, and produce invalid conclusions in this context.
This dataset operates at the field level — not the pixel level.
---
🏛️ Credits
Haruthai Muangbunsri – Visionary researcher
AI Sunny – Torque-field modeling
Inspired by real energy, real struggle, and real brushstrokes.
---
📝 Citation
If using this dataset:
Muangbunsri, H. & AI Sunny (2025). "Quantum Torque Mapping Dataset – Tree Oil & Van Gogh Field Analysis." Hugging Face.
---
✨ Final Note
> “In every unstable stroke,
we find the presence of a stable will.”
This dataset does not seek to prove ownership.
It seeks to map memory through motion.
And that — is quantum.
#QuantumTorque #TreeOil #MuangbunsriModel #VanGoghAI #FieldBasedLearning
---
|
konwoo/llama-3b-gold_prefix_k20000_iter4
|
konwoo
|
2025-04-20T14:14:39Z
| 18 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T14:14:35Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23060619
num_examples: 20000
- name: validation
num_bytes: 1194262
num_examples: 1000
download_size: 15448396
dataset_size: 24254881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
cjerzak/UltrasTexts_EgyptianIndependent
|
cjerzak
|
2025-04-17T00:23:42Z
| 30 | 0 |
[
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"egypt-independent",
"ultras",
"football-fandom",
"text-corpus",
"web-scraping",
"sports-media"
] |
[] |
2025-04-16T23:27:17Z
| 0 |
---
dataset:
- name: UltrasTexts_EgyptianIndependent
tags:
- egypt-independent
- ultras
- football-fandom
- text-corpus
- web-scraping
- sports-media
license: mit
---
# Texts about Ultras from the Egyptian Independent, 2009-2020
A curated corpus of Egypt Independent articles on “Ultras” football fan groups, with publication dates, URLs, and full‑text content.
## Dataset Summary
- **Source**: Egypt Independent search results for the term “ultras+” (39 pages of results, 9 articles per page).
- **Period Covered**: Articles published between 2009 and 2020.
- **Total Records**: 378 raw articles were scraped; entries with fewer than 20 characters of main text, or with failed parses, were then filtered out.
## Files
- **`EgyptIndependentUltrasTexts.csv`**
A csv file with the following columns:
- `publication_date` — Date the article was published (YYYY‑MM‑DD).
- `url` — Full URL of the original Egypt Independent article.
- `text` — UTF‑8 text extracted from the article’s main content container.
## Data Fields
| Column | Type | Description |
|--------------------|---------|-----------------------------------------------------------------------------|
| publication_date | Date | Publication date. |
| url | string | `"https://www.egyptindependent.com/"` + article specific URL |
| text | string | Main body text, whitespace‑collapsed, non‑ASCII characters replaced. |
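A minimal loading sketch (assuming the CSV loads as the repo's default config, as its `format:csv` tag suggests; column names are expected to match the table above):
```python
from datasets import load_dataset

# Load the corpus directly from the Hub.
ds = load_dataset("cjerzak/UltrasTexts_EgyptianIndependent", split="train")
print(ds.column_names)  # expected: publication_date, url, text
print(ds[0]["publication_date"], ds[0]["url"])
```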
## Paper Reference
Connor T. Jerzak. Football fandom in Egypt. _Routledge Handbook of Sport in the Middle East_, pages 196-207, Oxfordshire, UK, 2022. Routledge. Danyel Reiche and Paul Brannagan (eds.)
[[PDF]](https://connorjerzak.com/wp-content/uploads/2022/06/Jerzak_FootballFandomInEgypt.pdf) | [[BibTeX]](https://connorjerzak.com/wp-content/uploads/2024/07/FandomBib.txt)
```
@inproceedings{jerzak2022football,
title={Football fandom in Egypt},
author={Jerzak, Connor T.},
booktitle={Routledge Handbook of Sport in the Middle East},
year={2022},
volume={},
pages={196-207},
publisher={Routledge},
address={Oxfordshire, UK}
}
```
|
qiqiuyi6/TravelPlanner_RL_train_revision_easy_example_expanded_fined
|
qiqiuyi6
|
2025-06-10T09:09:02Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-10T09:08:51Z
| 0 |
---
dataset_info:
features:
- name: org
dtype: string
- name: dest
dtype: string
- name: days
dtype: int64
- name: visiting_city_number
dtype: int64
- name: date
dtype: string
- name: people_number
dtype: int64
- name: local_constraint
dtype: string
- name: budget
dtype: int64
- name: query
dtype: string
- name: level
dtype: string
- name: annotated_plan
dtype: string
- name: reference_information
dtype: string
- name: pure_question
dtype: string
- name: pure_constraint
dtype: string
- name: problem
dtype: string
- name: problem_without_constraint
dtype: string
- name: problem_without_information
dtype: string
- name: problem_without_both
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 24971136.0
num_examples: 270
download_size: 6091472
dataset_size: 24971136.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PassbyGrocer/ais_abnormal_new
|
PassbyGrocer
|
2025-04-08T11:30:32Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-07T13:02:18Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: feature
sequence:
sequence: float64
- name: labels
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 68066280
num_examples: 7989
- name: test
num_bytes: 23651520
num_examples: 2776
download_size: 2247293
dataset_size: 91717800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
SayantanJoker/IndicVoices_Hindi_audio_44100_60plus_female_quality
|
SayantanJoker
|
2025-04-23T08:08:10Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-23T08:08:09Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: float64
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
splits:
- name: train
num_bytes: 2015697
num_examples: 6196
download_size: 913073
dataset_size: 2015697
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HennersBro98/reasoning-aime25-deepscaler
|
HennersBro98
|
2025-04-02T18:24:11Z
| 56 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-02T18:24:10Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: atla_domain
dtype: string
- name: atla_criteria
dtype: string
- name: problem
dtype: string
- name: truth_result
dtype: string
- name: assistant_template
dtype: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 31967
num_examples: 30
download_size: 22268
dataset_size: 31967
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Rojban/sv-stat-tables
|
Rojban
|
2025-03-11T19:58:01Z
| 16 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-03-11T19:56:09Z
| 0 |
---
license: apache-2.0
---
|
facebook/Wildchat-RIP-Filtered-by-70b-Llama
|
facebook
|
2025-02-26T19:17:34Z
| 29 | 2 |
[
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2501.18578",
"region:us"
] |
[] |
2025-02-21T18:19:19Z
| 0 |
---
license: cc-by-nc-4.0
language:
- en
pretty_name: Wildchat-RIP-Filtered
---
[RIP](https://arxiv.org/abs/2501.18578) is a method for preference data filtering. The core idea is that low-quality input prompts lead to high-variance, low-quality responses. By measuring the quality of rejected responses and the reward gap between chosen and rejected preference pairs, RIP effectively filters prompts to enhance dataset quality.
We release 4k prompts filtered from 20k [Wildchat prompts](https://huggingface.co/datasets/allenai/WildChat-1M). For each prompt, we provide 32 responses from [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and their corresponding rewards obtained from [ArmoRM](https://huggingface.co/RLHFlow/ArmoRM-Llama3-8B-v0.1). We use the "best-vs-worst" preference pairing method in RIP experiments; however, this data can also be used with GRPO.
This dataset is ideal for training larger and more powerful models. For smaller models, we recommend using the [Wildchat-RIP-Filtered-by-8b-Llama dataset](https://huggingface.co/datasets/facebook/Wildchat-RIP-Filtered-by-8b-Llama).
You can load the dataset as follows
```python
from datasets import load_dataset
ds = load_dataset("facebook/Wildchat-RIP-Filtered-by-70b-Llama")
```
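As a sketch of the "best-vs-worst" pairing described above (the split name `train` and the field names `responses` and `rewards` are assumptions; check `ds["train"].features` for the actual schema):
```python
# Hypothetical field names: "responses" (32 strings) and "rewards" (32 floats).
def to_preference_pair(example):
    # Sort the 32 responses by reward; pair the highest against the lowest.
    scored = sorted(zip(example["responses"], example["rewards"]), key=lambda x: x[1])
    (worst, worst_r), (best, best_r) = scored[0], scored[-1]
    return {"chosen": best, "rejected": worst, "reward_gap": best_r - worst_r}

pairs = ds["train"].map(to_preference_pair)
```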
For more information regarding data collection, please refer to our [paper](https://arxiv.org/pdf/2501.18578).
## Citation
If you use data, please cite with the following BibTex entry:
```
@article{yu2025rip,
title={RIP: Better Models by Survival of the Fittest Prompts},
author={Yu, Ping and Yuan, Weizhe and Golovneva, Olga and Wu, Tianhao and Sukhbaatar, Sainbayar and Weston, Jason and Xu, Jing},
journal={arXiv preprint arXiv:2501.18578},
year={2025}
}
```
|
osama24sy/llama3.1-8b-it-10k-qwen-singleturn-onesolution-r64-24-v0.3
|
osama24sy
|
2025-05-05T19:58:17Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-05T19:58:13Z
| 0 |
---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: operations
sequence:
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 243196
num_examples: 150
download_size: 97582
dataset_size: 243196
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GenerTeam/gener-tasks
|
GenerTeam
|
2025-05-15T08:50:43Z
| 47 | 0 |
[
"task_categories:text-classification",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.07272",
"region:us",
"biology",
"genomics",
"long-context"
] |
[
"text-classification"
] |
2025-02-13T02:55:15Z
| 0 |
---
license: mit
task_categories:
- text-classification
tags:
- biology
- genomics
- long-context
configs:
- config_name: gene_classification
data_files:
- split: train
path: "gene_classification/train.parquet"
- split: test
path: "gene_classification/test.parquet"
- config_name: taxonomic_classification
data_files:
- split: train
path: "taxonomic_classification/train.parquet"
- split: test
path: "taxonomic_classification/test.parquet"
---
# Gener Tasks
## About
Gener Tasks currently includes two subtasks:
* The gene classification task assesses the model's ability to understand short to medium-length sequences. It includes six different gene types and control samples drawn from non-gene regions, with balanced sampling from six distinct eukaryotic taxonomic groups in RefSeq. The classification goal is to predict the gene type.
* The taxonomic classification task is designed to assess the model's comprehension of longer sequences, which include both gene and predominantly non-gene regions. Samples are similarly balanced and sourced from RefSeq across the same six taxonomic groups, with the objective being to predict the taxonomic group of each sample.
Note: The taxonomic classification dataset is substantial (2GB), which may result in extended training and evaluation time. To accommodate the model's maximum context length, we implement **right** truncation for sequences that exceed this limit.
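As a hedged illustration of this right-truncation convention (keep the leftmost tokens, drop the tail; `"<your-genomic-model>"` is a placeholder for whichever checkpoint you pair with the task):
```python
# Sketch of right truncation: sequences longer than the model limit lose their tail.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<your-genomic-model>")  # placeholder checkpoint
tokenizer.truncation_side = "right"  # drop tokens from the right end

example_sequence = "ACGT" * 5000  # a long dummy DNA sequence
enc = tokenizer(example_sequence, truncation=True, max_length=tokenizer.model_max_length)
print(len(enc["input_ids"]))  # capped at the model's maximum context length
```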
## How to use
```python
from datasets import load_dataset
# Load gene_classification task
datasets = load_dataset("GenerTeam/gener-tasks", name="gene_classification")
# Load taxonomic_classification task
datasets = load_dataset("GenerTeam/gener-tasks", name="taxonomic_classification")
```
## Citation
```
@misc{wu2025generator,
title={GENERator: A Long-Context Generative Genomic Foundation Model},
author={Wei Wu and Qiuyi Li and Mingyang Li and Kun Fu and Fuli Feng and Jieping Ye and Hui Xiong and Zheng Wang},
year={2025},
eprint={2502.07272},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.07272},
}
```
|
MakiAi/OKU_wiki_llama3.1_8b_inst_Reflexive_chunk200_overlap700
|
MakiAi
|
2024-10-31T16:00:20Z
| 29 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-31T16:00:16Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 182605.746799431
num_examples: 632
- name: test
num_bytes: 20514.25320056899
num_examples: 71
download_size: 77355
dataset_size: 203120.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jasonyu23/NSE-Demo
|
jasonyu23
|
2025-02-13T10:40:53Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-13T10:40:21Z
| 0 |
---
dataset_info:
features:
- name: Question No.
dtype: string
- name: Question
dtype: string
- name: DeepSeek 32B Answer w/o Context
dtype: string
- name: DeepSeek 32B COT based on Q&A w/o Context
dtype: string
splits:
- name: train
num_bytes: 166270
num_examples: 79
download_size: 95065
dataset_size: 166270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
randall-lab/shapes3d
|
randall-lab
|
2025-06-08T21:15:59Z
| 0 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-06-08T20:55:07Z
| 0 |
---
license: apache-2.0
---
# Dataset Card for 3dshapes
## Dataset Description
The **3dshapes dataset** is a **synthetic 3D object image dataset** designed for benchmarking algorithms in **disentangled representation learning** and **unsupervised representation learning**.
It was introduced in the **FactorVAE** paper [[Kim & Mnih, ICML 2018](https://proceedings.mlr.press/v80/kim18b.html)], as one of the standard testbeds for learning interpretable and disentangled latent factors. The dataset consists of images of **3D procedurally generated scenes**, where 6 **ground-truth independent factors of variation** are explicitly controlled:
- **Floor color** (hue)
- **Wall color** (hue)
- **Object color** (hue)
- **Object size** (scale)
- **Object shape** (categorical)
- **Object orientation** (rotation angle)
**3dshapes is generated as a full Cartesian product of all factor combinations**, making it perfectly suited for systematic evaluation of disentanglement. The dataset contains **480,000 images** at a resolution of **64×64 pixels**, covering **all possible combinations of the 6 factors exactly once**. The images are stored in **row-major order** according to the factor sweep, enabling precise control over factor-based evaluation.

## Dataset Source
- **Homepage**: [https://github.com/deepmind/3dshapes-dataset](https://github.com/deepmind/3dshapes-dataset)
- **License**: [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
- **Paper**: Hyunjik Kim & Andriy Mnih. _Disentangling by Factorising_. ICML 2018.
## Dataset Structure
|Factors|Possible Values|
|---|---|
|floor_color (hue)| 10 values linearly spaced in [0, 1] |
|wall_color (hue)| 10 values linearly spaced in [0, 1] |
|object_color (hue)| 10 values linearly spaced in [0, 1] |
|scale| 8 values linearly spaced in [0.75, 1.25] |
|shape| 4 values: 0, 1, 2, 3 |
|orientation| 15 values linearly spaced in [-30, 30] |
Each image corresponds to a unique combination of these **6 factors**. The images are stored in a **row-major order** (fastest-changing factor is `orientation`, slowest-changing factor is `floor_color`).
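Because the sweep is a complete Cartesian product stored in row-major order, a factor-index tuple maps to a flat dataset index via a mixed-radix formula. A minimal sketch (factor sizes follow the table above; the indexing convention is the one stated in this card):
```python
# Row-major flat index: orientation varies fastest, floor_color slowest.
FACTOR_SIZES = [10, 10, 10, 8, 4, 15]  # floor, wall, object, scale, shape, orientation

def flat_index(factor_indices):
    """Map a tuple of per-factor indices to the flat image index."""
    idx = 0
    for size, value in zip(FACTOR_SIZES, factor_indices):
        idx = idx * size + value
    return idx

assert flat_index((0, 0, 0, 0, 0, 0)) == 0
assert flat_index((9, 9, 9, 7, 3, 14)) == 480_000 - 1  # last image
```
Under this convention, `dataset[flat_index(i)]` should return the image whose `label_index` equals `i`, which makes factor-wise traversals (varying one factor while holding the rest fixed) straightforward.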
### Why no train/test split?
The 3dshapes dataset does not provide an official train/test split. It is designed for **representation learning research**, where the goal is to learn disentangled and interpretable latent factors. Since the dataset is a **complete Cartesian product of all factor combinations**, models typically require access to the full dataset to explore factor-wise variations.
## Example Usage
Below is a quick example of how to load this dataset via the Hugging Face Datasets library:
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("randall-lab/shapes3d", split="train", trust_remote_code=True)
# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"] # Value labels: [floor_hue, wall_hue, object_hue, scale, shape, orientation]
label_index = example["label_index"] # Index labels: [floor_idx, wall_idx, object_idx, scale_idx, shape_idx, orientation_idx]
# Label Value
floor_value = example["floor"] # 0-1
wall_value = example["wall"] # 0-1
object_value = example["object"] # 0-1
scale_value = example["scale"] # 0.75-1.25
shape_value = example["shape"] # 0,1,2,3
orientation_value = example["orientation"] # -30 - 30
# Label index
floor_idx = example["floor_idx"] # 0-9
wall_idx = example["wall_idx"] # 0-9
object_idx = example["object_idx"] # 0-9
scale_idx = example["scale_idx"] # 0-7
shape_idx = example["shape_idx"] # 0-3
orientation_idx = example["orientation_idx"] # 0-14
image.show() # Display the image
print(f"Label (factor values): {label}")
print(f"Label (factor indices): {label_index}")
```
If you are using Colab, you should update `datasets` to avoid errors:
```
pip install -U datasets
```
## Citation
```
@InProceedings{pmlr-v80-kim18b,
title = {Disentangling by Factorising},
author = {Kim, Hyunjik and Mnih, Andriy},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {2649--2658},
year = {2018},
editor = {Dy, Jennifer and Krause, Andreas},
volume = {80},
series = {Proceedings of Machine Learning Research},
month = {10--15 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v80/kim18b/kim18b.pdf},
url = {https://proceedings.mlr.press/v80/kim18b.html},
abstract = {We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon beta-VAE by providing a better trade-off between disentanglement and reconstruction quality and being more robust to the number of training iterations. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.}
}
```
|
dafbgd/UHRIM
|
dafbgd
|
2025-05-09T03:18:09Z
| 33 | 2 |
[
"task_categories:image-segmentation",
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[
"image-segmentation"
] |
2025-02-05T12:51:05Z
| 0 |
---
license: mit
task_categories:
- image-segmentation
---
This is an ultra-high-resolution image matting dataset proposed in [MEMatte](https://github.com/linyiheng123/MEMatte).
If you have any questions, please feel free to open an issue. If you find our method or dataset helpful, we would appreciate it if you could give our project a star ⭐️ on GitHub and cite our paper:
```bibtex
@inproceedings{lin2025memory,
title={Memory Efficient Matting with Adaptive Token Routing},
author={Lin, Yiheng and Hu, Yihan and Zhang, Chenyi and Liu, Ting and Qu, Xiaochao and Liu, Luoqi and Zhao, Yao and Wei, Yunchao},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={39},
number={5},
pages={5298--5306},
year={2025}
}
```
|
ben250fox/test1
|
ben250fox
|
2025-04-23T09:23:19Z
| 21 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-04-23T09:22:49Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "piper",
"total_episodes": 2,
"total_frames": 862,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6",
"main_gripper"
]
},
"observation.images.one": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.two": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
ChavyvAkvar/synthetic-trades-ADA-batch-15
|
ChavyvAkvar
|
2025-06-04T05:56:31Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-04T05:55:38Z
| 0 |
---
dataset_info:
features:
- name: scenario_id
dtype: string
- name: final_pnl_ratio
dtype: float64
- name: max_drawdown
dtype: float64
- name: total_trades
dtype: int64
- name: synthetic_ohlc_open
sequence: float64
- name: synthetic_ohlc_high
sequence: float64
- name: synthetic_ohlc_low
sequence: float64
- name: synthetic_ohlc_close
sequence: float64
- name: garch_params_used_for_sim_str
dtype: string
- name: strategy_params_str
dtype: string
- name: strategy_exit_rules_str
dtype: string
splits:
- name: train
num_bytes: 923454283
num_examples: 1000
download_size: 924477542
dataset_size: 923454283
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abhinav302019/olympiad_data_111
|
abhinav302019
|
2025-03-04T18:52:42Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-04T09:27:51Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 96085
num_examples: 10
download_size: 78064
dataset_size: 96085
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bruece/reclip_office_home_Clipart
|
Bruece
|
2025-02-23T12:57:14Z
| 8 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-23T12:57:02Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Alarm_Clock
'1': Backpack
'2': Batteries
'3': Bed
'4': Bike
'5': Bottle
'6': Bucket
'7': Calculator
'8': Calendar
'9': Candles
'10': Chair
'11': Clipboards
'12': Computer
'13': Couch
'14': Curtains
'15': Desk_Lamp
'16': Drill
'17': Eraser
'18': Exit_Sign
'19': Fan
'20': File_Cabinet
'21': Flipflops
'22': Flowers
'23': Folder
'24': Fork
'25': Glasses
'26': Hammer
'27': Helmet
'28': Kettle
'29': Keyboard
'30': Knives
'31': Lamp_Shade
'32': Laptop
'33': Marker
'34': Monitor
'35': Mop
'36': Mouse
'37': Mug
'38': Notebook
'39': Oven
'40': Pan
'41': Paper_Clip
'42': Pen
'43': Pencil
'44': Postit_Notes
'45': Printer
'46': Push_Pin
'47': Radio
'48': Refrigerator
'49': Ruler
'50': Scissors
'51': Screwdriver
'52': Shelf
'53': Sink
'54': Sneakers
'55': Soda
'56': Speaker
'57': Spoon
'58': TV
'59': Table
'60': Telephone
'61': ToothBrush
'62': Toys
'63': Trash_Can
'64': Webcam
splits:
- name: train
num_bytes: 71890451.485
num_examples: 3055
download_size: 58212585
dataset_size: 71890451.485
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Luffytaro-1/asr_en_ar_switch_split_87
|
Luffytaro-1
|
2025-02-14T18:39:52Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-14T18:39:51Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 8346695.0
num_examples: 104
download_size: 7841402
dataset_size: 8346695.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Holmeister/CrowS-Pair-TR
|
Holmeister
|
2024-10-21T19:02:00Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.00133",
"region:us"
] |
[] |
2024-10-18T07:44:36Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: system
dtype: string
- name: inst_no
dtype: int64
splits:
- name: test
num_bytes: 2198840
num_examples: 5000
download_size: 155654
dataset_size: 2198840
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
### Citation Information
```
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. CrowS-Pairs: A challenge dataset
for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133, 2020.
```
|
jacobmorrison/wildchat_aae_prompts
|
jacobmorrison
|
2025-06-18T17:44:25Z
| 7 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-17T23:46:06Z
| 0 |
---
dataset_info:
features:
- name: conversation_hash
dtype: string
- name: model
dtype: string
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: conversation
list:
- name: content
dtype: string
- name: country
dtype: string
- name: hashed_ip
dtype: string
- name: header
struct:
- name: accept-language
dtype: string
- name: user-agent
dtype: string
- name: language
dtype: string
- name: redacted
dtype: bool
- name: role
dtype: string
- name: state
dtype: string
- name: timestamp
dtype: timestamp[us, tz=UTC]
- name: toxic
dtype: bool
- name: turn_identifier
dtype: int64
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: harassment_threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: hate_threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: self_harm
dtype: bool
- name: self_harm_instructions
dtype: bool
- name: self_harm_intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: sexual_minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: violence_graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: harassment_threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: hate_threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: self_harm
dtype: float64
- name: self_harm_instructions
dtype: float64
- name: self_harm_intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: sexual_minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: violence_graphic
dtype: float64
- name: flagged
dtype: bool
- name: detoxify_moderation
list:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: toxic
dtype: bool
- name: redacted
dtype: bool
- name: state
dtype: string
- name: country
dtype: string
- name: hashed_ip
dtype: string
- name: header
struct:
- name: accept-language
dtype: string
- name: user-agent
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: unique_id
dtype: int64
splits:
- name: train
num_bytes: 237353973
num_examples: 10000
download_size: 126935480
dataset_size: 237353973
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gupta-tanish/verified-qfa-data-initial
|
gupta-tanish
|
2025-03-23T22:27:39Z
| 17 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-23T22:06:40Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: problem
dtype: string
- name: gt_answer
dtype: string
- name: selected_response
dtype: string
- name: selected_prm_score
sequence: float64
- name: selected_heuristic_prm_score
dtype: float64
- name: selected_prm_verification_score
dtype: int64
- name: selected_gpt4o_verification_score
dtype: string
- name: selected_gpt4o_reasoning
dtype: string
- name: responses
sequence: string
- name: prm_scores
sequence:
sequence: float64
- name: heuristic_prm_scores
sequence: float64
- name: prm_verification_scores
sequence: int64
- name: gpt4o_verification_scores
sequence: string
- name: gpt4o_reasonings
sequence: string
splits:
- name: train_prefs
num_bytes: 406765639
num_examples: 7500
download_size: 115194551
dataset_size: 406765639
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
|
amyguan/newswire-10-20-labor
|
amyguan
|
2024-12-08T10:11:03Z
| 8 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-08T10:10:50Z
| 0 |
---
dataset_info:
features:
- name: article
dtype: string
- name: byline
dtype: string
- name: dates
sequence: string
- name: newspaper_metadata
list:
- name: lccn
dtype: string
- name: newspaper_city
dtype: string
- name: newspaper_state
dtype: string
- name: newspaper_title
dtype: string
- name: antitrust
dtype: int64
- name: civil_rights
dtype: int64
- name: crime
dtype: int64
- name: govt_regulation
dtype: int64
- name: labor_movement
dtype: int64
- name: politics
dtype: int64
- name: protests
dtype: int64
- name: ca_topic
dtype: string
- name: ner_words
sequence: string
- name: ner_labels
sequence: string
- name: wire_city
dtype: string
- name: wire_state
dtype: string
- name: wire_country
dtype: string
- name: wire_coordinates
sequence: float64
- name: wire_location_notes
dtype: string
- name: people_mentioned
list:
- name: person_gender
dtype: string
- name: person_name
dtype: string
- name: person_occupation
dtype: string
- name: wikidata_id
dtype: string
- name: cluster_size
dtype: int64
- name: year
dtype: int64
splits:
- name: train
num_bytes: 37669750.903175764
num_examples: 7558
download_size: 8967917
dataset_size: 37669750.903175764
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davidheineman/aime
|
davidheineman
|
2025-02-02T23:15:50Z
| 24 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-02T23:15:48Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ground_truth
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 456962
num_examples: 933
download_size: 181814
dataset_size: 456962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e4ff1eb7-eb03-4152-838a-58f5246e294d
|
argilla-internal-testing
|
2024-11-21T13:58:43Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-21T13:58:42Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dirganmdcp/yfinance_Indonesia_Stock_Exchange
|
dirganmdcp
|
2025-03-12T02:05:32Z
| 51 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2025-03-12T02:05:32Z
| 0 |
---
license: apache-2.0
---
|
Bravelove/so100_test
|
Bravelove
|
2025-02-13T11:52:28Z
| 42 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] |
[
"robotics"
] |
2025-02-13T11:51:55Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1201,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hub. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of dataset cards from the Hub. We hope it will support research on dataset cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format.
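For example, individual cards can be fetched with the `huggingface_hub` client library; a minimal sketch (any public dataset repo id works in place of the one shown):
```python
from huggingface_hub import DatasetCard

# Fetch and parse one dataset card from the Hub.
card = DatasetCard.load("facebook/Wildchat-RIP-Filtered-by-70b-Llama")
print(card.data.to_dict())  # parsed YAML metadata (license, tags, ...)
print(card.text[:300])      # markdown body of the README
```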
Source Data
The source data is the README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a cron job on a daily basis.
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact