---
dataset_info:
  features:
  - name: section_id
    dtype: string
  - name: query_id
    dtype: string
  - name: passage
    dtype: string
  - name: question
    dtype: string
  - name: answers_spans
    sequence:
    - name: spans
      dtype: string
    - name: types
      dtype: string
  - name: context
    dtype: string
  - name: prompt
    dtype: string
  - name: answer
    dtype: string
  - name: answer_type
    dtype: string
  splits:
  - name: train
    num_bytes: 300858264
    num_examples: 77400
  - name: validation
    num_bytes: 32687566
    num_examples: 9535
  download_size: 30675863
  dataset_size: 333545830
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
---
# Dataset Card for DROP for CARE training

## Dataset Description
This dataset is a processed version of DROP (Discrete Reasoning Over Paragraphs) adapted for training language models with native retrieval-augmented reasoning capabilities in the CARE framework. It serves as the "easy" dataset (D_easy) in the curriculum learning strategy for reinforcement learning training.
### Key Features
- 77,409 training instances and 9,536 validation instances
- Formatted for reasoning-based QA with context and prompts
- Discrete reasoning tasks including counting, sorting, and arithmetic operations
- Structured for curriculum learning as the initial training phase before MS MARCO
## Dataset Structure

### Data Fields
- `section_id`: Unique identifier for the passage section
- `query_id`: Unique identifier for the question
- `passage`: Original paragraph containing the information needed to answer
- `question`: Question requiring discrete reasoning over the passage
- `answers_spans`: Dictionary containing:
  - `spans`: List of answer text spans
  - `types`: List of answer types (e.g., number, date, span)
- `context`: Formatted context for model input
- `prompt`: Complete prompt combining context and question
- `answer`: Processed answer string
- `answer_type`: Type classification of the answer
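Since `answers_spans` stores parallel lists under `spans` and `types`, pulling out a single answer takes one line of indexing. The helper below is a minimal sketch of that access pattern; `first_answer` is a hypothetical convenience function, not part of the dataset's code.

```python
def first_answer(answers_spans):
    """Return the first answer span and its type from an answers_spans dict.

    Assumes the schema above: parallel lists under "spans" and "types".
    (Hypothetical helper for illustration only.)
    """
    spans = answers_spans.get("spans", [])
    types = answers_spans.get("types", [])
    if not spans:
        return None, None
    return spans[0], types[0] if types else None

span, kind = first_answer({"spans": ["Chaz Schilens"], "types": ["span"]})
# span == "Chaz Schilens", kind == "span"
```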
### Data Format Example
```json
{
  "passage": "Hoping to rebound from their loss to the Patriots, the Raiders stayed at home for a Week 16 duel with the Houston Texans...",
  "question": "Who scored the first touchdown of the game?",
  "answer": "Chaz Schilens",
  "answer_type": "span",
  "context": "Context: Hoping to rebound from their loss to the Patriots...",
  "prompt": "Context: [passage]\n\nQuestion: Who scored the first touchdown of the game?"
}
```
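As the example shows, the `prompt` field follows a simple `Context: ...\n\nQuestion: ...` template over the passage and question. A minimal sketch of that formatting step (`build_prompt` is a hypothetical reconstruction for illustration, not the dataset's actual preprocessing code):

```python
def build_prompt(passage: str, question: str) -> str:
    # Mirrors the template visible in the `prompt` field above
    # (assumed reconstruction of the formatting step).
    return f"Context: {passage}\n\nQuestion: {question}"

prompt = build_prompt(
    "Hoping to rebound from their loss to the Patriots, ...",
    "Who scored the first touchdown of the game?",
)
```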
### Data Splits
| Split | Number of Examples |
|---|---|
| Train | 77,409 |
| Validation | 9,536 |
## Usage

### Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("sheryc/DROP_care")
train_data = dataset["train"]
val_data = dataset["validation"]
```
### Training Integration
```python
import random

from datasets import load_dataset

# Example usage in curriculum learning
class CurriculumTrainer:
    def __init__(self):
        self.drop_data = load_dataset("sheryc/DROP_care")["train"]
        self.msmarco_data = load_dataset("sheryc/MSMARCO_care")["train"]

    def get_batch(self, step, total_steps):
        # Linear curriculum: start with DROP, gradually shift to MS MARCO
        alpha = max(0, 1 - step / total_steps)
        if random.random() < alpha:
            return self.sample_from_drop()
        return self.sample_from_msmarco()

    def sample_from_drop(self):
        # Draw a single random DROP example
        return self.drop_data[random.randrange(len(self.drop_data))]

    def sample_from_msmarco(self):
        # Draw a single random MS MARCO example
        return self.msmarco_data[random.randrange(len(self.msmarco_data))]
```
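The linear schedule above makes the probability of drawing a DROP example decay from 1.0 at the start of training to 0.0 at the final step. The standalone function below isolates that schedule so it can be inspected without loading any data (a sketch of the same `alpha` computation, not additional CARE code):

```python
def drop_probability(step: int, total_steps: int) -> float:
    # Linear curriculum weight: probability of sampling from DROP (D_easy)
    # decays from 1.0 at step 0 to 0.0 at total_steps, then stays at 0.0.
    return max(0.0, 1.0 - step / total_steps)

# Over training, sampling shifts from all-DROP to all-MS MARCO:
schedule = [drop_probability(s, 1000) for s in (0, 250, 500, 750, 1000)]
# schedule == [1.0, 0.75, 0.5, 0.25, 0.0]
```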
## License

This dataset is released under CC BY-SA 4.0, consistent with the license of the original DROP dataset.
## Citation
If you use this dataset, please cite both the original DROP paper and the CARE paper:
```bibtex
@inproceedings{wang2025care,
  title={Improving Context Fidelity via Native Retrieval-Augmented Reasoning},
  author={Wang, Suyuchen and Wang, Jinlin and Wang, Xinyu and Li, Shiqi and Tang, Xiangru and Hong, Sirui and Chang, Xiao-Wen and Wu, Chenglin and Liu, Bang},
  booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025}
}

@inproceedings{Dua2019DROP,
  author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
  title={DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
  booktitle={Proc. of NAACL},
  year={2019}
}
```
## Contact
For questions about the dataset or to report issues, please visit the CARE project homepage or contact the authors.