# REPRO-Bench: Can Agentic AI Systems Assess the Reproducibility of Social Science Research?
This repository contains the REPRO-Bench dataset, introduced in the paper REPRO-Bench: Can Agentic AI Systems Assess the Reproducibility of Social Science Research?.
REPRO-Bench is a benchmark for evaluating how well agentic AI systems can automate the reproducibility assessment of social science papers. It addresses limitations of existing benchmarks by providing 112 task instances, each representing a social science paper with a publicly available reproduction report. The benchmark features end-to-end reproducibility evaluation tasks with complexity comparable to real-world assessments, spanning diverse data formats and programming languages.
Official Codebase: https://github.com/uiuc-kang-lab/REPRO-Bench
## Dataset Structure
The REPRO-Bench dataset includes:
- 112 task instances (original paper PDFs + corresponding code and data)
- Gold-standard reproducibility annotations
- Public reproduction reports
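To sanity-check a local copy of the dataset against the advertised 112 task instances, one could count instance directories after cloning. The sketch below is a minimal, hedged example: the assumed layout (one directory per task instance, each containing the paper PDF) and the function name `count_task_instances` are illustrative, not part of the official codebase.

```python
import os
import tempfile

def count_task_instances(root):
    """Count subdirectories of `root` that contain at least one PDF.

    Assumes a hypothetical layout where each task instance is a
    directory holding the paper PDF plus its code and data.
    """
    count = 0
    for entry in os.scandir(root):
        if entry.is_dir():
            has_pdf = any(name.lower().endswith(".pdf")
                          for name in os.listdir(entry.path))
            if has_pdf:
                count += 1
    return count

# Demo on a throwaway directory mimicking the assumed layout.
with tempfile.TemporaryDirectory() as root:
    for i in range(3):
        inst = os.path.join(root, f"instance_{i}")
        os.makedirs(inst)
        open(os.path.join(inst, "paper.pdf"), "w").close()
    print(count_task_instances(root))  # 3
```

After cloning (see below), pointing `count_task_instances` at the repository root should report 112 if the layout matches this assumption.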
## Getting Started
The REPRO-Bench dataset is hosted on Hugging Face. To access and clone the dataset:
```shell
git clone https://huggingface.co/datasets/chuxuan/REPRO-Bench
cd REPRO-Bench
git lfs pull
```
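Large files in the repository are stored with Git LFS, so skipping `git lfs pull` leaves small text stubs (pointer files) in place of the real PDFs and data. A minimal sketch for spotting un-pulled stubs follows; the header check matches the Git LFS pointer-file format, while the function names are illustrative, not part of the official codebase.

```python
import os
import tempfile

# Every Git LFS pointer file begins with this spec header.
LFS_HEADER = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path):
    """Return True if `path` looks like an un-pulled Git LFS pointer file."""
    # Pointer files are tiny text stubs; real data files are far larger.
    if os.path.getsize(path) > 1024:
        return False
    with open(path, "rb") as f:
        return f.read(len(LFS_HEADER)) == LFS_HEADER

def find_unpulled(root):
    """List files under `root` that are still LFS pointer stubs."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if is_lfs_pointer(path):
                hits.append(path)
    return hits

# Demo: a fabricated pointer stub next to a regular file.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "stub.pdf"), "wb") as f:
        f.write(LFS_HEADER + b"\noid sha256:abc\nsize 12345\n")
    with open(os.path.join(root, "readme.txt"), "w") as f:
        f.write("plain file")
    print(len(find_unpulled(root)))  # 1
```

An empty result from `find_unpulled` on the cloned repository suggests the LFS content was fetched successfully.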
To run the representative AI agents evaluated in the paper (for advanced usage, refer to the official codebase):
Run SWE-Agent:

```shell
bash SWE-Agent/run_all.sh
```

Run AutoGPT:

```shell
bash AutoGPT/classic/original_autogpt/run_all.sh
```

Run CORE-Agent:

```shell
bash CORE-Agent/classic/original_autogpt/run_all.sh
```