---
license: mit
language:
  - en
pretty_name: 'CORE: Computational Reproducibility Agent Benchmark'
---


# Dataset Card for CORE-Bench

CORE-Bench is a benchmark evaluating the ability of agents to computationally reproduce scientific papers. It comprises 270 tasks from 90 papers across computer science, social science, and medicine, written in Python or R.

Each task in CORE-Bench requires an agent to reproduce the results of a research paper given its repository. The agent must install libraries, packages, and dependencies and run the code. If the code runs successfully, the agent needs to search through all outputs to answer the task questions. The agent submits a report and is evaluated against the results of a successful reproduction. An agent successfully completes a task if it correctly answers all questions about a code repository.

## Dataset Details

The benchmark is defined in two files: `core_train.json` and `core_test.json` (decrypt the test set with `gpg --output core_test.json --decrypt core_test.json.gpg`).

Each task in the dataset contains the following fields: `field`, `language`, `capsule_title`, `capsule_id`, `task_prompt`, `results`, and `capsule_doi`. The task environments themselves are not hosted here; the harness automatically downloads the repository for each task from our servers based on its `capsule_id`. A minimal sketch of inspecting these fields is shown below.
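As a quick illustration, here is a minimal sketch of loading the train split and printing one task's metadata. Only the field names above come from the dataset; the assumption that the top-level JSON object is a list of task records is ours.

```python
import json

# Load the train split (assumes the top-level JSON object is a list of tasks).
with open("core_train.json") as f:
    tasks = json.load(f)

# Inspect the documented metadata fields of the first task.
task = tasks[0]
for key in ("field", "language", "capsule_title", "capsule_id",
            "task_prompt", "results", "capsule_doi"):
    print(key, "->", task.get(key))
```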

Note that the dataset JSON files in this repository contain the task prompts, task questions, and other metadata for each task, but not the associated code repositories. The code repositories, which the harness downloads automatically for each task, are available at `https://corebench.cs.princeton.edu/capsules/capsule-XXXXXXX.tar.gz`, where `XXXXXXX` is the `capsule_id`.
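The harness handles this download for you, but if you want to fetch a single capsule manually, a minimal sketch (assuming `core_train.json` has been loaded as above and that the archive unpacks with standard `tar.gz` tooling) might look like this:

```python
import json
import tarfile
import urllib.request

# Use the capsule_id of the first train task as an example.
with open("core_train.json") as f:
    capsule_id = json.load(f)[0]["capsule_id"]

url = f"https://corebench.cs.princeton.edu/capsules/capsule-{capsule_id}.tar.gz"
archive = f"capsule-{capsule_id}.tar.gz"

# Download the capsule archive and unpack it into a per-capsule directory.
urllib.request.urlretrieve(url, archive)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(f"capsule-{capsule_id}")
```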

## Citation

## Dataset Card Authors

Zachary S. Siegel ([email protected]), Sayash Kapoor ([email protected]), Nitya Nagdir ([email protected]), Benedikt Stroebl ([email protected]), Arvind Narayanan ([email protected])